# Regressors

### <mark style="color:blue;">Supported Regressors</mark>

<mark style="color:blue;">Linear Regression</mark>

* Simple and interpretable
* Suitable for linear relationships between features and targets
* Includes a Logistic Regression variant for classification tasks

<mark style="color:blue;">Random Forest</mark>

* Ensemble method using multiple decision trees
* Handles non-linear relationships well
* Provides feature importance rankings

<mark style="color:blue;">Gradient Boosting</mark>

* Builds an ensemble of weak learners sequentially
* Often provides high accuracy

<mark style="color:blue;">AdaBoost</mark>

* Adaptive Boosting algorithm
* Focuses on hard-to-predict instances
* Works well with weak learners

<mark style="color:blue;">Neural Networks</mark>

* Flexible architecture for complex patterns
* Supports both regression and classification
* Includes options for deep learning

***

### <mark style="color:blue;">Multi-output Support</mark>

All our regressors support multi-output scenarios, allowing prediction of multiple targets simultaneously.
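A minimal multi-output sketch, assuming a scikit-learn-style `fit`/`predict` interface (the library's actual call signature may differ): pass a target matrix with one column per target, and predictions come back with the same number of columns.

```python
# Two targets predicted simultaneously from the same three features.
# RandomForestRegressor is used as an illustrative multi-output-capable
# estimator; the library may wrap a different implementation.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))
# Target matrix: one column per target.
Y = np.column_stack([X @ [1.0, 2.0, 0.5], X @ [-1.0, 0.5, 2.0]])

model = RandomForestRegressor(random_state=0).fit(X, Y)
preds = model.predict(X[:5])
print(preds.shape)  # one row per sample, one column per target
```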

***

### <mark style="color:blue;">Auto Mode</mark>

Each regressor includes an "auto mode" that performs automated hyperparameter tuning to optimize model performance.
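The source does not specify how auto mode tunes hyperparameters; a plausible sketch is cross-validated grid search, shown here with scikit-learn's `GridSearchCV` as a stand-in for whatever tuner the library actually runs:

```python
# Hypothetical view of what "auto mode" might do internally:
# try a grid of hyperparameters and keep the best cross-validated combination.
from sklearn.datasets import make_regression
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import GridSearchCV

X, y = make_regression(n_samples=200, n_features=4, noise=5.0, random_state=0)
search = GridSearchCV(
    RandomForestRegressor(random_state=0),
    param_grid={"n_estimators": [50, 100], "max_depth": [3, None]},
    cv=3,
)
search.fit(X, y)
print(search.best_params_)  # the winning hyperparameter combination
```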

***

### <mark style="color:blue;">Usage</mark>

To use a specific regressor, set the `regressor` option in your model configuration.
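A hypothetical configuration fragment is shown below; the key names are assumptions made for illustration, so consult the individual regressor documentation for the real schema:

```python
# Hypothetical model configuration (key names are illustrative assumptions).
config = {
    "regressor": "random_forest",  # one of the supported regressors above
    "auto": True,                  # enable automated hyperparameter tuning
}
```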

For more detailed information about each regressor, including its specific parameters, strengths, and use cases, please refer to the individual documentation.

***

### <mark style="color:red;">"Guess the Candies"</mark>

Imagine we're trying to guess how many candies are in a jar. We have information about the jar's height, width, and weight.

Here's how each regressor might approach this problem:

1. <mark style="color:blue;">**Linear Regression**</mark><mark style="color:blue;">:</mark> This is like drawing a straight line through our data points. It might say, "For every inch taller the jar is, add 10 candies. For every inch wider, add 15 candies. For every ounce heavier, add 5 candies." It's simple but might miss some complex relationships.
2. <mark style="color:blue;">**Random Forest**</mark><mark style="color:blue;">:</mark> This is like asking a bunch of friends to guess, each using slightly different rules, then taking the average of all their guesses. One friend might focus more on the height, another on the weight, and so on. By combining all these guesses, we often get a pretty good estimate.
3. <mark style="color:blue;">**Gradient Boosting**</mark><mark style="color:blue;">:</mark> This is like guessing, then looking at where we went wrong, and making a new rule to fix those mistakes. We keep doing this, making new rules to fix the remaining errors, until our guesses get really good.
4. <mark style="color:blue;">**AdaBoost**</mark><mark style="color:blue;">:</mark> This is similar to Gradient Boosting, but it pays special attention to the jars we guessed really badly on. It's like saying, "Oops, we were way off on that tall, skinny jar. Let's make sure we have a special rule for jars like that."
5. <mark style="color:blue;">**Neural Networks**</mark><mark style="color:blue;">:</mark> This is like having a super-smart friend who looks at all the jars and candies, and comes up with their own complex method for guessing. We don't always know exactly how they're doing it, but their guesses are often very accurate, especially if we have lots of jars to learn from.

Each method has its strengths, and the best choice often depends on how many jars we've seen before, how complex the relationship between jar features and candy count is, and how much time we have to make our guesses.
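The candy-jar analogy can be made concrete with the simplest of the five approaches, the straight-line guess. The jar measurements and candy counts below are invented for illustration:

```python
# Toy version of the candy-jar example: fit a linear model that turns
# height (in), width (in), and weight (oz) into a candy-count guess.
import numpy as np
from sklearn.linear_model import LinearRegression

jars = np.array([
    [8, 4, 20],
    [10, 5, 30],
    [6, 3, 12],
    [12, 6, 40],
    [9, 4, 24],
], dtype=float)
candies = np.array([120, 200, 70, 290, 150], dtype=float)

model = LinearRegression().fit(jars, candies)
guess = model.predict([[11, 5, 34]])[0]
print(round(guess))  # the model's candy estimate for a new jar
```

Swapping `LinearRegression` for `RandomForestRegressor`, `GradientBoostingRegressor`, `AdaBoostRegressor`, or `MLPRegressor` reproduces the other four strategies described above.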
