Neural Network (LSTM)
Overview
Neural Networks, particularly deep learning models, have gained significant traction in financial applications due to their ability to capture complex, non-linear relationships in data. They're especially powerful for time series prediction tasks.
How It Works
Input Layer: Receives the initial data (e.g., financial indicators)
Hidden Layers: Process the data through a series of neurons with activation functions
Output Layer: Produces the final prediction (e.g., token price, risk score)
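The three-layer flow above can be sketched as a plain NumPy forward pass. This is illustrative only; the actual model is built with a deep learning framework, and the sizes (4 indicators, 8 hidden units) are arbitrary choices for the example.

```python
import numpy as np

def relu(x):
    # Activation function applied in the hidden layer
    return np.maximum(0.0, x)

rng = np.random.default_rng(0)
x = rng.normal(size=(1, 4))                     # input layer: 4 financial indicators
W1 = rng.normal(size=(4, 8)); b1 = np.zeros(8)  # hidden layer weights and biases
W2 = rng.normal(size=(8, 1)); b2 = np.zeros(1)  # output layer weights and biases

h = relu(x @ W1 + b1)  # hidden layer: neurons with activation functions
y = h @ W2 + b2        # output layer: final prediction (e.g., token price)
```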
Workflow Components
Initialization
The Neural Network model is initialized in the initialize_regressor method.
Key Components
Model Creation: A custom function create_nn_model is used to create the neural network architecture.
Multi-output Support: The model can handle multiple outputs, for both regression and classification tasks.
Hyperparameter Tuning: When auto_mode is enabled, we use a custom TuneableNNRegressor class for automated hyperparameter tuning.
Hyperparameters
The main hyperparameters for the Neural Network include:
epochs: Number of training epochs.
batch_size: Number of samples per gradient update.
units1: Number of units in the first hidden layer.
units2: Number of units in the second hidden layer.
dropout_rate: Dropout rate for regularization.
l2_reg: L2 regularization factor.
optimizer: Choice of optimizer ('adam' or 'rmsprop').
learning_rate: Learning rate for the optimizer.
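To make units1 and units2 concrete, here is a minimal NumPy sketch of a two-hidden-layer builder in the spirit of create_nn_model. The names build_model and predict are illustrative, not the project's API; dropout_rate and l2_reg act during training and are omitted from this inference-only sketch.

```python
import numpy as np

def build_model(n_inputs, n_outputs, units1=64, units2=32, seed=0):
    # Initialize weights for: input -> units1 -> units2 -> output
    rng = np.random.default_rng(seed)
    scale = lambda fan_in: 1.0 / np.sqrt(fan_in)
    return {
        "W1": rng.normal(scale=scale(n_inputs), size=(n_inputs, units1)),
        "b1": np.zeros(units1),
        "W2": rng.normal(scale=scale(units1), size=(units1, units2)),
        "b2": np.zeros(units2),
        "W3": rng.normal(scale=scale(units2), size=(units2, n_outputs)),
        "b3": np.zeros(n_outputs),
    }

def predict(model, x):
    # Forward pass through both hidden layers (ReLU) and a linear output
    h1 = np.maximum(0.0, x @ model["W1"] + model["b1"])
    h2 = np.maximum(0.0, h1 @ model["W2"] + model["b2"])
    return h2 @ model["W3"] + model["b3"]
```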
Training Process
The training process is handled in the fit_regressor method.
The method prepares the target variables based on their types (numeric or categorical).
It sets up appropriate loss functions and metrics for each output.
If using TuneableNNRegressor, it performs hyperparameter tuning.
Otherwise, it creates and trains a single model with the specified parameters.
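The loss-selection step described above can be sketched as a small mapping from target type to loss: numeric targets get a regression loss, categorical targets a classification loss. The function and key names here are illustrative, not the project's actual code.

```python
def select_losses(target_types):
    # Map each output's type to an appropriate training loss
    losses = {}
    for name, kind in target_types.items():
        if kind == "numeric":
            losses[name] = "mse"  # regression output
        else:
            losses[name] = "categorical_crossentropy"  # classification output
    return losses

# Example: one regression target and one categorical target
losses = select_losses({"price": "numeric", "risk_bucket": "categorical"})
```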
Model Serialization
After training, the model is serialized and stored.
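One common way to serialize a trained model for storage is to pickle it and base64-encode the bytes so they can be kept as text (for example in a database field). This is a generic sketch under that assumption; the project's actual storage format is not specified here.

```python
import base64
import pickle

# Stand-in for a trained model object
model = {"weights": [1.0, 2.0, 3.0], "params": {"units1": 64, "units2": 32}}

# Serialize: object -> bytes -> base64 text suitable for storage
blob = base64.b64encode(pickle.dumps(model)).decode("ascii")

# Deserialize: base64 text -> bytes -> object
restored = pickle.loads(base64.b64decode(blob))
```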
Auto Mode and Hyperparameter Tuning
When auto_mode is enabled:
A TuneableNNRegressor object is created with a range of hyperparameters to try.
It performs a randomized search over the specified parameter distributions.
The best parameters found are saved and used for the final model.
The TuneableNNRegressor class:
Tries different hyperparameter combinations.
Uses early stopping to prevent overfitting.
Allows for interruption of the training process.
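A randomized search of this kind can be sketched in a few lines: draw random hyperparameter combinations, score each one, and keep the best. Here train_and_score is a placeholder for actually fitting a network with early stopping and returning its validation loss; the distributions shown are illustrative values, not the project's exact search space.

```python
import random

# Candidate values for each hyperparameter (illustrative)
param_distributions = {
    "units1": [32, 64, 128],
    "dropout_rate": [0.1, 0.2, 0.3],
    "learning_rate": [1e-2, 1e-3, 1e-4],
    "optimizer": ["adam", "rmsprop"],
}

def train_and_score(params):
    # Placeholder objective: a real implementation trains the model
    # (with early stopping) and returns the validation loss.
    return params["dropout_rate"] + params["learning_rate"]

random.seed(0)
best_params, best_loss = None, float("inf")
for _ in range(10):  # number of random draws (n_iter)
    params = {k: random.choice(v) for k, v in param_distributions.items()}
    loss = train_and_score(params)
    if loss < best_loss:
        best_params, best_loss = params, loss
# best_params is then saved and used to train the final model
```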
Multi-output Scenario
The Neural Network naturally handles multi-output scenarios:
The model's output layer is adjusted based on the number and type of target variables.
Appropriate loss functions are used for each output (e.g., MSE for regression, categorical crossentropy for classification).
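A two-headed output of this kind can be sketched in NumPy: a linear head for the regression target (trained with MSE) and a softmax head for the classification target (trained with categorical crossentropy), sharing one hidden representation. Shapes and names are illustrative only.

```python
import numpy as np

def softmax(z):
    # Numerically stable softmax over the last axis
    e = np.exp(z - z.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

rng = np.random.default_rng(0)
h = rng.normal(size=(2, 8))      # shared hidden features for 2 samples
W_reg = rng.normal(size=(8, 1))  # regression head -> 1 value
W_clf = rng.normal(size=(8, 3))  # classification head -> 3 classes

price_pred = h @ W_reg           # regression output (MSE loss)
class_probs = softmax(h @ W_clf) # classification output (crossentropy loss)
```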
Advantages and Limitations
Advantages:
Can capture complex non-linear relationships in data.
Flexible architecture suitable for various types of data and prediction tasks.
Handles multi-output scenarios naturally.
Limitations:
Can be computationally expensive to train, especially with large datasets.
Requires careful tuning of hyperparameters for optimal performance.
Less interpretable compared to simpler models.
Considerations when using LSTMs
Data Preprocessing: LSTMs typically require normalized input data. Ensure your financial time series data is properly scaled.
Sequence Length: Choose an appropriate sequence length that captures relevant patterns without introducing unnecessary noise.
Hyperparameter Tuning: The performance of LSTMs can be sensitive to hyperparameters. Key parameters to tune include the number of LSTM units, dropout rate, and learning rate.
Computational Resources: LSTMs can be computationally intensive, especially for long sequences or large datasets. Ensure you have adequate computational resources.
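The first two considerations can be sketched together: min-max scale the series, then slice it into fixed-length input sequences with a next-step target. The prices and the sequence length of 5 are arbitrary illustrative choices.

```python
import numpy as np

# Toy price series (illustrative data only)
prices = np.array([10.0, 11.0, 12.0, 11.5, 13.0, 12.5, 14.0, 13.5, 15.0, 14.5])

# Min-max normalization to [0, 1], as LSTMs typically expect scaled inputs
scaled = (prices - prices.min()) / (prices.max() - prices.min())

# Slice into overlapping windows of length seq_len, each predicting the next value
seq_len = 5
X = np.stack([scaled[i : i + seq_len] for i in range(len(scaled) - seq_len)])
y = scaled[seq_len:]  # target: the value right after each window
```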
By leveraging LSTMs in our neural network architecture, we can create powerful models capable of capturing complex temporal dependencies in financial time series data, leading to more accurate predictions and insights for Inverse Finance DAO contributors.