Inverse Watch Docs
Neural Network (LSTM)


Last updated 6 months ago


Neural Networks, particularly deep learning models like LSTMs, have gained significant traction in financial applications due to their ability to capture complex, non-linear relationships in data. They are especially powerful for time series prediction tasks, such as forecasting stock prices or cryptocurrency values, which are crucial for making informed financial decisions.

How it works

LSTMs

LSTMs are a specialized type of recurrent neural network (RNN) designed to handle sequential data and long-term dependencies in time series, making them well-suited for financial data modeling. Unlike traditional RNNs, which struggle with learning patterns over longer time intervals, LSTMs use memory cells and gates to store, update, and retrieve information over time, allowing them to preserve important temporal patterns.
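
The gating mechanism described above can be sketched numerically. The following is a minimal single-step LSTM cell in NumPy; the stacked weight layout and gate ordering are illustrative choices for this sketch, not the internals of any particular library:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def lstm_step(x_t, h_prev, c_prev, W, U, b):
    """One LSTM time step. W: (4h, d), U: (4h, h), b: (4h,).
    Gate order in the stacked weights: forget, input, candidate, output."""
    h = h_prev.shape[0]
    z = W @ x_t + U @ h_prev + b
    f = sigmoid(z[0:h])          # forget gate: what to drop from the cell state
    i = sigmoid(z[h:2*h])        # input gate: what new information to store
    g = np.tanh(z[2*h:3*h])      # candidate values for the cell state
    o = sigmoid(z[3*h:4*h])      # output gate: what to expose as the hidden state
    c_t = f * c_prev + i * g     # updated cell state (long-term memory)
    h_t = o * np.tanh(c_t)       # updated hidden state (short-term output)
    return h_t, c_t

rng = np.random.default_rng(0)
d, hdim = 3, 4                   # e.g. 3 features: price, volume, moving average
W = rng.normal(size=(4 * hdim, d))
U = rng.normal(size=(4 * hdim, hdim))
b = np.zeros(4 * hdim)
h, c = np.zeros(hdim), np.zeros(hdim)
for x_t in rng.normal(size=(5, d)):   # unroll over a 5-step sequence
    h, c = lstm_step(x_t, h, c, W, U, b)
print(h.shape)  # (4,)
```

Because the cell state `c_t` is carried forward additively, gradients survive across many steps, which is what lets LSTMs retain long-range patterns that plain RNNs lose.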

Key Components of LSTM Architecture:

  1. Input Layer: Receives the initial financial indicators or features (e.g., prices, volume, moving averages).

  2. Hidden LSTM Layers: Process the data through memory cells that capture temporal dependencies. These layers allow the model to understand sequences over time (e.g., how today's price depends on previous days).

  3. Output Layer: Produces the final prediction, such as the future price of an asset or a buy/sell signal.

Why LSTMs Are Suited for Financial Time Series

  • Temporal Dependencies: Financial data is sequential in nature. For example, the price of a stock today is influenced by its past values. LSTMs are particularly good at modeling these temporal relationships.

  • Handling Non-Linear Patterns: Financial markets are often driven by non-linear relationships, which LSTMs can capture more effectively than simpler models like linear regression.

  • Dealing with Long-Term Dependencies: LSTMs can remember long-term trends, which is essential for financial time series, where historical data can provide context for future predictions.


Initialization

The Neural Network model is initialized in the initialize_regressor method:

if self.regressor == 'NeuralNetwork':
    # Input shape: (sequence length, number of features)
    input_shape = (n_row, n_features)
    logging.info(f"Initializing NeuralNetwork with input_shape: {input_shape}")

    # Target metadata is stored as JSON strings in the model options
    target_types = json.loads(self.options.get('target_types', '{}'))
    target_encoders = json.loads(self.options.get('target_encoders', '{}'))

    # One output unit per numeric target; one unit per class for categorical targets
    output_shapes = []
    for target, target_type in target_types.items():
        if target_type == 'numeric':
            output_shapes.append(1)
        elif target_type == 'categorical':
            n_classes = len(target_encoders[target]['categories'])
            output_shapes.append(n_classes)

Key Components

  1. Model Creation:

    • A custom function create_nn_model is used to create the neural network architecture.

  2. Multi-output Support:

    • The model can handle multiple outputs, both for regression and classification tasks.

  3. Hyperparameter Tuning:

    • When auto_mode is enabled, we use a custom TuneableNNRegressor class for automated hyperparameter tuning.


Hyperparameters

The main hyperparameters for the Neural Network include:

  • epochs: Number of training epochs.

  • batch_size: Number of samples per gradient update.

  • units1: Number of units in the first hidden layer.

  • units2: Number of units in the second hidden layer.

  • dropout_rate: Dropout rate for regularization.

  • l2_reg: L2 regularization factor.

  • optimizer: Choice of optimizer ('adam' or 'rmsprop').

  • learning_rate: Learning rate for the optimizer.
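
As a rough illustration, these hyperparameters might be collected into a defaults dictionary that user-supplied options override. The parameter names come from the list above; the default values here are invented for the sketch:

```python
# Invented default values; only the parameter names come from the list above
DEFAULTS = {
    "epochs": 100,
    "batch_size": 32,
    "units1": 64,
    "units2": 32,
    "dropout_rate": 0.2,
    "l2_reg": 1e-4,
    "optimizer": "adam",      # or "rmsprop"
    "learning_rate": 1e-3,
}

def resolve_hyperparameters(options):
    """Merge user-supplied options over the defaults, ignoring unknown keys."""
    params = dict(DEFAULTS)
    params.update({k: v for k, v in options.items() if k in DEFAULTS})
    return params

params = resolve_hyperparameters({"units1": 128, "optimizer": "rmsprop"})
print(params["units1"], params["optimizer"])  # 128 rmsprop
```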


Training Process

The training process is handled in the fit_regressor method:

  1. The method prepares the target variables based on their types (numeric or categorical).

  2. It sets up appropriate loss functions and metrics for each output.

  3. If using our custom class TuneableNNRegressor, it performs hyperparameter tuning.

  4. Otherwise, it creates and trains a single model with the specified parameters.

After training, the model is serialized and stored.
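
Step 2 above can be sketched in isolation: pick a loss and metric for each output from its target type. The function name and the specific loss/metric strings below are illustrative, mirroring the logic described, not the exact implementation:

```python
import json

def losses_and_metrics(target_types):
    """Pick a loss and a metric per output based on its target type
    (illustrative sketch of the per-output setup described above)."""
    losses, metrics = {}, {}
    for target, ttype in target_types.items():
        if ttype == "numeric":
            losses[target] = "mse"                       # mean squared error
            metrics[target] = "mae"                      # mean absolute error
        elif ttype == "categorical":
            losses[target] = "categorical_crossentropy"
            metrics[target] = "accuracy"
    return losses, metrics

target_types = json.loads('{"price": "numeric", "signal": "categorical"}')
losses, metrics = losses_and_metrics(target_types)
print(losses)  # {'price': 'mse', 'signal': 'categorical_crossentropy'}
```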


Auto Mode and Hyperparameter Tuning

When auto_mode is enabled:

  1. A TuneableNNRegressor object is created with a range of hyperparameters to try.

  2. It performs a randomized search over the specified parameter distributions.

  3. The best parameters found are saved and used for the final model.

The TuneableNNRegressor class:

  • Tries different hyperparameter combinations.

  • Uses early stopping to prevent overfitting.

  • Allows for interruption of the training process.
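
The search loop at the heart of this can be sketched in pure Python. This is a hedged stand-in for what TuneableNNRegressor does, not its actual code: sample parameter combinations, score each, keep the best. In the real class, `train_fn` would train the network and return validation loss, with early stopping applied inside it:

```python
import random

def random_search(train_fn, param_dists, n_iter=10, seed=0):
    """Minimal randomized hyperparameter search: sample n_iter combinations
    from the given distributions and keep the one with the lowest loss.
    train_fn(params) -> validation loss is supplied by the caller."""
    rng = random.Random(seed)
    best_params, best_loss = None, float("inf")
    for _ in range(n_iter):
        params = {k: rng.choice(v) for k, v in param_dists.items()}
        loss = train_fn(params)
        if loss < best_loss:
            best_params, best_loss = params, loss
    return best_params, best_loss

# Toy objective standing in for actual model training
dists = {"units1": [32, 64, 128], "learning_rate": [1e-2, 1e-3, 1e-4]}
best, loss = random_search(
    lambda p: abs(p["units1"] - 64) + p["learning_rate"], dists, n_iter=20
)
print(best)
```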


Multi-output Scenario

The Neural Network naturally handles multi-output scenarios:

  1. The model's output layer is adjusted based on the number and type of target variables.

  2. Appropriate loss functions are used for each output (e.g., MSE for regression, categorical crossentropy for classification).
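
On the prediction side, each head's raw output must be turned back into a usable value. The sketch below assumes encoder metadata shaped like the `target_encoders` JSON read in the initialization snippet; the function name and example values are hypothetical:

```python
import json
import numpy as np

# Hypothetical encoder info, in the same shape the initializer reads above
target_encoders = json.loads('{"signal": {"categories": ["sell", "hold", "buy"]}}')

def decode_outputs(raw_outputs, target_types, target_encoders):
    """Turn raw per-head model outputs into predictions: numeric heads
    pass through as scalars, categorical heads take the argmax class."""
    decoded = {}
    for target, ttype in target_types.items():
        out = np.asarray(raw_outputs[target])
        if ttype == "numeric":
            decoded[target] = float(out.squeeze())
        else:
            cats = target_encoders[target]["categories"]
            decoded[target] = cats[int(out.argmax())]
    return decoded

pred = decode_outputs(
    {"price": [[101.5]], "signal": [[0.1, 0.2, 0.7]]},   # made-up raw outputs
    {"price": "numeric", "signal": "categorical"},
    target_encoders,
)
print(pred)  # {'price': 101.5, 'signal': 'buy'}
```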


Advantages and Limitations

Advantages:

  • Captures Complex Temporal Dependencies: LSTMs are excellent at understanding how current market conditions are influenced by past events.

  • Robust to Non-Linearities: Financial markets are inherently non-linear. LSTMs capture these relationships better than traditional models.

  • Multi-Output Scenarios: LSTMs handle multiple prediction tasks at once, such as predicting both price direction and volatility, using different outputs in the same model.

  • Flexibility: LSTMs can be used for various types of financial data (e.g., stock prices, cryptocurrency, trading volumes).

Limitations:

  • Computationally Expensive: Training LSTMs can be resource-intensive, especially with large datasets or long input sequences.

  • Hyperparameter Tuning: LSTMs require careful tuning of hyperparameters for optimal performance, which can be time-consuming.

  • Less Interpretable: Compared to simpler models, LSTMs are harder to interpret, making it difficult to understand the reasoning behind predictions.


Considerations when using LSTMs

  1. Data Preprocessing: LSTMs typically require normalized input data. Ensure your financial time series data is properly scaled.

  2. Sequence Length: Choose an appropriate sequence length that captures relevant patterns without introducing unnecessary noise.

  3. Hyperparameter Tuning: The performance of LSTMs can be sensitive to hyperparameters. Key parameters to tune include the number of LSTM units, dropout rate, and learning rate.

  4. Computational Resources: LSTMs can be computationally intensive, especially for long sequences or large datasets. Ensure you have adequate computational resources.
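
Points 1 and 2 above can be sketched together: scale the series, then slice it into fixed-length windows that serve as LSTM input sequences, each paired with the next value as its target. This is a minimal NumPy sketch (min-max scaling is one common choice; it assumes a non-constant series):

```python
import numpy as np

def make_windows(series, seq_len):
    """Min-max scale a 1-D series to [0, 1], then slice overlapping windows
    of length seq_len as inputs, with the next value as the target."""
    s = np.asarray(series, dtype=float)
    s = (s - s.min()) / (s.max() - s.min())         # scale to [0, 1]
    X = np.stack([s[i:i + seq_len] for i in range(len(s) - seq_len)])
    y = s[seq_len:]                                  # one-step-ahead target
    return X, y

prices = [10, 11, 12, 11, 13, 14, 13, 15]            # toy price series
X, y = make_windows(prices, seq_len=3)
print(X.shape, y.shape)  # (5, 3) (5,)
```

The choice of `seq_len` embodies consideration 2: too short and the model misses relevant context, too long and training slows while noise creeps in.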

By leveraging LSTMs in our neural network architecture, we can create powerful models capable of capturing complex temporal dependencies in financial time series data, leading to more accurate predictions and insights.

LSTM vs RNN