Data Engineering
The data & feature engineering process is implemented in the feature_engineering
method. The target extraction process is handled by the extract_targets
method, which prepares the target variables for training.
These two methods handle various data types, encode categorical variables, scale numeric variables, and apply dimensionality reduction using an autoencoder.
Data Type Identification
The methods identify the data type of each feature / target:
Unix timestamps
Ethereum addresses
Categorical features
Numeric features
| Data type | Features | Targets |
| --- | --- | --- |
| Unix timestamps | ✅ | ❌ |
| Ethereum addresses | ✅ | ✅ |
| Categorical features | ✅ | ✅ |
| Numeric features | ✅ | ✅ |
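Detection of this kind is typically done with simple per-column heuristics. The sketch below is an assumption about how it might work, not the project's actual rules; the Ethereum-address pattern and the timestamp range are illustrative:

```python
import pandas as pd

def identify_type(series: pd.Series) -> str:
    """Heuristic data-type detection for a single column (illustrative)."""
    if pd.api.types.is_numeric_dtype(series):
        # Values in a plausible unix-second range (roughly 2001-2033)
        if series.dropna().between(1e9, 2e9).all():
            return "unix_timestamp"
        return "numeric"
    strings = series.dropna().astype(str)
    # Ethereum addresses: "0x" followed by 40 hex characters
    if len(strings) and strings.str.fullmatch(r"0x[a-fA-F0-9]{40}").all():
        return "ethereum_address"
    return "categorical"
```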
Timestamp Feature Extraction
For Unix timestamps, the method extracts several time-based features:
Year, month, day, hour, day of week
Cyclical encoding for month, day, and hour
Example of cyclical encoding:
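A minimal sketch of both steps; the column name ts and the sample values are assumptions:

```python
import numpy as np
import pandas as pd

df = pd.DataFrame({"ts": [1700000000, 1700086400]})  # unix seconds
dt = pd.to_datetime(df["ts"], unit="s")

# Plain calendar features
df["year"], df["month"], df["day"] = dt.dt.year, dt.dt.month, dt.dt.day
df["hour"], df["day_of_week"] = dt.dt.hour, dt.dt.dayofweek

# Cyclical encoding: sine/cosine pairs keep boundary values adjacent
# (hour 23 ends up close to hour 0, December close to January)
for col, period in [("month", 12), ("day", 31), ("hour", 24)]:
    df[f"{col}_sin"] = np.sin(2 * np.pi * df[col] / period)
    df[f"{col}_cos"] = np.cos(2 * np.pi * df[col] / period)
```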
Categorical Encoding
One-hot encoding transforms a categorical variable into binary columns, one per category. Categorical features and targets, including Ethereum addresses, are encoded this way:
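A minimal sketch using scikit-learn's OneHotEncoder; the library choice, column name, and shortened address values are assumptions, as the page only states that one-hot encoding is applied:

```python
import pandas as pd
from sklearn.preprocessing import OneHotEncoder

df = pd.DataFrame({"address": ["0xA1", "0xB2", "0xA1"]})  # shortened illustrative values

# handle_unknown="ignore" maps categories unseen at fit time
# (e.g. a brand-new address) to an all-zero row at prediction time.
encoder = OneHotEncoder(sparse_output=False, handle_unknown="ignore")  # sklearn >= 1.2
encoded = encoder.fit_transform(df[["address"]])  # shape (3, 2): one column per category
```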
Numeric Feature Scaling
Numeric features and targets are scaled with StandardScaler to ensure consistent model input:
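A minimal sketch; the data is a placeholder:

```python
import numpy as np
from sklearn.preprocessing import StandardScaler

X_train = np.array([[1.0, 200.0], [2.0, 300.0], [3.0, 400.0]])
X_new = np.array([[1.5, 250.0]])

scaler = StandardScaler()
# Fit only on the training data; the stored scaler is reused at prediction time
X_train_scaled = scaler.fit_transform(X_train)
X_new_scaled = scaler.transform(X_new)
```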
Autoencoder for Dimensionality Reduction
An autoencoder is used to reduce the dimensionality of the feature space:
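A minimal Keras sketch of the idea; the framework choice, layer sizes, and activations are assumptions, since the page does not specify the actual architecture:

```python
import tensorflow as tf
from tensorflow.keras import layers, models

def build_autoencoder(input_dim: int, latent_dim: int):
    inputs = tf.keras.Input(shape=(input_dim,))
    # Encoder: compress the features into the latent representation
    hidden = layers.Dense(input_dim // 2, activation="relu")(inputs)
    latent = layers.Dense(latent_dim, activation="relu")(hidden)
    # Decoder: reconstruct the original feature vector from the latent code
    hidden = layers.Dense(input_dim // 2, activation="relu")(latent)
    outputs = layers.Dense(input_dim, activation="linear")(hidden)

    autoencoder = models.Model(inputs, outputs)
    encoder = models.Model(inputs, latent)  # used alone after training
    autoencoder.compile(optimizer="adam", loss="mse")
    return autoencoder, encoder
```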
An autoencoder is an artificial neural network used for unsupervised learning, primarily for dimensionality reduction and feature learning. It learns an efficient representation of the input data by training the network to ignore noise and irrelevant variation while preserving important features. The architecture consists of two main parts:
Encoder: The encoder compresses the input data into a lower-dimensional space (latent representation), reducing its dimensionality while retaining critical information.
Decoder: The decoder reconstructs the input data from the compressed representation, attempting to generate an output as similar as possible to the original input.
The autoencoder is trained by minimizing the difference between the input and the reconstructed output, often using a loss function like Mean Squared Error (MSE).
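The standard form of that loss, for reference:

$$\mathcal{L}_{\text{MSE}} = \frac{1}{n}\sum_{i=1}^{n} \lVert x_i - \hat{x}_i \rVert^2$$

where $x_i$ is an input vector and $\hat{x}_i$ is its reconstruction produced by the decoder.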
How an Autoencoder Works
Input Layer: The raw features from the dataset are fed into the input layer.
Encoding: The encoder, typically composed of fully connected layers, compresses the input data into a smaller representation by learning important features and discarding redundant information. For example, a layer might shrink the number of input features by half.
Latent Space: This compressed representation, also called the latent space, captures the most critical features needed for reconstructing the input.
Decoding: The decoder attempts to expand the latent space representation back to the original input feature size, aiming to recreate the input data as closely as possible.
Training: The network is trained to minimize the reconstruction loss (difference between the original input and the reconstructed output), gradually improving the quality of the compression.
By using an autoencoder, we reduce the dimensionality of the feature space, which helps in retaining only the most relevant features and discarding noise, making downstream tasks like prediction more efficient.
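Continuing the sketch above, training and extracting the reduced features might look like this; the data, epoch count, and batch size are placeholders:

```python
import numpy as np

X_scaled = np.random.rand(1000, 64).astype("float32")  # stand-in for the scaled features

autoencoder, encoder = build_autoencoder(input_dim=64, latent_dim=16)

# The network learns to reproduce its own input, so X is both input and target
autoencoder.fit(X_scaled, X_scaled, epochs=50, batch_size=32, validation_split=0.1)

# Keep only the compressed representation for downstream models
X_reduced = encoder.predict(X_scaled)  # shape (1000, 16)
```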
Multi-target Handling
The extract_targets method can handle multiple targets, combining them into a single 2D NumPy array.
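For example, combining two hypothetical targets:

```python
import numpy as np

price = np.array([1.2, 1.4, 1.1])     # illustrative target 1
volume = np.array([10.0, 12.0, 9.0])  # illustrative target 2

# One column per target -> shape (n_samples, n_targets)
y = np.column_stack([price, volume])  # shape (3, 2)
```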
Usage
Both methods are called during the model training process:
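A hypothetical call sequence; only the method names come from this page, and the signatures below are assumptions:

```python
# Training: fit the encoders, scalers, and autoencoder, then transform the data
X = model.feature_engineering(train_df, training=True)
y = model.extract_targets(train_df)

# Prediction: reuse the stored transformers on new data
X_new = model.feature_engineering(new_df, training=False)
```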
Advantages
Handles both training and prediction modes
Stores the fitted encoders, scalers, and other transformation state, so feature and target engineering stays consistent at prediction time
Calculates sample weights that give more importance to recent observations (a sketch follows this list)
Handles various data types automatically
Applies the appropriate transformation for each data type
Supports multi-target scenarios
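A minimal sketch of recency weighting via exponential decay; the decay rate is an illustrative assumption, as the page does not state the actual weighting scheme:

```python
import numpy as np

def recency_weights(n_samples: int, decay: float = 0.995) -> np.ndarray:
    # Oldest row gets decay**(n_samples - 1); the newest gets decay**0 == 1
    weights = decay ** np.arange(n_samples - 1, -1, -1)
    return weights / weights.sum()  # normalize so the weights sum to 1
```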