
12 posts tagged with "AI"


Adapting Stock Forecasts with AI

· 7 min read
Vadim Nicolai
Senior Software Engineer at Vitrifi

Introduction

Financial markets are dynamic: price trends, volatility, and patterns constantly change. These shifts in data distribution, commonly called concept drift, pose a serious challenge for AI models trained on historical data. When the market regime changes—such as transitioning from a calm to a volatile environment—a “stale” model can drastically lose predictive power.

DDG-DA (Data Distribution Generation for Predictable Concept Drift Adaptation) addresses this by forecasting how the data distribution might evolve in the future, instead of only reacting to the most recent data. The approach is rooted in meta-learning (via Qlib’s Meta Controller framework) and helps trading or investment models stay ahead of new trends.

By the end of this article, you will understand:

  1. Why concept drift complicates forecasting in stocks and other financial time series
  2. How DDG-DA uses a future distribution predictor to resample training data
  3. How to incorporate this into Qlib-based workflows to improve stock return and risk-adjusted performance

Concept Drift in Stock Markets

Concept drift refers to changes in the underlying distribution of stock market data. These changes can manifest in multiple ways:

  • Trends: Bull or bear markets can shift faster or slower than expected
  • Volatility: Sudden spikes can invalidate models calibrated during calmer periods
  • Patterns: Market microstructure changes or new correlations can emerge, causing old patterns to wane

Traditional methods often react after drift appears (by retraining on recent data). However, if the drift is somewhat predictable, we can model its trajectory—and proactively train models on future conditions before they fully materialize.

Diagram: Concept Drift Overview

Here, a continuous market data stream (A) encounters distribution shifts (B). These can appear as new trends (C), volatility regimes (D), or changed patterns (E). As a result, a previously trained model (F) gradually loses accuracy (G) if not adapted.


DDG-DA: High-Level Approach

The core principle behind DDG-DA is to forecast the distribution shift itself. Specifically:

  1. Predict Future Distributions

    • A meta-model observes historical tasks (for example, monthly or daily tasks in which you train a new stock-prediction model).
    • This meta-model estimates how the data distribution might move in the next period, such as anticipating an uptick in volatility or a shift in factor exposures.
  2. Generate Synthetic Training Samples

    • Using the distribution forecast, DDG-DA resamples historical data to emulate the expected future conditions.
    • It might assign higher weights to periods with similar volatility or market conditions so the final training set reflects what the market might soon become.
  3. Train or Retrain the Forecasting Model

    • Your usual forecasting model (for example, LightGBM or LSTM) is then retrained on these forward-looking samples, aligning better with the next period’s actual data distribution.
    • As a result, the model remains more accurate when concept drift occurs.

Diagram: DDG-DA Core Steps

This process repeats periodically (for example, each month) to keep your forecasting models aligned with upcoming market conditions.
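
To make the resampling step concrete, here is a minimal, self-contained sketch of the idea (not Qlib's actual implementation): a meta-model's volatility forecast weights each historical window by similarity, and the normalized weights feed the downstream model as sample weights. The function names and the volatility-only similarity measure are illustrative assumptions.

import numpy as np

def similarity_weights(hist_vol, predicted_vol, tau=0.05):
    """Weight each historical window by how close its realized volatility
    is to the predicted next-period volatility (illustrative only)."""
    return np.exp(-np.abs(hist_vol - predicted_vol) / tau)

# Toy data: realized volatility of 24 past monthly windows
hist_vol = np.random.uniform(0.1, 0.4, size=24)

# Suppose the meta-model forecasts a volatile month ahead
predicted_vol = 0.30

w = similarity_weights(hist_vol, predicted_vol)
w = w / w.sum()  # normalize into sampling weights

# These weights would then be passed to the forecasting model,
# e.g. model.fit(X, y, sample_weight=w)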


How It Integrates with Qlib

Qlib provides an AI-oriented Quantitative Investment Platform that handles:

  • Data: Collecting and structuring historical pricing data, factors, and fundamentals
  • Modeling: Building daily or intraday forecasts using built-in ML or custom models
  • Meta Controller: A specialized component for tasks like DDG-DA, which revolve around higher-level meta-learning and distribution adaptation

Diagram: Qlib plus DDG-DA Integration

  1. Qlib Data Layer (A): Feeds into a standard ML pipeline for daily or intraday forecasting (B).
  2. DDG-DA sits in the Meta Controller (C), analyzing tasks, predicting distribution changes, and adjusting the pipeline.
  3. Results circle back into Qlib for backtesting and analysis (D).

Example: Monthly Stock Trend Forecasting

  1. Setting the Tasks

    • Suppose you update your stock-ranking model every month, using the last 2 years of data.
    • Each month is a “task” in Qlib. Over multiple months, you get a series of tasks for training and validation.
  2. Train the Meta-Model

    • DDG-DA learns a function that maps old data distribution patterns to new sample weights.
    • The reweighted training set then lands closer to the distribution the model will actually face next month.
  3. Evaluate

    • Compare the results to standard approaches:
      • Rolling Retrain: Only uses the most recent data, ignoring the predictable drift pattern
      • Gradual Forgetting: Weighted by how recent data is, but no direct distribution forecast
      • DDG-DA: Weighs data by predicted future distribution, leading to stronger alignment when drift is not purely random

Diagram: Monthly Task Workflow
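
To generate such a series of tasks programmatically, Qlib ships a task generator that expands one task template into many rolling tasks. The sketch below uses RollingGen from qlib.workflow.task.gen; the template is trimmed for brevity (a real task also specifies the dataset handler and model kwargs), so treat the exact fields as assumptions.

from qlib.workflow.task.gen import RollingGen

# Trimmed task template: a linear model trained on a rolling window.
task_template = {
    "model": {"class": "LinearModel", "module_path": "qlib.contrib.model.linear"},
    "dataset": {
        "class": "DatasetH",
        "module_path": "qlib.data.dataset",
        "kwargs": {
            "segments": {
                "train": ("2018-01-01", "2019-12-31"),
                "test": ("2020-01-01", "2020-01-31"),
            }
        },
    },
}

# Slide the train/test segments forward roughly one month at a time
# (step is measured in trading days), yielding one task per period.
gen = RollingGen(step=20, rtype=RollingGen.ROLL_SD)
tasks = gen.generate(task_template)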


Performance and Findings

Research in the associated DDG-DA paper and Qlib examples shows:

  • Better Signal Quality: Higher Information Coefficient (IC) for stock selection
  • Enhanced Portfolio Returns: Larger annual returns, improved Sharpe Ratio, and lower drawdowns in backtests
  • Versatility: Works with a wide range of ML models (Linear, LightGBM, neural networks)
  • Limitations: If concept drift is completely random or abrupt (no pattern), DDG-DA’s advantages diminish. Predictability is key

Diagram: Performance Improvement


Practical Steps

  1. Install Qlib and ensure you have the dataset (for example, Alpha158) set up
  2. Clone the DDG-DA Example from the Qlib GitHub:
    git clone https://github.com/microsoft/qlib.git
    cd qlib/examples/benchmarks_dynamic/DDG-DA
  3. Install Requirements:
    pip install -r requirements.txt
  4. Run the Workflow:
    python workflow.py run
    • By default, it uses a simple linear forecasting model
    • To use LightGBM or another model, specify the --conf_path argument, for example:
      python workflow.py --conf_path=../workflow_config_lightgbm_Alpha158.yaml run
  5. Analyze Results:
    • Qlib’s recorder logs signal metrics (IC, ICIR) and backtest performance (annual return, Sharpe)
    • Compare with baseline methods (Rolling Retrain, Exponential Forgetting, etc.)

Diagram: Running DDG-DA Workflow


Conclusion

DDG-DA shows how AI can proactively tackle concept drift in stock forecasting. Instead of merely reacting to new data, it anticipates potential distribution changes, producing a more robust, forward-looking training set. When integrated into Qlib’s Meta Controller, it seamlessly fits your existing pipelines, from data ingestion to backtesting.

For practical use:

  • Ensure your market conditions exhibit some predictability. Random, sudden changes are harder to model
  • Combine with conventional best practices (risk management, hyperparameter tuning) for a holistic pipeline
  • Monitor performance: If drift patterns shift, you may need to retrain or retune the DDG-DA meta-model

By forecasting future market states and adapting ahead of time, DDG-DA helps your quantitative strategies remain agile and profitable in evolving financial environments.


Further Reading and References
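
  • Li, W., Yang, X., Liu, W., Xia, Y., & Bian, J. “DDG-DA: Data Distribution Generation for Predictable Concept Drift Adaptation.” AAAI 2022. https://arxiv.org/abs/2201.04038
  • Qlib (Meta Controller and DDG-DA example) on GitHub: https://github.com/microsoft/qlib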

Happy (adaptive) trading!

Leveraging Qlib and MLflow for Unified Experiment Tracking

· 5 min read
Vadim Nicolai
Senior Software Engineer at Vitrifi

Introduction

Financial markets present a dynamic environment where active research and experimentation are critical. Qlib offers a complete “AI-oriented” solution for quantitative investment—covering data loaders, feature engineering, model training, and evaluation. Meanwhile, MLflow provides robust functionality for experiment tracking, handling metrics, artifacts, and hyperparameters across multiple runs.

This article shows how to integrate Qlib and MLflow to manage your entire workflow—from data ingestion and factor engineering to model storage and versioning—under a single, unified experiment system. It also demonstrates various ways to emphasize notes or warnings to help readers explore the complexities of this setup.

By the end of this article, you will learn:

  1. How Qlib manages data and modeling in a typical quant workflow
  2. How MLflow tracks experiment artifacts, logs metrics, and organizes multiple runs
  3. How to integrate Qlib’s “Recorder” concept with MLflow’s tracking

1. Qlib Overview

Qlib is a powerful open-source toolkit designed for AI-based quantitative investment. It streamlines common challenges in this domain:

  • Data Layer: Standardizes daily or intraday bars, fundamental factors, and alpha signals
  • Feature Engineering: Offers an expression engine (alpha modeling) plus factor definitions
  • Modeling: Easily pluggable ML models (LightGBM, Linear, RNN, etc.) with out-of-the-box training logic
  • Evaluation and Backtest: Includes modules for analyzing signals, computing IC/RankIC, and running trading strategies in a backtest simulator

Diagram: Qlib Architecture

Below is a high-level view of Qlib’s architecture—how data flows from raw sources into Qlib’s data handlers, transforms into features, and ultimately fuels model training.

note

Some Qlib features—like intraday data handling or advanced factor expressions—may require additional configuration. Double-check your data paths and environment setup to ensure all pieces are properly configured.
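
As a quick taste of the data layer and expression engine, the sketch below computes two factors on the fly. It assumes the public daily dataset has already been downloaded to the default path:

import qlib
from qlib.data import D

# Assumes the community daily dataset is installed at the default location
qlib.init(provider_uri="~/.qlib/qlib_data/cn_data", region="cn")

# The expression engine evaluates factor formulas directly:
# a one-day return and a 20-day moving average of close.
df = D.features(
    instruments=["SH600000"],
    fields=["$close/Ref($close, 1) - 1", "Mean($close, 20)"],
    start_time="2020-01-01",
    end_time="2020-06-30",
    freq="day",
)
print(df.head())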


2. MLflow Overview

MLflow is an experiment-tracking tool that organizes runs and artifacts:

  • Tracking: Logs params, metrics, tags, and artifacts (model checkpoints, charts)
  • UI: A local or remote interface (mlflow ui) for comparing runs side by side
  • Model Registry: Version controls deployed models, enabling easy rollback or re-deployment

Diagram: MLflow Overview

warning

When configuring MLflow on remote servers, remember to secure the tracking server appropriately. Unsecured endpoints may expose logs and artifacts to unintended parties.
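
Independent of Qlib, MLflow’s core tracking API is only a few calls; the sketch below logs a parameter, two metrics, and (optionally) an artifact to a local folder. The URI and names are placeholders:

import mlflow

# Point MLflow at a local folder (or a secured remote tracking server)
mlflow.set_tracking_uri("file:./mlruns")
mlflow.set_experiment("demo-experiment")

with mlflow.start_run(run_name="baseline"):
    mlflow.log_param("learning_rate", 0.05)
    mlflow.log_metric("sharpe", 1.2)
    mlflow.log_metric("ic", 0.03)
    # mlflow.log_artifact("predictions.csv")  # attach any local file

# Then compare runs side by side with: mlflow ui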


3. Combining Qlib and MLflow

In typical usage, Qlib handles data ingestion, feature transformations, and model training. MLflow complements it by capturing:

  1. Run Metadata: Each Qlib “Recorder” maps to an MLflow run
  2. Metrics & Params: Qlib logs metrics like Sharpe Ratio or Information Coefficient (IC); MLflow’s UI centralizes them
  3. Artifacts: Saved model files, prediction results, or charts are stored in MLflow’s artifact repository

Diagram: Qlib + MLflow Integration

Below is a top-down diagram showing how user code interacts with Qlib, which in turn leverages MLflow for run logging.


4. Minimal Example

Here’s a simplified script showing how the two tools work together:

import qlib
from qlib.workflow import R
from qlib.utils import init_instance_by_config

# 1) Init Qlib with MLflow as the experiment manager
qlib.init(
    exp_manager={
        "class": "MLflowExpManager",
        "module_path": "qlib.workflow.expm",
        "kwargs": {
            "uri": "file:/path/to/mlruns",
            "default_exp_name": "QlibExperiment",
        },
    }
)

# 2) Start experiment and train
with R.start(experiment_name="QlibExperiment", recorder_name="run1"):
    # Basic config
    model_config = {"class": "LightGBMModel", "kwargs": {"learning_rate": 0.05}}
    dataset_config = {...}  # elided: dataset/handler config

    model = init_instance_by_config(model_config)
    dataset = init_instance_by_config(dataset_config)
    model.fit(dataset)

    # Evaluate
    predictions = model.predict(dataset)

    # Log some metrics
    R.log_metrics(Sharpe=1.2, IC=0.03)

    # Save artifacts
    R.save_objects(**{"pred.pkl": predictions, "trained_model.pkl": model})
info

The snippet above logs metrics like Sharpe or IC, making them easily comparable across multiple runs. You can further log hyperparameters via R.log_params(...) for more granular comparisons.

Results:

  • A new MLflow run named “run1” under “QlibExperiment”
  • MLflow logs parameters/metrics (learning_rate, Sharpe, IC)
  • Artifacts “pred.pkl” and “trained_model.pkl” appear in MLflow’s artifact UI

5. Best Practices

  1. Organize Qlib tasks: Use Qlib’s SignalRecord or PortAnaRecord classes to store signals/backtest results, ensuring logs are automatically tied to MLflow runs
  2. Parameter Logging: Send hyperparameters or relevant config to R.log_params(...) for easy comparison in MLflow
  3. Artifact Naming: Keep artifact names consistent (e.g., "pred.pkl") across multiple runs
  4. Model Registry: Consider pushing your best runs to MLflow’s Model Registry for versioned deployment
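
Practices 1 and 2 look roughly like this inside a run; the snippet reuses the model and dataset from the earlier example and is a sketch rather than a complete workflow:

from qlib.workflow import R
from qlib.workflow.record_temp import SignalRecord

with R.start(experiment_name="QlibExperiment", recorder_name="run2"):
    # Practice 2: log hyperparameters for easy comparison in MLflow
    R.log_params(learning_rate=0.05, train_window="2y")

    model.fit(dataset)  # model/dataset as constructed in the earlier example

    # Practice 1: SignalRecord stores predictions as artifacts tied to this run
    recorder = R.get_recorder()
    sr = SignalRecord(model, dataset, recorder)
    sr.generate()
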
danger

A mismatch between your local Qlib environment and remote MLflow server can cause logging errors. Ensure both environments are in sync (same Python versions, same library versions).


6. Conclusion

By connecting Qlib’s experiment pipeline to MLflow’s tracking features—and documenting everything thoroughly—you get the best of both worlds:

  • Qlib: AI-centric quant platform automating data handling, factor engineering, and modeling
  • MLflow: A robust interface for comparing runs, storing artifacts, and version-controlling the entire process

This synergy simplifies large-scale experimentation—especially when you frequently iterate over factor definitions, hyperparameters, or new trading strategies.


Further Reading and References
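
  • Qlib documentation: https://qlib.readthedocs.io
  • MLflow documentation: https://mlflow.org/docs/latest/index.html
  • Qlib on GitHub: https://github.com/microsoft/qlib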

Happy experimenting!

Qlib’s Nested Execution for High-Frequency Trading with AI

· 6 min read
Vadim Nicolai
Senior Software Engineer at Vitrifi

Introduction

High-Frequency Trading (HFT) involves handling large volumes of orders at extremely high speeds—often measured in microseconds or milliseconds. AI (machine learning and reinforcement learning, in particular) has become pivotal in capturing fleeting market opportunities and managing real-time decisions in these ultra-fast trading environments.

In Qlib, the Nested Decision Execution Framework simplifies building multi-level HFT strategies, allowing a high-level (daily or weekly) strategy to nest an intraday (or sub-intraday) executor or sub-workflow. This design enables realistic joint backtesting: daily portfolio selection and intraday HFT execution interact seamlessly, ensuring that real slippage, partial fills, and transaction costs are accurately accounted for.

By the end of this guide, you’ll understand:

  1. How Qlib structures multi-level workflows (daily vs. intraday).
  2. How AI techniques (supervised and reinforcement learning) slot into Qlib’s design.
  3. How to set up an Executor sub-workflow for high-frequency order splitting and real-time decision-making.

Multi-Level Strategy Workflow

Below is an overview diagram (adapted from Qlib’s documentation) depicting how daily strategies can nest intraday sub-strategies or RL agents:

  • Daily Strategy: Generates coarse decisions (e.g., “Buy X shares by day’s end”).
  • Executor: Breaks decisions into smaller actions. Within it, a Reinforcement Learning policy (or any other AI model) can run at minute or sub-minute intervals.
  • Simulator/Environment: Provides intraday data, simulates order fills/slippage, and feeds rewards back to the RL policy.

This nesting allows realistic interaction between daily allocation goals and intraday fill performance.


Key Components

1. Information Extractor (Intraday)

For HFT, Qlib can store data at 1-minute intervals, or even tick/orderbook-level data, using specialized backends (e.g., Arctic). An example below shows how Qlib can manage non-fixed-frequency records:

# Example snippet from qlib/examples/orderbook_data
# Download sample data, then import into your local mongo or Arctic DB
python create_dataset.py initialize_library
python create_dataset.py import_data

Once imported, intraday/tick data can be accessed by Qlib’s normal data APIs for feature engineering or direct RL state representation.
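
For instance, once 1-minute bars have been imported into Qlib’s format, the same expression API works at the higher frequency (a sketch; the instrument, dates, and a prior qlib.init are assumed):

from qlib.data import D

# Assumes qlib.init(...) has pointed at a provider with 1-minute data
minute_bars = D.features(
    instruments=["SH600000"],
    fields=["$close", "$volume"],
    start_time="2020-01-02 09:30:00",
    end_time="2020-01-02 15:00:00",
    freq="1min",
)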


2. Forecast Model (Intraday + Daily)

A single Qlib workflow can hold multiple forecast models:

  • Daily Model: Predicts overnight returns or daily alpha (e.g., LightGBM on daily bars).
  • Intraday Model: Predicts short-term (minutes/seconds) price movements. This might be a small neural net or an RL policy evaluating states like order-book depth, spread, volume patterns, etc.

Qlib’s reinforcement learning interface (QlibRL) can also handle advanced models:

  • Policy: Learns from reward signals (e.g., PnL, transaction costs, slippage).
  • Action Interpreter: Converts policy actions into actual orders.

3. Decision Generator (Daily vs. Intraday)

Daily Decision Generator might produce a target portfolio:

Stock A: +5% allocation
Stock B: -2% allocation

Intraday Decision Generator (within the Executor) can then split these top-level instructions into multiple smaller trades. For example, an RL policy might decide to buy 2% of Stock A during the opening auction, 1% during midday, and 2% near closing, based on real-time microprice signals.
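
The simplest baseline for this splitting is a fixed schedule, as in the toy sketch below; an RL policy would replace the static fractions with state-dependent choices driven by those microprice signals:

def split_daily_order(total_shares,
                      schedule=(("open", 0.4), ("midday", 0.2), ("close", 0.4))):
    """Split a parent order into child orders on a fixed intraday schedule.
    An RL policy would instead size each slice from live market state."""
    return [(window, round(total_shares * frac)) for window, frac in schedule]

print(split_daily_order(5000))
# [('open', 2000), ('midday', 1000), ('close', 2000)]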


4. Executor & Sub-workflow (Nested)

Executor is where the nested approach truly shines. It wraps a more granular intraday or high-frequency sub-strategy.

This sub-workflow can be as simple as scheduling trades evenly or as advanced as an RL policy that:

  1. Observes short-term price movement.
  2. Acts to minimize slippage and transaction cost.
  3. Receives reward signals from the environment (filled shares, average fill price vs. VWAP, etc.).

5. Environment & Simulator

When applying Reinforcement Learning, Qlib uses an Environment wrapper:

  1. State: Intraday features (latest LOB data, partial fill stats).
  2. Action: The RL agent chooses to place a limit order, market order, or skip.
  3. Reward: Often the negative cost of trading or realized PnL improvement.

You can leverage Qlib’s built-in simulators or customize them for specific market microstructures.
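
Putting state, action, and reward together, an environment skeleton might look like the sketch below. The class and its toy fill model are illustrative assumptions, not QlibRL’s actual interfaces:

class OrderExecutionEnv:
    """Minimal intraday execution environment (illustrative only)."""

    def reset(self, total_shares):
        self.remaining = total_shares
        self.step_idx = 0
        return self._state()

    def _state(self):
        # In practice: latest order-book snapshot, partial-fill stats, time left
        return {"remaining": self.remaining, "step": self.step_idx}

    def step(self, action):
        # action: e.g. {"type": "limit", "qty": 500} or {"type": "skip"}
        filled, cost = self._simulate_fill(action)
        self.remaining -= filled
        self.step_idx += 1
        reward = -cost  # e.g. negative slippage/fees vs. a VWAP benchmark
        done = self.remaining <= 0 or self.step_idx >= 240  # 240 one-minute steps
        return self._state(), reward, done

    def _simulate_fill(self, action):
        # Toy fill model; swap in Qlib's simulator or a custom microstructure model
        if action.get("type") == "skip":
            return 0, 0.0
        qty = min(action.get("qty", 0), self.remaining)
        return qty, 0.0001 * qty  # pretend cost grows linearly with size

env = OrderExecutionEnv()
state = env.reset(1000)
state, reward, done = env.step({"type": "limit", "qty": 500})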


Example Workflow Snippets

Here’s a high-level script illustrating a daily + intraday nested setup. (Pseudocode for demonstration only.)

# daily_intraday_workflow.py
# Pseudocode for demonstration only: your_daily_signal, trading_calendar,
# compute_target_positions, and RLOrderExecPolicy are placeholders.

import qlib
from qlib.rl.order_execution_policy import RLOrderExecPolicy  # hypothetical path
from qlib.strategy.base import BaseStrategy


class DailyAlphaStrategy(BaseStrategy):
    """Generates daily-level decisions (which stocks to buy/sell)."""

    def generate_trade_decision(self, *args, **kwargs):
        # Imagine we have daily predictions from a model...
        scores = self.signal.get_signal()  # daily alpha scores
        # Then produce a dictionary {stock: weight or shares}
        decisions = compute_target_positions(scores)
        return decisions


class NestedExecutor:
    """Executor that calls an intraday RL sub-strategy for each daily decision."""

    def __init__(self, intraday_policy):
        self.intraday_policy = intraday_policy

    def execute_daily_decision(self, daily_decision):
        # Suppose daily_decision = {'AAPL': +100 shares, 'MSFT': +50 shares};
        # we break it into sub-orders via the RL policy
        for stock, shares in daily_decision.items():
            # RL agent decides how to place those shares intraday
            self.intraday_policy.run_execution(stock, shares)


def main():
    qlib.init(provider_uri="your_data_path")  # local data or remote server

    daily_strategy = DailyAlphaStrategy(signal=your_daily_signal)
    intraday_policy = RLOrderExecPolicy()  # RL policy built with QlibRL
    executor = NestedExecutor(intraday_policy=intraday_policy)

    # Hypothetical daily loop
    for date in trading_calendar:
        daily_decision = daily_strategy.generate_trade_decision()
        executor.execute_daily_decision(daily_decision)


if __name__ == "__main__":
    main()

Notes:

  • DailyAlphaStrategy uses a daily alpha model for stock scoring.
  • NestedExecutor calls RLOrderExecPolicy, which runs intraday steps.
  • Real code will handle position objects, trade calendars, and backtest frameworks in more detail.

Practical Tips for HFT + AI

  1. Data Freshness: HFT signals must be updated almost in real-time. Ensure your Qlib data pipeline is either streaming or as close to real-time as possible.
  2. Latency Considerations: Real HFT in production must address network latency and order routing. Qlib’s framework focuses on backtesting or simulation; integrating actual exchange connectivity is non-trivial.
  3. Overfitting & Market Regimes: Intraday data is often noisy; guard against overfitting your ML or RL models to fleeting patterns.
  4. Joint Optimization: Tweaking daily portfolio turnover and intraday execution in isolation can be suboptimal. Qlib’s nested design helps you see the whole chain’s PnL effect.
  5. Reinforcement Learning: Start simple (e.g., Q-learning or policy gradient) before moving to complex neural networks. Use carefully designed rewards capturing cost, fill rates, and profit.
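
On the last point, a reward can start as a simple weighted mix of price improvement, fill ratio, and fees; the weights in this sketch are arbitrary placeholders to tune against your own cost model:

def execution_reward(avg_fill_price, vwap, filled, total, fees):
    """Toy reward for a buy order: beat VWAP, fill the order, pay few fees."""
    price_term = (vwap - avg_fill_price) / vwap  # positive when filling below VWAP
    fill_term = filled / total                   # encourage completing the order
    return price_term + 0.1 * fill_term - fees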

Summary

By combining AI (supervised or RL models) with a Nested Decision Execution approach, Qlib lets you:

  • Unify Daily and Intraday strategies in a single backtest.
  • Leverage Real-time AI for micro-execution decisions.
  • Optimize both large-scale allocations and fine-grained order placements simultaneously.

This framework is especially powerful for High-Frequency Trading use cases, where multiple decision layers (portfolio vs. sub-second order slicing) must interact. Whether you’re using classical ML or advanced RL, Qlib streamlines experimentation and helps close the gap between daily trading and ultra-fast intraday execution.


Further Reading & References
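
  • Qlib documentation: https://qlib.readthedocs.io
  • Qlib examples on GitHub (nested execution, order-book data, QlibRL): https://github.com/microsoft/qlib/tree/main/examples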

Happy trading!

Understanding Gradient Descent in Linear Regression

· 5 min read
Vadim Nicolai
Senior Software Engineer at Vitrifi

Introduction

Gradient descent is a fundamental optimization algorithm used in machine learning to minimize the cost function and find the optimal parameters of a model. In the context of linear regression, gradient descent helps in finding the best-fitting line by iteratively updating the model parameters. This article delves into the mechanics of gradient descent in linear regression, focusing on how the parameters are updated and the impact of the sign of the gradient.

Understanding Linear Regression in Machine Learning

· 4 min read
Vadim Nicolai
Senior Software Engineer at Vitrifi

Introduction

Linear regression is a fundamental algorithm in supervised machine learning, widely used for predicting continuous outcomes. It models the relationship between a dependent variable and one or more independent variables by fitting a linear equation to observed data. This article delves into the components of linear regression, explaining how inputs, parameters, and the cost function work together to create a predictive model.

Predicting Bank Failures Using Gradient Descent and Machine Learning

· 6 min read
Vadim Nicolai
Senior Software Engineer at Vitrifi

Introduction

Predicting bank failures is a vital concern in financial risk management, as it can prevent economic crises and protect investors and depositors. Machine learning, particularly algorithms optimized through Gradient Descent, offers powerful tools for identifying early warning signs of bank failures. In this article, we focus on how Gradient Descent and related machine learning methods are used to predict bank failures, helping institutions and regulators manage risks more effectively.

Understanding Gradient Descent and Its Applications in Trading Algorithms

· 6 min read
Vadim Nicolai
Senior Software Engineer at Vitrifi

Introduction

Gradient Descent is a fundamental optimization algorithm used in machine learning and quantitative finance. In the context of algorithmic trading, it helps in optimizing predictive models, from price forecasting to portfolio optimization. Understanding how Gradient Descent works and how it can be applied in the financial markets is crucial for developing effective trading strategies.

In this article, we will explore the concept of Gradient Descent, its variations, and its applications in trading.

Understanding Euclidean Distance and Its Applications in Trading Algorithms

· 5 min read
Vadim Nicolai
Senior Software Engineer at Vitrifi

Introduction

Euclidean distance is not just a mathematical concept but a crucial tool for data analysis in various fields, including trading and quantitative finance. In algorithmic trading, Euclidean distance can be applied to evaluate the similarity between financial assets, identify trading signals, and optimize portfolio allocation. As a distance metric, it helps in quantifying the relationship between different financial data points, allowing for more effective trading strategies.

In this article, we will discuss what Euclidean distance is, how it's calculated, and where it fits in the world of financial markets and algorithmic trading.

Understanding the Bias-Variance Tradeoff and No Free Lunch Theorem in Machine Learning

· 6 min read
Vadim Nicolai
Senior Software Engineer at Vitrifi

Introduction

In machine learning, achieving the right balance between model complexity and predictive performance is crucial. A key concept in understanding this balance is the Bias-Variance Tradeoff, which defines how well a model generalizes to unseen data. Along with this, the No Free Lunch Theorem provides an essential principle that explains the limitations of machine learning models. Together, these concepts form the foundation of understanding how machine learning models perform across various domains.

This article will explore the Bias-Variance Tradeoff, the implications of the No Free Lunch theorem, and how to address issues like underfitting, overfitting, and regularization in machine learning models.

Predicting Stock Returns Using Linear Regression in Finance

· 4 min read
Vadim Nicolai
Senior Software Engineer at Vitrifi

Introduction

Linear regression is one of the foundational algorithms in machine learning, particularly useful for predicting continuous outcomes. In finance, it serves as a powerful tool for modeling and predicting stock returns based on various market indicators. This article delves into the application of linear regression for predicting daily returns of Amazon stock using a set of financial indices.