
Adapting Stock Forecasts with AI

· 7 min read
Vadim Nicolai
Senior Software Engineer at Vitrifi

Introduction

Financial markets are dynamic: price trends, volatility, and patterns constantly change. These shifts in data distribution, commonly called concept drift, pose a serious challenge for AI models trained on historical data. When the market regime changes—such as transitioning from a calm to a volatile environment—a “stale” model can drastically lose predictive power.

DDG-DA (Data Distribution Generation for Predictable Concept Drift Adaptation) addresses this by forecasting how the data distribution might evolve in the future, instead of only reacting to the most recent data. The approach is rooted in meta-learning (via Qlib’s Meta Controller framework) and helps trading or investment models stay ahead of new trends.

By the end of this article, you will understand:

  1. Why concept drift complicates forecasting in stocks and other financial time series
  2. How DDG-DA uses a future distribution predictor to resample training data
  3. How to incorporate this into Qlib-based workflows to improve stock return and risk-adjusted performance

Concept Drift in Stock Markets

Concept drift refers to changes in the underlying distribution of stock market data. These changes can manifest in multiple ways:

  • Trends: Bull or bear markets can shift faster or slower than expected
  • Volatility: Sudden spikes can invalidate models calibrated during calmer periods
  • Patterns: Market microstructure changes or new correlations can emerge, causing old patterns to wane

Traditional methods often react after drift appears (by retraining on recent data). However, if the drift is somewhat predictable, we can model its trajectory—and proactively train models on future conditions before they fully materialize.

Diagram: Concept Drift Overview

Here, a continuous market data stream (A) encounters distribution shifts (B). These can appear as new trends (C), volatility regimes (D), or changed patterns (E). As a result, a previously trained model (F) gradually loses accuracy (G) if not adapted.


DDG-DA: High-Level Approach

The core principle behind DDG-DA is to forecast the distribution shift itself. Specifically:

  1. Predict Future Distributions

    • A meta-model observes historical tasks (for example, monthly or daily tasks in which you train a new stock-prediction model).
    • This meta-model estimates how the data distribution might move in the next period, such as anticipating an uptick in volatility or a shift in factor exposures.
  2. Generate Synthetic Training Samples

    • Using the distribution forecast, DDG-DA resamples historical data to emulate the expected future conditions.
    • It might assign higher weights to periods with similar volatility or market conditions so the final training set reflects what the market might soon become.
  3. Train or Retrain the Forecasting Model

    • Your usual forecasting model (for example, LightGBM or LSTM) is then retrained on these forward-looking samples, aligning better with the next period’s actual data distribution.
    • As a result, the model remains more accurate when concept drift occurs.

Diagram: DDG-DA Core Steps

This process repeats periodically (for example, each month) to keep your forecasting models aligned with upcoming market conditions.
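
To make step 2 concrete, here is a minimal sketch of the reweighting idea (not DDG-DA's actual implementation): historical months are weighted by how closely their realized volatility matches the meta-model's forecast for the coming month. All names and numbers below are illustrative.

import numpy as np
import pandas as pd

# Hypothetical realized volatility for twelve historical months
hist = pd.DataFrame({
    "month": pd.period_range("2023-01", "2023-12", freq="M").astype(str),
    "realized_vol": [0.10, 0.12, 0.18, 0.25, 0.22, 0.15,
                     0.11, 0.09, 0.14, 0.20, 0.24, 0.19],
})
predicted_vol = 0.21  # the meta-model's (hypothetical) forecast for next month

# Gaussian kernel: months resembling the predicted regime get larger weights
bandwidth = 0.05
w = np.exp(-((hist["realized_vol"] - predicted_vol) ** 2) / (2 * bandwidth ** 2))
hist["sample_weight"] = w / w.sum()

print(hist.sort_values("sample_weight", ascending=False).head())

A model trained with these sample weights "sees" a training set that resembles the predicted future distribution rather than the plain historical average.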


How It Integrates with Qlib

Qlib provides an AI-oriented Quantitative Investment Platform that handles:

  • Data: Collecting and structuring historical pricing data, factors, and fundamentals
  • Modeling: Building daily or intraday forecasts using built-in ML or custom models
  • Meta Controller: A specialized component for tasks like DDG-DA, which revolve around higher-level meta-learning and distribution adaptation

Diagram: Qlib plus DDG-DA Integration

  1. Qlib Data Layer (A): Feeds into a standard ML pipeline for daily or intraday forecasting (B).
  2. DDG-DA sits in the Meta Controller (C), analyzing tasks, predicting distribution changes, and adjusting the pipeline.
  3. Results circle back into Qlib for backtesting and analysis (D).

Example: Monthly Stock Trend Forecasting

  1. Setting the Tasks

    • Suppose you update your stock-ranking model every month, using the last 2 years of data.
    • Each month is a “task” in Qlib. Over multiple months, you get a series of tasks for training and validation.
  2. Train the Meta-Model

    • DDG-DA learns a function that maps old data distribution patterns to new sample weights.
    • This ensures the next month’s training data distribution is closer to the actual conditions that month.
  3. Evaluate

    • Compare the results to standard approaches:
      • Rolling Retrain: Only uses the most recent data, ignoring the predictable drift pattern
      • Gradual Forgetting: Weighted by how recent data is, but no direct distribution forecast
      • DDG-DA: Weighs data by predicted future distribution, leading to stronger alignment when drift is not purely random

Diagram: Monthly Task Workflow
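
To make step 1 concrete, here is a framework-free sketch of the monthly task series (Qlib automates this with its task-generation utilities; the dates are illustrative):

import pandas as pd

# Each monthly task trains on the trailing two years and tests on the next month.
months = pd.date_range("2022-01-01", "2022-12-01", freq="MS")  # month starts
tasks = []
for test_start in months:
    test_end = test_start + pd.offsets.MonthEnd(0)       # last day of that month
    train_end = test_start - pd.Timedelta(days=1)
    train_start = test_start - pd.DateOffset(years=2)    # 2-year lookback
    tasks.append({"train": (train_start, train_end), "test": (test_start, test_end)})

print(tasks[0])  # first task's train/test windows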


Performance and Findings

Research in the associated DDG-DA paper and Qlib examples shows:

  • Better Signal Quality: Higher Information Coefficient (IC) for stock selection
  • Enhanced Portfolio Returns: Larger annual returns, improved Sharpe Ratio, and lower drawdowns in backtests
  • Versatility: Works with a wide range of ML models (Linear, LightGBM, neural networks)
  • Limitations: If concept drift is completely random or abrupt (no pattern), DDG-DA’s advantages diminish. Predictability is key

Diagram: Performance Improvement


Practical Steps

  1. Install Qlib and ensure you have the dataset (for example, Alpha158) set up
  2. Clone the DDG-DA Example from the Qlib GitHub:
    git clone https://github.com/microsoft/qlib.git
    cd qlib/examples/benchmarks_dynamic/DDG-DA
  3. Install Requirements:
    pip install -r requirements.txt
  4. Run the Workflow:
    python workflow.py run
    • By default, it uses a simple linear forecasting model
    • To use LightGBM or another model, specify the --conf_path argument, for example:
      python workflow.py --conf_path=../workflow_config_lightgbm_Alpha158.yaml run
  5. Analyze Results:
    • Qlib’s recorder logs signal metrics (IC, ICIR) and backtest performance (annual return, Sharpe)
    • Compare with baseline methods (Rolling Retrain, Exponential Forgetting, etc.)

Diagram: Running DDG-DA Workflow


Conclusion

DDG-DA shows how AI can proactively tackle concept drift in stock forecasting. Instead of merely reacting to new data, it anticipates potential distribution changes, producing a more robust, forward-looking training set. When integrated into Qlib’s Meta Controller, it seamlessly fits your existing pipelines, from data ingestion to backtesting.

For practical use:

  • Ensure your market conditions exhibit some predictability. Random, sudden changes are harder to model
  • Combine with conventional best practices (risk management, hyperparameter tuning) for a holistic pipeline
  • Monitor performance: If drift patterns shift, you may need to retrain or retune the DDG-DA meta-model

By forecasting future market states and adapting ahead of time, DDG-DA helps your quantitative strategies remain agile and profitable in evolving financial environments.


Further Reading and References

Happy (adaptive) trading!

Leveraging Qlib and MLflow for Unified Experiment Tracking

· 5 min read
Vadim Nicolai
Senior Software Engineer at Vitrifi

Introduction

Financial markets present a dynamic environment where active research and experimentation are critical. Qlib offers a complete “AI-oriented” solution for quantitative investment—covering data loaders, feature engineering, model training, and evaluation. Meanwhile, MLflow provides robust functionality for experiment tracking, handling metrics, artifacts, and hyperparameters across multiple runs.

This article shows how to integrate Qlib and MLflow to manage your entire workflow—from data ingestion and factor engineering to model storage and versioning—under a single, unified experiment system. Along the way, note and warning callouts highlight the trickier parts of the setup.

By the end of this article, you will learn:

  1. How Qlib manages data and modeling in a typical quant workflow
  2. How MLflow tracks experiment artifacts, logs metrics, and organizes multiple runs
  3. How to integrate Qlib’s “Recorder” concept with MLflow’s tracking

1. Qlib Overview

Qlib is a powerful open-source toolkit designed for AI-based quantitative investment. It streamlines common challenges in this domain:

  • Data Layer: Standardizes daily or intraday bars, fundamental factors, and alpha signals
  • Feature Engineering: Offers an expression engine (alpha modeling) plus factor definitions
  • Modeling: Easily pluggable ML models (LightGBM, Linear, RNN, etc.) with out-of-the-box training logic
  • Evaluation and Backtest: Includes modules for analyzing signals, computing IC/RankIC, and running trading strategies in a backtest simulator

Diagram: Qlib Architecture

Below is a high-level view of Qlib’s architecture—how data flows from raw sources into Qlib’s data handlers, transforms into features, and ultimately fuels model training.

note

Some Qlib features—like intraday data handling or advanced factor expressions—may require additional configuration. Double-check your data paths and environment setup to ensure all pieces are properly configured.
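
To make the feature-engineering layer concrete, here is a hedged sketch of Qlib's expression engine. It assumes you have a local data bundle; the provider path and instrument are illustrative.

import qlib
from qlib.data import D

qlib.init(provider_uri="~/.qlib/qlib_data/cn_data")  # adjust to your data path

# Factor expressions are evaluated directly by the data layer
df = D.features(
    instruments=["SH600000"],
    fields=["$close", "Ref($close, 1)", "$close/Ref($close, 1) - 1"],  # 1-day return
    start_time="2020-01-02",
    end_time="2020-01-31",
)
print(df.head())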


2. MLflow Overview

MLflow is an experiment-tracking tool that organizes runs and artifacts:

  • Tracking: Logs params, metrics, tags, and artifacts (model checkpoints, charts)
  • UI: A local or remote interface (mlflow ui) for comparing runs side by side
  • Model Registry: Version controls deployed models, enabling easy rollback or re-deployment

Diagram: MLflow Overview

warning

When configuring MLflow on remote servers, remember to secure the tracking server appropriately. Unsecured endpoints may expose logs and artifacts to unintended parties.


3. Combining Qlib and MLflow

In typical usage, Qlib handles data ingestion, feature transformations, and model training. MLflow complements it by capturing:

  1. Run Metadata: Each Qlib “Recorder” maps to an MLflow run
  2. Metrics & Params: Qlib logs metrics like Sharpe Ratio or Information Coefficient (IC); MLflow’s UI centralizes them
  3. Artifacts: Saved model files, prediction results, or charts are stored in MLflow’s artifact repository

Diagram: Qlib + MLflow Integration

Below is a top-down diagram showing how user code interacts with Qlib, which in turn leverages MLflow for run logging.


4. Minimal Example

Here’s a simplified script showing the synergy between Qlib and MLflow:

import qlib
from qlib.workflow import R
from qlib.utils import init_instance_by_config

# 1) Init Qlib and MLflow
qlib.init(
    exp_manager={
        "class": "MLflowExpManager",
        "module_path": "qlib.workflow.expm",
        "kwargs": {
            "uri": "file:/path/to/mlruns",
            "default_exp_name": "QlibExperiment",
        },
    }
)

# 2) Start experiment and train
with R.start(experiment_name="QlibExperiment", recorder_name="run1"):
    # Basic config
    model_config = {"class": "LightGBMModel", "kwargs": {"learning_rate": 0.05}}
    dataset_config = {...}

    model = init_instance_by_config(model_config)
    dataset = init_instance_by_config(dataset_config)
    model.fit(dataset)

    # Evaluate
    predictions = model.predict(dataset)

    # log some metrics
    R.log_metrics(Sharpe=1.2, IC=0.03)

    # Save artifacts
    R.save_objects(**{"pred.pkl": predictions, "trained_model.pkl": model})
info

The snippet above logs metrics like Sharpe or IC, making them easily comparable across multiple runs. You can further log hyperparameters via R.log_params(...) for more granular comparisons.
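
For instance (hypothetical values, inside the same R.start(...) context as above):

R.log_params(learning_rate=0.05, model_class="LightGBMModel")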

Results:

  • A new MLflow run named “run1” under “QlibExperiment”
  • MLflow logs parameters/metrics (learning_rate, Sharpe, IC)
  • Artifacts “pred.pkl” and “trained_model.pkl” appear in MLflow’s artifact UI

5. Best Practices

  1. Organize Qlib tasks: Use Qlib’s SignalRecord or PortAnaRecord classes to store signals/backtest results, ensuring logs are automatically tied to MLflow runs
  2. Parameter Logging: Send hyperparameters or relevant config to R.log_params(...) for easy comparison in MLflow
  3. Artifact Naming: Keep artifact names consistent (e.g., "pred.pkl") across multiple runs
  4. Model Registry: Consider pushing your best runs to MLflow’s Model Registry for versioned deployment
danger

A mismatch between your local Qlib environment and remote MLflow server can cause logging errors. Ensure both environments are in sync (same Python versions, same library versions).


6. Conclusion

By connecting Qlib’s experiment pipeline to MLflow’s tracking features—and documenting everything thoroughly—you get the best of all worlds:

  • Qlib: AI-centric quant platform automating data handling, factor engineering, and modeling
  • MLflow: A robust interface for comparing runs, storing artifacts, and version-controlling the entire process

This synergy simplifies large-scale experimentation—especially when you frequently iterate over factor definitions, hyperparameters, or new trading strategies.


Further Reading and References

Experiment happy!

Qlib’s Nested Execution for High-Frequency Trading with AI

· 6 min read
Vadim Nicolai
Senior Software Engineer at Vitrifi

Introduction

High-Frequency Trading (HFT) involves handling large volumes of orders at extremely high speeds—often measured in microseconds or milliseconds. AI (machine learning and reinforcement learning, in particular) has become pivotal in capturing fleeting market opportunities and managing real-time decisions in these ultra-fast trading environments.

In Qlib, the Nested Decision Execution Framework simplifies building multi-level HFT strategies, allowing a high-level (daily or weekly) strategy to nest an intraday (or sub-intraday) executor or sub-workflow. This design enables realistic joint backtesting: daily portfolio selection and intraday HFT execution interact seamlessly, ensuring that real slippage, partial fills, and transaction costs are accurately accounted for.

By the end of this guide, you’ll understand:

  1. How Qlib structures multi-level workflows (daily vs. intraday).
  2. How AI techniques (supervised and reinforcement learning) slot into Qlib’s design.
  3. How to set up an Executor sub-workflow for high-frequency order splitting and real-time decision-making.

Multi-Level Strategy Workflow

Below is an overview diagram (adapted from Qlib’s documentation) depicting how daily strategies can nest intraday sub-strategies or RL agents:

  • Daily Strategy: Generates coarse decisions (e.g., “Buy X shares by day’s end”).
  • Executor: Breaks decisions into smaller actions. Within it, a Reinforcement Learning policy (or any other AI model) can run at minute or sub-minute intervals.
  • Simulator/Environment: Provides intraday data, simulates order fills/slippage, and feeds rewards back to the RL policy.

This nesting allows realistic interaction between daily allocation goals and intraday fill performance.


Key Components

1. Information Extractor (Intraday)

For HFT, Qlib can store data at 1-minute intervals, or even tick/orderbook-level data, using specialized backends (e.g., Arctic). An example below shows how Qlib can manage non-fixed-frequency records:

# Example snippet from qlib/examples/orderbook_data
# Download sample data, then import into your local mongo or Arctic DB
python create_dataset.py initialize_library
python create_dataset.py import_data

Once imported, intraday/tick data can be accessed by Qlib’s normal data APIs for feature engineering or direct RL state representation.


2. Forecast Model (Intraday + Daily)

A single Qlib workflow can hold multiple forecast models:

  • Daily Model: Predicts overnight returns or daily alpha (e.g., LightGBM on daily bars).
  • Intraday Model: Predicts short-term (minutes/seconds) price movements. This might be a small neural net or an RL policy evaluating states like order-book depth, spread, volume patterns, etc.

Qlib’s reinforcement learning interface (QlibRL) can also handle advanced models:

  • Policy: Learns from reward signals (e.g., PnL, transaction costs, slippage).
  • Action Interpreter: Converts policy actions into actual orders.
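
As an illustration of the action-interpreter idea, here is a hedged, self-contained sketch (not QlibRL's actual API): the policy emits a fraction of the remaining order to trade now, and the interpreter rounds it into a tradable order size.

def interpret_action(fraction: float, remaining_shares: int, lot_size: int = 100) -> int:
    """Convert a policy action (fraction of remaining shares) into an order quantity."""
    shares = int(remaining_shares * max(0.0, min(1.0, fraction)))
    return (shares // lot_size) * lot_size  # round down to a tradable lot

order_qty = interpret_action(0.3, remaining_shares=1200)  # -> 300 shares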

3. Decision Generator (Daily vs. Intraday)

Daily Decision Generator might produce a target portfolio:

Stock A: +5% allocation
Stock B: -2% allocation

Intraday Decision Generator (within the Executor) can then split these top-level instructions into multiple smaller trades. For example, an RL policy might decide to buy 2% of Stock A during the opening auction, 1% during midday, and 2% near closing, based on real-time microprice signals.


4. Executor & Sub-workflow (Nested)

Executor is where the nested approach truly shines. It wraps a more granular intraday or high-frequency sub-strategy.

This sub-workflow can be as simple as scheduling trades evenly or as advanced as an RL policy that:

  1. Observes short-term price movement.
  2. Acts to minimize slippage and transaction cost.
  3. Receives reward signals from the environment (filled shares, average fill price vs. VWAP, etc.).

5. Environment & Simulator

When applying Reinforcement Learning, Qlib uses an Environment wrapper:

  1. State: Intraday features (latest LOB data, partial fill stats).
  2. Action: The RL agent chooses to place a limit order, market order, or skip.
  3. Reward: Often the negative cost of trading or realized PnL improvement.

You can leverage Qlib’s built-in simulators or customize them for specific market microstructures.
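
Below is a hedged, framework-agnostic sketch of this state/action/reward loop (not Qlib's simulator API): an agent sells a target quantity over a few steps and is rewarded relative to the interval's average price.

import numpy as np

class OrderExecutionEnvSketch:
    """Toy order-execution environment: sell `target` shares over len(prices) steps."""

    def __init__(self, prices, target=1000):
        self.prices = prices
        self.target = target
        self.t = 0
        self.remaining = target

    def step(self, fraction):
        # Action: fraction of the remaining shares to sell at this step
        qty = fraction * self.remaining
        fill_price = self.prices[self.t]
        # Reward: proceeds relative to the average-price benchmark (VWAP-like)
        reward = qty * (fill_price - self.prices.mean())
        self.remaining -= qty
        self.t += 1
        done = self.t >= len(self.prices) or self.remaining <= 0
        state = (self.t, self.remaining)
        return state, reward, done

env = OrderExecutionEnvSketch(np.array([10.00, 10.05, 9.95, 10.10]))
state, reward, done = env.step(0.25)  # sell 25% of the remaining order now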


Example Workflow Snippets

Here’s a high-level script illustrating a daily + intraday nested setup. (Pseudocode for demonstration only.)

# daily_intraday_workflow.py

import qlib
from qlib.config import C
from qlib.data import D
from qlib.rl.order_execution_policy import RLOrderExecPolicy
from qlib.strategy.base import BaseStrategy

class DailyAlphaStrategy(BaseStrategy):
    """Generates daily-level decisions (which stocks to buy/sell)."""

    def generate_trade_decision(self, *args, **kwargs):
        # Imagine we have daily predictions from a model...
        scores = self.signal.get_signal()  # daily alpha scores
        # Then produce a dictionary {stock: weight or shares}
        decisions = compute_target_positions(scores)
        return decisions

class NestedExecutor:
    """Executor that calls an intraday RL sub-strategy for each daily decision."""

    def __init__(self, intraday_policy):
        self.intraday_policy = intraday_policy

    def execute_daily_decision(self, daily_decision):
        # Suppose daily_decision = { 'AAPL': +100 shares, 'MSFT': +50 shares }
        # We'll break it into sub-orders via RL
        for stock, shares in daily_decision.items():
            # RL agent decides how to place those shares intraday
            self.intraday_policy.run_execution(stock, shares)

def main():
    qlib.init(provider_uri="your_data_path")  # local data or remote server

    daily_strategy = DailyAlphaStrategy(signal=your_daily_signal)
    intraday_policy = RLOrderExecPolicy()  # RL policy with QlibRL

    executor = NestedExecutor(intraday_policy=intraday_policy)

    # Hypothetical daily loop
    for date in trading_calendar:
        daily_decision = daily_strategy.generate_trade_decision()
        executor.execute_daily_decision(daily_decision)

if __name__ == "__main__":
    main()

Notes:

  • DailyAlphaStrategy uses a daily alpha model for stock scoring.
  • NestedExecutor calls RLOrderExecPolicy, which runs intraday steps.
  • Real code will handle position objects, trade calendars, and backtest frameworks in more detail.

Practical Tips for HFT + AI

  1. Data Freshness: HFT signals must be updated almost in real-time. Ensure your Qlib data pipeline is either streaming or as close to real-time as possible.
  2. Latency Considerations: Real HFT in production must address network latency and order routing. Qlib’s framework focuses on backtesting or simulation; integrating actual exchange connectivity is non-trivial.
  3. Overfitting & Market Regimes: Intraday data is often noisy; guard against overfitting your ML or RL models to fleeting patterns.
  4. Joint Optimization: Tweaking daily portfolio turnover and intraday execution in isolation can be suboptimal. Qlib’s nested design helps you see the whole chain’s PnL effect.
  5. Reinforcement Learning: Start simple (e.g., Q-learning or policy gradient) before moving to complex neural networks. Use carefully designed rewards capturing cost, fill rates, and profit.

Summary

By combining AI (supervised or RL models) with a Nested Decision Execution approach, Qlib lets you:

  • Unify Daily and Intraday strategies in a single backtest.
  • Leverage Real-time AI for micro-execution decisions.
  • Optimize both large-scale allocations and fine-grained order placements simultaneously.

This framework is especially powerful for High-Frequency Trading use cases, where multiple decision layers (portfolio vs. sub-second order slicing) must interact. Whether you’re using classical ML or advanced RL, Qlib streamlines experimentation and helps close the gap between daily trading and ultra-fast intraday execution.


Further Reading & References

Happy trading!

A Comprehensive Guide to Qlib’s Portfolio Strategy, TopkDropoutStrategy, and EnhancedIndexingStrategy

· 9 min read
Vadim Nicolai
Senior Software Engineer at Vitrifi

Introduction

In Qlib, portfolio strategies turn prediction scores into actionable orders (buy/sell) for building and rebalancing a portfolio. This article will:

  1. Explain the architecture of key strategy classes.
  2. Demonstrate TopkDropoutStrategy and EnhancedIndexingStrategy in detail.
  3. Present diagrams and code blocks illustrating the step-by-step flows.

By the end, you’ll see how to plug your own predictive model scores into these strategies and make them trade automatically.


Class Hierarchy

Below is a simple diagram showing how these classes inherit from one another:

  • BaseStrategy: Core abstraction; requires a method to generate a trade decision.
  • BaseSignalStrategy: Extends BaseStrategy with “signals” (model scores).
  • TopkDropoutStrategy: Buys the top-K scoring stocks and drops the worst ones.
  • WeightStrategyBase: Uses target weights (fractions of the portfolio) rather than discrete buy/sell.
  • EnhancedIndexingStrategy: Adds advanced risk modeling for partial index tracking.

High-Level Trading Flow for Top-K

Here’s a top-down look at a generic daily (or periodic) process once your predictions are ready:


Code Walkthrough

Below we break down the code for Qlib’s portfolio strategies into sections, each supplemented by additional flow diagrams relevant to that part of the code.

1. Imports and Setup

import os
import copy
import warnings
import numpy as np
import pandas as pd

from typing import Dict, List, Text, Tuple, Union
from abc import ABC

from qlib.data import D
from qlib.data.dataset import Dataset
from qlib.model.base import BaseModel
from qlib.strategy.base import BaseStrategy
from qlib.backtest.position import Position
from qlib.backtest.signal import Signal, create_signal_from
from qlib.backtest.decision import Order, OrderDir, TradeDecisionWO
from qlib.log import get_module_logger
from qlib.utils import get_pre_trading_date, load_dataset
from qlib.contrib.strategy.order_generator import OrderGenerator, OrderGenWOInteract
from qlib.contrib.strategy.optimizer import EnhancedIndexingOptimizer

Explanation

  • Core Python imports for numerical operations, data processing, and type hints.
  • Qlib-specific imports:
    • BaseStrategy, Position, Signal, and TradeDecisionWO for implementing custom strategies and managing trade decisions.
    • OrderGenerator and EnhancedIndexingOptimizer for generating orders from target weights and optimizing risk exposure.

2. BaseSignalStrategy

Below is a class diagram illustrating BaseSignalStrategy inheriting from BaseStrategy and adding a signal field:

class BaseSignalStrategy(BaseStrategy, ABC):
    def __init__(
        self,
        *,
        signal: Union[Signal, Tuple[BaseModel, Dataset], List, Dict, Text, pd.Series, pd.DataFrame] = None,
        model=None,
        dataset=None,
        risk_degree: float = 0.95,
        trade_exchange=None,
        level_infra=None,
        common_infra=None,
        **kwargs,
    ):
        """
        Parameters
        -----------
        signal :
            Could be a Signal object or raw predictions from a model/dataset.
        risk_degree : float
            Fraction of total capital to invest (default 0.95).
        trade_exchange : Exchange
            Market info for dealing orders, generating reports, etc.
        """
        super().__init__(level_infra=level_infra, common_infra=common_infra, trade_exchange=trade_exchange, **kwargs)

        self.risk_degree = risk_degree

        # For backward-compatibility with (model, dataset)
        if model is not None and dataset is not None:
            warnings.warn("`model` `dataset` is deprecated; use `signal`.", DeprecationWarning)
            signal = model, dataset

        self.signal: Signal = create_signal_from(signal)

    def get_risk_degree(self, trade_step=None):
        """Return the fraction of total value to allocate."""
        return self.risk_degree

Key Points

  • BaseSignalStrategy extends BaseStrategy and integrates a concept of a signal (predictions).
  • risk_degree indicates what fraction of the portfolio’s capital is invested (defaults to 95%).

3. TopkDropoutStrategy

Here’s a flow diagram specifically for the generate_trade_decision method in TopkDropoutStrategy, showing how the code sorts holdings, identifies “drop” stocks, and selects new buys:

class TopkDropoutStrategy(BaseSignalStrategy):
    def __init__(
        self,
        *,
        topk,
        n_drop,
        method_sell="bottom",
        method_buy="top",
        hold_thresh=1,
        only_tradable=False,
        forbid_all_trade_at_limit=True,
        **kwargs,
    ):
        """
        Parameters
        -----------
        topk : int
            Desired number of stocks to hold.
        n_drop : int
            Number of stocks replaced each rebalance.
        method_sell : str
            Approach to dropping existing stocks (e.g. 'bottom').
        method_buy : str
            Approach to adding new stocks (e.g. 'top').
        hold_thresh : int
            Must hold a stock for at least this many days before selling.
        only_tradable : bool
            Ignore non-tradable stocks.
        forbid_all_trade_at_limit : bool
            Disallow trades if limit up/down is reached.
        """
        super().__init__(**kwargs)
        self.topk = topk
        self.n_drop = n_drop
        self.method_sell = method_sell
        self.method_buy = method_buy
        self.hold_thresh = hold_thresh
        self.only_tradable = only_tradable
        self.forbid_all_trade_at_limit = forbid_all_trade_at_limit

    def generate_trade_decision(self, execute_result=None):
        trade_step = self.trade_calendar.get_trade_step()
        trade_start_time, trade_end_time = self.trade_calendar.get_step_time(trade_step)
        pred_start_time, pred_end_time = self.trade_calendar.get_step_time(trade_step, shift=1)
        pred_score = self.signal.get_signal(start_time=pred_start_time, end_time=pred_end_time)

        # If no score, do nothing
        if pred_score is None:
            return TradeDecisionWO([], self)

        # If multiple columns, pick the first
        if isinstance(pred_score, pd.DataFrame):
            pred_score = pred_score.iloc[:, 0]

        # Helper functions for picking top/bottom stocks...
        ...

        # Copy current position
        current_temp: Position = copy.deepcopy(self.trade_position)
        sell_order_list = []
        buy_order_list = []
        cash = current_temp.get_cash()
        current_stock_list = current_temp.get_stock_list()

        # Sort current holdings by descending score
        last = pred_score.reindex(current_stock_list).sort_values(ascending=False).index

        # Identify new stocks to buy
        ...

        # Figure out which existing stocks to sell
        ...

        # Create Sell Orders
        ...

        # Create Buy Orders
        ...

        return TradeDecisionWO(sell_order_list + buy_order_list, self)

Key Points

  • The “top-K, drop worst-K” concept is implemented by comparing current holdings to the broader universe of stocks sorted by score.
  • Some specifics:
    • method_sell can be "bottom", so you drop the lowest-scored holdings.
    • method_buy can be "top", so you pick the top new stocks that aren’t in the portfolio.

4. WeightStrategyBase

Below is a quick diagram for how WeightStrategyBase converts target weights into final orders:

class WeightStrategyBase(BaseSignalStrategy):
    def __init__(
        self,
        *,
        order_generator_cls_or_obj=OrderGenWOInteract,
        **kwargs,
    ):
        super().__init__(**kwargs)
        if isinstance(order_generator_cls_or_obj, type):
            self.order_generator: OrderGenerator = order_generator_cls_or_obj()
        else:
            self.order_generator: OrderGenerator = order_generator_cls_or_obj

    def generate_target_weight_position(self, score, current, trade_start_time, trade_end_time):
        """
        Subclasses must override this to return:
            {stock_id: target_weight}
        """
        raise NotImplementedError()

    def generate_trade_decision(self, execute_result=None):
        trade_step = self.trade_calendar.get_trade_step()
        trade_start_time, trade_end_time = self.trade_calendar.get_step_time(trade_step)
        pred_start_time, pred_end_time = self.trade_calendar.get_step_time(trade_step, shift=1)
        pred_score = self.signal.get_signal(start_time=pred_start_time, end_time=pred_end_time)
        if pred_score is None:
            return TradeDecisionWO([], self)

        current_temp = copy.deepcopy(self.trade_position)
        assert isinstance(current_temp, Position)

        # Let the subclass produce the weights
        target_weight_position = self.generate_target_weight_position(
            score=pred_score, current=current_temp, trade_start_time=trade_start_time, trade_end_time=trade_end_time
        )

        # Convert weights -> Orders
        order_list = self.order_generator.generate_order_list_from_target_weight_position(
            current=current_temp,
            trade_exchange=self.trade_exchange,
            risk_degree=self.get_risk_degree(trade_step),
            target_weight_position=target_weight_position,
            pred_start_time=pred_start_time,
            pred_end_time=pred_end_time,
            trade_start_time=trade_start_time,
            trade_end_time=trade_end_time,
        )
        return TradeDecisionWO(order_list, self)

Key Points

  • WeightStrategyBase uses a target-weight approach: you specify a final allocation for each stock.
  • The built-in order_generator calculates how many shares to buy/sell to achieve the target allocation.
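
A minimal subclass sketch (illustrative, not part of Qlib): equally weight the 20 highest-scored stocks each rebalance.

class EqualWeightTopN(WeightStrategyBase):
    N = 20

    def generate_target_weight_position(self, score, current, trade_start_time, trade_end_time):
        # Keep the N best-scored stocks, each at weight 1/N
        top = score.sort_values(ascending=False).head(self.N).index
        return {stock: 1.0 / self.N for stock in top}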

5. EnhancedIndexingStrategy

Lastly, a diagram shows how this strategy merges model scores with factor data and a benchmark:

class EnhancedIndexingStrategy(WeightStrategyBase):
    """
    Combines active and passive management, aiming to
    outperform a benchmark index while controlling tracking error.
    """

    FACTOR_EXP_NAME = "factor_exp.pkl"
    FACTOR_COV_NAME = "factor_cov.pkl"
    SPECIFIC_RISK_NAME = "specific_risk.pkl"
    BLACKLIST_NAME = "blacklist.pkl"

    def __init__(
        self,
        *,
        riskmodel_root,
        market="csi500",
        turn_limit=None,
        name_mapping={},
        optimizer_kwargs={},
        verbose=False,
        **kwargs,
    ):
        super().__init__(**kwargs)
        self.logger = get_module_logger("EnhancedIndexingStrategy")

        self.riskmodel_root = riskmodel_root
        self.market = market
        self.turn_limit = turn_limit

        self.factor_exp_path = name_mapping.get("factor_exp", self.FACTOR_EXP_NAME)
        self.factor_cov_path = name_mapping.get("factor_cov", self.FACTOR_COV_NAME)
        self.specific_risk_path = name_mapping.get("specific_risk", self.SPECIFIC_RISK_NAME)
        self.blacklist_path = name_mapping.get("blacklist", self.BLACKLIST_NAME)

        self.optimizer = EnhancedIndexingOptimizer(**optimizer_kwargs)
        self.verbose = verbose
        self._riskdata_cache = {}

    def get_risk_data(self, date):
        if date in self._riskdata_cache:
            return self._riskdata_cache[date]

        root = self.riskmodel_root + "/" + date.strftime("%Y%m%d")
        if not os.path.exists(root):
            return None

        factor_exp = load_dataset(root + "/" + self.factor_exp_path, index_col=[0])
        factor_cov = load_dataset(root + "/" + self.factor_cov_path, index_col=[0])
        specific_risk = load_dataset(root + "/" + self.specific_risk_path, index_col=[0])

        if not factor_exp.index.equals(specific_risk.index):
            specific_risk = specific_risk.reindex(factor_exp.index, fill_value=specific_risk.max())

        universe = factor_exp.index.tolist()
        blacklist = []
        if os.path.exists(root + "/" + self.blacklist_path):
            blacklist = load_dataset(root + "/" + self.blacklist_path).index.tolist()

        self._riskdata_cache[date] = factor_exp.values, factor_cov.values, specific_risk.values, universe, blacklist
        return self._riskdata_cache[date]

    def generate_target_weight_position(self, score, current, trade_start_time, trade_end_time):
        trade_date = trade_start_time
        pre_date = get_pre_trading_date(trade_date, future=True)

        outs = self.get_risk_data(pre_date)
        if outs is None:
            self.logger.warning(f"No risk data for {pre_date:%Y-%m-%d}, skipping optimization")
            return None

        factor_exp, factor_cov, specific_risk, universe, blacklist = outs

        # Align score with risk model universe
        score = score.reindex(universe).fillna(score.min()).values

        # Current portfolio weights
        cur_weight = current.get_stock_weight_dict(only_stock=False)
        cur_weight = np.array([cur_weight.get(stock, 0) for stock in universe])
        cur_weight = cur_weight / self.get_risk_degree(trade_date)

        # Benchmark weight
        bench_weight = D.features(
            D.instruments("all"), [f"${self.market}_weight"], start_time=pre_date, end_time=pre_date
        ).squeeze()
        bench_weight.index = bench_weight.index.droplevel(level="datetime")
        bench_weight = bench_weight.reindex(universe).fillna(0).values

        # Track which stocks are tradable and which are blacklisted
        tradable = D.features(D.instruments("all"), ["$volume"], start_time=pre_date, end_time=pre_date).squeeze()
        tradable.index = tradable.index.droplevel(level="datetime")
        tradable = tradable.reindex(universe).gt(0).values
        mask_force_hold = ~tradable
        mask_force_sell = np.array([stock in blacklist for stock in universe], dtype=bool)

        # Optimize based on scores + factor model
        weight = self.optimizer(
            r=score,
            F=factor_exp,
            cov_b=factor_cov,
            var_u=specific_risk**2,
            w0=cur_weight,
            wb=bench_weight,
            mfh=mask_force_hold,
            mfs=mask_force_sell,
        )

        target_weight_position = {stock: w for stock, w in zip(universe, weight) if w > 0}

        if self.verbose:
            self.logger.info(f"trade date: {trade_date:%Y-%m-%d}")
            self.logger.info(f"number of holding stocks: {len(target_weight_position)}")
            self.logger.info(f"total holding weight: {weight.sum():.6f}")

        return target_weight_position

Key Points

  • Uses riskmodel_root to pull factor exposures, covariances, and specific risk estimates.
  • Combines your model scores with a benchmark weight to control tracking error via an optimizer.
  • Produces a final weight map, which Qlib then converts to buy/sell orders.

Summary

  • BaseSignalStrategy attaches prediction data to a strategy.
  • TopkDropoutStrategy implements a straightforward “buy top-K, drop worst-K” approach.
  • WeightStrategyBase generalizes weight-based rebalancing.
  • EnhancedIndexingStrategy is a powerful extension, combining active signals and passive indexing with risk control.

By customizing just a few methods or parameters, you can adapt these strategies to your own investing style. Simply feed your daily scores (prediction of returns) into Qlib, pick a strategy class, and let Qlib do the rest.

Happy Trading!

Understanding Score IC in Qlib for Enhanced Profit

· 6 min read
Vadim Nicolai
Senior Software Engineer at Vitrifi

Introduction

One of the core ideas in quantitative finance is that model predictions—often called “scores”—can be mapped to expected returns on an instrument. In Qlib, these scores are evaluated with metrics like the Information Coefficient (IC) and Rank IC, which measure how well the scores predict future returns. Put simply: if your IC is positive and statistically significant, the highest-scored stocks should, on average, outperform the lower-scored ones.
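
Concretely, the IC for a given day is the cross-sectional Pearson correlation between scores and realized next-period returns; Rank IC is the Spearman variant. A self-contained sketch with made-up numbers:

import pandas as pd

# Hypothetical scores and realized next-period returns, three instruments over two days
idx = pd.MultiIndex.from_product(
    [pd.to_datetime(["2024-01-02", "2024-01-03"]), ["AAPL", "MSFT", "GOOG"]],
    names=["datetime", "instrument"],
)
df = pd.DataFrame(
    {
        "score": [0.8, 0.1, -0.3, 0.5, -0.2, 0.4],
        "label": [0.02, 0.00, -0.01, 0.01, -0.02, 0.03],
    },
    index=idx,
)

ic = df.groupby(level="datetime").apply(lambda x: x["score"].corr(x["label"]))
rank_ic = df.groupby(level="datetime").apply(
    lambda x: x["score"].corr(x["label"], method="spearman")
)
print(ic.mean(), rank_ic.mean())  # mean IC and mean Rank IC across days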

Powering Quant Finance with Qlib’s PyTorch MLP on Alpha360

· 5 min read
Vadim Nicolai
Senior Software Engineer at Vitrifi

Introduction

Qlib is an AI-oriented, open-source platform from Microsoft that simplifies the entire quantitative finance process. By leveraging PyTorch, Qlib can seamlessly integrate modern neural networks—like Multi-Layer Perceptrons (MLPs)—to process large datasets, engineer alpha factors, and run flexible backtests. In this post, we focus on a PyTorch MLP pipeline for Alpha360 data in the US market, examining a single YAML configuration that unifies data ingestion, model training, and performance evaluation.

Adaptive Deep Learning in Quant Finance with Qlib’s PyTorch AdaRNN

· 6 min read
Vadim Nicolai
Senior Software Engineer at Vitrifi

Introduction

AdaRNN is a specialized PyTorch model designed to adaptively learn from non-stationary financial time series—where market distributions evolve over time. Originally proposed in the paper AdaRNN: Adaptive Learning and Forecasting for Time Series, it leverages both GRU layers and transfer-loss techniques to mitigate the effects of distributional shift. This article demonstrates how AdaRNN can be applied within Microsoft’s Qlib—an open-source, AI-oriented platform for quantitative finance.

Harnessing AI for Quantitative Finance with Qlib and LightGBM

· 6 min read
Vadim Nicolai
Senior Software Engineer at Vitrifi

Introduction

In the realm of quantitative finance, machine learning and deep learning are revolutionizing how researchers and traders discover alpha, manage portfolios, and adapt to market shifts. Qlib by Microsoft is a powerful open-source framework that merges AI techniques with end-to-end finance workflows.

This article demonstrates how Qlib automates an AI-driven quant workflow—from data ingestion and feature engineering to model training and backtesting—using a single YAML configuration for a LightGBM model. Specifically, we’ll explore the AI-centric aspects of how qrun orchestrates the entire pipeline and highlight best practices for leveraging advanced ML models in your quantitative strategies.

Understanding Gradient Descent in Linear Regression

· 5 min read
Vadim Nicolai
Senior Software Engineer at Vitrifi

Introduction

Gradient descent is a fundamental optimization algorithm used in machine learning to minimize the cost function and find the optimal parameters of a model. In the context of linear regression, gradient descent helps in finding the best-fitting line by iteratively updating the model parameters. This article delves into the mechanics of gradient descent in linear regression, focusing on how the parameters are updated and the impact of the sign of the gradient.
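
As a taste of the mechanics, here is a self-contained sketch of the update rule on synthetic data (values are illustrative): a positive gradient pushes a parameter down, a negative gradient pushes it up.

import numpy as np

rng = np.random.default_rng(0)
x = rng.uniform(-1, 1, 100)
y = 3.0 * x + 0.5 + rng.normal(0, 0.1, 100)   # true line: w=3.0, b=0.5

w, b, lr = 0.0, 0.0, 0.1
for _ in range(500):
    y_hat = w * x + b
    # Gradients of the cost J = (1/2m) * sum((y_hat - y)^2)
    grad_w = np.mean((y_hat - y) * x)
    grad_b = np.mean(y_hat - y)
    # Step *against* the gradient's sign
    w -= lr * grad_w
    b -= lr * grad_b

print(w, b)  # approaches 3.0 and 0.5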

Understanding Linear Regression in Machine Learning

· 4 min read
Vadim Nicolai
Senior Software Engineer at Vitrifi

Introduction

Linear regression is a fundamental algorithm in supervised machine learning, widely used for predicting continuous outcomes. It models the relationship between a dependent variable and one or more independent variables by fitting a linear equation to observed data. This article delves into the components of linear regression, explaining how inputs, parameters, and the cost function work together to create a predictive model.
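
For reference, in the standard notation this article builds on, the single-feature hypothesis and squared-error cost are:

h_\theta(x) = \theta_0 + \theta_1 x, \qquad
J(\theta_0, \theta_1) = \frac{1}{2m} \sum_{i=1}^{m} \left( h_\theta(x^{(i)}) - y^{(i)} \right)^2

where m is the number of training examples, \theta_0 and \theta_1 are the parameters, and gradient descent iteratively minimizes J.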