
12 posts tagged with "finance"


Qlib’s Nested Execution for High-Frequency Trading with AI

· 6 min read
Vadim Nicolai
Senior Software Engineer at Vitrifi

Introduction

High-Frequency Trading (HFT) involves handling large volumes of orders at extremely high speeds—often measured in microseconds or milliseconds. AI (machine learning and reinforcement learning, in particular) has become pivotal in capturing fleeting market opportunities and managing real-time decisions in these ultra-fast trading environments.

In Qlib, the Nested Decision Execution Framework simplifies building multi-level HFT strategies, allowing a high-level (daily or weekly) strategy to nest an intraday (or sub-intraday) executor or sub-workflow. This design enables realistic joint backtesting: daily portfolio selection and intraday HFT execution interact seamlessly, ensuring that real slippage, partial fills, and transaction costs are accurately accounted for.

By the end of this guide, you’ll understand:

  1. How Qlib structures multi-level workflows (daily vs. intraday).
  2. How AI techniques (supervised and reinforcement learning) slot into Qlib’s design.
  3. How to set up an Executor sub-workflow for high-frequency order splitting and real-time decision-making.

Multi-Level Strategy Workflow

Here is an overview (adapted from Qlib’s documentation) of how daily strategies can nest intraday sub-strategies or RL agents:

  • Daily Strategy: Generates coarse decisions (e.g., “Buy X shares by day’s end”).
  • Executor: Breaks decisions into smaller actions. Within it, a Reinforcement Learning policy (or any other AI model) can run at minute or sub-minute intervals.
  • Simulator/Environment: Provides intraday data, simulates order fills/slippage, and feeds rewards back to the RL policy.

This nesting allows realistic interaction between daily allocation goals and intraday fill performance.
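In code, the nesting is expressed as an executor configuration: an outer NestedExecutor steps at the daily level and delegates each step to an inner strategy/executor pair. The sketch below follows the pattern used in Qlib’s nested decision execution example; the inner strategy and time units are placeholders you would swap for your own, and class paths should be checked against your Qlib version.

# Sketch of a nested executor configuration (paths follow Qlib's nested
# decision execution example; inner strategy and time units are placeholders).
executor_config = {
    "class": "NestedExecutor",
    "module_path": "qlib.backtest.executor",
    "kwargs": {
        "time_per_step": "day",              # outer loop: one step per trading day
        "inner_strategy": {                  # intraday sub-strategy (rule-based or RL)
            "class": "TWAPStrategy",
            "module_path": "qlib.contrib.strategy.rule_strategy",
        },
        "inner_executor": {                  # inner loop: simulate fills at a finer frequency
            "class": "SimulatorExecutor",
            "module_path": "qlib.backtest.executor",
            "kwargs": {"time_per_step": "30min", "verbose": False},
        },
        "generate_portfolio_metrics": True,
    },
}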


Key Components

1. Information Extractor (Intraday)

For HFT, Qlib can store data at 1-minute intervals, or even tick/orderbook-level data, using specialized backends (e.g., Arctic). An example below shows how Qlib can manage non-fixed-frequency records:

# Example snippet from qlib/examples/orderbook_data
# Download sample data, then import into your local mongo or Arctic DB
python create_dataset.py initialize_library
python create_dataset.py import_data

Once imported, intraday/tick data can be accessed by Qlib’s normal data APIs for feature engineering or direct RL state representation.
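For example, once minute bars are available in your provider directory, the standard data API can query them by passing a finer freq; the instrument, dates, and data path below are placeholders.

import qlib
from qlib.data import D

# Initialize against your own intraday data directory (placeholder path).
qlib.init(provider_uri="~/.qlib/qlib_data/my_hft_data")

# Query 1-minute bars through the same API used for daily data.
df = D.features(
    ["SH600519"],                  # placeholder instrument
    ["$close", "$volume"],
    start_time="2023-01-03",
    end_time="2023-01-03",
    freq="1min",
)
print(df.head())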


2. Forecast Model (Intraday + Daily)

A single Qlib workflow can hold multiple forecast models:

  • Daily Model: Predicts overnight returns or daily alpha (e.g., LightGBM on daily bars).
  • Intraday Model: Predicts short-term (minutes/seconds) price movements. This might be a small neural net or an RL policy evaluating states like order-book depth, spread, volume patterns, etc.

Qlib’s reinforcement learning interface (QlibRL) can also handle advanced models:

  • Policy: Learns from reward signals (e.g., PnL, transaction costs, slippage).
  • Action Interpreter: Converts policy actions into actual orders.
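The interpreter abstraction separates what the policy outputs from what the market receives. The toy interpreter below is purely illustrative (it does not use QlibRL’s actual classes): it maps a discrete action to the fraction of the remaining parent order to send in the next interval.

# Illustrative only: a minimal action interpreter, independent of QlibRL's real classes.
from dataclasses import dataclass

@dataclass
class ExecState:
    remaining_shares: float   # shares of the parent order still to execute
    steps_left: int           # intraday intervals remaining

def interpret_action(state: ExecState, action: int) -> float:
    """Map a discrete action {0, 1, 2} to an order size for the next interval."""
    fractions = {0: 0.0, 1: 0.25, 2: 0.5}     # skip, trade lightly, trade aggressively
    size = state.remaining_shares * fractions[action]
    if state.steps_left == 1:                  # force completion on the last interval
        size = state.remaining_shares
    return size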

3. Decision Generator (Daily vs. Intraday)

Daily Decision Generator might produce a target portfolio:

Stock A: +5% allocation
Stock B: -2% allocation

Intraday Decision Generator (within the Executor) can then split these top-level instructions into multiple smaller trades. For example, an RL policy might decide to buy 2% of Stock A during the opening auction, 1% during midday, and 2% near closing, based on real-time microprice signals.
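A rule-based baseline for the same idea, before introducing RL, is to split the parent decision over a fixed intraday schedule. The schedule weights below are arbitrary placeholders, not a Qlib API.

# Baseline splitter: divide a daily target into scheduled child orders.
def split_daily_target(stock: str, target_pct: float) -> list[tuple[str, str, float]]:
    schedule = [("open_auction", 0.4), ("midday", 0.2), ("close", 0.4)]  # placeholder weights
    return [(stock, window, target_pct * w) for window, w in schedule]

print(split_daily_target("Stock A", 5.0))
# [('Stock A', 'open_auction', 2.0), ('Stock A', 'midday', 1.0), ('Stock A', 'close', 2.0)]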


4. Executor & Sub-workflow (Nested)

Executor is where the nested approach truly shines. It wraps a more granular intraday or high-frequency sub-strategy.

This sub-workflow can be as simple as scheduling trades evenly or as advanced as an RL policy that:

  1. Observes short-term price movement.
  2. Acts to minimize slippage and transaction cost.
  3. Receives reward signals from the environment (filled shares, average fill price vs. VWAP, etc.).
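A common reward for the third point is implementation shortfall relative to VWAP. A minimal sketch (not Qlib’s built-in reward classes) for a buy order:

# Minimal reward sketch: penalize paying more than the interval VWAP (for buys).
def execution_reward(filled_shares: float, avg_fill_price: float,
                     vwap: float, fee_rate: float = 0.0005) -> float:
    slippage_cost = (avg_fill_price - vwap) * filled_shares   # > 0 means worse than VWAP
    fees = avg_fill_price * filled_shares * fee_rate
    return -(slippage_cost + fees)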

5. Environment & Simulator

When applying Reinforcement Learning, Qlib uses an Environment wrapper:

  1. State: Intraday features (latest LOB data, partial fill stats).
  2. Action: The RL agent chooses to place a limit order, market order, or skip.
  3. Reward: Often the negative cost of trading or realized PnL improvement.

You can leverage Qlib’s built-in simulators or customize them for specific market microstructures.
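Concretely, the interaction follows a gym-style loop. The toy environment below is illustrative only (it is not Qlib’s simulator): the state is (remaining shares, steps left, mid price), the action is how many shares to send in the next interval, and the reward is the negative slippage cost.

import numpy as np

class ToyExecEnv:
    """Toy order-execution environment (illustrative only, not Qlib's simulator)."""

    def __init__(self, total_shares=1000, steps=10, seed=0):
        self.total_shares, self.steps = total_shares, steps
        self.rng = np.random.default_rng(seed)

    def reset(self):
        self.remaining, self.t, self.mid = self.total_shares, 0, 100.0
        return (self.remaining, self.steps - self.t, self.mid)

    def step(self, shares_to_trade):
        shares = min(shares_to_trade, self.remaining)
        impact = 0.001 * shares / self.total_shares        # toy market impact
        fill_price = self.mid * (1 + impact)
        reward = -(fill_price - self.mid) * shares         # negative slippage cost
        self.remaining -= shares
        self.mid += self.rng.normal(0, 0.05)               # mid-price random walk
        self.t += 1
        done = self.t >= self.steps or self.remaining <= 0
        return (self.remaining, self.steps - self.t, self.mid), reward, done

env = ToyExecEnv()
state, done, total_cost = env.reset(), False, 0.0
while not done:
    remaining, steps_left, _ = state
    action = remaining / max(steps_left, 1)                # naive TWAP-style policy
    state, reward, done = env.step(action)
    total_cost += -reward
print(f"toy episode slippage cost: {total_cost:.4f}")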


Example Workflow Snippets

Here’s a high-level script illustrating a daily + intraday nested setup. (Pseudocode for demonstration only.)

# daily_intraday_workflow.py

import qlib
from qlib.config import C
from qlib.data import D
from qlib.rl.order_execution_policy import RLOrderExecPolicy
from qlib.strategy.base import BaseStrategy


class DailyAlphaStrategy(BaseStrategy):
    """Generates daily-level decisions (which stocks to buy/sell)."""

    def generate_trade_decision(self, *args, **kwargs):
        # Imagine we have daily predictions from a model...
        scores = self.signal.get_signal()  # daily alpha scores
        # Then produce a dictionary {stock: weight or shares}
        decisions = compute_target_positions(scores)
        return decisions


class NestedExecutor:
    """Executor that calls an intraday RL sub-strategy for each daily decision."""

    def __init__(self, intraday_policy):
        self.intraday_policy = intraday_policy

    def execute_daily_decision(self, daily_decision):
        # Suppose daily_decision = {'AAPL': +100 shares, 'MSFT': +50 shares}
        # We'll break it into sub-orders via RL
        for stock, shares in daily_decision.items():
            # RL agent decides how to place those shares intraday
            self.intraday_policy.run_execution(stock, shares)


def main():
    qlib.init(provider_uri="your_data_path")  # local data or remote server

    daily_strategy = DailyAlphaStrategy(signal=your_daily_signal)
    intraday_policy = RLOrderExecPolicy()  # RL policy with QlibRL

    executor = NestedExecutor(intraday_policy=intraday_policy)

    # Hypothetical daily loop
    for date in trading_calendar:
        daily_decision = daily_strategy.generate_trade_decision()
        executor.execute_daily_decision(daily_decision)


if __name__ == "__main__":
    main()

Notes:

  • DailyAlphaStrategy uses a daily alpha model for stock scoring.
  • NestedExecutor calls RLOrderExecPolicy, which runs intraday steps.
  • Real code will handle position objects, trade calendars, and backtest frameworks in more detail.

Practical Tips for HFT + AI

  1. Data Freshness: HFT signals must be updated almost in real-time. Ensure your Qlib data pipeline is either streaming or as close to real-time as possible.
  2. Latency Considerations: Real HFT in production must address network latency and order routing. Qlib’s framework focuses on backtesting or simulation; integrating actual exchange connectivity is non-trivial.
  3. Overfitting & Market Regimes: Intraday data is often noisy; guard against overfitting your ML or RL models to fleeting patterns.
  4. Joint Optimization: Tweaking daily portfolio turnover and intraday execution in isolation can be suboptimal. Qlib’s nested design helps you see the whole chain’s PnL effect.
  5. Reinforcement Learning: Start simple (e.g., Q-learning or policy gradient) before moving to complex neural networks. Use carefully designed rewards capturing cost, fill rates, and profit.

Summary

By combining AI (supervised or RL models) with a Nested Decision Execution approach, Qlib lets you:

  • Unify Daily and Intraday strategies in a single backtest.
  • Leverage Real-time AI for micro-execution decisions.
  • Optimize both large-scale allocations and fine-grained order placements simultaneously.

This framework is especially powerful for High-Frequency Trading use cases, where multiple decision layers (portfolio vs. sub-second order slicing) must interact. Whether you’re using classical ML or advanced RL, Qlib streamlines experimentation and helps close the gap between daily trading and ultra-fast intraday execution.


Happy trading!

A Comprehensive Guide to Qlib’s Portfolio Strategy, TopkDropoutStrategy, and EnhancedIndexingStrategy

· 9 min read
Vadim Nicolai
Senior Software Engineer at Vitrifi

Introduction

In Qlib, portfolio strategies turn prediction scores into actionable orders (buy/sell) for building and rebalancing a portfolio. This article will:

  1. Explain the architecture of key strategy classes.
  2. Demonstrate TopkDropoutStrategy and EnhancedIndexingStrategy in detail.
  3. Present diagrams and code blocks illustrating the step-by-step flows.

By the end, you’ll see how to plug your own predictive model scores into these strategies and make them trade automatically.


Class Hierarchy

Here is how these classes inherit from one another:

  • BaseStrategy: Core abstraction; requires a method to generate a trade decision.
  • BaseSignalStrategy: Extends BaseStrategy with “signals” (model scores).
  • TopkDropoutStrategy: Buys the top-K scoring stocks and drops the worst ones.
  • WeightStrategyBase: Uses target weights (fractions of the portfolio) rather than discrete buy/sell.
  • EnhancedIndexingStrategy: Adds advanced risk modeling for partial index tracking.

High-Level Trading Flow for Top-K

Here’s a top-down look at a generic daily (or periodic) process once your predictions are ready: fetch the latest scores, rank the tradable universe, sell the weakest current holdings, and buy the highest-ranked names you don’t yet hold.
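A minimal sketch of wiring that flow together is shown below: a prediction frame becomes the strategy’s signal, and Qlib’s backtest loop calls generate_trade_decision each step. The prediction object and argument values are placeholders; check the exact backtest signature against your Qlib version.

from qlib.backtest import backtest
from qlib.contrib.strategy import TopkDropoutStrategy

# pred_df: model scores indexed by (datetime, instrument) -- placeholder object.
strategy = TopkDropoutStrategy(signal=pred_df, topk=50, n_drop=5)

portfolio_metrics, indicators = backtest(
    start_time="2020-01-01",
    end_time="2020-12-31",
    strategy=strategy,
    executor={
        "class": "SimulatorExecutor",
        "module_path": "qlib.backtest.executor",
        "kwargs": {"time_per_step": "day", "generate_portfolio_metrics": True},
    },
    account=100_000_000,
    benchmark="SH000300",
    exchange_kwargs={"deal_price": "close", "open_cost": 0.0005, "close_cost": 0.0015},
)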


Code Walkthrough

Below we break down the code for Qlib’s portfolio strategies into sections, each supplemented by additional flow diagrams relevant to that part of the code.

1. Imports and Setup

import os
import copy
import warnings
import numpy as np
import pandas as pd

from typing import Dict, List, Text, Tuple, Union
from abc import ABC

from qlib.data import D
from qlib.data.dataset import Dataset
from qlib.model.base import BaseModel
from qlib.strategy.base import BaseStrategy
from qlib.backtest.position import Position
from qlib.backtest.signal import Signal, create_signal_from
from qlib.backtest.decision import Order, OrderDir, TradeDecisionWO
from qlib.log import get_module_logger
from qlib.utils import get_pre_trading_date, load_dataset
from qlib.contrib.strategy.order_generator import OrderGenerator, OrderGenWOInteract
from qlib.contrib.strategy.optimizer import EnhancedIndexingOptimizer

Explanation

  • Core Python imports for numerical operations, data processing, and type hints.
  • Qlib-specific imports:
    • BaseStrategy, Position, Signal, and TradeDecisionWO for implementing custom strategies and managing trade decisions.
    • OrderGenerator and EnhancedIndexingOptimizer for generating orders from target weights and optimizing risk exposure.

2. BaseSignalStrategy

BaseSignalStrategy inherits from BaseStrategy and adds a signal field:

class BaseSignalStrategy(BaseStrategy, ABC):
    def __init__(
        self,
        *,
        signal: Union[Signal, Tuple[BaseModel, Dataset], List, Dict, Text, pd.Series, pd.DataFrame] = None,
        model=None,
        dataset=None,
        risk_degree: float = 0.95,
        trade_exchange=None,
        level_infra=None,
        common_infra=None,
        **kwargs,
    ):
        """
        Parameters
        -----------
        signal :
            Could be a Signal object or raw predictions from a model/dataset.
        risk_degree : float
            Fraction of total capital to invest (default 0.95).
        trade_exchange : Exchange
            Market info for dealing orders, generating reports, etc.
        """
        super().__init__(level_infra=level_infra, common_infra=common_infra, trade_exchange=trade_exchange, **kwargs)

        self.risk_degree = risk_degree

        # For backward-compatibility with (model, dataset)
        if model is not None and dataset is not None:
            warnings.warn("`model` `dataset` is deprecated; use `signal`.", DeprecationWarning)
            signal = model, dataset

        self.signal: Signal = create_signal_from(signal)

    def get_risk_degree(self, trade_step=None):
        """Return the fraction of total value to allocate."""
        return self.risk_degree

Key Points

  • BaseSignalStrategy extends BaseStrategy and integrates a concept of a signal (predictions).
  • risk_degree indicates what fraction of the portfolio’s capital is invested (defaults to 95%).
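In practice the signal argument is often just a pandas object of scores indexed by (datetime, instrument), which create_signal_from wraps for you. A minimal illustration with made-up values:

import pandas as pd

# Hypothetical prediction scores indexed by (datetime, instrument);
# any object of this shape can be passed as `signal=` to a signal strategy.
index = pd.MultiIndex.from_product(
    [pd.to_datetime(["2020-01-02"]), ["SH600000", "SH600519", "SZ000001"]],
    names=["datetime", "instrument"],
)
pred_score = pd.Series([0.012, 0.034, -0.005], index=index, name="score")
print(pred_score)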

3. TopkDropoutStrategy

The generate_trade_decision method in TopkDropoutStrategy sorts current holdings by score, identifies the “drop” stocks, and selects new buys:

class TopkDropoutStrategy(BaseSignalStrategy):
    def __init__(
        self,
        *,
        topk,
        n_drop,
        method_sell="bottom",
        method_buy="top",
        hold_thresh=1,
        only_tradable=False,
        forbid_all_trade_at_limit=True,
        **kwargs,
    ):
        """
        Parameters
        -----------
        topk : int
            Desired number of stocks to hold.
        n_drop : int
            Number of stocks replaced each rebalance.
        method_sell : str
            Approach to dropping existing stocks (e.g. 'bottom').
        method_buy : str
            Approach to adding new stocks (e.g. 'top').
        hold_thresh : int
            Must hold a stock for at least this many days before selling.
        only_tradable : bool
            Ignore non-tradable stocks.
        forbid_all_trade_at_limit : bool
            Disallow trades if limit up/down is reached.
        """
        super().__init__(**kwargs)
        self.topk = topk
        self.n_drop = n_drop
        self.method_sell = method_sell
        self.method_buy = method_buy
        self.hold_thresh = hold_thresh
        self.only_tradable = only_tradable
        self.forbid_all_trade_at_limit = forbid_all_trade_at_limit

    def generate_trade_decision(self, execute_result=None):
        trade_step = self.trade_calendar.get_trade_step()
        trade_start_time, trade_end_time = self.trade_calendar.get_step_time(trade_step)
        pred_start_time, pred_end_time = self.trade_calendar.get_step_time(trade_step, shift=1)
        pred_score = self.signal.get_signal(start_time=pred_start_time, end_time=pred_end_time)

        # If no score, do nothing
        if pred_score is None:
            return TradeDecisionWO([], self)

        # If multiple columns, pick the first
        if isinstance(pred_score, pd.DataFrame):
            pred_score = pred_score.iloc[:, 0]

        # Helper functions for picking top/bottom stocks...
        ...

        # Copy current position
        current_temp: Position = copy.deepcopy(self.trade_position)
        sell_order_list = []
        buy_order_list = []
        cash = current_temp.get_cash()
        current_stock_list = current_temp.get_stock_list()

        # Sort current holdings by descending score
        last = pred_score.reindex(current_stock_list).sort_values(ascending=False).index

        # Identify new stocks to buy
        ...

        # Figure out which existing stocks to sell
        ...

        # Create Sell Orders
        ...

        # Create Buy Orders
        ...

        return TradeDecisionWO(sell_order_list + buy_order_list, self)

Key Points

  • The “top-K, drop worst-K” concept is implemented by comparing current holdings to the broader universe of stocks sorted by score.
  • Some specifics:
    • method_sell can be "bottom", so you drop the lowest-scored holdings.
    • method_buy can be "top", so you pick the top new stocks that aren’t in the portfolio.

4. WeightStrategyBase

WeightStrategyBase converts target weights into final orders as follows:

class WeightStrategyBase(BaseSignalStrategy):
    def __init__(
        self,
        *,
        order_generator_cls_or_obj=OrderGenWOInteract,
        **kwargs,
    ):
        super().__init__(**kwargs)
        if isinstance(order_generator_cls_or_obj, type):
            self.order_generator: OrderGenerator = order_generator_cls_or_obj()
        else:
            self.order_generator: OrderGenerator = order_generator_cls_or_obj

    def generate_target_weight_position(self, score, current, trade_start_time, trade_end_time):
        """
        Subclasses must override this to return:
            {stock_id: target_weight}
        """
        raise NotImplementedError()

    def generate_trade_decision(self, execute_result=None):
        trade_step = self.trade_calendar.get_trade_step()
        trade_start_time, trade_end_time = self.trade_calendar.get_step_time(trade_step)
        pred_start_time, pred_end_time = self.trade_calendar.get_step_time(trade_step, shift=1)
        pred_score = self.signal.get_signal(start_time=pred_start_time, end_time=pred_end_time)
        if pred_score is None:
            return TradeDecisionWO([], self)

        current_temp = copy.deepcopy(self.trade_position)
        assert isinstance(current_temp, Position)

        # Let the subclass produce the weights
        target_weight_position = self.generate_target_weight_position(
            score=pred_score, current=current_temp, trade_start_time=trade_start_time, trade_end_time=trade_end_time
        )

        # Convert weights -> Orders
        order_list = self.order_generator.generate_order_list_from_target_weight_position(
            current=current_temp,
            trade_exchange=self.trade_exchange,
            risk_degree=self.get_risk_degree(trade_step),
            target_weight_position=target_weight_position,
            pred_start_time=pred_start_time,
            pred_end_time=pred_end_time,
            trade_start_time=trade_start_time,
            trade_end_time=trade_end_time,
        )
        return TradeDecisionWO(order_list, self)

Key Points

  • WeightStrategyBase uses a target-weight approach: you specify a final allocation for each stock.
  • The built-in order_generator calculates how many shares to buy/sell to achieve the target allocation.
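A minimal subclass only needs generate_target_weight_position. The sketch below equally weights the top-k scored stocks; it assumes score arrives as a pandas Series of predictions, and the topk parameter is an addition for illustration.

# Sketch: equal-weight the top-k scores each rebalance (illustrative subclass).
class TopkEqualWeightStrategy(WeightStrategyBase):
    def __init__(self, *, topk=50, **kwargs):
        super().__init__(**kwargs)
        self.topk = topk

    def generate_target_weight_position(self, score, current, trade_start_time, trade_end_time):
        # Assumes `score` is a pandas Series indexed by instrument.
        top = score.sort_values(ascending=False).head(self.topk)
        weight = 1.0 / len(top) if len(top) else 0.0
        return {stock: weight for stock in top.index}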

5. EnhancedIndexingStrategy

Lastly, EnhancedIndexingStrategy merges model scores with factor data and a benchmark:

class EnhancedIndexingStrategy(WeightStrategyBase):
    """
    Combines active and passive management, aiming to
    outperform a benchmark index while controlling tracking error.
    """

    FACTOR_EXP_NAME = "factor_exp.pkl"
    FACTOR_COV_NAME = "factor_cov.pkl"
    SPECIFIC_RISK_NAME = "specific_risk.pkl"
    BLACKLIST_NAME = "blacklist.pkl"

    def __init__(
        self,
        *,
        riskmodel_root,
        market="csi500",
        turn_limit=None,
        name_mapping={},
        optimizer_kwargs={},
        verbose=False,
        **kwargs,
    ):
        super().__init__(**kwargs)
        self.logger = get_module_logger("EnhancedIndexingStrategy")

        self.riskmodel_root = riskmodel_root
        self.market = market
        self.turn_limit = turn_limit

        self.factor_exp_path = name_mapping.get("factor_exp", self.FACTOR_EXP_NAME)
        self.factor_cov_path = name_mapping.get("factor_cov", self.FACTOR_COV_NAME)
        self.specific_risk_path = name_mapping.get("specific_risk", self.SPECIFIC_RISK_NAME)
        self.blacklist_path = name_mapping.get("blacklist", self.BLACKLIST_NAME)

        self.optimizer = EnhancedIndexingOptimizer(**optimizer_kwargs)
        self.verbose = verbose
        self._riskdata_cache = {}

    def get_risk_data(self, date):
        if date in self._riskdata_cache:
            return self._riskdata_cache[date]

        root = self.riskmodel_root + "/" + date.strftime("%Y%m%d")
        if not os.path.exists(root):
            return None

        factor_exp = load_dataset(root + "/" + self.factor_exp_path, index_col=[0])
        factor_cov = load_dataset(root + "/" + self.factor_cov_path, index_col=[0])
        specific_risk = load_dataset(root + "/" + self.specific_risk_path, index_col=[0])

        if not factor_exp.index.equals(specific_risk.index):
            specific_risk = specific_risk.reindex(factor_exp.index, fill_value=specific_risk.max())

        universe = factor_exp.index.tolist()
        blacklist = []
        if os.path.exists(root + "/" + self.blacklist_path):
            blacklist = load_dataset(root + "/" + self.blacklist_path).index.tolist()

        self._riskdata_cache[date] = factor_exp.values, factor_cov.values, specific_risk.values, universe, blacklist
        return self._riskdata_cache[date]

    def generate_target_weight_position(self, score, current, trade_start_time, trade_end_time):
        trade_date = trade_start_time
        pre_date = get_pre_trading_date(trade_date, future=True)

        outs = self.get_risk_data(pre_date)
        if outs is None:
            self.logger.warning(f"No risk data for {pre_date:%Y-%m-%d}, skipping optimization")
            return None

        factor_exp, factor_cov, specific_risk, universe, blacklist = outs

        # Align score with risk model universe
        score = score.reindex(universe).fillna(score.min()).values

        # Current portfolio weights
        cur_weight = current.get_stock_weight_dict(only_stock=False)
        cur_weight = np.array([cur_weight.get(stock, 0) for stock in universe])
        cur_weight = cur_weight / self.get_risk_degree(trade_date)

        # Benchmark weight
        bench_weight = D.features(
            D.instruments("all"), [f"${self.market}_weight"], start_time=pre_date, end_time=pre_date
        ).squeeze()
        bench_weight.index = bench_weight.index.droplevel(level="datetime")
        bench_weight = bench_weight.reindex(universe).fillna(0).values

        # Track which stocks are tradable and which are blacklisted
        tradable = D.features(D.instruments("all"), ["$volume"], start_time=pre_date, end_time=pre_date).squeeze()
        tradable.index = tradable.index.droplevel(level="datetime")
        tradable = tradable.reindex(universe).gt(0).values
        mask_force_hold = ~tradable
        mask_force_sell = np.array([stock in blacklist for stock in universe], dtype=bool)

        # Optimize based on scores + factor model
        weight = self.optimizer(
            r=score,
            F=factor_exp,
            cov_b=factor_cov,
            var_u=specific_risk**2,
            w0=cur_weight,
            wb=bench_weight,
            mfh=mask_force_hold,
            mfs=mask_force_sell,
        )

        target_weight_position = {stock: w for stock, w in zip(universe, weight) if w > 0}

        if self.verbose:
            self.logger.info(f"trade date: {trade_date:%Y-%m-%d}")
            self.logger.info(f"number of holding stocks: {len(target_weight_position)}")
            self.logger.info(f"total holding weight: {weight.sum():.6f}")

        return target_weight_position

Key Points

  • Uses riskmodel_root to pull factor exposures, covariances, and specific risk estimates.
  • Combines your model scores with a benchmark weight to control tracking error via an optimizer.
  • Produces a final weight map, which Qlib then converts to buy/sell orders.
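Usage is mostly a matter of pointing the strategy at your risk-model directory. In the sketch below the prediction object and paths are placeholders, and each dated folder under riskmodel_root is expected to contain the factor exposure, factor covariance, specific risk, and optional blacklist files named above.

from qlib.contrib.strategy import EnhancedIndexingStrategy

# Placeholder signal and paths; each <riskmodel_root>/<YYYYMMDD>/ folder should hold
# factor_exp.pkl, factor_cov.pkl, specific_risk.pkl and (optionally) blacklist.pkl.
strategy = EnhancedIndexingStrategy(
    signal=pred_df,                # model scores, as with the other signal strategies
    riskmodel_root="./riskdata",
    market="csi500",
    optimizer_kwargs={},           # tune EnhancedIndexingOptimizer here if needed
    verbose=True,
)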

Summary

  • BaseSignalStrategy attaches prediction data to a strategy.
  • TopkDropoutStrategy implements a straightforward “buy top-K, drop worst-K” approach.
  • WeightStrategyBase generalizes weight-based rebalancing.
  • EnhancedIndexingStrategy is a powerful extension, combining active signals and passive indexing with risk control.

By customizing just a few methods or parameters, you can adapt these strategies to your own investing style. Simply feed your daily scores (prediction of returns) into Qlib, pick a strategy class, and let Qlib do the rest.

Happy Trading!

Understanding Score IC in Qlib for Enhanced Profit

· 6 min read
Vadim Nicolai
Senior Software Engineer at Vitrifi

Introduction

One of the core ideas in quantitative finance is that model predictions—often called “scores”—can be mapped to expected returns on an instrument. In Qlib, these scores are evaluated using metrics like the Information Coefficient (IC) and Rank IC to show how well the scores predict future returns. Essentially, the higher the score, the higher the expected return of the instrument: if your IC is positive and statistically significant, the highest-scored stocks should, on average, outperform the lower-scored ones.

Powering Quant Finance with Qlib’s PyTorch MLP on Alpha360

· 5 min read
Vadim Nicolai
Senior Software Engineer at Vitrifi

Introduction

Qlib is an AI-oriented, open-source platform from Microsoft that simplifies the entire quantitative finance process. By leveraging PyTorch, Qlib can seamlessly integrate modern neural networks—like Multi-Layer Perceptrons (MLPs)—to process large datasets, engineer alpha factors, and run flexible backtests. In this post, we focus on a PyTorch MLP pipeline for Alpha360 data in the US market, examining a single YAML configuration that unifies data ingestion, model training, and performance evaluation.

Adaptive Deep Learning in Quant Finance with Qlib’s PyTorch AdaRNN

· 6 min read
Vadim Nicolai
Senior Software Engineer at Vitrifi

Introduction

AdaRNN is a specialized PyTorch model designed to adaptively learn from non-stationary financial time series—where market distributions evolve over time. Originally proposed in the paper AdaRNN: Adaptive Learning and Forecasting for Time Series, it leverages both GRU layers and transfer-loss techniques to mitigate the effects of distributional shift. This article demonstrates how AdaRNN can be applied within Microsoft’s Qlib—an open-source, AI-oriented platform for quantitative finance.

Harnessing AI for Quantitative Finance with Qlib and LightGBM

· 6 min read
Vadim Nicolai
Senior Software Engineer at Vitrifi

Introduction

In the realm of quantitative finance, machine learning and deep learning are revolutionizing how researchers and traders discover alpha, manage portfolios, and adapt to market shifts. Qlib by Microsoft is a powerful open-source framework that merges AI techniques with end-to-end finance workflows.

This article demonstrates how Qlib automates an AI-driven quant workflow—from data ingestion and feature engineering to model training and backtesting—using a single YAML configuration for a LightGBM model. Specifically, we’ll explore the AI-centric aspects of how qrun orchestrates the entire pipeline and highlight best practices for leveraging advanced ML models in your quantitative strategies.

Correct Exchange Mapping in VeighNa to Resolve IB Security Definition Errors

· 14 min read
Vadim Nicolai
Senior Software Engineer at Vitrifi

Introduction

In the intricate world of algorithmic trading, seamless integration between trading platforms and broker APIs is paramount.

One common issue when interfacing with Interactive Brokers (IB) API is encountering the error:

ERROR:root:Error - ReqId: 1, Code: 200, Message: No security definition has been found for the request

This error typically arises due to incorrect exchange mapping, preventing Interactive Brokers (IB) from recognizing the requested security. This article delves into the importance of accurate exchange mapping within the VeighNa trading platform, provides a detailed overview of IB's symbol rules, explains the updatePortfolio method, and offers guidance on implementing correct mappings to avoid such errors.

Understanding the Sniper Algorithm Implementation in Algorithmic Trading

· 8 min read
Vadim Nicolai
Senior Software Engineer at Vitrifi

Introduction

In the realm of algorithmic trading, execution algorithms play a pivotal role in optimizing trade orders to minimize market impact and slippage. One such algorithm is the Sniper Algorithm, which is designed to execute trades discreetly and efficiently by capitalizing on favorable market conditions.

This article aims to review and understand the implementation of the Sniper Algorithm as provided in the VeighNa trading platform's open-source repository. By dissecting the code and explaining its components, we hope to provide clarity on how the algorithm functions and how it can be utilized in practical trading scenarios.

Backtesting NVIDIA Stock Strategies on VeighNa - Moving Average Crossover Strategy

· 15 min read
Vadim Nicolai
Senior Software Engineer at Vitrifi

Introduction

Backtesting is essential for validating trading strategies, especially in the high-frequency and volatile world of stocks like NVIDIA (NVDA). Using VeighNa, an open-source algorithmic trading system, provides traders with the flexibility to thoroughly test strategies and optimize for performance. In this guide, we'll walk through setting up VeighNa, backtesting a simple Moving Average Crossover strategy on NVIDIA, explaining the strategy in detail, troubleshooting common installation issues, and optimizing your strategy.

Automating Financial Data Collection and Uploading to Hugging Face for Algorithmic Trading

· 6 min read
Vadim Nicolai
Senior Software Engineer at Vitrifi

Introduction

In the fast-paced world of algorithmic trading, accessing reliable and timely financial data is essential for backtesting strategies, optimizing models, and making data-driven trading decisions. Automating data collection can streamline your workflow and ensure that you have access to the most recent market information. In this guide, we’ll walk through how to automate the collection of stock data using Python and yfinance, and how to upload this data to Hugging Face for convenient access and future use.

Although this article uses NVIDIA stock data as an example, the process is applicable to any publicly traded company or financial instrument. By integrating data collection and storage into one automated pipeline, traders and analysts can focus on what matters most—developing strategies and maximizing returns.