Automatic formulaic alpha generation with reinforcement learning.
Our paper, *Generating Synergistic Formulaic Alpha Collections via Reinforcement Learning*, has been accepted by KDD 2023, Applied Data Science (ADS) track; more info TBD.
Preprint available on arXiv.
Note that you can either use our built-in alpha calculation pipeline (see Choice 1), or implement an adapter to your own pipeline (see Choice 2).
Choice 1: the built-in pipeline requires the Qlib library and locally stored stock data.
- We need some of the metadata (but not the actual stock price/volume data) provided by Qlib, so please follow Qlib's data preparation process first.
- The actual stock data we use are retrieved from baostock, due to concerns about the timeliness and accuracy of the data source used by Qlib.
- The data can be downloaded by running the script `data_collection/fetch_baostock_data.py`. The newly downloaded data is saved to `~/.qlib/qlib_data/cn_data_baostock_fwdadj` by default. This path can be customized to fit your specific needs, but make sure to use the correct path when loading the data: in `alphagen_qlib/stock_data.py`, function `StockData._init_qlib`, the path should be passed to Qlib via `qlib.init(provider_uri=path)`.
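A minimal sketch of that initialization, assuming the default download path (adjust `provider_uri` if you customized it):

```python
import qlib

# Default location written by data_collection/fetch_baostock_data.py;
# change this if you saved the data elsewhere.
DATA_PATH = "~/.qlib/qlib_data/cn_data_baostock_fwdadj"

def init_qlib(provider_uri: str = DATA_PATH) -> None:
    # Qlib must be initialized with the data path before any stock data is loaded.
    qlib.init(provider_uri=provider_uri, region="cn")

init_qlib()
```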
Choice 2: if you have a better implementation of alpha calculation, you can implement an adapter to `alphagen.data.calculator.AlphaCalculator`. The interface is defined as follows:
```python
from abc import ABCMeta, abstractmethod
from typing import List

from alphagen.data.expression import Expression


class AlphaCalculator(metaclass=ABCMeta):
    @abstractmethod
    def calc_single_IC_ret(self, expr: Expression) -> float:
        'Calculate the IC between a single alpha and a predefined target.'

    @abstractmethod
    def calc_mutual_IC(self, expr1: Expression, expr2: Expression) -> float:
        'Calculate the IC between two alphas.'

    @abstractmethod
    def calc_pool_IC_ret(self, exprs: List[Expression], weights: List[float]) -> float:
        'Combine the alphas linearly, then calculate the IC between the combination and a predefined target.'

    @abstractmethod
    def calc_pool_rIC_ret(self, exprs: List[Expression], weights: List[float]) -> float:
        'Combine the alphas linearly, then calculate the Rank IC between the combination and a predefined target.'
```
Reminder: the values evaluated from different alphas may have drastically different scales; we recommend normalizing them before combination.
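To make this concrete, here is a hedged sketch of such an adapter. Everything beyond the `AlphaCalculator` interface itself is illustrative: the `evaluate` callable (mapping an `Expression` to a days × stocks value matrix) stands in for your own pipeline, and NaN handling is kept minimal.

```python
from typing import Callable, List

import numpy as np
from scipy.stats import rankdata

from alphagen.data.calculator import AlphaCalculator
from alphagen.data.expression import Expression


def zscore(values: np.ndarray) -> np.ndarray:
    # Normalize each day's cross-section so all alphas share a common scale.
    mean = np.nanmean(values, axis=1, keepdims=True)
    std = np.nanstd(values, axis=1, keepdims=True)
    return (values - mean) / (std + 1e-8)


def mean_daily_ic(a: np.ndarray, b: np.ndarray) -> float:
    # Average per-day cross-sectional Pearson correlation (rows are days).
    ics = []
    for x, y in zip(a, b):
        mask = ~(np.isnan(x) | np.isnan(y))
        if mask.sum() > 1:
            ics.append(np.corrcoef(x[mask], y[mask])[0, 1])
    return float(np.mean(ics))


class NumpyCalculator(AlphaCalculator):
    """Hypothetical adapter that delegates alpha evaluation to your own pipeline."""

    def __init__(self, target: np.ndarray,
                 evaluate: Callable[[Expression], np.ndarray]):
        self._target = target      # days x stocks matrix of the prediction target
        self._evaluate = evaluate  # maps an expression to its days x stocks values

    def calc_single_IC_ret(self, expr: Expression) -> float:
        return mean_daily_ic(self._evaluate(expr), self._target)

    def calc_mutual_IC(self, expr1: Expression, expr2: Expression) -> float:
        return mean_daily_ic(self._evaluate(expr1), self._evaluate(expr2))

    def _combine(self, exprs: List[Expression], weights: List[float]) -> np.ndarray:
        # Normalize each alpha before the linear combination (see the reminder above).
        return sum(w * zscore(self._evaluate(e)) for w, e in zip(weights, exprs))

    def calc_pool_IC_ret(self, exprs: List[Expression], weights: List[float]) -> float:
        return mean_daily_ic(self._combine(exprs, weights), self._target)

    def calc_pool_rIC_ret(self, exprs: List[Expression], weights: List[float]) -> float:
        # Rank IC: correlate per-day ranks instead of raw values.
        combined = np.apply_along_axis(rankdata, 1, self._combine(exprs, weights))
        target = np.apply_along_axis(rankdata, 1, self._target)
        return mean_daily_ic(combined, target)
```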
All principal components of our experiment are located in `train_maskable_ppo.py`.
These parameters may help you build an `AlphaCalculator`:
- instruments (Set of instruments)
- start_time & end_time (Data range for each dataset)
- target (Target stock trend, e.g., 20d return rate)
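For example, a 20-day return target can be written in alphagen's expression DSL; the `Feature`/`FeatureType`/`Ref` names below are assumed to live in `alphagen.data.expression`, so verify them against your checkout:

```python
from alphagen.data.expression import Feature, FeatureType, Ref

close = Feature(FeatureType.CLOSE)
# 20-day forward return rate: close(t + 20) / close(t) - 1.
# Ref(expr, -n) references the expression's value n days in the future.
target = Ref(close, -20) / close - 1
```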
These parameters will define an RL run:
- pool_capacity (Size of combination model)
- steps (Limit of RL steps)
- batch_size (PPO batch size)
- features_extractor_kwargs (Arguments for LSTM shared net)
- seed (Random seed)
- device (PyTorch device)
- save_path (Path for checkpoints)
- tensorboard_log (Path for TensorBoard)
Simply run `train_maskable_ppo.py`, or DIY if you understand our code well; a rough sketch of the wiring is shown below.
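This sketch shows how the parameters above might plug into sb3-contrib's `MaskablePPO`. The pool and environment classes are assumptions based on the repository layout, and `calculator` refers to the `AlphaCalculator` built earlier; the authoritative wiring lives in `train_maskable_ppo.py`.

```python
from sb3_contrib import MaskablePPO

from alphagen.models.alpha_pool import AlphaPool  # assumed module path
from alphagen.rl.env.wrapper import AlphaEnv      # assumed module path

# The values below mirror the parameter list above.
pool = AlphaPool(capacity=10, calculator=calculator)  # pool_capacity
env = AlphaEnv(pool=pool, device="cuda:0")            # device

model = MaskablePPO(
    "MlpPolicy",
    env,
    batch_size=128,           # batch_size
    seed=0,                   # seed
    device="cuda:0",          # device
    tensorboard_log="./tb",   # tensorboard_log
)
model.learn(total_timesteps=250_000)  # steps
```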
- Model checkpoints and alpha pools are located in `save_path`;
  - The model is compatible with stable-baselines3;
  - Alpha pools are formatted as human-readable JSON.
- TensorBoard logs are located in `tensorboard_log`.
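For instance, a finished run can be inspected like this (the file names below are hypothetical; check what your run actually wrote under `save_path`):

```python
import json

from sb3_contrib import MaskablePPO

# Hypothetical checkpoint names; substitute the files your run produced.
model = MaskablePPO.load("checkpoints/run_10240_steps")
with open("checkpoints/run_10240_steps_pool.json") as f:
    pool = json.load(f)  # human-readable alpha pool (expressions and weights)
print(pool)
```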
gplearn implements genetic programming, a commonly used method for symbolic regression. We maintain a modified version of gplearn to make it compatible with our task. The corresponding experiment script is `gp.py`.
DSO is a mature deep learning framework for symbolic optimization tasks. We maintain a minimal version of DSO to make it compatible with our task. The corresponding experiment script is `dso.py`.
- `/alphagen` contains the basic data structures and the essential modules for starting an alpha mining pipeline;
- `/alphagen_qlib` contains the Qlib-specific APIs for data preparation;
- `/alphagen_generic` contains data structures and utils designed for our baselines, which basically follow the gplearn APIs but are modified for our quant pipeline;
- `/gplearn` and `/dso` contain modified versions of our baselines.
We implemented some trading strategies based on Qlib. See `backtest.py` and `trade_decision.py` for demos.
TBD
Feel free to submit issues or pull requests.
This work is maintained by the MLDM research group, IIP, ICT, CAS.
Maintainers include:
Thanks to the following contributors: