Source code for NeurIPS 2019 paper "Learning Latent Process from High-Dimensional Event Sequences via Efficient Sampling"
- Python 3.5
- PyTorch 1.0.1
- GPUs with 12GB memory
- Use `data_generate.py` to generate the synthetic datasets.
- The Memetracker dataset can be downloaded from: https://snap.stanford.edu/data/memetracker9.html
- The Weibo dataset can be downloaded from: https://www.aminer.cn/influencelocality
- Many thanks to the authors of the real-world datasets.
- Use `data/#DATA#/preprocess.py` to preprocess the downloaded dataset; this produces the `.pkl` files in each folder.
- To check the statistics of each dataset, run `cck.py` in the corresponding folder; you can also add your own code there to inspect the dataset (a minimal sketch of such a check is shown after this list).
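As an illustration of the kind of check `cck.py` performs, the sketch below loads a preprocessed dataset and prints a few basic statistics. The path `data/memetracker/train.pkl` and the assumption that each sequence is a list of `(event_id, timestamp)` pairs are placeholders; adapt them to the `.pkl` layout actually produced by `preprocess.py`.

```python
# Minimal sketch: inspect a preprocessed .pkl dataset.
# The path and the (event_id, timestamp) sequence format are assumptions;
# adjust them to the structure actually written by preprocess.py.
import pickle

with open("data/memetracker/train.pkl", "rb") as f:  # hypothetical path
    sequences = pickle.load(f)

num_seqs = len(sequences)
num_events = sum(len(seq) for seq in sequences)
event_types = {event_id for seq in sequences for event_id, _ in seq}

print("sequences      :", num_seqs)
print("events         :", num_events)
print("avg. seq length:", num_events / max(num_seqs, 1))
print("event types    :", len(event_types))
```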
To train on small datasets (Syn-Small and Memetracker), you can run `python train_small.py`.
To train on large datasets (Syn-Large and Weibo), you can run `python train_large.py`.
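Before launching either training script, it may help to confirm that the environment matches the requirements listed at the top of this README (Python 3.5, PyTorch 1.0.1, a GPU with 12GB memory). A minimal check using standard PyTorch calls:

```python
# Minimal environment check against the stated requirements.
import sys
import torch

print("Python :", sys.version.split()[0])   # expected: 3.5.x
print("PyTorch:", torch.__version__)        # expected: 1.0.1

if torch.cuda.is_available():
    props = torch.cuda.get_device_properties(0)
    # expected: roughly 12 GB of device memory
    print("GPU    : %s, %.1f GB" % (props.name, props.total_memory / 1024 ** 3))
else:
    print("No CUDA device found; training expects a GPU with 12GB memory.")
```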
We also released our pre-trained model parameters for each dataset in the `/model` folder. For a quick test, run `python test.py`.
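The released parameters are presumably saved with PyTorch, so a quick manual look at a checkpoint can be done with `torch.load`; the sketch below only illustrates that pattern. The checkpoint path and the commented-out model class name are placeholders, and `test.py` already handles loading correctly for the files in `/model`.

```python
# Sketch of inspecting a released checkpoint; test.py already handles this
# for the files in /model, so this is only for manual exploration.
import torch

# Hypothetical file name; use one of the actual checkpoints in /model.
checkpoint = torch.load("model/memetracker.pkl", map_location="cpu")

# If the checkpoint is a state_dict, it can be loaded into the model class
# defined in this repository (the class name below is a placeholder):
#   model = LANTERN(**config)
#   model.load_state_dict(checkpoint)
#   model.eval()
print(type(checkpoint))
```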
If you have any problems with this code, feel free to contact [email protected]. If you use this code as part of your research, please cite the following paper:
@inproceedings{LANTERN-19,
  author    = {Qitian Wu and Zixuan Zhang and Xiaofeng Gao and Junchi Yan and Guihai Chen},
  title     = {Learning Latent Process from High-Dimensional Event Sequences via Efficient Sampling},
  booktitle = {Thirty-third Conference on Neural Information Processing Systems, {NeurIPS} 2019, Vancouver, Canada, Dec 8-14, 2019},
  year      = {2019}
}