Cornac is a comparative framework for multimodal recommender systems. It focuses on making it convenient to work with models that leverage auxiliary data (e.g., item descriptive text and images, social networks). Cornac enables fast experiments and straightforward implementations of new models, and it is highly compatible with existing machine learning libraries (e.g., TensorFlow, PyTorch).
Website | Documentation | Tutorials | Examples | Models | Datasets | Preferred.AI
Cornac supports Python 3. There are several ways to install it:
- From PyPI (you may need a C++ compiler):
pip3 install cornac
- From Anaconda:
conda install cornac -c conda-forge
- From the GitHub source (for latest updates):
pip3 install Cython
git clone https://github.com/PreferredAI/cornac.git
cd cornac
python3 setup.py install
Note:
Additional dependencies required by models are listed here.
Some algorithm implementations use OpenMP to support multi-threading. To run those algorithms efficiently on macOS, you might need to install gcc from Homebrew to get an OpenMP-capable compiler:
brew install gcc && brew link gcc
If you want to utilize your GPUs, you might consider:
- TensorFlow installation instructions.
- PyTorch installation instructions.
- cuDNN (for Nvidia GPUs).
Flow of an Experiment in Cornac
Load the built-in MovieLens 100K dataset (it will be downloaded if not cached); each record is a (user, item, rating) tuple:
import cornac
ml_100k = cornac.datasets.movielens.load_feedback(variant="100K")
Split the data into train and test sets based on a ratio:
rs = cornac.eval_methods.RatioSplit(data=ml_100k, test_size=0.2, rating_threshold=4.0, seed=123)
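Conceptually, a ratio-based split shuffles the feedback and holds out a fraction of it for testing, while rating_threshold marks which ratings count as positive for ranking metrics. A minimal standard-library sketch of the idea (illustrative only, not Cornac's actual implementation):

```python
import random

def ratio_split(data, test_size=0.2, seed=123):
    """Shuffle (user, item, rating) triples and hold out a test fraction."""
    rng = random.Random(seed)
    shuffled = data[:]  # copy so the original list is untouched
    rng.shuffle(shuffled)
    n_test = int(len(shuffled) * test_size)
    # earlier part trains the model, held-out part evaluates it
    return shuffled[n_test:], shuffled[:n_test]

feedback = [("u1", "i1", 5.0), ("u1", "i2", 3.0), ("u2", "i1", 4.0),
            ("u2", "i3", 2.0), ("u3", "i2", 4.0)]
train, test = ratio_split(feedback, test_size=0.2, seed=123)
print(len(train), len(test))  # 4 1
```

Fixing the seed, as in the Cornac snippet above, makes the split (and thus the reported numbers) reproducible across runs.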
Here we are comparing Biased MF, PMF, and BPR:
mf = cornac.models.MF(k=10, max_iter=25, learning_rate=0.01, lambda_reg=0.02, use_bias=True, seed=123)
pmf = cornac.models.PMF(k=10, max_iter=100, learning_rate=0.001, lambda_reg=0.001, seed=123)
bpr = cornac.models.BPR(k=10, max_iter=200, learning_rate=0.001, lambda_reg=0.01, seed=123)
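Unlike MF and PMF, which fit observed rating values, BPR optimizes a pairwise ranking objective: for each user, it pushes the score of an interacted item above that of a non-interacted one. A hedged sketch of the per-triple loss, -log sigmoid(score_pos - score_neg) (illustrative, not Cornac's code):

```python
import math

def bpr_loss(score_pos, score_neg):
    """BPR loss for one (user, positive item, negative item) triple."""
    margin = score_pos - score_neg
    return -math.log(1.0 / (1.0 + math.exp(-margin)))

# the loss shrinks as the positive item is scored above the negative one
print(bpr_loss(2.0, 0.0) < bpr_loss(0.0, 2.0))  # True
```

Because BPR only learns relative item orderings, it cannot predict rating values, which is why rating metrics do not apply to it.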
Define metrics used to evaluate the models:
mae = cornac.metrics.MAE()
rmse = cornac.metrics.RMSE()
recall = cornac.metrics.Recall(k=[10, 20])
ndcg = cornac.metrics.NDCG(k=[10, 20])
auc = cornac.metrics.AUC()
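For intuition, Recall@k is the fraction of a user's relevant test items that appear in the model's top-k recommendations. A minimal illustrative sketch (names are made up for the example):

```python
def recall_at_k(ranked_items, relevant_items, k):
    """Fraction of relevant items that appear in the top-k of the ranking."""
    top_k = set(ranked_items[:k])
    hits = len(top_k & set(relevant_items))
    return hits / len(relevant_items)

ranked = ["i3", "i1", "i7", "i2", "i9"]  # model's ranking for one user
relevant = {"i1", "i2", "i5"}            # user's positive test items
print(recall_at_k(ranked, relevant, k=2))  # 1 of 3 relevant items in top-2
```

Passing k=[10, 20] to a Cornac metric, as above, evaluates it at both cutoffs in one experiment.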
Put everything together into an experiment and run it:
cornac.Experiment(eval_method=rs,
models=[mf, pmf, bpr],
metrics=[mae, rmse, recall, ndcg, auc],
user_based=True).run()
Output:
| | MAE | RMSE | AUC | NDCG@10 | NDCG@20 | Recall@10 | Recall@20 | Train (s) | Test (s) |
|---|---|---|---|---|---|---|---|---|---|
| MF | 0.7430 | 0.8998 | 0.7445 | 0.0479 | 0.0556 | 0.0352 | 0.0654 | 0.13 | 1.57 |
| PMF | 0.7534 | 0.9138 | 0.7744 | 0.0617 | 0.0719 | 0.0479 | 0.0880 | 2.18 | 1.64 |
| BPR | N/A | N/A | 0.8695 | 0.0975 | 0.1129 | 0.0891 | 0.1449 | 3.74 | 1.49 |

(BPR is a ranking model that does not predict rating values, hence the N/A entries for the rating metrics MAE and RMSE.)
For more details, please take a look at our examples.
The recommender models supported by Cornac are listed below. We welcome contributions that lengthen the list!
Your contributions at any level of the library are welcome. If you intend to contribute, please:
- Fork the Cornac repository to your own account.
- Make changes and create pull requests.
You can also post bug reports and feature requests in GitHub issues.