H&M Personalized Fashion Recommendations

Kaggle H&M Personalized Fashion Recommendations πŸ₯ˆ Silver Medal Solution 45/3006


This repo contains our final solution. Big shout out to my wonderful teammates! @zhouyuanzhe @tarickMorty @Thomasyyj @ChenmienTan

Our team ranked 45/3006, with a public leaderboard (LB) score of 0.0292 and a private leaderboard (PB) score of 0.02996.

Our final solution combines two recall strategies, and for each strategy we trained three ranking models: an LGB ranker, an LGB classifier, and a DNN.

Candidates from the two strategies are quite different, so ensembling the ranking results improves the score: in our experiments, the LB score of a single recall strategy reached only 0.0286, while ensembling boosted it to 0.0292. We also believe ensembling makes our predictions more robust.
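To make the ensembling step concrete, here is a minimal sketch of blending per-(customer, article) scores from several ranking models and keeping the top 12 articles per customer. The column names, the percentile-rank normalization (which makes scores from a ranker, a classifier, and a DNN comparable), and the weights are illustrative assumptions, not our exact scheme:

```python
# Sketch of score-level ensembling across ranking models.
# Column names, rank normalization, and weights are assumptions.
import pandas as pd

def blend(model_outputs, weights):
    """model_outputs: list of DataFrames with columns
    ['customer_id', 'article_id', 'score']; weights: one float per model."""
    blended = None
    for df, w in zip(model_outputs, weights):
        # Scores from different model types live on different scales,
        # so convert them to per-customer percentile ranks first.
        r = df.copy()
        r["rank"] = r.groupby("customer_id")["score"].rank(pct=True) * w
        part = r.set_index(["customer_id", "article_id"])["rank"]
        blended = part if blended is None else blended.add(part, fill_value=0.0)

    result = blended.rename("blended").reset_index()
    # The competition metric is MAP@12, so keep 12 articles per customer.
    return (
        result.sort_values("blended", ascending=False)
        .groupby("customer_id")
        .head(12)
    )
```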

Due to hardware limits (50 GB of RAM), we generated only about 50 candidates per user on average and used 4 weeks of data to train the models.
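For concreteness, the snippet below shows those two resource-saving choices in code. The file and column names come from the Kaggle dataset; the `retrieval_score` column, the capping logic, and the assumption that the 4-week window means the most recent transactions are illustrative, not lifted from our notebooks:

```python
# Sketch of the two memory-saving choices described above.
import pandas as pd

transactions = pd.read_csv(
    "data/raw/transactions_train.csv", parse_dates=["t_dat"]
)

# Train only on the most recent 4 weeks of transactions (assumption).
cutoff = transactions["t_dat"].max() - pd.Timedelta(weeks=4)
train_tx = transactions[transactions["t_dat"] > cutoff]

def cap_candidates(candidates: pd.DataFrame, k: int = 50) -> pd.DataFrame:
    """Keep at most the k best-scored candidate articles per customer."""
    return (
        candidates.sort_values("retrieval_score", ascending=False)
        .groupby("customer_id")
        .head(k)
    )
```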

Usage:

  1. Clone this repo.
  2. Create the data folders in the structure shown below (see the sketch after this list) and copy the four .csv files from the original Kaggle competition dataset into data/raw/.
  3. Pre-trained embeddings can be generated with this notebook, or downloaded directly via the links below and placed in data/external/.
  4. Run the Jupyter notebooks in notebooks/. Note that the features used by all models are generated in the Feature Engineering part of LGB Recall 1.ipynb, so make sure you run it first.
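For step 2, one way to create the folder layout (a sketch, assuming you run it from the repo root):

```python
# Create the data folders expected by the notebooks. The four raw CSVs
# from Kaggle are articles.csv, customers.csv, sample_submission.csv,
# and transactions_train.csv; copy them into data/raw/.
from pathlib import Path

for sub in ("raw", "external", "interim", "processed"):
    Path("data", sub).mkdir(parents=True, exist_ok=True)
```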

Google Drive Links to Pre-trained Embeddings

Project Organization

β”œβ”€β”€ LICENSE
β”œβ”€β”€ README.md
β”œβ”€β”€ data
β”‚   β”œβ”€β”€ external       <- External data sources, e.g. pre-trained article/customer embeddings.
β”‚   β”œβ”€β”€ interim        <- Intermediate data that has been transformed, e.g. candidates generated from recall strategies.
β”‚   β”œβ”€β”€ processed      <- Processed data for training, e.g. dataframes merged with generated features.
β”‚   └── raw            <- The original dataset.
β”‚
β”œβ”€β”€ docs               <- Sphinx docstring documentation.
β”‚
β”œβ”€β”€ models             <- Trained and serialized models.
β”‚
β”œβ”€β”€ notebooks          <- Jupyter notebooks.
β”‚
└── src                <- Source code for use in this project.
    β”œβ”€β”€ __init__.py    <- Makes src a Python module.
    β”‚
    β”œβ”€β”€ data           <- Scripts to preprocess data.
    β”‚   β”œβ”€β”€ datahelper.py
    β”‚   └── metrics.py
    β”‚
    β”œβ”€β”€ features       <- Feature engineering scripts.
    β”‚   └── base_features.py
    β”‚
    └── retrieval      <- Scripts to generate candidate articles for the ranking models.
        β”œβ”€β”€ collector.py
        └── rules.py
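The competition is scored with MAP@12, which src/data/metrics.py presumably implements. For reference, a minimal generic version (not necessarily the repo's actual code):

```python
# Generic MAP@12, the competition's evaluation metric. A reference
# sketch, not necessarily what src/data/metrics.py contains.
def apk(actual, predicted, k=12):
    """Average precision at k for one customer."""
    if not actual:
        return 0.0
    hits, score = 0, 0.0
    for i, p in enumerate(predicted[:k]):
        if p in actual and p not in predicted[:i]:
            hits += 1
            score += hits / (i + 1)
    return score / min(len(actual), k)

def mapk(actuals, predictions, k=12):
    """Mean average precision at k over all customers."""
    return sum(apk(a, p, k) for a, p in zip(actuals, predictions)) / len(actuals)
```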

Project based on the cookiecutter data science project template. #cookiecutterdatascience
