Instructor: Ezzeri Esa
- email: [email protected]
- twitter: @ezzeriesa
- github: savarin
This tutorial requires pandas, scikit-learn and IPython with the IPython Notebook. If you're not sure how to install these packages, we recommend the free Anaconda distribution.
We will be reviewing the materials with the IPython Notebook. You should be able to type

    ipython notebook

in your terminal window and see the notebook panel load in your web browser.
You can clone the material in this tutorial using git as follows:

    git clone https://github.com/savarin/pyconuk-introtutorial.git
Alternatively, there is a link above to download the contents of this repository as a zip file.
The notebooks can be viewed in a static fashion using the nbviewer site, as per the links in the section below. However, we recommend reviewing them interactively with the IPython Notebook.
The tutorial will start with data manipulation using pandas - loading data, and cleaning data. We'll then use scikit-learn to make predictions. By the end of the session, we will have worked on the Kaggle Titanic competition from start to finish, through a number of iterations of increasing sophistication. We'll also have a brief discussion on cross-validation and making visualisations.
- Section 1-0 - First Cut.ipynb
- Section 1-1 - Filling-in Missing Values.ipynb
- Section 1-2 - Creating Dummy Variables.ipynb
- Section 1-3 - Parameter Tuning.ipynb
- Appendix A - Cross-Validation.ipynb
- Appendix B - Visualisation.ipynb
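The core workflow in Sections 1-0 to 1-2 - filling in missing values, creating dummy variables, and fitting a first model - can be sketched as follows. This is a minimal illustration using a toy DataFrame in place of the Kaggle train.csv; the column names are assumptions based on the Titanic dataset, not code from the notebooks.

```python
import pandas as pd
from sklearn.ensemble import RandomForestClassifier

# Toy stand-in for the Kaggle train.csv (columns assumed from the
# Titanic dataset; the real file is downloaded from Kaggle)
df = pd.DataFrame({
    'Survived': [0, 1, 1, 0, 1, 0],
    'Sex':      ['male', 'female', 'female', 'male', 'female', 'male'],
    'Age':      [22.0, 38.0, None, 35.0, 27.0, None],
    'Fare':     [7.25, 71.28, 7.92, 8.05, 11.13, 8.46],
})

# Fill in missing values with the column median (Section 1-1)
df['Age'] = df['Age'].fillna(df['Age'].median())

# Convert the categorical column into dummy variables (Section 1-2)
df = pd.get_dummies(df, columns=['Sex'])

# Fit a first-cut model (Section 1-0)
X = df.drop('Survived', axis=1)
y = df['Survived']
model = RandomForestClassifier(n_estimators=10, random_state=0)
model.fit(X, y)
print(model.score(X, y))
```

The notebooks walk through each of these steps in far more detail, including why each transformation is needed before scikit-learn can accept the data.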
Time permitting, we will cover the following additional materials:
- Section 1-4 - Building Pipelines.ipynb
- Section 1-5 - Final Checks.ipynb
- Section 2-1 - Support Vector Machines.ipynb
- Section 2-2 - SVM with Parameter Tuning.ipynb
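The parameter tuning in Section 2-2 can be sketched with a grid search over SVM hyperparameters. This is an illustrative example on synthetic data, not code from the notebooks; the parameter grid is an assumption.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import GridSearchCV

# Synthetic data standing in for the Titanic features
rng = np.random.RandomState(0)
X = rng.rand(100, 3)
y = (X[:, 0] + X[:, 1] > 1).astype(int)

# Grid-search over SVM hyperparameters with 5-fold cross-validation
param_grid = {'C': [0.1, 1, 10], 'gamma': ['scale', 0.1]}
grid = GridSearchCV(SVC(), param_grid, cv=5)
grid.fit(X, y)
print(grid.best_params_, grid.best_score_)
```

GridSearchCV refits the best parameter combination on the full training set, so `grid` can then be used directly to make predictions.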
A Kaggle account is required to make submissions and review our performance on the leaderboard.
Special thanks to amueller, jakevdp, and ogrisel for the excellent materials they've posted.