This repository contains all the instructions and the code needed for the Data Mining 2018 (Fall Semester) lab session.
- Operating system: Preferably Linux or macOS. If you have Windows, things may crash unexpectedly (try installing a virtual machine if you need to)
- RAM: Minimum 8 GB
- Disk space: Minimum 8 GB
In this lab session we are going to be using Python as our main programming language. If you are not familiar with it, I recommend you start with this free Python course offered by Codecademy.
Here is a list of the programs and libraries required for this lab session. (Please install them before coming to our lab session on Tuesday; this will save us a lot of time, and these include some of the same libraries you may need for your first assignment.)
- Python 3+ (Note: coding will be done strictly in Python 3)
  - Install the latest version of Python 3

Using an environment helps you avoid library conflicts. You can refer to the Setup Instructions to install and set one up.

- Anaconda (recommended but not required)
  - Install the Anaconda environment
- Python virtualenv (recommended for Linux/macOS users)
  - Install a virtual environment
- Kaggle Kernel
  - Run in the cloud (with some limitations)
  - Reference: Kaggle Kernels Instructions
- Jupyter (strongly recommended but not required)
  - Install `jupyter` and run `$ jupyter notebook` in a terminal to launch it
- Scikit-learn
  - Install the latest `sklearn` Python library
- Pandas
  - Install the `pandas` Python library
- NumPy
  - Install the `numpy` Python library
- Matplotlib
  - Install `matplotlib` for Python
- Plotly
  - Install and sign up for `plotly`
- Seaborn
  - Install the `seaborn` Python library
- NLTK
  - Install the `nltk` library
Open a jupyter notebook and run the following commands. If you have properly installed all the necessary libraries, you shouldn't have any problems running the lines of code below.
```python
# import libraries
%matplotlib inline
import matplotlib.pyplot as plt
from sklearn.datasets import fetch_20newsgroups
from sklearn.feature_extraction.text import CountVectorizer
import plotly.plotly as py
import plotly.graph_objs as go
import seaborn as sns
import pandas as pd
import numpy as np
import nltk
import math

# prepare dataset
categories = ['alt.atheism', 'soc.religion.christian', 'comp.graphics', 'sci.med']
twenty_train = fetch_20newsgroups(subset='train', categories=categories, shuffle=True, random_state=42)
twenty_train.data[0:5]
```
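As an extra sanity check, you can also try `CountVectorizer` (imported above) on a small corpus. The snippet below is a minimal sketch using made-up example sentences (not part of the lab data), just to confirm that scikit-learn can build a document-term matrix:

```python
from sklearn.feature_extraction.text import CountVectorizer

# a tiny made-up corpus, just to see CountVectorizer in action
docs = ["the cat sat on the mat", "the dog ate the bone"]

vectorizer = CountVectorizer()
X = vectorizer.fit_transform(docs)  # sparse document-term matrix

print(X.shape)      # (2, 8): 2 documents, 8 unique words
print(X.toarray())  # raw term counts per document
```

The same `fit_transform` call can later be applied to `twenty_train.data` from the cell above to vectorize the actual newsgroup posts.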
If you run into hardware problems, you can follow the Kaggle Kernels Instructions to code in a Kaggle notebook instead.
Please note that we will upload the Jupyter notebook used as a guide on both this GitHub page and our lab's organization page. (We will provide the link on the day of the lab session.) Additional instructions for assignments will be posted there, and submissions will most likely have to be made through GitHub. In other words, if you don't have one yet, please create a GitHub account in advance.
Don't worry! You will have plenty of time to learn Git before the assignment's due date. In the meantime, this tutorial can help you get started with GitHub. Learning version control and how to upload code with Git will be useful for other courses in the future, so if you want to take your skills to the next level, you can try this online course offered by Codecademy.
One more thing: I have set up a Slack workspace where we can chat and where you can raise any questions or concerns throughout the course. In Slack you can also set up private groups with your classmates and get to know each other better. From my experience, these tools make it much easier to get help from TAs and other classmates, and the chat room will definitely come in handy when project and exam time comes around. Come say 👋 if you are interested in joining the conversation. I will send an invite to your email (as provided by the iLMS). If you don't receive an invite by Sunday (07/10/2018) night, check your spam folder or send your email address again to [email protected].