Commit

update ml-coding
Alireza Dirafzoon committed Mar 15, 2023
1 parent 0dc49ac commit c26db1d
Showing 1 changed file with 82 additions and 162 deletions.
244 changes: 82 additions & 162 deletions src/MLC/Notebooks/k_means_1.ipynb
@@ -1,204 +1,124 @@
{
"cells": [
{
"attachments": {},
"cell_type": "markdown",
"id": "functional-corrections",
"metadata": {},
"source": [
"## K-means with multi-dimensional data\n",
" \n",
"$X_{n \\times d}$"
"## K-means "
]
},
{
"cell_type": "code",
"execution_count": 1,
"id": "formal-antique",
"metadata": {},
"outputs": [],
"source": [
"import numpy as np\n",
"import time"
]
},
{
"cell_type": "code",
"execution_count": 2,
"id": "durable-horse",
"metadata": {},
"outputs": [],
"source": [
"n, d, k=1000, 20, 4\n",
"max_itr=100"
]
},
{
"cell_type": "code",
"execution_count": 3,
"id": "egyptian-omaha",
"attachments": {},
"cell_type": "markdown",
"id": "109c1cfe",
"metadata": {},
"outputs": [],
"source": [
"X=np.random.random((n,d))"
"K-means clustering is a popular unsupervised machine learning algorithm used for grouping similar data points into k - clusters. Goal: to partition a given dataset into k (predefined) clusters.\n",
"\n",
"The k-means algorithm works by first randomly initializing k cluster centers, one for each cluster. Each data point in the dataset is then assigned to the nearest cluster center based on their distance. The distance metric used is typically Euclidean distance, but other distance measures such as Manhattan distance or cosine similarity can also be used.\n",
"\n",
"After all the data points have been assigned to a cluster, the algorithm calculates the new mean for each cluster by taking the average of all the data points assigned to that cluster. These new means become the new cluster centers. The algorithm then repeats the assignment and mean calculation steps until the cluster assignments no longer change or until a maximum number of iterations is reached.\n",
"\n",
"The final output of the k-means algorithm is a set of k clusters, where each cluster contains the data points that are most similar to each other based on the distance metric used. The algorithm is commonly used in various fields such as image segmentation, market segmentation, and customer profiling.\n",
"\n",
"\n",
"```\n",
"Initialize:\n",
"- K: number of clusters\n",
"- Data: the input dataset\n",
"- Randomly select K initial centroids\n",
"\n",
"Repeat:\n",
"- Assign each data point to the nearest centroid (based on Euclidean distance)\n",
"- Calculate the mean of each cluster to update its centroid\n",
"- Check if the centroids have converged (i.e., they no longer change)\n",
"\n",
"Until:\n",
"- The centroids have converged\n",
"- The maximum number of iterations has been reached\n",
"\n",
"Output:\n",
"- The final K clusters and their corresponding centroids\n",
"```\n"
]
},
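{
"attachments": {},
"cell_type": "markdown",
"id": "3e7a9b21",
"metadata": {},
"source": [
"As a rough, minimal sketch of what one iteration of the pseudocode above does (assignment step followed by update step), the NumPy snippet below runs a single pass on a small random dataset. The names `points`, `centroids`, and `labels` are illustrative assumptions and are independent of the full implementation that follows."
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "8c4d5e6f",
"metadata": {},
"outputs": [],
"source": [
"import numpy as np\n",
"\n",
"np.random.seed(0)\n",
"points = np.random.rand(10, 2)  # 10 two-dimensional data points\n",
"centroids = points[np.random.choice(len(points), 3, replace=False)]  # 3 randomly chosen initial centroids\n",
"\n",
"# Assignment step: index of the nearest centroid for every point (Euclidean distance)\n",
"distances = np.linalg.norm(points[:, None, :] - centroids[None, :, :], axis=2)\n",
"labels = np.argmin(distances, axis=1)\n",
"\n",
"# Update step: each non-empty cluster's centroid becomes the mean of its assigned points\n",
"for j in range(len(centroids)):\n",
"    if np.any(labels == j):\n",
"        centroids[j] = points[labels == j].mean(axis=0)\n",
"\n",
"print(labels)\n",
"print(centroids)"
]
},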
{
"attachments": {},
"cell_type": "markdown",
"id": "employed-helen",
"id": "36cafa73",
"metadata": {},
"source": [
"$$ argmin_j ||x_i - c_j||_2 $$"
"Here's an implementation of k-means clustering algorithm in Python from scratch:"
]
},
{
"cell_type": "code",
"execution_count": 4,
"id": "center-timer",
"execution_count": 1,
"id": "ab3cb277",
"metadata": {},
"outputs": [],
"source": [
"def k_means(X, k):\n",
" #Randomly Initialize Centroids\n",
" np.random.seed(0)\n",
" C= X[np.random.randint(n,size=k),:]\n",
" E=np.float('inf')\n",
" for itr in range(max_itr):\n",
"import numpy as np\n",
"\n",
"class KMeans:\n",
" def __init__(self, k, max_iterations=100):\n",
" self.k = k\n",
" self.max_iterations = max_iterations\n",
" \n",
" def fit(self, X):\n",
" # Initialize centroids randomly\n",
" self.centroids = X[np.random.choice(range(len(X)), self.k, replace=False)]\n",
" \n",
" # Find the distance of each point from the centroids \n",
" E_prev=E\n",
" E=0\n",
" center_idx=np.zeros(n)\n",
" for i in range(n):\n",
" min_d=np.float('inf')\n",
" c=0\n",
" for j in range(k):\n",
" d=np.linalg.norm(X[i,:]-C[j,:],2)\n",
" if d<min_d:\n",
" min_d=d\n",
" c=j\n",
" for i in range(self.max_iterations):\n",
" # Assign each data point to the nearest centroid\n",
" cluster_assignments = []\n",
" for j in range(len(X)):\n",
" distances = np.linalg.norm(X[j] - self.centroids, axis=1)\n",
" cluster_assignments.append(np.argmin(distances))\n",
" \n",
" E+=min_d\n",
" center_idx[i]=c\n",
" # Update centroids\n",
" for k in range(self.k):\n",
" cluster_data_points = X[np.where(np.array(cluster_assignments) == k)]\n",
" if len(cluster_data_points) > 0:\n",
" self.centroids[k] = np.mean(cluster_data_points, axis=0)\n",
" \n",
" #Find the new centers\n",
" for j in range(k):\n",
" C[j,:]=np.mean( X[center_idx==j,:] ,0)\n",
" \n",
" if itr%10==0:\n",
" print(E)\n",
" if E_prev==E:\n",
" break\n",
" # Check for convergence\n",
" if i > 0 and np.array_equal(self.centroids, previous_centroids):\n",
" break\n",
" \n",
" return C, E, center_idx"
" # Update previous centroids\n",
" previous_centroids = np.copy(self.centroids)\n",
" \n",
" # Store the final cluster assignments\n",
" self.cluster_assignments = cluster_assignments\n",
" \n",
" def predict(self, X):\n",
" # Assign each data point to the nearest centroid\n",
" cluster_assignments = []\n",
" for j in range(len(X)):\n",
" distances = np.linalg.norm(X[j] - self.centroids, axis=1)\n",
" cluster_assignments.append(np.argmin(distances))\n",
" \n",
" return cluster_assignments"
]
},
{
"attachments": {},
"cell_type": "markdown",
"id": "material-hayes",
"id": "538027c3",
"metadata": {},
"source": [
"$$ argmin_j ||x_i - c_j||_2 $$\n",
"\n",
"$$||x_i - c_j||_2 = \\sqrt{(x_i - c_j)^T (x_i-c_j)} = \\sqrt{x_i^T x_i -2 x_i^T c_j + c_j^T c_j} $$\n",
"\n",
"- $ diag(X~X^T)$, can be used to get $x_i^T x_i$\n",
"\n",
"- $X~C^T $, can be used to get $x_i^T c_j$\n",
"The KMeans class has an __init__ method that takes the number of clusters (k) and the maximum number of iterations to run (max_iterations). The fit method takes the input dataset (X) and runs the k-means clustering algorithm. The predict method takes a new dataset (X) and returns the cluster assignments for each data point based on the centroids learned during training.\n",
"\n",
"- $diag(C~C^T)$, can be used to get $c_j^T c_j$"
"Note that this implementation assumes that the input dataset X is a NumPy array with each row representing a single data point and each column representing a feature. The algorithm also uses Euclidean distance to calculate the distances between data points and centroids.\n"
]
},
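{
"attachments": {},
"cell_type": "markdown",
"id": "5b0c7d42",
"metadata": {},
"source": [
"A quick usage sketch for the class above. The toy dataset, the random seed, and the choice of k=3 here are illustrative assumptions, not part of the original notebook."
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "9f2a3b1c",
"metadata": {},
"outputs": [],
"source": [
"# Fit on random 2-D data, then predict cluster labels for unseen points\n",
"np.random.seed(42)\n",
"X_train = np.random.rand(200, 2)\n",
"\n",
"model = KMeans(k=3, max_iterations=100)\n",
"model.fit(X_train)\n",
"\n",
"print('Centroids:')\n",
"print(model.centroids)\n",
"print('First 10 training assignments:', model.cluster_assignments[:10])\n",
"\n",
"X_new = np.random.rand(5, 2)\n",
"print('Predicted clusters for new points:', model.predict(X_new))"
]
},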
{
"cell_type": "code",
"execution_count": 5,
"id": "colored-linux",
"metadata": {},
"outputs": [],
"source": [
"def k_means_vectorized(X, k):\n",
" \n",
" #Randomly Initialize Centroids\n",
" np.random.seed(0)\n",
" C= X[np.random.randint(n,size=k),:]\n",
" E=np.float('inf')\n",
" for itr in range(max_itr):\n",
" # Find the distance of each point from the centroids \n",
" XX= np.tile(np.diag(np.matmul(X, X.T)), (k,1) ).T\n",
" XC=np.matmul(X, C.T)\n",
" CC= np.tile(np.diag(np.matmul(C, C.T)), (n,1)) \n",
"\n",
" D= np.sqrt(XX-2*XC+CC)\n",
"\n",
" # Assign the elements to the centroids:\n",
" center_idx=np.argmin(D, axis=1)\n",
"\n",
" #Find the new centers\n",
" for j in range(k):\n",
" C[j,:]=np.mean( X[center_idx==j,:] ,0)\n",
"\n",
" #Find the error\n",
" E_prev=E\n",
" E=np.sum(D[np.arange(n),center_idx])\n",
" if itr%10==0:\n",
" print(E)\n",
" if E_prev==E:\n",
" break\n",
" \n",
" return C, E, center_idx"
]
},
{
"cell_type": "code",
"execution_count": 6,
"id": "equivalent-platinum",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"1517.502248752696\n",
"1218.91004301866\n",
"1217.362137659097\n",
"0.8816308975219727 seconds\n"
]
}
],
"source": [
"start=time.time()\n",
"C, E, center_idx = k_means(X, k)\n",
"print(time.time()-start,'seconds')"
]
},
{
"cell_type": "code",
"execution_count": 7,
"id": "environmental-steam",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"1517.502248752696\n",
"1218.9100430186547\n",
"1217.3621376590977\n",
"0.09020209312438965 seconds\n"
]
}
],
"source": [
"start=time.time()\n",
"C, E, center_idx = k_means_vectorized(X, k)\n",
"print(time.time()-start,'seconds')"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "north-picking",
"cell_type": "markdown",
"id": "1724d308",
"metadata": {},
"outputs": [],
"source": []
}
],
@@ -218,7 +138,7 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.5.6"
"version": "3.9.7"
}
},
"nbformat": 4,