
Commit 3477718

Yunni authored and jkbradley committed

[SPARK-18081][ML][DOCS] Add user guide for Locality Sensitive Hashing (LSH)

## What changes were proposed in this pull request?

The user guide for LSH is added to ml-features.md, with several Scala/Java examples in spark-examples.

## How was this patch tested?

The doc has been generated through Jekyll and checked through manual inspection.

Author: Yunni <[email protected]>
Author: Yun Ni <[email protected]>
Author: Joseph K. Bradley <[email protected]>
Author: Yun Ni <[email protected]>

Closes apache#15795 from Yunni/SPARK-18081-lsh-guide.

1 parent 4a3c096 commit 3477718

File tree

5 files changed: +436 -0 lines changed

docs/ml-features.md

+111
@@ -9,6 +9,7 @@ This section covers algorithms for working with features, roughly divided into these groups:

* Extraction: Extracting features from "raw" data
* Transformation: Scaling, converting, or modifying features
* Selection: Selecting a subset from a larger set of features
* Locality Sensitive Hashing (LSH): This class of algorithms combines aspects of feature transformation with other algorithms.

**Table of Contents**
@@ -1480,3 +1481,113 @@ for more details on the API.

{% include_example python/ml/chisq_selector_example.py %}
</div>
</div>

# Locality Sensitive Hashing

[Locality Sensitive Hashing (LSH)](https://en.wikipedia.org/wiki/Locality-sensitive_hashing) is an important class of hashing techniques, commonly used in clustering, approximate nearest neighbor search and outlier detection with large datasets.

The general idea of LSH is to use a family of functions ("LSH families") to hash data points into buckets, so that data points which are close to each other are in the same buckets with high probability, while data points which are far away from each other are very likely in different buckets. An LSH family is formally defined as follows.

In a metric space `(M, d)`, where `M` is a set and `d` is a distance function on `M`, an LSH family is a family of functions `h` that satisfy the following properties:
`\[
\forall p, q \in M,\\
d(p,q) \leq r1 \Rightarrow Pr(h(p)=h(q)) \geq p1\\
d(p,q) \geq r2 \Rightarrow Pr(h(p)=h(q)) \leq p2
\]`
This LSH family is called `(r1, r2, p1, p2)`-sensitive.

In Spark, different LSH families are implemented in separate classes (e.g., `MinHash`), and APIs for feature transformation, approximate similarity join and approximate nearest neighbor search are provided in each class.

In LSH, we define a false positive as a pair of distant input features (with `$d(p,q) \geq r2$`) which are hashed into the same bucket, and a false negative as a pair of nearby features (with `$d(p,q) \leq r1$`) which are hashed into different buckets.

## LSH Operations

We describe the major types of operations which LSH can be used for. A fitted LSH model has methods for each of these operations.

### Feature Transformation

Feature transformation is the basic functionality: it adds the hashed values as a new column. This can be useful for dimensionality reduction. Users can specify input and output column names by setting `inputCol` and `outputCol`.

LSH also supports multiple LSH hash tables. Users can specify the number of hash tables by setting `numHashTables`. This is also used for [OR-amplification](https://en.wikipedia.org/wiki/Locality-sensitive_hashing#Amplification) in approximate similarity join and approximate nearest neighbor search. Increasing the number of hash tables will increase the accuracy but will also increase communication cost and running time.

The type of `outputCol` is `Seq[Vector]`, where the dimension of the array equals `numHashTables` and the dimensions of the vectors are currently set to 1. In future releases, we will implement AND-amplification so that users can specify the dimensions of these vectors.
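
As a minimal sketch of this transformation step (condensed from the Bucketed Random Projection example later in this guide; the two-row dataset and the `keys`/`values` column names are purely illustrative):

```scala
import org.apache.spark.ml.feature.BucketedRandomProjectionLSH
import org.apache.spark.ml.linalg.Vectors
import org.apache.spark.sql.SparkSession

val spark = SparkSession.builder.appName("LSHTransformSketch").getOrCreate()

// A toy dataset with a vector column named "keys".
val df = spark.createDataFrame(Seq(
  (0, Vectors.dense(1.0, 1.0)),
  (1, Vectors.dense(1.0, -1.0))
)).toDF("id", "keys")

val brp = new BucketedRandomProjectionLSH()
  .setBucketLength(2.0)     // required for this LSH family
  .setNumHashTables(3)      // 3 hash tables => output array of 3 vectors
  .setInputCol("keys")
  .setOutputCol("values")

// Fitting draws the random projections; transform appends the "values"
// column, an array of numHashTables one-dimensional vectors.
val model = brp.fit(df)
model.transform(df).show(truncate = false)
```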

### Approximate Similarity Join

Approximate similarity join takes two datasets and approximately returns pairs of rows in the datasets whose distance is smaller than a user-defined threshold. Approximate similarity join supports both joining two different datasets and self-joining. Self-joining will produce some duplicate pairs.

Approximate similarity join accepts both transformed and untransformed datasets as input. If an untransformed dataset is used, it will be transformed automatically. In this case, the hash signature will be created as `outputCol`.

In the joined dataset, the origin datasets can be queried in `datasetA` and `datasetB`. A distance column will be added to the output dataset to show the true distance between each pair of rows returned.
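
Continuing the sketch above, an approximate similarity join might look like the following (the distance column name `distCol` is assumed from the API's default):

```scala
import org.apache.spark.sql.functions.col

// A second toy dataset to join against.
val df2 = spark.createDataFrame(Seq(
  (2, Vectors.dense(1.0, 0.0)),
  (3, Vectors.dense(0.0, 1.0))
)).toDF("id", "keys")

// Return pairs whose Euclidean distance is below the 1.5 threshold.
val joined = model.approxSimilarityJoin(df, df2, 1.5)

// Origin rows are nested under "datasetA"/"datasetB"; "distCol" holds the
// true distance between each returned pair (name assumed from the default).
joined.select(
  col("datasetA.id").alias("idA"),
  col("datasetB.id").alias("idB"),
  col("distCol")
).show()
```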

### Approximate Nearest Neighbor Search

Approximate nearest neighbor search takes a dataset (of feature vectors) and a key (a single feature vector), and it approximately returns a specified number of rows in the dataset that are closest to the vector.

Approximate nearest neighbor search accepts both transformed and untransformed datasets as input. If an untransformed dataset is used, it will be transformed automatically. In this case, the hash signature will be created as `outputCol`.

A distance column will be added to the output dataset to show the true distance between each output row and the searched key.

**Note:** Approximate nearest neighbor search will return fewer than `k` rows when there are not enough candidates in the hash bucket.
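
Continuing the same sketch, a nearest neighbor query for a single key vector (condensed from the full examples below):

```scala
// Approximately find the 2 rows of df closest to the key vector; fewer rows
// may come back if the hash buckets hold too few candidates.
val key = Vectors.dense(1.0, 0.0)
model.approxNearestNeighbors(df, key, 2).show()
```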

## LSH Algorithms

### Bucketed Random Projection for Euclidean Distance

[Bucketed Random Projection](https://en.wikipedia.org/wiki/Locality-sensitive_hashing#Stable_distributions) is an LSH family for Euclidean distance. The Euclidean distance is defined as follows:
`\[
d(\mathbf{x}, \mathbf{y}) = \sqrt{\sum_i (x_i - y_i)^2}
\]`
Its LSH family projects feature vectors `$\mathbf{x}$` onto a random unit vector `$\mathbf{v}$` and partitions the projected results into hash buckets:
`\[
h(\mathbf{x}) = \Big\lfloor \frac{\mathbf{x} \cdot \mathbf{v}}{r} \Big\rfloor
\]`
where `r` is a user-defined bucket length. The bucket length can be used to control the average size of hash buckets (and thus the number of buckets). A larger bucket length (i.e., fewer buckets) increases the probability of features being hashed to the same bucket (increasing the numbers of true and false positives).
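
To make the formula concrete, here is a small standalone Scala illustration of how one bucket index would be computed by hand (this is not the Spark API; the unit vector `v` is assumed to be the randomly drawn projection direction):

```scala
// h(x) = floor((x . v) / r), computed by hand for illustration.
val x = Array(1.0, 1.0)   // input feature vector
val v = Array(0.6, 0.8)   // assume this is the randomly chosen unit vector
val r = 2.0               // user-defined bucket length

val projection = x.zip(v).map { case (xi, vi) => xi * vi }.sum  // x . v = 1.4
val bucket = math.floor(projection / r).toInt                   // floor(0.7) = 0
```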

Bucketed Random Projection accepts arbitrary vectors as input features, and supports both sparse and dense vectors.
<div class="codetabs">
1546+
<div data-lang="scala" markdown="1">
1547+
1548+
Refer to the [BucketedRandomProjectionLSH Scala docs](api/scala/index.html#org.apache.spark.ml.feature.BucketedRandomProjectionLSH)
1549+
for more details on the API.
1550+
1551+
{% include_example scala/org/apache/spark/examples/ml/BucketedRandomProjectionLSHExample.scala %}
1552+
</div>
1553+
1554+
<div data-lang="java" markdown="1">
1555+
1556+
Refer to the [BucketedRandomProjectionLSH Java docs](api/java/org/apache/spark/ml/feature/BucketedRandomProjectionLSH.html)
1557+
for more details on the API.
1558+
1559+
{% include_example java/org/apache/spark/examples/ml/JavaBucketedRandomProjectionLSHExample.java %}
1560+
</div>
1561+
</div>
1562+
1563+

### MinHash for Jaccard Distance

[MinHash](https://en.wikipedia.org/wiki/MinHash) is an LSH family for Jaccard distance where input features are sets of natural numbers. The Jaccard distance of two sets is defined by the cardinality of their intersection and union:
`\[
d(\mathbf{A}, \mathbf{B}) = 1 - \frac{|\mathbf{A} \cap \mathbf{B}|}{|\mathbf{A} \cup \mathbf{B}|}
\]`
MinHash applies a random hash function `g` to each element in the set and takes the minimum of all hashed values:
`\[
h(\mathbf{A}) = \min_{a \in \mathbf{A}}(g(a))
\]`

The input sets for MinHash are represented as binary vectors, where the vector indices represent the elements themselves and the non-zero values in the vector represent the presence of those elements in the set. While both dense and sparse vectors are supported, sparse vectors are typically recommended for efficiency. For example, `Vectors.sparse(10, Seq((2, 1.0), (3, 1.0), (5, 1.0)))` means there are 10 elements in the space, and this set contains elements 2, 3 and 5. All non-zero values are treated as binary "1" values.

**Note:** Empty sets cannot be transformed by MinHash, which means any input vector must have at least 1 non-zero entry.
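
To make the Jaccard distance concrete: for the sets `{0, 1, 2}` and `{2, 3, 4}` used in the example below, the intersection has 1 element and the union has 5, so `d = 1 - 1/5 = 0.8`. A minimal sketch of constructing such set vectors (both `Vectors.sparse` forms shown exist in `org.apache.spark.ml.linalg`):

```scala
import org.apache.spark.ml.linalg.Vectors

// The set {2, 3, 5} in a 10-element space, as a sparse binary vector.
val fromArrays = Vectors.sparse(10, Array(2, 3, 5), Array(1.0, 1.0, 1.0))

// Equivalent construction from (index, value) pairs.
val fromPairs = Vectors.sparse(10, Seq((2, 1.0), (3, 1.0), (5, 1.0)))
```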
<div class="codetabs">
1578+
<div data-lang="scala" markdown="1">
1579+
1580+
Refer to the [MinHashLSH Scala docs](api/scala/index.html#org.apache.spark.ml.feature.MinHashLSH)
1581+
for more details on the API.
1582+
1583+
{% include_example scala/org/apache/spark/examples/ml/MinHashLSHExample.scala %}
1584+
</div>
1585+
1586+
<div data-lang="java" markdown="1">
1587+
1588+
Refer to the [MinHashLSH Java docs](api/java/org/apache/spark/ml/feature/MinHashLSH.html)
1589+
for more details on the API.
1590+
1591+
{% include_example java/org/apache/spark/examples/ml/JavaMinHashLSHExample.java %}
1592+
</div>
1593+
</div>

examples/src/main/java/org/apache/spark/examples/ml/JavaBucketedRandomProjectionLSHExample.java

+98
@@ -0,0 +1,98 @@
/*
 * Licensed to the Apache Software Foundation (ASF) under one or more
 * contributor license agreements. See the NOTICE file distributed with
 * this work for additional information regarding copyright ownership.
 * The ASF licenses this file to You under the Apache License, Version 2.0
 * (the "License"); you may not use this file except in compliance with
 * the License. You may obtain a copy of the License at
 *
 *    http://www.apache.org/licenses/LICENSE-2.0
 *
 * Unless required by applicable law or agreed to in writing, software
 * distributed under the License is distributed on an "AS IS" BASIS,
 * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
 * See the License for the specific language governing permissions and
 * limitations under the License.
 */

package org.apache.spark.examples.ml;

import org.apache.spark.sql.SparkSession;

// $example on$
import java.util.Arrays;
import java.util.List;

import org.apache.spark.ml.feature.BucketedRandomProjectionLSH;
import org.apache.spark.ml.feature.BucketedRandomProjectionLSHModel;
import org.apache.spark.ml.linalg.Vector;
import org.apache.spark.ml.linalg.Vectors;
import org.apache.spark.ml.linalg.VectorUDT;
import org.apache.spark.sql.Dataset;
import org.apache.spark.sql.Row;
import org.apache.spark.sql.RowFactory;
import org.apache.spark.sql.types.DataTypes;
import org.apache.spark.sql.types.Metadata;
import org.apache.spark.sql.types.StructField;
import org.apache.spark.sql.types.StructType;
// $example off$

public class JavaBucketedRandomProjectionLSHExample {
  public static void main(String[] args) {
    SparkSession spark = SparkSession
      .builder()
      .appName("JavaBucketedRandomProjectionLSHExample")
      .getOrCreate();

    // $example on$
    List<Row> dataA = Arrays.asList(
      RowFactory.create(0, Vectors.dense(1.0, 1.0)),
      RowFactory.create(1, Vectors.dense(1.0, -1.0)),
      RowFactory.create(2, Vectors.dense(-1.0, -1.0)),
      RowFactory.create(3, Vectors.dense(-1.0, 1.0))
    );

    List<Row> dataB = Arrays.asList(
      RowFactory.create(4, Vectors.dense(1.0, 0.0)),
      RowFactory.create(5, Vectors.dense(-1.0, 0.0)),
      RowFactory.create(6, Vectors.dense(0.0, 1.0)),
      RowFactory.create(7, Vectors.dense(0.0, -1.0))
    );

    StructType schema = new StructType(new StructField[]{
      new StructField("id", DataTypes.IntegerType, false, Metadata.empty()),
      new StructField("keys", new VectorUDT(), false, Metadata.empty())
    });
    Dataset<Row> dfA = spark.createDataFrame(dataA, schema);
    Dataset<Row> dfB = spark.createDataFrame(dataB, schema);

    Vector key = Vectors.dense(1.0, 0.0);

    BucketedRandomProjectionLSH brp = new BucketedRandomProjectionLSH()
      .setBucketLength(2.0)
      .setNumHashTables(3)
      .setInputCol("keys")
      .setOutputCol("values");

    BucketedRandomProjectionLSHModel model = brp.fit(dfA);

    // Feature Transformation: add the LSH hash values as the "values" column
    model.transform(dfA).show();
    // Cache the transformed columns
    Dataset<Row> transformedA = model.transform(dfA).cache();
    Dataset<Row> transformedB = model.transform(dfB).cache();

    // Approximate similarity join: pairs with Euclidean distance below 1.5
    model.approxSimilarityJoin(dfA, dfB, 1.5).show();
    model.approxSimilarityJoin(transformedA, transformedB, 1.5).show();
    // Self Join: keep each pair once by ordering the ids
    model.approxSimilarityJoin(dfA, dfA, 2.5).filter("datasetA.id < datasetB.id").show();

    // Approximate nearest neighbor search: the 2 rows of dfA closest to `key`
    model.approxNearestNeighbors(dfA, key, 2).show();
    model.approxNearestNeighbors(transformedA, key, 2).show();
    // $example off$

    spark.stop();
  }
}

examples/src/main/java/org/apache/spark/examples/ml/JavaMinHashLSHExample.java

+70
@@ -0,0 +1,70 @@
/*
 * Licensed to the Apache Software Foundation (ASF) under one or more
 * contributor license agreements. See the NOTICE file distributed with
 * this work for additional information regarding copyright ownership.
 * The ASF licenses this file to You under the Apache License, Version 2.0
 * (the "License"); you may not use this file except in compliance with
 * the License. You may obtain a copy of the License at
 *
 *    http://www.apache.org/licenses/LICENSE-2.0
 *
 * Unless required by applicable law or agreed to in writing, software
 * distributed under the License is distributed on an "AS IS" BASIS,
 * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
 * See the License for the specific language governing permissions and
 * limitations under the License.
 */

package org.apache.spark.examples.ml;

import org.apache.spark.sql.SparkSession;

// $example on$
import java.util.Arrays;
import java.util.List;

import org.apache.spark.ml.feature.MinHashLSH;
import org.apache.spark.ml.feature.MinHashLSHModel;
import org.apache.spark.ml.linalg.VectorUDT;
import org.apache.spark.ml.linalg.Vectors;
import org.apache.spark.sql.Dataset;
import org.apache.spark.sql.Row;
import org.apache.spark.sql.RowFactory;
import org.apache.spark.sql.types.DataTypes;
import org.apache.spark.sql.types.Metadata;
import org.apache.spark.sql.types.StructField;
import org.apache.spark.sql.types.StructType;
// $example off$

public class JavaMinHashLSHExample {
  public static void main(String[] args) {
    SparkSession spark = SparkSession
      .builder()
      .appName("JavaMinHashLSHExample")
      .getOrCreate();

    // $example on$
    List<Row> data = Arrays.asList(
      RowFactory.create(0, Vectors.sparse(6, new int[]{0, 1, 2}, new double[]{1.0, 1.0, 1.0})),
      RowFactory.create(1, Vectors.sparse(6, new int[]{2, 3, 4}, new double[]{1.0, 1.0, 1.0})),
      RowFactory.create(2, Vectors.sparse(6, new int[]{0, 2, 4}, new double[]{1.0, 1.0, 1.0}))
    );

    StructType schema = new StructType(new StructField[]{
      new StructField("id", DataTypes.IntegerType, false, Metadata.empty()),
      new StructField("keys", new VectorUDT(), false, Metadata.empty())
    });
    Dataset<Row> dataFrame = spark.createDataFrame(data, schema);

    MinHashLSH mh = new MinHashLSH()
      .setNumHashTables(1)
      .setInputCol("keys")
      .setOutputCol("values");

    MinHashLSHModel model = mh.fit(dataFrame);
    // Feature Transformation: add the MinHash values as the "values" column
    model.transform(dataFrame).show();
    // $example off$

    spark.stop();
  }
}

examples/src/main/scala/org/apache/spark/examples/ml/BucketedRandomProjectionLSHExample.scala

+80
@@ -0,0 +1,80 @@
/*
 * Licensed to the Apache Software Foundation (ASF) under one or more
 * contributor license agreements. See the NOTICE file distributed with
 * this work for additional information regarding copyright ownership.
 * The ASF licenses this file to You under the Apache License, Version 2.0
 * (the "License"); you may not use this file except in compliance with
 * the License. You may obtain a copy of the License at
 *
 *    http://www.apache.org/licenses/LICENSE-2.0
 *
 * Unless required by applicable law or agreed to in writing, software
 * distributed under the License is distributed on an "AS IS" BASIS,
 * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
 * See the License for the specific language governing permissions and
 * limitations under the License.
 */

// scalastyle:off println
package org.apache.spark.examples.ml

// $example on$
import org.apache.spark.ml.feature.BucketedRandomProjectionLSH
import org.apache.spark.ml.linalg.Vectors
// $example off$
import org.apache.spark.sql.SparkSession

object BucketedRandomProjectionLSHExample {
  def main(args: Array[String]): Unit = {
    // Creates a SparkSession
    val spark = SparkSession
      .builder
      .appName("BucketedRandomProjectionLSHExample")
      .getOrCreate()

    // $example on$
    val dfA = spark.createDataFrame(Seq(
      (0, Vectors.dense(1.0, 1.0)),
      (1, Vectors.dense(1.0, -1.0)),
      (2, Vectors.dense(-1.0, -1.0)),
      (3, Vectors.dense(-1.0, 1.0))
    )).toDF("id", "keys")

    val dfB = spark.createDataFrame(Seq(
      (4, Vectors.dense(1.0, 0.0)),
      (5, Vectors.dense(-1.0, 0.0)),
      (6, Vectors.dense(0.0, 1.0)),
      (7, Vectors.dense(0.0, -1.0))
    )).toDF("id", "keys")

    val key = Vectors.dense(1.0, 0.0)

    val brp = new BucketedRandomProjectionLSH()
      .setBucketLength(2.0)
      .setNumHashTables(3)
      .setInputCol("keys")
      .setOutputCol("values")

    val model = brp.fit(dfA)

    // Feature Transformation: add the LSH hash values as the "values" column
    model.transform(dfA).show()
    // Cache the transformed columns
    val transformedA = model.transform(dfA).cache()
    val transformedB = model.transform(dfB).cache()

    // Approximate similarity join: pairs with Euclidean distance below 1.5
    model.approxSimilarityJoin(dfA, dfB, 1.5).show()
    model.approxSimilarityJoin(transformedA, transformedB, 1.5).show()
    // Self Join: keep each pair once by ordering the ids
    model.approxSimilarityJoin(dfA, dfA, 2.5).filter("datasetA.id < datasetB.id").show()

    // Approximate nearest neighbor search: the 2 rows of dfA closest to `key`
    model.approxNearestNeighbors(dfA, key, 2).show()
    model.approxNearestNeighbors(transformedA, key, 2).show()
    // $example off$

    spark.stop()
  }
}
// scalastyle:on println
