[SPARK-5519][MLLIB] add user guide with example code for fp-growth
The API is still not very Java-friendly because `Array[Item]` in `freqItemsets` is recognized as `Object` in Java. We might want to define a case class to wrap the return pair to make it Java-friendly.

Author: Xiangrui Meng <[email protected]>

Closes apache#4661 from mengxr/SPARK-5519 and squashes the following commits:

58ccc25 [Xiangrui Meng] add user guide with example code for fp-growth
Showing 4 changed files with 216 additions and 0 deletions.

@@ -0,0 +1,100 @@
---
layout: global
title: Frequent Pattern Mining - MLlib
displayTitle: <a href="mllib-guide.html">MLlib</a> - Frequent Pattern Mining
---

Mining frequent items, itemsets, subsequences, or other substructures is usually among the
first steps in analyzing a large-scale dataset, and it has been an active research topic in
data mining for years.
We refer users to Wikipedia's [association rule learning](http://en.wikipedia.org/wiki/Association_rule_learning)
for more information.
MLlib provides a parallel implementation of FP-growth,
a popular algorithm for mining frequent itemsets.

## FP-growth

The FP-growth algorithm is described in the paper
[Han et al., Mining frequent patterns without candidate generation](http://dx.doi.org/10.1145/335191.335372),
where "FP" stands for frequent pattern.
Given a dataset of transactions, the first step of FP-growth is to calculate item frequencies and identify frequent items.
Unlike [Apriori-like](http://en.wikipedia.org/wiki/Apriori_algorithm) algorithms designed for the same purpose,
the second step of FP-growth uses a suffix-tree (FP-tree) structure to encode transactions without explicitly
generating candidate sets, which are usually expensive to generate.
After the second step, the frequent itemsets can be extracted from the FP-tree.
In MLlib, we implemented a parallel version of FP-growth called PFP,
as described in [Li et al., PFP: Parallel FP-growth for query recommendation](http://dx.doi.org/10.1145/1454008.1454027).
PFP distributes the work of growing FP-trees based on the suffixes of transactions,
and hence is more scalable than a single-machine implementation.
We refer users to the papers for more details.
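The suffix-based sharding in PFP can be illustrated with a small, single-machine sketch: each frequent item is hashed to a group, and each transaction (with its items already sorted by descending global frequency) is scanned right to left, routing to each newly seen group the prefix that ends at that group's item. Each group can then build its own local FP-tree from the sub-transactions it receives. This is only a simplified illustration of the grouping idea, not the distributed MLlib implementation; the class and method names below are ours.

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class PfpGroupingSketch {

  // Assign each item to one of numGroups groups.
  static int groupOf(String item, int numGroups) {
    return Math.floorMod(item.hashCode(), numGroups);
  }

  // For one transaction (items sorted by descending global frequency),
  // emit to each group the longest prefix ending at one of its items.
  // Scanning right to left and keeping only the first (longest) prefix
  // per group means each group receives at most one shard per transaction.
  static Map<Integer, List<String>> shard(List<String> sortedTransaction, int numGroups) {
    Map<Integer, List<String>> out = new HashMap<>();
    for (int i = sortedTransaction.size() - 1; i >= 0; i--) {
      int gid = groupOf(sortedTransaction.get(i), numGroups);
      out.putIfAbsent(gid, new ArrayList<>(sortedTransaction.subList(0, i + 1)));
    }
    return out;
  }

  public static void main(String[] args) {
    // A toy transaction, pretending z > x > y > t in global frequency.
    System.out.println(shard(Arrays.asList("z", "x", "y", "t"), 2));
  }
}
```

With a single group, the whole transaction is routed to that group, which degenerates to the single-machine algorithm; with more groups, the FP-tree construction work is split across them.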

MLlib's FP-growth implementation takes the following (hyper-)parameters:

* `minSupport`: the minimum support for an itemset to be identified as frequent.
  For example, if an item appears in 3 out of 5 transactions, it has a support of 3/5 = 0.6.
* `numPartitions`: the number of partitions used to distribute the work.
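Support generalizes from single items to itemsets: the support of an itemset is the fraction of transactions that contain every item in it. A tiny illustrative helper (ours, not part of the MLlib API):

```java
import java.util.Arrays;
import java.util.List;
import java.util.Set;

public class SupportSketch {

  // Support of an itemset = (#transactions containing all of its items) / (#transactions).
  static double support(List<Set<String>> transactions, Set<String> itemset) {
    long hits = transactions.stream().filter(t -> t.containsAll(itemset)).count();
    return (double) hits / transactions.size();
  }

  public static void main(String[] args) {
    List<Set<String>> txs = Arrays.asList(
      Set.of("a", "b"), Set.of("a", "c"), Set.of("b"), Set.of("b", "c"), Set.of("a", "b"));
    // "a" appears in 3 of the 5 transactions, so its support is 0.6.
    System.out.println(support(txs, Set.of("a")));
  }
}
```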

**Examples**

<div class="codetabs">
<div data-lang="scala" markdown="1">

[`FPGrowth`](api/scala/index.html#org.apache.spark.mllib.fpm.FPGrowth) implements the
FP-growth algorithm.
It takes an `RDD` of transactions, where each transaction is an `Array` of items of a generic type.
Calling `FPGrowth.run` with transactions returns an
[`FPGrowthModel`](api/scala/index.html#org.apache.spark.mllib.fpm.FPGrowthModel)
that stores the frequent itemsets with their frequencies.

{% highlight scala %}
import org.apache.spark.rdd.RDD
import org.apache.spark.mllib.fpm.{FPGrowth, FPGrowthModel}

val transactions: RDD[Array[String]] = ...

val fpg = new FPGrowth()
  .setMinSupport(0.2)
  .setNumPartitions(10)
val model = fpg.run(transactions)

model.freqItemsets.collect().foreach { case (itemset, freq) =>
  println(itemset.mkString("[", ",", "]") + ", " + freq)
}
{% endhighlight %}

</div>

<div data-lang="java" markdown="1">

[`FPGrowth`](api/java/org/apache/spark/mllib/fpm/FPGrowth.html) implements the
FP-growth algorithm.
It takes a `JavaRDD` of transactions, where each transaction is an `Iterable` of items of a generic type.
Calling `FPGrowth.run` with transactions returns an
[`FPGrowthModel`](api/java/org/apache/spark/mllib/fpm/FPGrowthModel.html)
that stores the frequent itemsets with their frequencies.

{% highlight java %}
import java.util.Arrays;
import java.util.List;

import scala.Tuple2;

import org.apache.spark.api.java.JavaRDD;
import org.apache.spark.mllib.fpm.FPGrowth;
import org.apache.spark.mllib.fpm.FPGrowthModel;

JavaRDD<List<String>> transactions = ...

FPGrowth fpg = new FPGrowth()
  .setMinSupport(0.2)
  .setNumPartitions(10);

FPGrowthModel<String> model = fpg.run(transactions);

for (Tuple2<Object, Long> s: model.javaFreqItemsets().collect()) {
  System.out.println("(" + Arrays.toString((Object[]) s._1()) + "): " + s._2());
}
{% endhighlight %}

</div>
</div>
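For intuition about what the examples above compute, the same frequent itemsets can be obtained on the six sample transactions (see the example files below) by a naive, level-wise Apriori-style enumeration: with `minSupport = 0.3` and 6 transactions, an itemset is frequent when it appears in at least 2 of them. This toy single-machine sketch (ours) only illustrates the *output*; it is not how FP-growth, which avoids candidate generation, computes it.

```java
import java.util.ArrayDeque;
import java.util.ArrayList;
import java.util.Arrays;
import java.util.Deque;
import java.util.HashSet;
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;
import java.util.Set;
import java.util.TreeSet;

public class NaiveFrequentItemsets {

  /** Number of transactions containing every item of the candidate set. */
  static int count(List<Set<String>> txs, List<String> cand) {
    int c = 0;
    for (Set<String> t : txs) {
      if (t.containsAll(cand)) c++;
    }
    return c;
  }

  /**
   * Level-wise enumeration: each frequent itemset is extended only with items
   * that sort after its last item, so every candidate is generated exactly
   * once, and supersets of infrequent sets are never generated (pruning).
   */
  static Map<List<String>, Integer> mine(List<Set<String>> txs, double minSupport) {
    int minCount = (int) Math.ceil(minSupport * txs.size());
    TreeSet<String> alphabet = new TreeSet<>();
    for (Set<String> t : txs) alphabet.addAll(t);

    Map<List<String>, Integer> frequent = new LinkedHashMap<>();
    Deque<List<String>> frontier = new ArrayDeque<>();
    frontier.push(new ArrayList<>());
    while (!frontier.isEmpty()) {
      List<String> base = frontier.pop();
      String last = base.isEmpty() ? null : base.get(base.size() - 1);
      for (String item : (last == null ? alphabet : alphabet.tailSet(last, false))) {
        List<String> cand = new ArrayList<>(base);
        cand.add(item);
        int c = count(txs, cand);
        if (c >= minCount) {
          frequent.put(cand, c);
          frontier.push(cand);
        }
      }
    }
    return frequent;
  }

  /** The six sample transactions used in the example programs. */
  static List<Set<String>> sample() {
    List<Set<String>> txs = new ArrayList<>();
    for (String line : new String[] {
        "r z h k p", "z y x w v u t s", "s x o n r",
        "x z y m t s q e", "z", "x z y r q t p"}) {
      txs.add(new HashSet<>(Arrays.asList(line.split(" "))));
    }
    return txs;
  }

  public static void main(String[] args) {
    Map<List<String>, Integer> freq = mine(sample(), 0.3);
    // For example, [z] occurs in 5 transactions and [x, y, z] in 3.
    System.out.println(freq.get(Arrays.asList("z")) + " " + freq.get(Arrays.asList("x", "y", "z")));
  }
}
```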
examples/src/main/java/org/apache/spark/examples/mllib/JavaFPGrowthExample.java (63 additions, 0 deletions)

@@ -0,0 +1,63 @@
/*
 * Licensed to the Apache Software Foundation (ASF) under one or more
 * contributor license agreements. See the NOTICE file distributed with
 * this work for additional information regarding copyright ownership.
 * The ASF licenses this file to You under the Apache License, Version 2.0
 * (the "License"); you may not use this file except in compliance with
 * the License. You may obtain a copy of the License at
 *
 *    http://www.apache.org/licenses/LICENSE-2.0
 *
 * Unless required by applicable law or agreed to in writing, software
 * distributed under the License is distributed on an "AS IS" BASIS,
 * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
 * See the License for the specific language governing permissions and
 * limitations under the License.
 */

package org.apache.spark.examples.mllib;

import java.util.ArrayList;
import java.util.Arrays;

import scala.Tuple2;

import com.google.common.collect.Lists;

import org.apache.spark.SparkConf;
import org.apache.spark.api.java.JavaRDD;
import org.apache.spark.api.java.JavaSparkContext;
import org.apache.spark.mllib.fpm.FPGrowth;
import org.apache.spark.mllib.fpm.FPGrowthModel;

/**
 * Java example for mining frequent itemsets using FP-growth.
 */
public class JavaFPGrowthExample {

  public static void main(String[] args) {
    SparkConf sparkConf = new SparkConf().setAppName("JavaFPGrowthExample");
    JavaSparkContext sc = new JavaSparkContext(sparkConf);

    // TODO: Read a user-specified input file.
    @SuppressWarnings("unchecked")
    JavaRDD<ArrayList<String>> transactions = sc.parallelize(Lists.newArrayList(
      Lists.newArrayList("r z h k p".split(" ")),
      Lists.newArrayList("z y x w v u t s".split(" ")),
      Lists.newArrayList("s x o n r".split(" ")),
      Lists.newArrayList("x z y m t s q e".split(" ")),
      Lists.newArrayList("z".split(" ")),
      Lists.newArrayList("x z y r q t p".split(" "))), 2);

    FPGrowth fpg = new FPGrowth()
      .setMinSupport(0.3);
    FPGrowthModel<String> model = fpg.run(transactions);

    for (Tuple2<Object, Long> s: model.javaFreqItemsets().collect()) {
      System.out.println(Arrays.toString((Object[]) s._1()) + ", " + s._2());
    }

    sc.stop();
  }
}
examples/src/main/scala/org/apache/spark/examples/mllib/FPGrowthExample.scala (51 additions, 0 deletions)

@@ -0,0 +1,51 @@
/*
 * Licensed to the Apache Software Foundation (ASF) under one or more
 * contributor license agreements. See the NOTICE file distributed with
 * this work for additional information regarding copyright ownership.
 * The ASF licenses this file to You under the Apache License, Version 2.0
 * (the "License"); you may not use this file except in compliance with
 * the License. You may obtain a copy of the License at
 *
 *    http://www.apache.org/licenses/LICENSE-2.0
 *
 * Unless required by applicable law or agreed to in writing, software
 * distributed under the License is distributed on an "AS IS" BASIS,
 * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
 * See the License for the specific language governing permissions and
 * limitations under the License.
 */

package org.apache.spark.examples.mllib

import org.apache.spark.mllib.fpm.FPGrowth
import org.apache.spark.{SparkConf, SparkContext}

/**
 * Example for mining frequent itemsets using FP-growth.
 */
object FPGrowthExample {

  def main(args: Array[String]) {
    val conf = new SparkConf().setAppName("FPGrowthExample")
    val sc = new SparkContext(conf)

    // TODO: Read a user-specified input file.
    val transactions = sc.parallelize(Seq(
      "r z h k p",
      "z y x w v u t s",
      "s x o n r",
      "x z y m t s q e",
      "z",
      "x z y r q t p").map(_.split(" ")), numSlices = 2)

    val fpg = new FPGrowth()
      .setMinSupport(0.3)
    val model = fpg.run(transactions)

    model.freqItemsets.collect().foreach { case (itemset, freq) =>
      println(itemset.mkString("[", ",", "]") + ", " + freq)
    }

    sc.stop()
  }
}