17.5.15

Word-frequency count of the functions and methods used in Intro to Apache Spark

count | call
39    | sc.textFile
25    | messages.filter
13    | lines.filter
13    | errors.map
13    | messages.cache
11    | sc.parallelize
10    | l.split
9     | x.split
8     | distFile.flatMap
7     | f.flatMap
6     | ")).collect
6     | ')).collect
6     | distFile.map
4     | logData.filter
4     | ")).map
4     | sc.accumulator
4     | wc.saveAsTextFile
3     |
3     | teenagers.map
3     | org.apache.spark.sql.SQLContext
3     | ')).map
3     | 1)).reduceByKey
3     | p
3     | people.txt").map
2     | people.registerTempTable
2     | t
2     | }).count
2     | line.contains
2     | s.contains
2     | 1)).cache
2     | rdd.foreach
2     | reg.join
2     | Arrays.asList
2     | words.reduceByKey
2     | r
2     | 4)).foreach
2     | _).collect.foreach
2     | sc.broadcast
2     | w.reduceByKey
1     | ssc.awaitTermination
1     | parquetFile.registerTempTable
1     | Click
1     | .reduce
1     | sqlCtx.inferSchema
1     | SparkConf
1     | teenNames.collect
1     | 13).where
1     | %s".format
1     | KMeans.train
1     | 2).cache
1     | words.map
1     | peopleTable.registerTempTable
1     | lines.flatMap
1     | counts.collect
1     | people.saveAsParquetFile
1     | ssc.start
1     | ssc.socketTextStream
1     | counts.saveAsTextFile
1     | pairs.reduceByKey
1     | println
1     | org.apache.spark.sql.hive.HiveContext
1     | teenagers.collect
1     | wordCounts.print
1     | 19).select
1     | 10).collect
1     | distData.filter
1     | spark.parallelize
1     | value").collect
1     | model.predict
1     | people.where
1     | line.split
1     | }.reduce
1     | args
1     | c
1     | sqlContext.parquetFile
1     | java.text.SimpleDateFormat
1     | List
1     | spark.stop
1     | lines.map
1     | parts.map
1     | Register
1     | System.out.println
1     | sqlCtx.sql
1     | test_data.map
1     | g.triplets.filter
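
The post itself does not show how the table was generated. Below is a minimal sketch of one way to reproduce it with Spark's own word-count idiom, assuming the counting rule was simply "take the token standing right before each opening parenthesis" (a naive rule that would also explain artifact rows like ")).collect and the bare letters p, t, r and c). The input path and the tokenizing rule are assumptions, not part of the original post.

import org.apache.spark.{SparkConf, SparkContext}

// Sketch only: the post does not include the script that produced the table above.
object CallFrequency {
  def main(args: Array[String]): Unit = {
    val conf = new SparkConf().setAppName("CallFrequency").setMaster("local[*]")
    val sc = new SparkContext(conf)

    // Hypothetical input: a plain-text dump of the course's code samples.
    val code = sc.textFile("intro-to-spark-code.txt")

    val counts = code
      .flatMap(_.split("\\("))                             // cut each line at every "("
      .map(_.trim.split("\\s+").lastOption.getOrElse(""))  // keep the token just before the "("
      .filter(_.nonEmpty)
      .map(tok => (tok, 1))                                 // classic word-count pairing
      .reduceByKey(_ + _)                                   // sum occurrences per token
      .sortBy({ case (_, n) => n }, ascending = false)      // most frequent first

    // Print rows in the same "count | call" layout as the table above.
    counts.collect().foreach { case (tok, n) => println(f"$n%-5d | $tok") }

    sc.stop()
  }
}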
