├── .gitignore ├── MapReduceDemo ├── src │ └── main │ │ └── java │ │ └── cn │ │ └── edu │ │ └── ecnu │ │ └── mapreduce │ │ └── example │ │ └── java │ │ └── wordcount │ │ ├── WordCountReducer.java │ │ ├── WordCountMapper.java │ │ └── WordCount.java └── pom.xml ├── README.md ├── SparkDemo ├── pom.xml └── src │ └── main │ ├── scala │ └── cn │ │ └── edu │ │ └── ecnu │ │ └── spark │ │ └── example │ │ └── scala │ │ └── wordcount │ │ └── WordCount.scala │ └── java │ └── cn │ └── edu │ └── ecnu │ └── spark │ └── example │ └── java │ └── wordcount │ └── WordCount.java ├── StormDemo ├── src │ └── main │ │ └── java │ │ └── cn │ │ └── edu │ │ └── ecnu │ │ └── storm │ │ └── example │ │ └── java │ │ └── wordcount │ │ ├── CountBolt.java │ │ ├── SplitBolt.java │ │ ├── WordCountTopology.java │ │ └── SocketSpout.java └── pom.xml ├── FlinkDemo ├── src │ └── main │ │ ├── scala │ │ └── cn │ │ │ └── edu │ │ │ └── ecnu │ │ │ └── flink │ │ │ └── example │ │ │ └── scala │ │ │ └── wordcount │ │ │ └── WordCountScala.scala │ │ └── java │ │ └── cn │ │ └── edu │ │ └── ecnu │ │ └── flink │ │ └── example │ │ └── java │ │ └── wordcount │ │ └── WordCount.java └── pom.xml ├── HDFSFileDemo ├── pom.xml └── src │ └── main │ └── java │ └── cn │ └── edu │ └── ecnu │ └── hdfs │ └── example │ └── java │ └── fileupload │ └── Writer.java ├── SparkStreamingDemo ├── pom.xml └── src │ └── main │ └── java │ └── cn │ └── edu │ └── ecnu │ └── sparkstreaming │ └── example │ └── java │ └── wordcount │ └── WordCount.java ├── GiraphDemo ├── src │ └── main │ │ └── java │ │ └── cn │ │ └── edu │ │ └── ecnu │ │ └── giraph │ │ └── example │ │ └── java │ │ └── sssp │ │ ├── ShortestPathComputation.java │ │ └── ShortestPathRunner.java └── pom.xml └── LICENSE /.gitignore: -------------------------------------------------------------------------------- 1 | */.* 2 | */logs/* 3 | */target/* 4 | *.iml 5 | */.idea/* 6 | */output/* 7 | */out/* 8 | */classes/* 9 | .idea/ 10 | */META-INF/* 11 | */spark_output/* 12 | */input/* 13 | */sparkstreaming_output/* 14 | -------------------------------------------------------------------------------- /MapReduceDemo/src/main/java/cn/edu/ecnu/mapreduce/example/java/wordcount/WordCountReducer.java: -------------------------------------------------------------------------------- 1 | package cn.edu.ecnu.mapreduce.example.java.wordcount; 2 | 3 | import org.apache.hadoop.io.IntWritable; 4 | import org.apache.hadoop.io.Text; 5 | import org.apache.hadoop.mapreduce.Reducer; 6 | 7 | import java.io.IOException; 8 | 9 | /* 步骤1:确定输入键值对[K2,List(V2)]的数据类型为[Text, IntWritable],输出键值对[K3,V3]的数据类型为[Text,IntWritable] */ 10 | public class WordCountReducer extends Reducer { 11 | @Override 12 | protected void reduce(Text key, Iterable values, Context context) 13 | throws IOException, InterruptedException { 14 | /* 步骤2:编写处理逻辑将[K2,List(V2)]转换为[K3,V3]并输出 */ 15 | int sum = 0; 16 | // 遍历累加求和 17 | for (IntWritable value : values) { 18 | sum += value.get(); 19 | } 20 | // 输出计数结果 21 | context.write(key, new IntWritable(sum)); 22 | } 23 | } 24 | -------------------------------------------------------------------------------- /MapReduceDemo/src/main/java/cn/edu/ecnu/mapreduce/example/java/wordcount/WordCountMapper.java: -------------------------------------------------------------------------------- 1 | package cn.edu.ecnu.mapreduce.example.java.wordcount; 2 | 3 | import org.apache.hadoop.io.IntWritable; 4 | import org.apache.hadoop.io.LongWritable; 5 | import org.apache.hadoop.io.Text; 6 | import org.apache.hadoop.mapreduce.Mapper; 7 | 8 | import 
java.io.IOException; 9 | 10 | /* 步骤1:确定输入键值对[K1,V1]的数据类型为[LongWritable,Text],输出键值对 [K2,V2]的数据类型为[Text,IntWritable] */ 11 | public class WordCountMapper extends Mapper { 12 | 13 | @Override 14 | protected void map(LongWritable key, Text value, Context context) 15 | throws IOException, InterruptedException { 16 | /* 步骤2:编写处理逻辑将[K1,V1]转换为[K2,V2]并输出 */ 17 | // 以空格作为分隔符拆分成单词 18 | String[] datas = value.toString().split(" "); 19 | for (String data : datas) { 20 | // 输出分词结果 21 | context.write(new Text(data), new IntWritable(1)); 22 | } 23 | } 24 | } -------------------------------------------------------------------------------- /README.md: -------------------------------------------------------------------------------- 1 | # 《分布式系统实践教程》示例代码 2 | 3 | ## [实验五 HDFS 2.x 编程](HDFSFileDemo) 4 | 5 | [上传文件](HDFSFileDemo/src/main/java/cn/edu/ecnu/hdfs/example/java/fileupload) 6 | 7 | 8 | ## [实验六 MapReduce 2.x 编程](MapReduceDemo) 9 | 10 | [词频统计](MapReduceDemo/src/main/java/cn/edu/ecnu/mapreduce/example/java/wordcount) 11 | 12 | 13 | ## [实验八 Spark 编程](SparkDemo) 14 | 15 | [词频统计_Java版](SparkDemo/src/main/java/cn/edu/ecnu/spark/example/java/wordcount) 16 | 17 | 18 | [词频统计_Scala版](SparkDemo/src/main/scala/cn/edu/ecnu/spark/example/scala/wordcount) 19 | 20 | ## [实验十一 Storm 编程](StormDemo) 21 | 22 | [词频统计](StormDemo/src/main/java/cn/edu/ecnu/storm/example/java/wordcount) 23 | 24 | ## [实验十二 SparkStreaming 编程](SparkStreamingDemo) 25 | 26 | [词频统计](SparkStreamingDemo/src/main/java/cn/edu/ecnu/sparkstreaming/example/java/wordcount) 27 | 28 | 29 | ## [实验十四 Flink 编程](FlinkDemo) 30 | 31 | [词频统计_Java版](FlinkDemo/src/main/java/cn/edu/ecnu/flink/example/java/wordcount) 32 | 33 | 34 | [词频统计_Scala版](FlinkDemo/src/main/scala/cn/edu/ecnu/flink/example/scala/wordcount) 35 | 36 | ## [实验十七 Giraph 编程](GiraphDemo) 37 | 38 | [最短路径](GiraphDemo/src/main/java/cn/edu/ecnu/giraph/example/java/sssp) 39 | -------------------------------------------------------------------------------- /SparkDemo/pom.xml: -------------------------------------------------------------------------------- 1 | 2 | 5 | 4.0.0 6 | 7 | org.example 8 | SparkDemo 9 | 1.0-SNAPSHOT 10 | 11 | 12 | 13 | 14 | org.apache.maven.plugins 15 | maven-compiler-plugin 16 | 17 | 8 18 | 8 19 | 20 | 21 | 22 | 23 | 24 | 25 | 26 | org.apache.spark 27 | spark-core_2.11 28 | 2.4.7 29 | 30 | 31 | 32 | 33 | -------------------------------------------------------------------------------- /SparkDemo/src/main/scala/cn/edu/ecnu/spark/example/scala/wordcount/WordCount.scala: -------------------------------------------------------------------------------- 1 | package cn.edu.ecnu.spark.example.scala.wordcount 2 | 3 | import org.apache.spark.{SparkConf, SparkContext} 4 | 5 | object WordCount { 6 | 7 | def run(args: Array[String]): Unit = { 8 | /* 步骤1:通过SparkConf设置配置信息,并创建SparkContext */ 9 | val conf = new SparkConf 10 | conf.setAppName("WordCount") 11 | conf.setMaster("local") // 仅用于本地进行调试,如在集群中运行则删除本行 12 | val sc = new SparkContext(conf) 13 | 14 | /* 步骤2:按应用逻辑使用操作算子编写DAG,其中包括RDD的创建、转换和行动等 */ 15 | // 读入文本数据,创建名为lines的RDD 16 | val lines = sc.textFile(args(0)) 17 | // 将lines中的每一个文本行按空格分割成单个单词 18 | val words = lines.flatMap { line => line.split(" ") } 19 | // 将每个单词的频数设置为1,即将每个单词映射为[单词, 1] 20 | val pairs = words.map { word => (word, 1) } 21 | 22 | // 按单词聚合,并对相同单词的频数使用sum进行累计 23 | val wordCounts = pairs.groupByKey().map(t => (t._1, t._2.sum)) 24 | // 如需使用合并机制则将第上一行替换为下行 25 | // val wordCounts = pairs.reduceByKey(_+_) 26 | 27 | // 输出词频统计结果到文件 28 | wordCounts.saveAsTextFile(args(1)) 29 | 30 | /* 
步骤3:关闭SparkContext */ 31 | sc.stop() 32 | } 33 | 34 | def main(args: Array[String]): Unit = { 35 | run(args) 36 | } 37 | } 38 | -------------------------------------------------------------------------------- /StormDemo/src/main/java/cn/edu/ecnu/storm/example/java/wordcount/CountBolt.java: -------------------------------------------------------------------------------- 1 | package cn.edu.ecnu.storm.example.java.wordcount; 2 | 3 | import org.apache.storm.topology.BasicOutputCollector; 4 | import org.apache.storm.topology.OutputFieldsDeclarer; 5 | import org.apache.storm.topology.base.BaseBasicBolt; 6 | import org.apache.storm.tuple.Tuple; 7 | 8 | import java.util.HashMap; 9 | import java.util.Map; 10 | 11 | public class CountBolt extends BaseBasicBolt { 12 | // 保存单词的频数 13 | Map counts = new HashMap(); 14 | 15 | /* 步骤1:描述元组的处理逻辑 */ 16 | @Override 17 | public void execute(Tuple tuple, BasicOutputCollector collector) { 18 | // 从接收到的元组中按字段提取单词 19 | String word = tuple.getStringByField("word"); 20 | // 获取该单词对应的频数 21 | Integer count = counts.get(word); 22 | if (count == null) { 23 | count = 0; 24 | } 25 | // 计数增加,并将单词和对应的频数加入 map 中 26 | count++; 27 | counts.put(word, count); 28 | // 输出结果,也可采用写入文件等其它方式 29 | System.out.println(word + "," + count); 30 | } 31 | 32 | /* 步骤2:声明输出元组的字段名称 */ 33 | @Override 34 | public void declareOutputFields(OutputFieldsDeclarer declarer) { 35 | // 为空 36 | } 37 | } 38 | -------------------------------------------------------------------------------- /FlinkDemo/src/main/scala/cn/edu/ecnu/flink/example/scala/wordcount/WordCountScala.scala: -------------------------------------------------------------------------------- 1 | package cn.edu.ecnu.flink.example.scala.wordcount 2 | 3 | import org.apache.flink.streaming.api.scala.StreamExecutionEnvironment 4 | import org.apache.flink.streaming.api.scala._ 5 | import org.apache.flink.streaming.api.windowing.assigners.{GlobalWindows, TumblingEventTimeWindows, TumblingProcessingTimeWindows} 6 | import org.apache.flink.streaming.api.windowing.time.Time 7 | 8 | object WordCount { 9 | def run(args: Array[String]): Unit = { 10 | /* 步骤1:创建StreamExecutionEnvironment对象 */ 11 | val env = StreamExecutionEnvironment.getExecutionEnvironment 12 | 13 | /* 步骤2:按应用逻辑使用操作算子编写DAG,操作算子包括数据源、转换、数据池等 */ 14 | // 从指定的主机名和端口号接收数据,创建名为lines的DataStream 15 | val lines = env.socketTextStream(args(0), args(1).toInt) 16 | // 将lines中的每一个文本行按空格分割成单个单词 17 | val words = lines.flatMap(w => w.split(" ")) 18 | // 将每个单词的频数设置为1,即将每个单词映射为[单词, 1] 19 | val pairs = words.map(word => (word, 1)) 20 | // 按单词聚合,并对相同单词的频数使用sum进行累计 21 | val counts = pairs.keyBy(0) 22 | .sum(1) 23 | // 输出词频统计结果 24 | counts.print() 25 | 26 | /* 步骤3:触发程序执行 */ 27 | env.execute("Streaming WordCount") 28 | } 29 | 30 | def main(args: Array[String]): Unit = { 31 | run(args) 32 | } 33 | } -------------------------------------------------------------------------------- /StormDemo/src/main/java/cn/edu/ecnu/storm/example/java/wordcount/SplitBolt.java: -------------------------------------------------------------------------------- 1 | package cn.edu.ecnu.storm.example.java.wordcount; 2 | 3 | import org.apache.storm.task.TopologyContext; 4 | import org.apache.storm.topology.BasicOutputCollector; 5 | import org.apache.storm.topology.OutputFieldsDeclarer; 6 | import org.apache.storm.topology.base.BaseBasicBolt; 7 | import org.apache.storm.tuple.Fields; 8 | import org.apache.storm.tuple.Tuple; 9 | import org.apache.storm.tuple.Values; 10 | 11 | import java.util.Map; 12 | import java.util.StringTokenizer; 13 | 
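/*
 * SplitBolt below receives "sentence" tuples from SocketSpout, tokenizes each sentence,
 * and emits one "word" tuple per token; CountBolt then tallies the words. As a minimal
 * sketch (assuming the same field names), the tokenization step could equivalently be
 * written with String.split, matching the space-delimited splitting used by the other demos:
 *
 *   String sentence = tuple.getStringByField("sentence");
 *   for (String word : sentence.split(" ")) {
 *     collector.emit(new Values(word));
 *   }
 *
 * Because SplitBolt extends BaseBasicBolt, each input tuple is acked automatically after
 * execute returns, so no explicit ack call is needed here.
 */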
14 | public class SplitBolt extends BaseBasicBolt { 15 | @Override 16 | public void prepare(Map stormConf, TopologyContext context) { 17 | super.prepare(stormConf, context); 18 | } 19 | /* 步骤1:描述元组的处理逻辑 */ 20 | @Override 21 | public void execute(Tuple tuple, BasicOutputCollector collector) { 22 | String sentence = tuple.getStringByField("sentence"); 23 | StringTokenizer iter = new StringTokenizer(sentence); 24 | while (iter.hasMoreElements()) { 25 | collector.emit(new Values(iter.nextToken())); 26 | } 27 | } 28 | 29 | /* 步骤2:声明输出元组的字段名称 */ 30 | @Override 31 | public void declareOutputFields(OutputFieldsDeclarer declarer) { 32 | // 该元组仅有一个字段 33 | declarer.declare(new Fields("word")); 34 | } 35 | } 36 | -------------------------------------------------------------------------------- /HDFSFileDemo/pom.xml: -------------------------------------------------------------------------------- 1 | 2 | 5 | 4.0.0 6 | 7 | org.example 8 | HDFSFileDemo 9 | 1.0-SNAPSHOT 10 | 11 | 12 | 13 | 14 | org.apache.hadoop 15 | hadoop-common 16 | 2.10.1 17 | 18 | 19 | 20 | 21 | org.apache.hadoop 22 | hadoop-hdfs 23 | 2.10.1 24 | 25 | 26 | 27 | 28 | org.apache.hadoop 29 | hadoop-client 30 | 2.10.1 31 | 32 | 33 | 34 | 35 | -------------------------------------------------------------------------------- /SparkStreamingDemo/pom.xml: -------------------------------------------------------------------------------- 1 | 2 | 5 | 4.0.0 6 | 7 | org.example 8 | SparkStreamingDemo 9 | 1.0-SNAPSHOT 10 | 11 | 12 | 13 | 14 | org.apache.maven.plugins 15 | maven-compiler-plugin 16 | 17 | 8 18 | 8 19 | 20 | 21 | 22 | 23 | 24 | 25 | 26 | org.apache.spark 27 | spark-streaming_2.11 28 | 2.4.7 29 | 30 | 31 | 32 | 33 | io.netty 34 | netty-all 35 | 4.1.17.Final 36 | 37 | 38 | -------------------------------------------------------------------------------- /StormDemo/pom.xml: -------------------------------------------------------------------------------- 1 | 2 | 5 | 4.0.0 6 | 7 | org.example 8 | StormDemo 9 | 1.0-SNAPSHOT 10 | 11 | 12 | 13 | 14 | org.apache.maven.plugins 15 | maven-compiler-plugin 16 | 17 | 8 18 | 8 19 | 20 | 21 | 22 | 23 | 24 | 25 | 26 | 27 | org.apache.storm 28 | storm-core 29 | 1.2.3 30 | 31 | 32 | org.apache.storm 33 | storm-client 34 | 2.1.0 35 | provided 36 | 37 | 38 | 39 | 40 | -------------------------------------------------------------------------------- /HDFSFileDemo/src/main/java/cn/edu/ecnu/hdfs/example/java/fileupload/Writer.java: -------------------------------------------------------------------------------- 1 | package cn.edu.ecnu.hdfs.example.java.fileupload; 2 | 3 | import java.io.FileInputStream; 4 | import java.io.IOException; 5 | import java.net.URI; 6 | import org.apache.hadoop.conf.Configuration; 7 | import org.apache.hadoop.fs.FSDataOutputStream; 8 | import org.apache.hadoop.fs.FileSystem; 9 | import org.apache.hadoop.fs.Path; 10 | import org.apache.hadoop.io.IOUtils; 11 | 12 | public class Writer { 13 | 14 | public void write(String hdfsFilePath, String localFilePath) throws IOException { 15 | /* 步骤1:获取HDFS的文件系统对象 */ 16 | Configuration conf = new Configuration(); 17 | FileSystem fs = FileSystem.get(URI.create(hdfsFilePath), conf); 18 | /* 步骤2:获取输出流hdfsOutputStream */ 19 | FSDataOutputStream hdfsOutputStream = fs.create(new Path(hdfsFilePath)); 20 | /* 步骤3:利用输出流写入HDFS文件 */ 21 | // 读取本地文件的输入流 22 | FileInputStream localInputStream = new FileInputStream(localFilePath); 23 | // 将本地文件的输入流拷贝至HDFS文件的输出流 24 | IOUtils.copyBytes(localInputStream, hdfsOutputStream, 4096, true); 25 | } 26 | 27 | public static 
void main(String[] args) throws IOException { 28 | if (args.length < 2) { 29 | System.err.println("Usage: "); 30 | System.exit(-1); 31 | } 32 | 33 | Writer writer = new Writer(); 34 | writer.write(args[0], args[1]); 35 | } 36 | } 37 | -------------------------------------------------------------------------------- /MapReduceDemo/pom.xml: -------------------------------------------------------------------------------- 1 | 2 | 5 | 4.0.0 6 | 7 | org.example 8 | MapReduceDemo 9 | 1.0-SNAPSHOT 10 | 11 | 12 | 13 | 14 | org.apache.maven.plugins 15 | maven-compiler-plugin 16 | 17 | 6 18 | 6 19 | 20 | 21 | 22 | 23 | 24 | 25 | 26 | 27 | org.apache.hadoop 28 | hadoop-common 29 | 2.10.1 30 | 31 | 32 | 33 | org.apache.hadoop 34 | hadoop-client 35 | 2.10.1 36 | 37 | 38 | 39 | org.apache.hadoop 40 | hadoop-hdfs 41 | 2.10.1 42 | 43 | 44 | 45 | 46 | -------------------------------------------------------------------------------- /MapReduceDemo/src/main/java/cn/edu/ecnu/mapreduce/example/java/wordcount/WordCount.java: -------------------------------------------------------------------------------- 1 | package cn.edu.ecnu.mapreduce.example.java.wordcount; 2 | 3 | import org.apache.hadoop.conf.Configured; 4 | import org.apache.hadoop.fs.Path; 5 | import org.apache.hadoop.io.IntWritable; 6 | import org.apache.hadoop.io.Text; 7 | import org.apache.hadoop.mapreduce.Job; 8 | import org.apache.hadoop.mapreduce.lib.input.FileInputFormat; 9 | import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat; 10 | import org.apache.hadoop.util.Tool; 11 | import org.apache.hadoop.util.ToolRunner; 12 | 13 | public class WordCount extends Configured implements Tool { 14 | 15 | @Override 16 | public int run(String[] args) throws Exception { 17 | /* 步骤1:设置作业的信息 */ 18 | Job job = Job.getInstance(getConf(), getClass().getSimpleName()); 19 | // 设置程序的类名 20 | job.setJarByClass(getClass()); 21 | 22 | // 设置数据的输入输出路径 23 | FileInputFormat.addInputPath(job, new Path(args[0])); 24 | FileOutputFormat.setOutputPath(job, new Path(args[1])); 25 | 26 | // 设置map和reduce方法 27 | job.setMapperClass(WordCountMapper.class); 28 | job.setReducerClass(WordCountReducer.class); 29 | job.setCombinerClass(WordCountReducer.class); 30 | 31 | // 设置map方法的输出键值对数据类型 32 | job.setMapOutputKeyClass(Text.class); 33 | job.setMapOutputValueClass(IntWritable.class); 34 | // 设置reduce方法的输出键值对数据类型 35 | job.setOutputKeyClass(Text.class); 36 | job.setOutputValueClass(IntWritable.class); 37 | 38 | return job.waitForCompletion(true) ? 
0 : 1;
39 |   }
40 | 
41 |   public static void main(String[] args) throws Exception {
42 |     /* Step 2: Run the job */
43 |     int exitCode = ToolRunner.run(new WordCount(), args);
44 |     System.exit(exitCode);
45 |   }
46 | }
47 | 
-------------------------------------------------------------------------------- /GiraphDemo/src/main/java/cn/edu/ecnu/giraph/example/java/sssp/ShortestPathComputation.java: --------------------------------------------------------------------------------
1 | package cn.edu.ecnu.giraph.example.java.sssp;
2 | 
3 | import org.apache.giraph.edge.Edge;
4 | import org.apache.giraph.graph.BasicComputation;
5 | import org.apache.giraph.graph.Vertex;
6 | import org.apache.hadoop.io.DoubleWritable;
7 | import org.apache.hadoop.io.FloatWritable;
8 | import org.apache.hadoop.io.LongWritable;
9 | 
10 | /* Step 1: Determine the data types of the vertex id I, the vertex value V, the edge value E, and the message value M */
11 | public class ShortestPathComputation
12 |     extends BasicComputation<LongWritable, DoubleWritable, FloatWritable, DoubleWritable> {
13 | 
14 |   // Source vertex
15 |   protected static final int SOURCE_VERTEX = 0;
16 |   // Represents infinity
17 |   protected static final Double INF = Double.MAX_VALUE;
18 | 
19 |   @Override
20 |   public void compute(Vertex<LongWritable, DoubleWritable, FloatWritable> vertex,
21 |                       Iterable<DoubleWritable> messages) {
22 |     /* Step 2: Implement the vertex computation and update logic, and send messages */
23 |     // In superstep 0, initialize every vertex value to INF (infinity)
24 |     if (getSuperstep() == 0) {
25 |       vertex.setValue(new DoubleWritable(INF));
26 |     }
27 | 
28 |     // Compute the current shortest distance to the source vertex from the received messages
29 |     double minDist = vertex.getId().get() == SOURCE_VERTEX ? 0d : INF;
30 |     for (DoubleWritable message : messages) {
31 |       minDist = Math.min(minDist, message.get());
32 |     }
33 | 
34 |     // If minDist is smaller than the current vertex value, update the value and propagate it
35 |     if (minDist < vertex.getValue().get()) {
36 |       vertex.setValue(new DoubleWritable(minDist));
37 |       for (Edge<LongWritable, FloatWritable> edge : vertex.getEdges()) {
38 |         double distance = minDist + edge.getValue().get();
39 |         sendMessage(edge.getTargetVertexId(), new DoubleWritable(distance));
40 |       }
41 |     }
42 | 
43 |     vertex.voteToHalt();
44 |   }
45 | }
46 | 
-------------------------------------------------------------------------------- /StormDemo/src/main/java/cn/edu/ecnu/storm/example/java/wordcount/WordCountTopology.java: --------------------------------------------------------------------------------
1 | package cn.edu.ecnu.storm.example.java.wordcount;
2 | 
3 | import org.apache.storm.Config;
4 | import org.apache.storm.LocalCluster;
5 | import org.apache.storm.StormSubmitter;
6 | import org.apache.storm.topology.*;
7 | import org.apache.storm.tuple.Fields;
8 | 
9 | public class WordCountTopology {
10 |   public static void main(String[] args) throws Exception {
11 |     if (args.length < 3) {
12 |       System.exit(-1);
13 |       return;
14 |     }
15 |     /* Step 1: Build the topology */
16 |     TopologyBuilder builder = new TopologyBuilder();
17 |     // Set the spout, named "Spout", with 1 executor and 1 task
18 |     builder.setSpout("Spout", new SocketSpout(args[1], args[2]), 1);
19 |     // Set the bolt named "split" with 2 executors and 2 tasks; it subscribes to "Spout" with shuffle grouping
20 |     builder.setBolt("split", new SplitBolt(), 2).setNumTasks(2).shuffleGrouping("Spout");
21 |     // Set the bolt named "count" with 2 executors; it subscribes to "split" with fields grouping on "word"
22 |     builder.setBolt("count", new CountBolt(), 2).fieldsGrouping("split", new Fields("word"));
23 | 
24 |     /* Step 2: Set the configuration */
25 |     Config conf = new Config();
26 |     conf.setDebug(false); // Disable debug mode
27 |     conf.setNumWorkers(2); // Set the number of workers to 2
28 |     conf.setNumAckers(0); // Set the number of ackers to 0
29 | 
30 |     /* Step 3: Choose how the program runs */
31 |     if (args[0].equals("cluster")) { // Run on a cluster; the topology is named WORDCOUNT
32 |       StormSubmitter.submitTopology("WORDCOUNT", conf, builder.createTopology());
33 |     } else if (args[0].equals("local")) { // Debug locally in the IDE; the topology is named word-count
34
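/*
 * The local branch below runs the topology inside the IDE process. A local test run is
 * usually ended explicitly; one possible sketch (assuming org.apache.storm.utils.Utils is
 * imported) is to let the topology process input for a while after the submitTopology call
 * below and then tear it down:
 *
 *   Utils.sleep(60000);                  // let the topology run for 60 seconds
 *   cluster.killTopology("word-count");  // kill the locally submitted topology
 *   cluster.shutdown();                  // stop the in-process cluster
 */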
| LocalCluster cluster = new LocalCluster(); 35 | cluster.submitTopology("word-count", conf, builder.createTopology()); 36 | } else { 37 | System.exit(-2); 38 | } 39 | } 40 | } -------------------------------------------------------------------------------- /FlinkDemo/pom.xml: -------------------------------------------------------------------------------- 1 | 2 | 5 | 4.0.0 6 | 7 | org.example 8 | FlinkDemo 9 | 1.0-SNAPSHOT 10 | 11 | 12 | 13 | 14 | org.apache.maven.plugins 15 | maven-compiler-plugin 16 | 17 | 6 18 | 6 19 | 20 | 21 | 22 | 23 | 24 | 25 | 26 | 27 | org.apache.flink 28 | flink-streaming-java_2.11 29 | 1.12.1 30 | compile 31 | 32 | 33 | 34 | 35 | org.apache.flink 36 | flink-core 37 | 1.12.1 38 | 39 | 40 | 41 | 42 | 43 | 44 | 45 | 46 | org.apache.flink 47 | flink-clients_2.11 48 | 1.12.1 49 | 50 | 51 | 52 | 53 | -------------------------------------------------------------------------------- /StormDemo/src/main/java/cn/edu/ecnu/storm/example/java/wordcount/SocketSpout.java: -------------------------------------------------------------------------------- 1 | package cn.edu.ecnu.storm.example.java.wordcount; 2 | 3 | import org.apache.storm.spout.SpoutOutputCollector; 4 | import org.apache.storm.task.TopologyContext; 5 | import org.apache.storm.topology.OutputFieldsDeclarer; 6 | import org.apache.storm.topology.base.BaseRichSpout; 7 | import org.apache.storm.tuple.Fields; 8 | import org.apache.storm.tuple.Values; 9 | import org.apache.storm.utils.Utils; 10 | 11 | import java.io.BufferedReader; 12 | import java.io.IOException; 13 | import java.io.InputStreamReader; 14 | import java.net.Socket; 15 | import java.util.Map; 16 | import java.util.Random; 17 | 18 | public class SocketSpout extends BaseRichSpout { 19 | SpoutOutputCollector collector; 20 | String ip; 21 | int port; 22 | BufferedReader br = null; 23 | Socket socket = null; 24 | Random _rand; 25 | 26 | SocketSpout(String ip, String port) { 27 | this.ip = ip; 28 | this.port = Integer.valueOf(port); 29 | } 30 | 31 | /* 步骤1: 初始化Spout */ 32 | @Override 33 | public void open(Map map, TopologyContext topologyContext, SpoutOutputCollector collector) { 34 | this.collector = collector; 35 | _rand = new Random(); 36 | try { 37 | socket = new Socket(ip, port); 38 | br = new BufferedReader(new InputStreamReader(socket.getInputStream())); 39 | } catch (IOException e) { 40 | e.printStackTrace(); 41 | } 42 | } 43 | 44 | /* 步骤2:读取并发送元组 */ 45 | @Override 46 | public void nextTuple() { 47 | try { 48 | String tuple; 49 | if ((tuple = br.readLine()) != null) { // 读取元组 50 | collector.emit(new Values(tuple)); // 发送元组 51 | } 52 | } catch (IOException e) { 53 | e.printStackTrace(); 54 | } 55 | } 56 | 57 | /* 步骤3:声明输出元组的字段名称 */ 58 | @Override 59 | public void declareOutputFields(OutputFieldsDeclarer declarer) { 60 | // 该输出元组仅有一个字段sentence 61 | declarer.declare(new Fields("sentence")); 62 | } 63 | } 64 | -------------------------------------------------------------------------------- /GiraphDemo/pom.xml: -------------------------------------------------------------------------------- 1 | 2 | 5 | 4.0.0 6 | 7 | org.example 8 | GiraphDemo 9 | 1.0-SNAPSHOT 10 | 11 | 12 | 13 | org.apache.maven.plugins 14 | maven-compiler-plugin 15 | 16 | 6 17 | 6 18 | 19 | 20 | 21 | 22 | 23 | 24 | 25 | 26 | org.apache.giraph 27 | giraph-core 28 | 1.2.0-hadoop2 29 | 30 | 31 | 32 | 33 | org.apache.giraph 34 | giraph-examples 35 | 1.2.0-hadoop2 36 | 37 | 38 | 39 | 40 | org.apache.hadoop 41 | hadoop-common 42 | 2.5.1 43 | 44 | 45 | 46 | 47 | 48 | org.apache.hadoop 49 | 
hadoop-client 50 | 2.5.1 51 | 52 | 53 | 54 | -------------------------------------------------------------------------------- /SparkStreamingDemo/src/main/java/cn/edu/ecnu/sparkstreaming/example/java/wordcount/WordCount.java: -------------------------------------------------------------------------------- 1 | package cn.edu.ecnu.sparkstreaming.example.java.wordcount; 2 | 3 | import org.apache.hadoop.io.IntWritable; 4 | import org.apache.hadoop.io.Text; 5 | import org.apache.hadoop.mapreduce.lib.output.TextOutputFormat; 6 | import org.apache.spark.SparkConf; 7 | import org.apache.spark.streaming.Durations; 8 | import org.apache.spark.streaming.api.java.JavaPairDStream; 9 | import org.apache.spark.streaming.api.java.JavaStreamingContext; 10 | import scala.Tuple2; 11 | 12 | import java.util.Arrays; 13 | 14 | public class WordCount { 15 | 16 | public static void main(String[] args) throws Exception { 17 | // 1 创建 JavaStreamingContext 18 | SparkConf sparkConf = new SparkConf(); 19 | sparkConf.setAppName("WordCountJava"); 20 | // sparkConf.setMaster("local[2]"); // 仅用于本地调试, 放集群中运行时删除本行 21 | JavaStreamingContext sc = new JavaStreamingContext(sparkConf, Durations.seconds(10)); // 处理间隔为 10s 22 | sc.sparkContext().setLogLevel("ERROR"); 23 | 24 | // 2 处理数据 25 | JavaPairDStream wordCount = sc 26 | // 接收 socket 流数据, 其中 args[0] 为 IP 地址, args[1] 为端口号 27 | .socketTextStream(args[0], Integer.parseInt(args[1])) 28 | // 将接收到的字符串按空格分割 29 | .flatMap((String line) -> Arrays.asList(line.split(" ")).iterator()) 30 | // 映射单词为 (word, 1) 31 | .mapToPair((String word) -> new Tuple2<>(word, 1)) 32 | // 窗口操作, 窗口长度是 20s, 滑动时间间隔是 10s, 即每隔 10s 统计前 20s 的单词 33 | .reduceByKeyAndWindow( 34 | (Integer v1, Integer v2) -> v1 + v2, 35 | Durations.seconds(20), 36 | Durations.seconds(10) 37 | ); 38 | 39 | // 3 显示结果 40 | wordCount.print(); 41 | 42 | // 4 将结果输出到文件中, 其中 args[2] 为输出路径 43 | wordCount.saveAsNewAPIHadoopFiles( 44 | args[2] + "/stream", "output", Text.class, IntWritable.class, TextOutputFormat.class 45 | ); 46 | 47 | // 5 开启StreamingContext 48 | sc.start(); 49 | sc.awaitTermination(); 50 | } 51 | 52 | } 53 | -------------------------------------------------------------------------------- /GiraphDemo/src/main/java/cn/edu/ecnu/giraph/example/java/sssp/ShortestPathRunner.java: -------------------------------------------------------------------------------- 1 | package cn.edu.ecnu.giraph.example.java.sssp; 2 | 3 | import org.apache.giraph.conf.GiraphConfiguration; 4 | import org.apache.giraph.conf.GiraphConstants; 5 | import org.apache.giraph.io.formats.GiraphTextInputFormat; 6 | import org.apache.giraph.io.formats.GiraphTextOutputFormat; 7 | import org.apache.giraph.io.formats.IdWithValueTextOutputFormat; 8 | import org.apache.giraph.io.formats.JsonLongDoubleFloatDoubleVertexInputFormat; 9 | import org.apache.giraph.job.GiraphJob; 10 | import org.apache.hadoop.conf.Configured; 11 | import org.apache.hadoop.fs.Path; 12 | import org.apache.hadoop.util.Tool; 13 | import org.apache.hadoop.util.ToolRunner; 14 | 15 | 16 | public class ShortestPathRunner extends Configured implements Tool { 17 | 18 | @Override 19 | public int run(String[] args) throws Exception { 20 | /* 步骤1: 设置作业的信息 */ 21 | GiraphConfiguration giraphConf = new GiraphConfiguration(getConf()); 22 | 23 | // 设置compute方法 24 | giraphConf.setComputationClass(ShortestPathComputation.class); 25 | // 设置图数据的输入格式 26 | giraphConf.setVertexInputFormatClass( JsonLongDoubleFloatDoubleVertexInputFormat.class); 27 | // 设置图数据的输出格式 28 | 
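/*
 * With the input and output formats chosen here, each input line is a JSON array of the
 * form [vertexId, vertexValue, [[targetVertexId, edgeWeight], ...]]. A small test graph
 * could look like the following (one vertex per line; the values are only illustrative):
 *
 *   [0,0,[[1,1],[3,3]]]
 *   [1,0,[[0,1],[2,2],[3,1]]]
 *   [2,0,[[1,2],[4,4]]]
 *
 * IdWithValueTextOutputFormat writes each vertex as a tab-separated "id<TAB>value" line,
 * so after the job finishes the output holds the shortest distance from SOURCE_VERTEX to
 * every vertex.
 */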
giraphConf.setVertexOutputFormatClass(IdWithValueTextOutputFormat.class);
29 | 
30 |     // Enable local test mode
31 |     giraphConf.setLocalTestMode(true);
32 |     // Use a minimum and maximum of 1 worker; the master requires 100% of workers to respond when coordinating a superstep
33 |     giraphConf.setWorkerConfiguration(1, 1, 100);
34 |     // Run the master and the worker in the same process
35 |     GiraphConstants.SPLIT_MASTER_WORKER.set(giraphConf, false);
36 | 
37 |     // Create the Giraph job
38 |     GiraphJob giraphJob = new GiraphJob(giraphConf, getClass().getSimpleName());
39 | 
40 |     // Set the input path of the graph data
41 |     GiraphTextInputFormat.addVertexInputPath(giraphConf, new Path(args[0]));
42 |     // Set the output path of the graph data
43 |     GiraphTextOutputFormat.setOutputPath(giraphJob.getInternalJob(), new Path(args[1]));
44 | 
45 |     return giraphJob.run(true) ? 0 : -1;
46 |   }
47 | 
48 |   public static void main(String[] args) throws Exception {
49 |     /* Step 2: Run the job */
50 |     int exitCode = ToolRunner.run(new ShortestPathRunner(), args);
51 |     System.exit(exitCode);
52 |   }
53 | }
54 | 
-------------------------------------------------------------------------------- /FlinkDemo/src/main/java/cn/edu/ecnu/flink/example/java/wordcount/WordCount.java: --------------------------------------------------------------------------------
1 | package cn.edu.ecnu.flink.example.java.wordcount;
2 | 
3 | import org.apache.flink.api.common.functions.FlatMapFunction;
4 | import org.apache.flink.api.common.functions.MapFunction;
5 | import org.apache.flink.api.java.tuple.Tuple2;
6 | import org.apache.flink.streaming.api.datastream.DataStream;
7 | import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
8 | import org.apache.flink.util.Collector;
9 | 
10 | public class WordCount {
11 |   public static void main(String[] args) throws Exception {
12 |     run(args);
13 |   }
14 | 
15 |   public static void run(String[] args) throws Exception {
16 |     /* Step 1: Create the StreamExecutionEnvironment */
17 |     StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
18 | 
19 |     /* Step 2: Build the DAG with operators according to the application logic, including sources, transformations, and sinks */
20 |     // Receive data from the given hostname and port, creating a DataStream named lines
21 |     DataStream<String> lines = env.socketTextStream(args[0], Integer.parseInt(args[1]), "\n");
22 |     // Split each text line in lines into single words on spaces
23 |     DataStream<String> words =
24 |         lines.flatMap(
25 |             new FlatMapFunction<String, String>() {
26 |               @Override
27 |               public void flatMap(String value, Collector<String> out) throws Exception {
28 |                 for (String word : value.split(" ")) {
29 |                   out.collect(word);
30 |                 }
31 |               }
32 |             });
33 |     // Set the count of each word to 1, i.e. map each word to [word, 1]
34 |     DataStream<Tuple2<String, Integer>> pairs =
35 |         words.map(
36 |             new MapFunction<String, Tuple2<String, Integer>>() {
37 |               @Override
38 |               public Tuple2<String, Integer> map(String value) throws Exception {
39 |                 return new Tuple2<String, Integer>(value, 1);
40 |               }
41 |             });
42 |     // Group by word and accumulate the counts of identical words with sum
43 |     DataStream<Tuple2<String, Integer>> counts = pairs.keyBy(0).sum(1);
44 |     // Print the word count results
45 |     counts.print();
46 | 
47 |     /* Step 3: Trigger program execution */
48 |     env.execute("Streaming WordCount");
49 |   }
50 | }
-------------------------------------------------------------------------------- /SparkDemo/src/main/java/cn/edu/ecnu/spark/example/java/wordcount/WordCount.java: --------------------------------------------------------------------------------
1 | package cn.edu.ecnu.spark.example.java.wordcount;
2 | 
3 | import org.apache.spark.SparkConf;
4 | import org.apache.spark.api.java.JavaPairRDD;
5 | import org.apache.spark.api.java.JavaRDD;
6 | import org.apache.spark.api.java.JavaSparkContext;
7 | import org.apache.spark.api.java.function.*;
8 | import scala.Tuple2;
9 | 
10 | import java.util.Arrays;
11 | import java.util.Iterator;
12 | 
13 | public class WordCount {
14 | 
15 |   public static void run(String[] args) {
16 |     /*
步骤1:通过SparkConf设置配置信息,并创建SparkContext */ 17 | SparkConf conf = new SparkConf(); 18 | conf.setAppName("WordCountJava"); 19 | conf.setMaster("local"); // 仅用于本地进行调试,如在集群中运行则删除本行 20 | JavaSparkContext sc = new JavaSparkContext(conf); 21 | 22 | /* 步骤2:按应用逻辑使用操作算子编写DAG,其中包括RDD的创建、转换和行动等 */ 23 | // 读入文本数据,创建名为lines的RDD 24 | JavaRDD lines = sc.textFile(args[0]); 25 | 26 | // 将lines中的每一个文本行按空格分割成单个单词 27 | JavaRDD words = 28 | lines.flatMap( 29 | new FlatMapFunction() { 30 | @Override 31 | public Iterator call(String line) throws Exception { 32 | return Arrays.asList(line.split(" ")).iterator(); 33 | } 34 | }); 35 | // 将每个单词的频数设置为1,即将每个单词映射为[单词, 1] 36 | JavaPairRDD pairs = 37 | words.mapToPair( 38 | new PairFunction() { 39 | @Override 40 | public Tuple2 call(String word) throws Exception { 41 | return new Tuple2(word, 1); 42 | } 43 | }); 44 | // 按单词聚合,并对相同单词的频数使用sum进行累计 45 | JavaPairRDD wordCounts = 46 | pairs 47 | .groupByKey() 48 | .mapToPair( 49 | new PairFunction>, String, Integer>() { 50 | @Override 51 | public Tuple2 call(Tuple2> t) 52 | throws Exception { 53 | Integer sum = Integer.valueOf(0); 54 | for (Integer i : t._2) { 55 | sum += i; 56 | } 57 | return new Tuple2(t._1, sum); 58 | } 59 | }); 60 | // 合并机制 61 | /*JavaPairRDD wordCounts = 62 | pairs.reduceByKey( 63 | new Function2() { 64 | @Override 65 | public Integer call(Integer t1, Integer t2) throws Exception { 66 | return t1 + t2; 67 | } 68 | });*/ 69 | 70 | // 输出词频统计结果到文件 71 | wordCounts.saveAsTextFile(args[1]); 72 | /* 步骤3:关闭SparkContext */ 73 | sc.stop(); 74 | } 75 | 76 | public static void main(String[] args) { 77 | run(args); 78 | } 79 | } 80 | -------------------------------------------------------------------------------- /LICENSE: -------------------------------------------------------------------------------- 1 | Apache License 2 | Version 2.0, January 2004 3 | http://www.apache.org/licenses/ 4 | 5 | TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION 6 | 7 | 1. Definitions. 8 | 9 | "License" shall mean the terms and conditions for use, reproduction, 10 | and distribution as defined by Sections 1 through 9 of this document. 11 | 12 | "Licensor" shall mean the copyright owner or entity authorized by 13 | the copyright owner that is granting the License. 14 | 15 | "Legal Entity" shall mean the union of the acting entity and all 16 | other entities that control, are controlled by, or are under common 17 | control with that entity. For the purposes of this definition, 18 | "control" means (i) the power, direct or indirect, to cause the 19 | direction or management of such entity, whether by contract or 20 | otherwise, or (ii) ownership of fifty percent (50%) or more of the 21 | outstanding shares, or (iii) beneficial ownership of such entity. 22 | 23 | "You" (or "Your") shall mean an individual or Legal Entity 24 | exercising permissions granted by this License. 25 | 26 | "Source" form shall mean the preferred form for making modifications, 27 | including but not limited to software source code, documentation 28 | source, and configuration files. 29 | 30 | "Object" form shall mean any form resulting from mechanical 31 | transformation or translation of a Source form, including but 32 | not limited to compiled object code, generated documentation, 33 | and conversions to other media types. 
34 | 35 | "Work" shall mean the work of authorship, whether in Source or 36 | Object form, made available under the License, as indicated by a 37 | copyright notice that is included in or attached to the work 38 | (an example is provided in the Appendix below). 39 | 40 | "Derivative Works" shall mean any work, whether in Source or Object 41 | form, that is based on (or derived from) the Work and for which the 42 | editorial revisions, annotations, elaborations, or other modifications 43 | represent, as a whole, an original work of authorship. For the purposes 44 | of this License, Derivative Works shall not include works that remain 45 | separable from, or merely link (or bind by name) to the interfaces of, 46 | the Work and Derivative Works thereof. 47 | 48 | "Contribution" shall mean any work of authorship, including 49 | the original version of the Work and any modifications or additions 50 | to that Work or Derivative Works thereof, that is intentionally 51 | submitted to Licensor for inclusion in the Work by the copyright owner 52 | or by an individual or Legal Entity authorized to submit on behalf of 53 | the copyright owner. For the purposes of this definition, "submitted" 54 | means any form of electronic, verbal, or written communication sent 55 | to the Licensor or its representatives, including but not limited to 56 | communication on electronic mailing lists, source code control systems, 57 | and issue tracking systems that are managed by, or on behalf of, the 58 | Licensor for the purpose of discussing and improving the Work, but 59 | excluding communication that is conspicuously marked or otherwise 60 | designated in writing by the copyright owner as "Not a Contribution." 61 | 62 | "Contributor" shall mean Licensor and any individual or Legal Entity 63 | on behalf of whom a Contribution has been received by Licensor and 64 | subsequently incorporated within the Work. 65 | 66 | 2. Grant of Copyright License. Subject to the terms and conditions of 67 | this License, each Contributor hereby grants to You a perpetual, 68 | worldwide, non-exclusive, no-charge, royalty-free, irrevocable 69 | copyright license to reproduce, prepare Derivative Works of, 70 | publicly display, publicly perform, sublicense, and distribute the 71 | Work and such Derivative Works in Source or Object form. 72 | 73 | 3. Grant of Patent License. Subject to the terms and conditions of 74 | this License, each Contributor hereby grants to You a perpetual, 75 | worldwide, non-exclusive, no-charge, royalty-free, irrevocable 76 | (except as stated in this section) patent license to make, have made, 77 | use, offer to sell, sell, import, and otherwise transfer the Work, 78 | where such license applies only to those patent claims licensable 79 | by such Contributor that are necessarily infringed by their 80 | Contribution(s) alone or by combination of their Contribution(s) 81 | with the Work to which such Contribution(s) was submitted. If You 82 | institute patent litigation against any entity (including a 83 | cross-claim or counterclaim in a lawsuit) alleging that the Work 84 | or a Contribution incorporated within the Work constitutes direct 85 | or contributory patent infringement, then any patent licenses 86 | granted to You under this License for that Work shall terminate 87 | as of the date such litigation is filed. 88 | 89 | 4. Redistribution. 
You may reproduce and distribute copies of the 90 | Work or Derivative Works thereof in any medium, with or without 91 | modifications, and in Source or Object form, provided that You 92 | meet the following conditions: 93 | 94 | (a) You must give any other recipients of the Work or 95 | Derivative Works a copy of this License; and 96 | 97 | (b) You must cause any modified files to carry prominent notices 98 | stating that You changed the files; and 99 | 100 | (c) You must retain, in the Source form of any Derivative Works 101 | that You distribute, all copyright, patent, trademark, and 102 | attribution notices from the Source form of the Work, 103 | excluding those notices that do not pertain to any part of 104 | the Derivative Works; and 105 | 106 | (d) If the Work includes a "NOTICE" text file as part of its 107 | distribution, then any Derivative Works that You distribute must 108 | include a readable copy of the attribution notices contained 109 | within such NOTICE file, excluding those notices that do not 110 | pertain to any part of the Derivative Works, in at least one 111 | of the following places: within a NOTICE text file distributed 112 | as part of the Derivative Works; within the Source form or 113 | documentation, if provided along with the Derivative Works; or, 114 | within a display generated by the Derivative Works, if and 115 | wherever such third-party notices normally appear. The contents 116 | of the NOTICE file are for informational purposes only and 117 | do not modify the License. You may add Your own attribution 118 | notices within Derivative Works that You distribute, alongside 119 | or as an addendum to the NOTICE text from the Work, provided 120 | that such additional attribution notices cannot be construed 121 | as modifying the License. 122 | 123 | You may add Your own copyright statement to Your modifications and 124 | may provide additional or different license terms and conditions 125 | for use, reproduction, or distribution of Your modifications, or 126 | for any such Derivative Works as a whole, provided Your use, 127 | reproduction, and distribution of the Work otherwise complies with 128 | the conditions stated in this License. 129 | 130 | 5. Submission of Contributions. Unless You explicitly state otherwise, 131 | any Contribution intentionally submitted for inclusion in the Work 132 | by You to the Licensor shall be under the terms and conditions of 133 | this License, without any additional terms or conditions. 134 | Notwithstanding the above, nothing herein shall supersede or modify 135 | the terms of any separate license agreement you may have executed 136 | with Licensor regarding such Contributions. 137 | 138 | 6. Trademarks. This License does not grant permission to use the trade 139 | names, trademarks, service marks, or product names of the Licensor, 140 | except as required for reasonable and customary use in describing the 141 | origin of the Work and reproducing the content of the NOTICE file. 142 | 143 | 7. Disclaimer of Warranty. Unless required by applicable law or 144 | agreed to in writing, Licensor provides the Work (and each 145 | Contributor provides its Contributions) on an "AS IS" BASIS, 146 | WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or 147 | implied, including, without limitation, any warranties or conditions 148 | of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A 149 | PARTICULAR PURPOSE. 
You are solely responsible for determining the 150 | appropriateness of using or redistributing the Work and assume any 151 | risks associated with Your exercise of permissions under this License. 152 | 153 | 8. Limitation of Liability. In no event and under no legal theory, 154 | whether in tort (including negligence), contract, or otherwise, 155 | unless required by applicable law (such as deliberate and grossly 156 | negligent acts) or agreed to in writing, shall any Contributor be 157 | liable to You for damages, including any direct, indirect, special, 158 | incidental, or consequential damages of any character arising as a 159 | result of this License or out of the use or inability to use the 160 | Work (including but not limited to damages for loss of goodwill, 161 | work stoppage, computer failure or malfunction, or any and all 162 | other commercial damages or losses), even if such Contributor 163 | has been advised of the possibility of such damages. 164 | 165 | 9. Accepting Warranty or Additional Liability. While redistributing 166 | the Work or Derivative Works thereof, You may choose to offer, 167 | and charge a fee for, acceptance of support, warranty, indemnity, 168 | or other liability obligations and/or rights consistent with this 169 | License. However, in accepting such obligations, You may act only 170 | on Your own behalf and on Your sole responsibility, not on behalf 171 | of any other Contributor, and only if You agree to indemnify, 172 | defend, and hold each Contributor harmless for any liability 173 | incurred by, or claims asserted against, such Contributor by reason 174 | of your accepting any such warranty or additional liability. 175 | 176 | END OF TERMS AND CONDITIONS 177 | 178 | APPENDIX: How to apply the Apache License to your work. 179 | 180 | To apply the Apache License to your work, attach the following 181 | boilerplate notice, with the fields enclosed by brackets "[]" 182 | replaced with your own identifying information. (Don't include 183 | the brackets!) The text should be enclosed in the appropriate 184 | comment syntax for the file format. We also recommend that a 185 | file or class name and description of purpose be included on the 186 | same "printed page" as the copyright notice for easier 187 | identification within third-party archives. 188 | 189 | Copyright [yyyy] [name of copyright owner] 190 | 191 | Licensed under the Apache License, Version 2.0 (the "License"); 192 | you may not use this file except in compliance with the License. 193 | You may obtain a copy of the License at 194 | 195 | http://www.apache.org/licenses/LICENSE-2.0 196 | 197 | Unless required by applicable law or agreed to in writing, software 198 | distributed under the License is distributed on an "AS IS" BASIS, 199 | WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 200 | See the License for the specific language governing permissions and 201 | limitations under the License. 202 | --------------------------------------------------------------------------------