
vim /export/server/flink/conf/flink-conf.yaml

jobmanager.rpc.address: node1
taskmanager.numberOfTaskSlots: 2
web.submit.enable: true
# History server
jobmanager.archive.fs.dir: hdfs://node1:8020/flink/completed-jobs/
historyserver.web.address: node1
historyserver.web.port: 8082
historyserver.archive.fs.dir: hdfs://node1:8020/flink/completed-jobs/

vim /export/server/flink/conf/masters

node1:8081

vim /export/server/flink/conf/workers

node1
node2
node3

vim /etc/profile

export HADOOP_CONF_DIR=/export/server/hadoop/etc/hadoop

scp -r /export/server/flink node2:/export/server/flink
scp -r /export/server/flink node3:/export/server/flink
scp /etc/profile node2:/etc/profile
scp /etc/profile node3:/etc/profile

# Alternatively, distribute Flink in one loop (run from /export/server):
for i in {2..3}; do scp -r flink node$i:$PWD; done

source /etc/profile

/export/server/flink/bin/start-cluster.sh
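Before actually copying, the distribution step above can be checked with a dry run. A minimal sketch that only prints the scp commands (drop the `echo` to execute them; paths match the ones used above):

```shell
# Dry run: print the scp commands that would sync Flink and /etc/profile
# to node2 and node3, without executing anything.
cmds=$(for i in 2 3; do
  echo "scp -r /export/server/flink node$i:/export/server/flink"
  echo "scp /etc/profile node$i:/etc/profile"
done)
echo "$cmds"
```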
/export/server/flink/bin/jobmanager.sh ((start|start-foreground) cluster)|stop|stop-all
/export/server/flink/bin/taskmanager.sh start|start-foreground|stop|stop-all

/export/server/flink/bin/historyserver.sh start

http://node1:8081/#/overview
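Before opening the web UIs, it can help to confirm the keys set earlier actually made it into flink-conf.yaml. A minimal sketch; it checks a sample file created inline so it runs anywhere, so point CONF at the real /export/server/flink/conf/flink-conf.yaml on your cluster:

```shell
# Sanity-check that required keys exist in a flink-conf.yaml-style file.
# The sample file below is for illustration only.
CONF=$(mktemp)
cat > "$CONF" <<'EOF'
jobmanager.rpc.address: node1
taskmanager.numberOfTaskSlots: 2
web.submit.enable: true
historyserver.web.port: 8082
EOF
missing=0
for key in jobmanager.rpc.address taskmanager.numberOfTaskSlots \
           web.submit.enable historyserver.web.port; do
  grep -q "^${key}:" "$CONF" || { echo "missing: $key"; missing=1; }
done
[ "$missing" -eq 0 ] && echo "config ok"
rm -f "$CONF"
```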
http://node1:8082/#/overview
TaskManager page: shows how many TaskManagers the Flink cluster currently has, and each TaskManager's slots, memory, and CPU cores.
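As a quick cross-check of what the TaskManager page should show: with the three workers listed in the workers file and taskmanager.numberOfTaskSlots: 2, the cluster should report 3 × 2 = 6 task slots in total. A trivial sketch of that arithmetic:

```shell
# Expected slot count for this setup: 3 workers x 2 slots each.
workers=3            # node1, node2, node3 from the workers file
slots_per_tm=2       # taskmanager.numberOfTaskSlots in flink-conf.yaml
total_slots=$((workers * slots_per_tm))
echo "expected total task slots: $total_slots"
```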

/export/server/flink/bin/flink run /export/server/flink/examples/batch/WordCount.jar --input hdfs://node1:8020/wordcount/input/words.txt --output hdfs://node1:8020/wordcount/output/result.txt --parallelism 2
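To sanity-check the job's result without the cluster, the same counting logic can be reproduced locally with standard tools. The two-line input here is hypothetical; the real input is the words.txt in HDFS:

```shell
# Local word count with coreutils, mirroring what the WordCount job does:
# split on whitespace, then count occurrences of each word.
counts=$(printf 'hello flink\nhello hdfs\n' | tr -s ' ' '\n' | sort | uniq -c)
echo "$counts"
```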
http://node1:50070/explorer.html#/flink/completed-jobs
http://node1:8082/#/overview
/export/server/flink/bin/stop-cluster.sh