Preface
One of the most important tasks in data development is data collection, because the correctness of the collected data directly affects every downstream analysis and decision, and Flume is one of the key components in that collection pipeline. Below we walk through installing Flume and then collecting an application's log output directly into HDFS.
Preparing the installation files
The installation file is provided below; you can also download it from the web yourself.
Download address: http://archive.apache.org/dist/flume/1.6.0/
Before running the steps below, Java and Hadoop must already be installed; see the Hadoop 2.7.2 cluster installation guide for reference.
Prepared file | Download link |
---|---|
spark-2.0.2-bin-hadoop2.7 | Link: https://pan.baidu.com/s/1dlUjcLjemTcm7jnc0HDiwQ Extraction code: typ4 |
Installing Flume
Installing Flume is very simple: upload the installation package and extract the tarball.

```
[root@master201 Soft]# tar -xvf apache-flume-1.6.0-bin.tar.gz
```
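To confirm that the extracted package and the Java environment work together, you can print the Flume version before writing any configuration. A minimal check, assuming the tarball was unpacked in place under the same directory:

```
# Print the Flume version; this fails fast if the Java environment is not set up correctly
[root@master201 Soft]# cd apache-flume-1.6.0-bin
[root@master201 apache-flume-1.6.0-bin]# bin/flume-ng version
```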
Configuring Flume to collect application logs into HDFS
First, a configuration in which Flume collects the web.py log and outputs the events through a logger sink:
```
a1.sources = r1
a1.sinks = k1
a1.channels = c1

# Describe/configure the source
a1.sources.r1.type = exec
a1.sources.r1.channels = c1
a1.sources.r1.command = tail -f /home/soft/web.py-0.37/log/log.log

# Describe the sink
a1.sinks.k1.type = logger

# Use a channel which buffers events in memory
a1.channels.c1.type = memory
a1.channels.c1.capacity = 1000
a1.channels.c1.transactionCapacity = 100

# Bind the source and sink to the channel
a1.sources.r1.channels = c1
a1.sinks.k1.channel = c1
```
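To try this configuration, save it to a file (the name conf/webpy-logger.conf below is only illustrative) and start the agent with the flume-ng script; the -Dflume.root.logger=INFO,console option prints the collected events to the terminal so you can confirm the exec source is tailing the log:

```
[root@master201 apache-flume-1.6.0-bin]# bin/flume-ng agent \
    --conf conf \
    --conf-file conf/webpy-logger.conf \
    --name a1 \
    -Dflume.root.logger=INFO,console
```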
Next, keep the same source and channel, but have Flume collect the web.py log and write it to HDFS through an HDFS sink:

```
a1.sources = r1
a1.sinks = k1
a1.channels = c1

# Describe/configure the source
a1.sources.r1.type = exec
a1.sources.r1.channels = c1
a1.sources.r1.command = tail -f /home/soft/web.py-0.37/log/log.log

# Use a channel which buffers events in memory
a1.channels.c1.type = memory
a1.channels.c1.capacity = 1000
a1.channels.c1.transactionCapacity = 100

# Bind the source and sink to the channel
a1.sources.r1.channels = c1

# Describe the HDFS sink
a1.sinks.k1.type = hdfs
a1.sinks.k1.channel = c1
a1.sinks.k1.hdfs.path = hdfs://hadoop100:9000/jiarong/flume/syslogtcp
a1.sinks.k1.hdfs.filePrefix = Syslog
a1.sinks.k1.hdfs.round = true
a1.sinks.k1.hdfs.roundValue = 10
a1.sinks.k1.hdfs.roundUnit = minute
```
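After starting the agent with this configuration (the file name conf/webpy-hdfs.conf below is only illustrative), every new line appended to the web.py log should end up in files under the configured HDFS path. A quick way to check, assuming an HDFS client is available on the same host:

```
# Start the agent with the HDFS sink configuration
[root@master201 apache-flume-1.6.0-bin]# bin/flume-ng agent \
    --conf conf \
    --conf-file conf/webpy-hdfs.conf \
    --name a1 \
    -Dflume.root.logger=INFO,console

# In another terminal, list the files Flume has written; names start with the Syslog prefix
[root@master201 ~]# hdfs dfs -ls hdfs://hadoop100:9000/jiarong/flume/syslogtcp
```

Note that with this configuration the HDFS sink writes SequenceFile format by default; if you want plain text files, add hdfs.fileType = DataStream to the sink settings.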