Flume Fault Tolerance (Failover)
Views: 620
Published: 2019-03-13



1. Concept


① When one machine fails, another machine takes over its work.

② This is typically used to eliminate single points of failure: a backup is placed wherever a failure is likely.

③ The more backups there are, the stronger the fault tolerance, but also the more resources are wasted.
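
In Flume this pattern maps onto a sink group driven by the failover sink processor: events always go to the highest-priority sink that is still alive, and a failed sink is penalized (backed off) before being retried. A minimal sketch of just the failover-specific lines follows; the names agent1, g1, k1 and k2 match the full configuration in section 2 below:

# put both sinks into one group and let the failover processor choose between them
agent1.sinkgroups = g1
agent1.sinkgroups.g1.sinks = k1 k2
agent1.sinkgroups.g1.processor.type = failover
# the sink with the higher priority receives all events while it is healthy
agent1.sinkgroups.g1.processor.priority.k1 = 10
agent1.sinkgroups.g1.processor.priority.k2 = 1
# maximum back-off period (in milliseconds) for a sink that has failed
agent1.sinkgroups.g1.processor.maxpenalty = 10000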

2. Writing the configuration files

hadoop01:

Create exec-avro-failover.properties

# agent1 name
agent1.channels = c1
agent1.sources = r1
agent1.sinks = k1 k2

# set group
agent1.sinkgroups = g1

# set channel
agent1.channels.c1.type = memory
agent1.channels.c1.capacity = 1000
agent1.channels.c1.transactionCapacity = 100

agent1.sources.r1.channels = c1
agent1.sources.r1.type = exec
agent1.sources.r1.command = tail -F /home/xiaokang/logs/456.log

# set sink1
agent1.sinks.k1.channel = c1
agent1.sinks.k1.type = avro
agent1.sinks.k1.hostname = hadoop02
agent1.sinks.k1.port = 52020

# set sink2
agent1.sinks.k2.channel = c1
agent1.sinks.k2.type = avro
agent1.sinks.k2.hostname = hadoop03
agent1.sinks.k2.port = 52020

# set sink group
agent1.sinkgroups.g1.sinks = k1 k2

# set failover
agent1.sinkgroups.g1.processor.type = failover
# k1 consumes the data first; k1 corresponds to hadoop02
agent1.sinkgroups.g1.processor.priority.k1 = 10
agent1.sinkgroups.g1.processor.priority.k2 = 1
agent1.sinkgroups.g1.processor.maxpenalty = 10000

hadoop02:

Create avro-logger-failover.properties

# Name the components on this agent
a1.sources = r1
a1.sinks = k1
a1.channels = c1

# Describe/configure the source
a1.sources.r1.type = avro
a1.sources.r1.channels = c1
a1.sources.r1.bind = hadoop02
a1.sources.r1.port = 52020

# Describe the sink
a1.sinks.k1.type = logger

# Use a channel which buffers events in memory
a1.channels.c1.type = memory
a1.channels.c1.capacity = 1000
a1.channels.c1.transactionCapacity = 100

# Bind the source and sink to the channel
a1.sources.r1.channels = c1
a1.sinks.k1.channel = c1

hadoop03:

Create avro-logger-failover.properties

# Name the components on this agent
a1.sources = r1
a1.sinks = k1
a1.channels = c1

# Describe/configure the source
a1.sources.r1.type = avro
a1.sources.r1.channels = c1
a1.sources.r1.bind = hadoop03
a1.sources.r1.port = 52020

# Describe the sink
a1.sinks.k1.type = logger

# Use a channel which buffers events in memory
a1.channels.c1.type = memory
a1.channels.c1.capacity = 1000
a1.channels.c1.transactionCapacity = 100

# Bind the source and sink to the channel
a1.sources.r1.channels = c1
a1.sinks.k1.channel = c1

3. Starting Flume

[xiaokang@hadoop03 ~]$ flume-ng agent -n a1 -c /opt/software/flume-1.9.0/conf/ -f flume_scripts/avro-logger-failover.properties -Dflume.root.logger=INFO,console
[xiaokang@hadoop02 ~]$ flume-ng agent -n a1 -c /opt/software/flume-1.9.0/conf/ -f flume_scripts/avro-logger-failover.properties -Dflume.root.logger=INFO,console
[xiaokang@hadoop01 ~]$ flume-ng agent -n agent1 -c /opt/software/flume-1.9.0/conf/ -f flume_scripts/exec-avro-failover.properties -Dflume.root.logger=INFO,console
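
Start order matters: the avro sources on hadoop02 and hadoop03 should be listening before the agent on hadoop01 starts sending. A quick, optional check (a sketch; assumes netstat is installed, ss -lnt works the same way):

[xiaokang@hadoop02 ~]$ netstat -tln | grep 52020
[xiaokang@hadoop03 ~]$ netstat -tln | grep 52020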

4. Verification

[xiaokang@hadoop01 ~]$ while true; do date >> /home/xiaokang/logs/456.log ; done
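
The loop above appends timestamps as fast as it can; if the logger output on the consumer side scrolls too quickly to follow, a throttled variant (one line per second) may be easier to watch. This is only a sketch, not part of the original test:

[xiaokang@hadoop01 ~]$ while true; do date >> /home/xiaokang/logs/456.log; sleep 1; done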

hadoop02 starts consuming the data.

After the Flume process on hadoop02 is killed manually, hadoop01 immediately reports an error indicating that the connection to hadoop02 failed. At that point hadoop03 takes over the work from hadoop02.
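
Since the agent on hadoop02 was started in the foreground, pressing Ctrl+C in its terminal is the simplest way to stop it. From another shell, a hedged alternative (assumes the Flume agent is the only process that jps lists as Application):

[xiaokang@hadoop02 ~]$ jps | grep Application
[xiaokang@hadoop02 ~]$ kill $(jps | grep Application | awk '{print $1}')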

