Flume Fault Tolerance (Failover)

Published: 2019-03-13



1. Concept


① When one machine fails, another machine takes over its work.

② Failover is typically used to eliminate single points of failure: a backup is set up for the component that is most likely to fail.

③ The more backups there are, the stronger the fault tolerance, but the more resources are wasted.

2. Writing the Configuration Files

hadoop01:

Create exec-avro-failover.properties:

# agent1 name
agent1.channels = c1
agent1.sources = r1
agent1.sinks = k1 k2

# set group
agent1.sinkgroups = g1

# set channel
agent1.channels.c1.type = memory
agent1.channels.c1.capacity = 1000
agent1.channels.c1.transactionCapacity = 100

agent1.sources.r1.channels = c1
agent1.sources.r1.type = exec
agent1.sources.r1.command = tail -F /home/xiaokang/logs/456.log

# set sink1
agent1.sinks.k1.channel = c1
agent1.sinks.k1.type = avro
agent1.sinks.k1.hostname = hadoop02
agent1.sinks.k1.port = 52020

# set sink2
agent1.sinks.k2.channel = c1
agent1.sinks.k2.type = avro
agent1.sinks.k2.hostname = hadoop03
agent1.sinks.k2.port = 52020

# set sink group
agent1.sinkgroups.g1.sinks = k1 k2

# set failover
agent1.sinkgroups.g1.processor.type = failover
# k1 (hadoop02) has the higher priority, so it consumes the data first
agent1.sinkgroups.g1.processor.priority.k1 = 10
agent1.sinkgroups.g1.processor.priority.k2 = 1
# maximum back-off period (in ms) for a failed sink
agent1.sinkgroups.g1.processor.maxpenalty = 10000

hadoop02:

Create avro-logger-failover.properties:

# Name the components on this agent
a1.sources = r1
a1.sinks = k1
a1.channels = c1

# Describe/configure the source
a1.sources.r1.type = avro
a1.sources.r1.channels = c1
a1.sources.r1.bind = hadoop02
a1.sources.r1.port = 52020

# Describe the sink
a1.sinks.k1.type = logger

# Use a channel which buffers events in memory
a1.channels.c1.type = memory
a1.channels.c1.capacity = 1000
a1.channels.c1.transactionCapacity = 100

# Bind the source and sink to the channel
a1.sources.r1.channels = c1
a1.sinks.k1.channel = c1

hadoop03:

Create avro-logger-failover.properties:

# Name the components on this agent
a1.sources = r1
a1.sinks = k1
a1.channels = c1

# Describe/configure the source
a1.sources.r1.type = avro
a1.sources.r1.channels = c1
a1.sources.r1.bind = hadoop03
a1.sources.r1.port = 52020

# Describe the sink
a1.sinks.k1.type = logger

# Use a channel which buffers events in memory
a1.channels.c1.type = memory
a1.channels.c1.capacity = 1000
a1.channels.c1.transactionCapacity = 100

# Bind the source and sink to the channel
a1.sources.r1.channels = c1
a1.sinks.k1.channel = c1

3. Starting Flume

[xiaokang@hadoop03 ~]$ flume-ng agent -n a1 -c /opt/software/flume-1.9.0/conf/ -f flume_scripts/avro-logger-failover.properties -Dflume.root.logger=INFO,console
[xiaokang@hadoop02 ~]$ flume-ng agent -n a1 -c /opt/software/flume-1.9.0/conf/ -f flume_scripts/avro-logger-failover.properties -Dflume.root.logger=INFO,console
[xiaokang@hadoop01 ~]$ flume-ng agent -n agent1 -c /opt/software/flume-1.9.0/conf/ -f flume_scripts/exec-avro-failover.properties -Dflume.root.logger=INFO,console
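The downstream agents on hadoop02 and hadoop03 are started first so that the avro sinks on hadoop01 can connect as soon as it comes up. If you would rather not keep three terminals open, the same command can be run in the background and the listening port checked afterwards (a minimal sketch; the log file name flume-a1.log is just an example, not part of the original steps):

# optional: run an agent in the background instead of an interactive terminal
nohup flume-ng agent -n a1 -c /opt/software/flume-1.9.0/conf/ \
  -f flume_scripts/avro-logger-failover.properties \
  -Dflume.root.logger=INFO,console > flume-a1.log 2>&1 &

# on hadoop02 / hadoop03: confirm the avro source is listening on port 52020
ss -lntp | grep 52020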

4. Verification

[xiaokang@hadoop01 ~]$ while true; do date >> /home/xiaokang/logs/456.log ; done
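To confirm that lines really are being appended before looking at the collector consoles, the same file can be tailed in a second terminal on hadoop01 (a quick sanity check, not part of the original steps):

# watch the file that the exec source is tailing; stop with Ctrl-C
tail -f /home/xiaokang/logs/456.log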

hadoop02 starts consuming the data, since its sink (k1) has the higher priority.

After the Flume process on hadoop02 is killed manually, hadoop01 immediately reports errors showing that the connection to hadoop02 has failed; hadoop03 then takes over from hadoop02 and continues consuming the data.
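One way to stop the hadoop02 agent and trigger the failover (a sketch; an agent started with flume-ng normally runs the org.apache.flume.node.Application main class, but double-check which process you are killing):

# on hadoop02: find the Flume agent's JVM and kill it to simulate a failure
jps -l | grep org.apache.flume.node.Application
kill -9 $(jps -l | grep org.apache.flume.node.Application | awk '{print $1}')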

