Flume Fault Tolerance (Failover)
Published: 2019-03-13



1. Concept:


① When one machine fails, another machine takes over its work.

② It is usually used to address single points of failure by providing backups for the components that are prone to failing.

③ The more backups there are, the stronger the fault tolerance, but the more resources are wasted (see the sketch after this list).
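
As a rough illustration of ③, the failover sink group configured later in this article could be extended with additional backup sinks. The sink k3 and host hadoop04 below are hypothetical and not part of this article's setup:

#failover sink group with two backups (k3/hadoop04 are hypothetical)
agent1.sinks = k1 k2 k3
agent1.sinkgroups.g1.sinks = k1 k2 k3
agent1.sinkgroups.g1.processor.type = failover
# higher number = higher priority; k1 receives all events while it is healthy
agent1.sinkgroups.g1.processor.priority.k1 = 10
agent1.sinkgroups.g1.processor.priority.k2 = 5
agent1.sinkgroups.g1.processor.priority.k3 = 1
# hypothetical third collector
agent1.sinks.k3.channel = c1
agent1.sinks.k3.type = avro
agent1.sinks.k3.hostname = hadoop04
agent1.sinks.k3.port = 52020

Each extra backup raises availability, but it also means one more mostly idle agent tying up memory and a port on another machine.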

2. Writing the configuration files

hadoop01:

Create exec-avro-failover.properties

#agent1 name
agent1.channels = c1
agent1.sources = r1
agent1.sinks = k1 k2

#set group
agent1.sinkgroups = g1

#set channel
agent1.channels.c1.type = memory
agent1.channels.c1.capacity = 1000
agent1.channels.c1.transactionCapacity = 100

agent1.sources.r1.channels = c1
agent1.sources.r1.type = exec
agent1.sources.r1.command = tail -F /home/xiaokang/logs/456.log

# set sink1
agent1.sinks.k1.channel = c1
agent1.sinks.k1.type = avro
agent1.sinks.k1.hostname = hadoop02
agent1.sinks.k1.port = 52020

# set sink2
agent1.sinks.k2.channel = c1
agent1.sinks.k2.type = avro
agent1.sinks.k2.hostname = hadoop03
agent1.sinks.k2.port = 52020

#set sink group
agent1.sinkgroups.g1.sinks = k1 k2

#set failover
agent1.sinkgroups.g1.processor.type = failover
#k1 consumes the data first; k1 corresponds to hadoop02
agent1.sinkgroups.g1.processor.priority.k1 = 10
agent1.sinkgroups.g1.processor.priority.k2 = 1
agent1.sinkgroups.g1.processor.maxpenalty = 10000
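
With the failover sink processor, events always go to the live sink with the highest priority value, so k1 (hadoop02) consumes the data as long as it is reachable and k2 (hadoop03) only takes over when k1 fails; maxpenalty is the upper bound, in milliseconds, on the back-off period before a failed sink is tried again.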

hadoop02:

Create avro-logger-failover.properties

# Name the components on this agent
a1.sources = r1
a1.sinks = k1
a1.channels = c1

# Describe/configure the source
a1.sources.r1.type = avro
a1.sources.r1.channels = c1
a1.sources.r1.bind = hadoop02
a1.sources.r1.port = 52020

# Describe the sink
a1.sinks.k1.type = logger

# Use a channel which buffers events in memory
a1.channels.c1.type = memory
a1.channels.c1.capacity = 1000
a1.channels.c1.transactionCapacity = 100

# Bind the source and sink to the channel
a1.sources.r1.channels = c1
a1.sinks.k1.channel = c1

hadoop03:

Create avro-logger-failover.properties

# Name the components on this agent
a1.sources = r1
a1.sinks = k1
a1.channels = c1

# Describe/configure the source
a1.sources.r1.type = avro
a1.sources.r1.channels = c1
a1.sources.r1.bind = hadoop03
a1.sources.r1.port = 52020

# Describe the sink
a1.sinks.k1.type = logger

# Use a channel which buffers events in memory
a1.channels.c1.type = memory
a1.channels.c1.capacity = 1000
a1.channels.c1.transactionCapacity = 100

# Bind the source and sink to the channel
a1.sources.r1.channels = c1
a1.sinks.k1.channel = c1

3. Starting Flume

[xiaokang@hadoop03 ~]$ flume-ng agent -n a1 -c /opt/software/flume-1.9.0/conf/ -f flume_scripts/avro-logger-failover.properties -Dflume.root.logger=INFO,console

[xiaokang@hadoop02 ~]$ flume-ng agent -n a1 -c /opt/software/flume-1.9.0/conf/ -f flume_scripts/avro-logger-failover.properties -Dflume.root.logger=INFO,console

[xiaokang@hadoop01 ~]$ flume-ng agent -n agent1 -c /opt/software/flume-1.9.0/conf/ -f flume_scripts/exec-avro-failover.properties -Dflume.root.logger=INFO,console
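
The collector agents on hadoop02 and hadoop03 are started first so that their Avro sources are already listening on port 52020 when agent1 on hadoop01 comes up; if agent1 is started first, its Avro sinks will report connection failures until a collector becomes available.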

4. Verification

[xiaokang@hadoop01 ~]$ while true; do date >> /home/xiaokang/logs/456.log ; done
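
This loop appends a timestamp line to 456.log as fast as the shell can run and only stops with Ctrl+C; if a bounded test is preferred, a variant along the same lines (same log path assumed) could be:

[xiaokang@hadoop01 ~]$ for i in $(seq 1 100); do date >> /home/xiaokang/logs/456.log; sleep 1; done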

hadoop02 starts consuming the data.

After the Flume process on hadoop02 is killed manually, hadoop01 immediately reports an error showing that the connection to hadoop02 failed, and hadoop03 takes over the work from hadoop02.

