Flume fault tolerance (failover)
Published: 2019-03-13



1. Concept


① When one machine fails, another machine takes over its work.

② Failover is typically used to eliminate single points of failure by configuring backups for the components most likely to fail.

③ The more backups you add, the stronger the fault tolerance, but also the more resources are wasted. (A minimal configuration sketch follows this list.)
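In Flume this pattern is expressed with a sink group driven by the failover sink processor. The lines below are a minimal sketch drawn from the full configuration in section 2 (the names agent1, g1, k1 and k2 are simply the ones used in this article): the healthy sink with the highest priority receives the events, and a failed sink is retried after a back-off capped by maxpenalty (in milliseconds).

# minimal failover sink-group sketch (full configuration in section 2)
agent1.sinkgroups = g1
agent1.sinkgroups.g1.sinks = k1 k2
agent1.sinkgroups.g1.processor.type = failover
# the sink with the higher priority wins while it is healthy
agent1.sinkgroups.g1.processor.priority.k1 = 10
agent1.sinkgroups.g1.processor.priority.k2 = 1
# maximum back-off (ms) before a failed sink is retried
agent1.sinkgroups.g1.processor.maxpenalty = 10000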

2. Writing the configuration files

hadoop01:

Create exec-avro-failover.properties:

# agent1 name
agent1.channels = c1
agent1.sources = r1
agent1.sinks = k1 k2

# set group
agent1.sinkgroups = g1

# set channel
agent1.channels.c1.type = memory
agent1.channels.c1.capacity = 1000
agent1.channels.c1.transactionCapacity = 100

agent1.sources.r1.channels = c1
agent1.sources.r1.type = exec
agent1.sources.r1.command = tail -F /home/xiaokang/logs/456.log

# set sink1
agent1.sinks.k1.channel = c1
agent1.sinks.k1.type = avro
agent1.sinks.k1.hostname = hadoop02
agent1.sinks.k1.port = 52020

# set sink2
agent1.sinks.k2.channel = c1
agent1.sinks.k2.type = avro
agent1.sinks.k2.hostname = hadoop03
agent1.sinks.k2.port = 52020

# set sink group
agent1.sinkgroups.g1.sinks = k1 k2

# set failover
agent1.sinkgroups.g1.processor.type = failover
# k1 (hadoop02) has the higher priority, so it consumes the data first
agent1.sinkgroups.g1.processor.priority.k1 = 10
agent1.sinkgroups.g1.processor.priority.k2 = 1
agent1.sinkgroups.g1.processor.maxpenalty = 10000

hadoop02:

Create avro-logger-failover.properties:

# Name the components on this agent
a1.sources = r1
a1.sinks = k1
a1.channels = c1

# Describe/configure the source
a1.sources.r1.type = avro
a1.sources.r1.channels = c1
a1.sources.r1.bind = hadoop02
a1.sources.r1.port = 52020

# Describe the sink
a1.sinks.k1.type = logger

# Use a channel which buffers events in memory
a1.channels.c1.type = memory
a1.channels.c1.capacity = 1000
a1.channels.c1.transactionCapacity = 100

# Bind the source and sink to the channel
a1.sources.r1.channels = c1
a1.sinks.k1.channel = c1

hadoop03:

Create avro-logger-failover.properties:

# Name the components on this agent
a1.sources = r1
a1.sinks = k1
a1.channels = c1

# Describe/configure the source
a1.sources.r1.type = avro
a1.sources.r1.channels = c1
a1.sources.r1.bind = hadoop03
a1.sources.r1.port = 52020

# Describe the sink
a1.sinks.k1.type = logger

# Use a channel which buffers events in memory
a1.channels.c1.type = memory
a1.channels.c1.capacity = 1000
a1.channels.c1.transactionCapacity = 100

# Bind the source and sink to the channel
a1.sources.r1.channels = c1
a1.sinks.k1.channel = c1

3. Starting Flume

Start the two downstream agents first so that their Avro sources are already listening when the upstream agent on hadoop01 connects to them:

[xiaokang@hadoop03 ~]$ flume-ng agent -n a1 -c /opt/software/flume-1.9.0/conf/ -f flume_scripts/avro-logger-failover.properties -Dflume.root.logger=INFO,console
[xiaokang@hadoop02 ~]$ flume-ng agent -n a1 -c /opt/software/flume-1.9.0/conf/ -f flume_scripts/avro-logger-failover.properties -Dflume.root.logger=INFO,console
[xiaokang@hadoop01 ~]$ flume-ng agent -n agent1 -c /opt/software/flume-1.9.0/conf/ -f flume_scripts/exec-avro-failover.properties -Dflume.root.logger=INFO,console

4. Verification

[xiaokang@hadoop01 ~]$ while true; do date >> /home/xiaokang/logs/456.log ; done

hadoop02 starts consuming the data.

After the Flume process on hadoop02 is killed manually, hadoop01 immediately reports an error indicating that the connection to hadoop02 has failed. hadoop03 then takes over from hadoop02 and continues consuming the data.
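As an additional check not covered by the original screenshots, you can restart the agent on hadoop02 with the same command as in section 3. Because k1 (hadoop02) has priority 10 versus 1 for k2 (hadoop03), the failover processor should route events back to hadoop02 once its back-off period (maxpenalty = 10000 ms) has expired.

[xiaokang@hadoop02 ~]$ flume-ng agent -n a1 -c /opt/software/flume-1.9.0/conf/ -f flume_scripts/avro-logger-failover.properties -Dflume.root.logger=INFO,console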

