Flume Fault Tolerance (Failover)
Published: 2019-03-13



1. Concept


① When one machine fails, another machine takes over its work.

② Failover is typically used to eliminate single points of failure: a backup is set up wherever a failure is likely.

③ The more backups, the stronger the fault tolerance, but also the more resources sit idle.

2. Writing the Configuration Files

hadoop01:

Create exec-avro-failover.properties:

#agent1 name
agent1.channels = c1
agent1.sources = r1
agent1.sinks = k1 k2

#set group
agent1.sinkgroups = g1

#set channel
agent1.channels.c1.type = memory
agent1.channels.c1.capacity = 1000
agent1.channels.c1.transactionCapacity = 100

agent1.sources.r1.channels = c1
agent1.sources.r1.type = exec
agent1.sources.r1.command = tail -F /home/xiaokang/logs/456.log

# set sink1
agent1.sinks.k1.channel = c1
agent1.sinks.k1.type = avro
agent1.sinks.k1.hostname = hadoop02
agent1.sinks.k1.port = 52020

# set sink2
agent1.sinks.k2.channel = c1
agent1.sinks.k2.type = avro
agent1.sinks.k2.hostname = hadoop03
agent1.sinks.k2.port = 52020

#set sink group
agent1.sinkgroups.g1.sinks = k1 k2

#set failover
agent1.sinkgroups.g1.processor.type = failover
# k1 (hadoop02) consumes the data first
agent1.sinkgroups.g1.processor.priority.k1 = 10
agent1.sinkgroups.g1.processor.priority.k2 = 1
agent1.sinkgroups.g1.processor.maxpenalty = 10000
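Two details in this file are worth calling out. k1 (hadoop02) gets priority 10 versus k2's 1, so all events flow to hadoop02 while it is healthy; maxpenalty = 10000 is the maximum back-off, in milliseconds, before a failed sink is retried. Also, the exec source tails /home/xiaokang/logs/456.log, so it helps to create that path on hadoop01 before starting the agent (tail -F would eventually pick the file up anyway, but pre-creating it avoids startup noise):

[xiaokang@hadoop01 ~]$ mkdir -p /home/xiaokang/logs
[xiaokang@hadoop01 ~]$ touch /home/xiaokang/logs/456.log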

hadoop02:

Create avro-logger-failover.properties:

# Name the components on this agent
a1.sources = r1
a1.sinks = k1
a1.channels = c1

# Describe/configure the source
a1.sources.r1.type = avro
a1.sources.r1.channels = c1
a1.sources.r1.bind = hadoop02
a1.sources.r1.port = 52020

# Describe the sink
a1.sinks.k1.type = logger

# Use a channel which buffers events in memory
a1.channels.c1.type = memory
a1.channels.c1.capacity = 1000
a1.channels.c1.transactionCapacity = 100

# Bind the source and sink to the channel
a1.sources.r1.channels = c1
a1.sinks.k1.channel = c1

hadoop03:

Create avro-logger-failover.properties:

# Name the components on this agent
a1.sources = r1
a1.sinks = k1
a1.channels = c1

# Describe/configure the source
a1.sources.r1.type = avro
a1.sources.r1.channels = c1
a1.sources.r1.bind = hadoop03
a1.sources.r1.port = 52020

# Describe the sink
a1.sinks.k1.type = logger

# Use a channel which buffers events in memory
a1.channels.c1.type = memory
a1.channels.c1.capacity = 1000
a1.channels.c1.transactionCapacity = 100

# Bind the source and sink to the channel
a1.sources.r1.channels = c1
a1.sinks.k1.channel = c1

3. Starting Flume

[xiaokang@hadoop03 ~]$ flume-ng agent -n a1 -c /opt/software/flume-1.9.0/conf/ -f flume_scripts/avro-logger-failover.properties -Dflume.root.logger=INFO,console
[xiaokang@hadoop02 ~]$ flume-ng agent -n a1 -c /opt/software/flume-1.9.0/conf/ -f flume_scripts/avro-logger-failover.properties -Dflume.root.logger=INFO,console
[xiaokang@hadoop01 ~]$ flume-ng agent -n agent1 -c /opt/software/flume-1.9.0/conf/ -f flume_scripts/exec-avro-failover.properties -Dflume.root.logger=INFO,console
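The startup order is deliberate: the agents on hadoop03 and hadoop02 come up first so that their Avro sources are already listening on port 52020 by the time agent1's sinks try to connect. From a second shell you can confirm a listener is up (this assumes ss from iproute2 is installed; netstat -tlnp gives the same information):

[xiaokang@hadoop02 ~]$ ss -tlnp | grep 52020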

4. Verification

[xiaokang@hadoop01 ~]$ while true; do date >> /home/xiaokang/logs/456.log ; done
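The loop above appends timestamps as fast as the shell can run. A gentler variant is just as good for observing failover (the one-second interval is an arbitrary choice):

[xiaokang@hadoop01 ~]$ while true; do date >> /home/xiaokang/logs/456.log; sleep 1; done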

hadoop02 starts consuming the data.

After the Flume process on hadoop02 is killed manually, hadoop01 immediately logs errors reporting that the connection to hadoop02 has failed, and hadoop03 takes over the work in hadoop02's place.
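One way to stop the agent on hadoop02 for this test (this assumes jps reports the Flume agent JVM as "Application", which is how flume-ng's main class usually shows up):

[xiaokang@hadoop02 ~]$ kill $(jps | grep Application | awk '{print $1}')

Because k1 still has the higher priority, restarting the agent on hadoop02 makes agent1 fail back to it once the penalty period expires.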

