I'm running Elasticsearch and Kibana on Kubernetes, both created by ECK. Now I'm trying to add Filebeat and configure it to index data from a Kafka topic. This is my current configuration:
apiVersion: beat.k8s.elastic.co/v1beta1
kind: Beat
metadata:
  name: my-filebeat
  namespace: my-namespace
spec:
  type: filebeat
  version: 7.10.2
  elasticsearchRef:
    name: my-elastic
  kibanaRef:
    name: my-kibana
  config:
    filebeat.inputs:
    - type: kafka
      hosts:
        - host1:9092
        - host2:9092
        - host3:9092
      topics: ["my.topic"]
      group_id: "my_group_id"
      index: "my_index"
  deployment:
    podTemplate:
      spec:
        dnsPolicy: ClusterFirstWithHostNet
        hostNetwork: true
        securityContext:
          runAsUser: 0
        containers:
        - name: filebeat

In the pod's logs I can see entries like the following:
log/log.go:145 Non-zero metrics in the last 30s {"monitoring": {"metrics": {"beat":{"cpu":{"system":{"ticks":2470,"time":{"ms":192}},"total":{"ticks":7760,"time":{"ms":367},"value":7760},"user":{"ticks":5290,"time":{"ms":175}}},"handles":{"limit":{"hard":1048576,"soft":1048576},"open":13},"info":{"ephemeral_id":"5ce8521c-f237-4994-a02e-dd11dfd31b09","uptime":{"ms":181997}},"memstats":{"gc_next":23678528,"memory_alloc":15320760,"memory_total":459895768},"runtime":{"goroutines":106}},"filebeat":{"harvester":{"open_files":0,"running":0},"inputs":{"kafka":{"bytes_read":46510,"bytes_write":37226}}},"libbeat":{"config":{"module":{"running":0}},"pipeline":{"clients":1,"events":{"active":0}}},"registrar":{"states":{"current":0}},"system":{"load":{"1":1.18,"15":0.77,"5":0.97,"norm":{"1":0.0738,"15":0.0481,"5":0.0606}}}}}}

There are no error entries either, so I assume the connection to Kafka is working. Unfortunately, no data shows up in the my_index index specified above. What am I doing wrong?
Posted on 2021-02-04 18:59:11
My guess is that you cannot connect to the Elasticsearch cluster referenced in the output.

According to the documentation, ECK secures the Elasticsearch deployment it creates and stores the credentials in Kubernetes Secrets.
https://www.elastic.co/guide/en/cloud-on-k8s/current/k8s-beat-configuration.html
https://stackoverflow.com/questions/66043825
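For reference: since `elasticsearchRef` is set, ECK injects the `output.elasticsearch` section into the Beat config automatically, resolving the URL and the `elastic` user's password from the Secret `my-elastic-es-elastic-user` (you can inspect it with `kubectl get secret my-elastic-es-elastic-user -n my-namespace -o yaml`). If you configured the output by hand instead, it would look roughly like the sketch below; the service hostname follows ECK's `<cluster-name>-es-http` naming convention, and the CA mount path is an assumption:

```yaml
# Sketch only -- with elasticsearchRef set you normally do NOT write this
# yourself; ECK generates an equivalent output section for you.
output.elasticsearch:
  # <cluster-name>-es-http is the Service ECK creates for "my-elastic"
  hosts: ["https://my-elastic-es-http.my-namespace.svc:9200"]
  username: "elastic"
  # Password stored by ECK in the Secret my-elastic-es-elastic-user;
  # expected here as an environment variable injected into the pod
  password: "${ELASTICSEARCH_PASSWORD}"
  # CA path is an assumption; adjust to wherever the cert Secret is mounted
  ssl.certificate_authorities: ["/mnt/elastic/tls.crt"]
```

Comparing this against what ECK generates (visible in the operator logs or the rendered config Secret) should show whether the output is actually pointing at your cluster.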