Kubernetes Log Collection with ELK (Bare-Metal Deployment)

2024-08-20

Why collect logs

Collected logs can be used to:

  • analyze user behavior
  • monitor server health
  • harden system and application security

Which logs to collect

  • system logs from Kubernetes cluster nodes
  • application logs on Kubernetes cluster nodes
  • logs from applications deployed in the Kubernetes cluster

Log collection options

  • ELK + Filebeat
  • ELK + Fluentd

Host preparation

For better ELK performance, it is generally recommended to deploy the ELK cluster on physical machines outside the k8s cluster, although deploying it inside the cluster also works.

Host     | Software      | Version | Specs | IP
---------|---------------|---------|-------|---------------
elastic  | elasticsearch | 7.17.23 | 2C4G  | 192.168.77.191
logstash | logstash      | 7.17.23 | 2C4G  | 192.168.77.192
kibana   | kibana        | 7.17.23 | 2C2G  | 192.168.77.193

Network configuration

Run this on every ELK host (the kibana node is shown here). Both commands are temporary; to persist across reboots, also run systemctl disable firewalld and set SELINUX=permissive in /etc/selinux/config.

[root@kibana ~]# systemctl stop firewalld
[root@kibana ~]# setenforce 0

Elasticsearch

https://www.elastic.co/cn/downloads/elasticsearch
Choose your platform and download the RPM package.

[root@es ~]# yum install -y java-11-openjdk
[root@es ~]# wget https://artifacts.elastic.co/downloads/elasticsearch/elasticsearch-7.17.23-x86_64.rpm
[root@es ~]# rpm -ivh elasticsearch-7.17.23-x86_64.rpm 
[root@es-191 ~]# grep "^[a-zA-Z]" /etc/elasticsearch/elasticsearch.yml
cluster.name: k8s-es # cluster name
node.name: es-191 # node name
path.data: /var/lib/elasticsearch # data directory
path.logs: /var/log/elasticsearch # log directory
network.host: 192.168.77.191 # this host's IP
http.port: 9200
discovery.seed_hosts: ["192.168.77.191"] # host discovery list
cluster.initial_master_nodes: ["192.168.77.191"] # initial cluster master nodes
[root@es-191 ~]# systemctl enable elasticsearch
[root@es-191 ~]# systemctl start elasticsearch
[root@es-191 ~]# ss -tunlp | grep 9200
tcp   LISTEN 0      4096   [::ffff:192.168.77.191]:9200            *:*    users:(("java",pid=12373,fd=293))
[root@es-191 ~]# curl http://192.168.77.191:9200
{
  "name" : "es-191",
  "cluster_name" : "k8s-es",
  "cluster_uuid" : "UZUrn8-nQdCJ0-ZH6zTP9w",
  "version" : {
    "number" : "7.17.23",
    "build_flavor" : "default",
    "build_type" : "rpm",
    "build_hash" : "61d76462eecaf09ada684d1b5d319b5ff6865a83",
    "build_date" : "2024-07-25T14:37:42.448799567Z",
    "build_snapshot" : false,
    "lucene_version" : "8.11.3",
    "minimum_wire_compatibility_version" : "6.8.0",
    "minimum_index_compatibility_version" : "6.0.0-beta1"
  },
  "tagline" : "You Know, for Search"
}
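Beyond the banner above, the _cluster/health API gives a one-word health status. A minimal probe, assuming ES is reachable at the address configured above:

```shell
#!/bin/sh
# Health probe for the single-node cluster set up above (assumed address).
ES_URL="${ES_URL:-http://192.168.77.191:9200}"

# Pull the "status" field (green/yellow/red) out of /_cluster/health.
es_health() {
  curl -s "$ES_URL/_cluster/health" | sed -n 's/.*"status":"\([a-z]*\)".*/\1/p'
}
```

On a single node, expect green while there are no user indices and yellow once an index with replica shards exists, since replicas cannot be assigned to the same node that holds the primary.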

Logstash

https://www.elastic.co/cn/downloads/logstash
Choose your platform and download the RPM package.

[root@logstash ~]# yum install -y java-11-openjdk
[root@logstash ~]# wget https://artifacts.elastic.co/downloads/logstash/logstash-7.17.23-x86_64.rpm
[root@logstash ~]# rpm -ivh logstash-7.17.23-x86_64.rpm 
[root@logstash-192 ~]# grep "^[a-zA-Z]" /etc/logstash/logstash.yml
node.name: logstash-192
path.data: /var/lib/logstash
api.enabled: true
api.http.host: 192.168.77.192
api.http.port: 9600-9700
path.logs: /var/log/logstash

Logstash does not need to be started ahead of time; start it when it is needed.

Testing Logstash
Standard input and standard output

Startup is a bit slow here.

[root@logstash-192 ~]# /usr/share/logstash/bin/logstash -e 'input {stdin{} } output {stdout{} }'
Using bundled JDK: /usr/share/logstash/jdk
OpenJDK 64-Bit Server VM warning: Option UseConcMarkSweepGC was deprecated in version 9.0 and will likely be removed in a future release.
WARNING: Could not find logstash.yml which is typically located in $LS_HOME/config or /etc/logstash. You can specify the path using --path.settings. Continuing using the defaults
Could not find log4j2 configuration at path /usr/share/logstash/config/log4j2.properties. Using default config which logs errors to the console
[INFO ] 2024-08-15 13:04:48.922 [main] runner - Starting Logstash {"logstash.version"=>"7.17.23", "jruby.version"=>"jruby 9.2.20.1 (2.5.8) 2021-11-30 2a2962fbd1 OpenJDK 64-Bit Server VM 11.0.22+7 on 11.0.22+7 +indy +jit [linux-x86_64]"}
... startup pauses here for a while (~30s)
The stdin plugin is now waiting for input:
[INFO ] 2024-08-15 13:04:55.957 [Agent thread] agent - Pipelines running {:count=>1, :running_pipelines=>[:main], :non_running_pipelines=>[]}
hello # typed input
{
          "host" => "logstash-192",
       "message" => "hello",
      "@version" => "1",
    "@timestamp" => 2024-08-15T17:05:26.453Z
} # the event is printed as JSON

Feeding input from Logstash into Elasticsearch

[root@logstash-192 ~]# /usr/share/logstash/bin/logstash -e 'input { stdin{} } output { elasticsearch { hosts => [ "192.168.77.191:9200" ] index => "logstash-%{+YYYY.MM.dd}" } }'
Using bundled JDK: /usr/share/logstash/jdk
OpenJDK 64-Bit Server VM warning: Option UseConcMarkSweepGC was deprecated in version 9.0 and will likely be removed in a future release.
WARNING: Could not find logstash.yml which is typically located in $LS_HOME/config or /etc/logstash. You can specify the path using --path.settings. Continuing using the defaults
Could not find log4j2 configuration at path /usr/share/logstash/config/log4j2.properties. Using default config which logs errors to the console
[INFO ] 2024-08-15 13:19:08.354 [main] runner - Starting Logstash {"logstash.version"=>"7.17.23", "jruby.version"=>"jruby 9.2.20.1 (2.5.8) 2021-11-30 2a2962fbd1 OpenJDK 64-Bit Server VM 11.0.22+7 on 11.0.22+7 +indy +jit [linux-x86_64]"}
... startup pauses here for a while (~30s)
WARNING: Use --illegal-access=warn to enable warnings of further illegal reflective access operations
WARNING: All illegal access operations will be denied in a future release
[INFO ] 2024-08-15 13:19:16.205 [[main]-pipeline-manager] javapipeline - Pipeline started {"pipeline.id"=>"main"}
The stdin plugin is now waiting for input:
[INFO ] 2024-08-15 13:19:16.338 [Agent thread] agent - Pipelines running {:count=>1, :running_pipelines=>[:main], :non_running_pipelines=>[]}
hello es # typed input
Nothing is printed below; the event is written straight to Elasticsearch.

The index then appears in Kibana's index management (Stack Management → Index Management).
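You can also check for the index from the shell. The %{+YYYY.MM.dd} pattern in the output expands to the event's (UTC) date, which shell can reproduce (a sketch; the commented curl target assumes the ES address used above):

```shell
#!/bin/sh
# Compose the daily index name that "logstash-%{+YYYY.MM.dd}" expands to.
# Logstash uses the event's @timestamp in UTC, hence date -u; near midnight
# local time the date can differ from your wall clock.
INDEX="logstash-$(date -u +%Y.%m.%d)"
echo "$INDEX"

# Then list it in ES (assumes the address configured earlier):
# curl "http://192.168.77.191:9200/_cat/indices/${INDEX}?v"
```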

If the test succeeds, start the service:

[root@logstash-192 ~]# systemctl start logstash
[root@logstash-192 ~]# systemctl enable logstash
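Note that the systemd service reads pipelines from /etc/logstash/conf.d/, not from the -e test above, so the service needs a pipeline file there. A sketch matching this article's setup (the filename is an assumption; port 5044 and the k8s-module index correspond to the filebeat configuration later in this article):

```conf
# /etc/logstash/conf.d/k8s-logs.conf (assumed filename)
input {
  beats {
    port => 5044                           # filebeat's output.logstash port
  }
}

output {
  elasticsearch {
    hosts => ["192.168.77.191:9200"]
    index => "k8s-module-%{+yyyy.MM.dd}"   # mirrors the filebeat index setting
  }
}
```

Restart logstash (systemctl restart logstash) after adding or changing pipeline files.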

Kibana

https://www.elastic.co/cn/downloads/kibana
Choose your platform and download the RPM package.

[root@kibana ~]# yum install -y java-11-openjdk
[root@kibana ~]# wget https://artifacts.elastic.co/downloads/kibana/kibana-7.17.23-x86_64.rpm
[root@kibana ~]# rpm -ivh kibana-7.17.23-x86_64.rpm
[root@kibana ~]# cat /etc/kibana/kibana.yml | grep "^[a-zA-Z]"
server.port: 5601
server.host: "192.168.77.193" # listen on this host's IP, or 0.0.0.0 for all interfaces
elasticsearch.hosts: ["http://192.168.77.191:9200"] # multiple hosts allowed
i18n.locale: "zh-CN" # Chinese-language UI
[root@kibana ~]# systemctl enable kibana --now
[root@kibana ~]# systemctl status kibana

Startup takes a moment; wait a little before checking.

[root@kibana ~]# ss -tunlp | grep 5601
tcp   LISTEN 0      511          192.168.77.193:5601      0.0.0.0:*    users:(("node",pid=12720,fd=72))

Browse to http://192.168.77.193:5601
If the page shows "Kibana server is not ready yet", check whether the Elasticsearch server is running.
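A shell-side readiness probe is also possible: Kibana's /api/status endpoint returns HTTP 200 once it is ready, and typically 503 while it is still starting. A sketch assuming the address configured above:

```shell
#!/bin/sh
# Report the HTTP status of Kibana's /api/status endpoint (assumed address).
KIBANA_URL="${KIBANA_URL:-http://192.168.77.193:5601}"

kibana_up() {
  # -w '%{http_code}' prints only the response code; the body is discarded.
  curl -s -o /dev/null -w '%{http_code}' "$KIBANA_URL/api/status"
}
```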


Collecting K8s node system logs

[root@harbor ~]# cat filebeat-to-logstash.yaml 
apiVersion: v1
kind: ConfigMap
metadata:
  name: k8s-logs-filebeat-config
  namespace: kube-system

data:
  filebeat.yml: |
    filebeat.inputs:
      - type: log
        paths:
          - /var/log/messages
        fields:
          app: k8s
          type: module
        fields_under_root: true

    setup.ilm.enabled: false
    setup.template.name: "k8s-module"
    setup.template.pattern: "k8s-module-*"

    output.logstash:
      hosts: ['192.168.77.192:5044'] # change to your logstash address
      index: "k8s-module-%{+yyyy.MM.dd}"

---
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: k8s-logs
  namespace: kube-system
spec:
  selector:
    matchLabels:
      project: k8s
      app: filebeat
  template:
    metadata:
      labels:
        project: k8s
        app: filebeat
    spec:
      containers:
      - name: filebeat
        image: docker.io/elastic/filebeat:7.17.23
        args: [
          "-c", "/etc/filebeat.yml",
          "-e",
        ]
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
          limits:
            cpu: 500m
            memory: 500Mi
        securityContext:
          runAsUser: 0
        volumeMounts:
        - name: filebeat-config
          mountPath: /etc/filebeat.yml
          subPath: filebeat.yml
        - name: k8s-logs
          mountPath: /var/log/messages
      volumes:
      - name: k8s-logs
        hostPath:
          path: /var/log/messages
      - name: filebeat-config
        configMap:
          name: k8s-logs-filebeat-config
[root@harbor ~]# kubectl get pod -n kube-system -owide | grep k8s-logs 
k8s-logs-979k5                            1/1     Running   0               104s   10.243.58.247    k8s-node02     <none>           <none>
k8s-logs-gpbkv                            1/1     Running   0               104s   10.243.85.219    k8s-node01     <none>           <none>
k8s-logs-rbz4q                            1/1     Running   0               104s   10.243.135.142   k8s-node03     <none>           <none>

View a pod's log output

[root@harbor ~]# kubectl logs k8s-logs-979k5 -n kube-system
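In that log output, a filebeat 7.x pod that has reached logstash prints a "Connection to backoff(async(tcp://192.168.77.192:5044)) established" line. A small helper (a sketch; the pod name in the usage comment is just the one from the output above) can count those lines:

```shell
#!/bin/sh
# Count established-connection lines in a filebeat pod's log; a non-zero
# count suggests the pod reached logstash on port 5044.
filebeat_connected() {
  kubectl logs "$1" -n kube-system | grep -c 'Connection to backoff.*established'
}

# usage: filebeat_connected k8s-logs-979k5
```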
