Log Analysis with ELK

Date: 2023-01-26 19:49:39

0x01 Preface:

A while back I was doing incident response work and constantly needed to trace attacks back to their source. The pain point was the sheer volume of data and the need to get through it quickly. On top of that, while investigating a certain hotel incident I was rather envious of a vendor's whole toolbox of analysis and tracing utilities. After some reflection, I finally started building my own platform once things quieted down, and began stepping into the inevitable pitfalls.


0x02 ELK Setup:

wget https://artifacts.elastic.co/downloads/elasticsearch/elasticsearch-6.4.2.rpm
wget https://artifacts.elastic.co/downloads/kibana/kibana-6.4.2-x86_64.rpm
wget https://artifacts.elastic.co/downloads/logstash/logstash-6.4.2.rpm
wget http://download.redis.io/releases/redis-5.0.0.tar.gz

Basic firewalld usage (if you keep it running, open the stack's ports as shown after this list)
Start: systemctl start firewalld
Check status: systemctl status firewalld
Stop: systemctl stop firewalld
Disable: systemctl disable firewalld
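
If firewalld stays enabled, the ports the stack listens on need to be opened. A minimal sketch, assuming the default ports 9200 (Elasticsearch), 5601 (Kibana) and 6379 (Redis); only open 9200/6379 if remote hosts really need to reach them:

firewall-cmd --permanent --add-port=9200/tcp
firewall-cmd --permanent --add-port=5601/tcp
firewall-cmd --permanent --add-port=6379/tcp
firewall-cmd --reload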

I installed a minimal CentOS 7, so a Java environment has to be set up first.

rpm -qa | grep java

yum install java-1.8.0-openjdk* -y

# If the system already ships with a Java, remove it first
rpm -e --nodeps <name of the bundled JDK package>
http://download.oracle.com/otn-pub/java/jdk/8u191-b12/2787e4a523244c269598db4e85c51e0c/jdk-8u191-linux-x64.tar.gz
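
A quick sanity check that the JDK is in place (the exact build string will differ):

java -version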

rpm -ivh kibana-6.4.2-x86_64.rpm
rpm -ivh logstash-6.4.2.rpm
rpm -ivh elasticsearch-6.4.2.rpm

yum install gcc-c++

wget https://nginx.org/download/nginx-1.13.12.tar.gz
tar -zxvf nginx-1.13.12.tar.gz
cd nginx-1.13.12
./configure
make && make install
cd /usr/local/nginx/sbin/
./nginx
./nginx -s stop
./nginx -s quit
./nginx -s reload
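
nginx is presumably here to sit in front of Kibana (access control, single entry point); a minimal reverse-proxy sketch to drop inside the http {} block of /usr/local/nginx/conf/nginx.conf, assuming Kibana listens on its default 127.0.0.1:5601 (the server_name is a placeholder):

server {
    listen 80;
    server_name elk.example.com;              # hypothetical hostname
    location / {
        proxy_pass http://127.0.0.1:5601;     # Kibana default port
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
    }
}

After editing, reload with ./nginx -s reload.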

Edit the configuration files

vim /etc/logstash/logstash.yml
vim /etc/elasticsearch/elasticsearch.yml
vim /etc/kibana/kibana.yml
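
The stock configs mostly work as-is; the handful of settings I touch for a single-node setup are sketched below (addresses and paths are assumptions, adjust to your environment; note that Kibana 6.x still uses elasticsearch.url rather than elasticsearch.hosts):

# /etc/elasticsearch/elasticsearch.yml
network.host: 0.0.0.0
http.port: 9200

# /etc/kibana/kibana.yml
server.host: "127.0.0.1"        # keep Kibana local, exposed via the nginx proxy
elasticsearch.url: "http://127.0.0.1:9200"

# /etc/logstash/logstash.yml
path.data: /var/lib/logstash
path.config: /etc/logstash/conf.d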

Start the services

systemctl start logstash.service
systemctl start elasticsearch.service
systemctl start kibana.service
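
A quick check that everything came up; Elasticsearch should answer with a JSON banner on 9200 and Kibana should respond on 5601:

systemctl status elasticsearch.service kibana.service logstash.service
curl http://127.0.0.1:9200
curl -I http://127.0.0.1:5601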

Using Redis as the input source:

wget http://download.redis.io/releases/redis-5.0.0.tar.gz

tar zxvf redis-5.0.0.tar.gz

cd redis-5.0.0

make && make install

Edit redis.conf and set:

bind 0.0.0.0
protected-mode no
daemonize yes
maxclients 1000000
Start Redis

mv redis.conf /etc/
redis-server /etc/redis.conf
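
With Redis running, Logstash reads events from a Redis list and writes them into Elasticsearch. A minimal pipeline sketch for /etc/logstash/conf.d/redis.conf (the list key "logstash" and the index name are assumptions; whatever agent ships the logs must push to the same key):

input {
  redis {
    host      => "127.0.0.1"
    port      => 6379
    data_type => "list"
    key       => "logstash"
  }
}

output {
  elasticsearch {
    hosts => ["127.0.0.1:9200"]
    index => "logs-%{+YYYY.MM.dd}"
  }
}

Restart Logstash, push a test event with redis-cli LPUSH logstash '{"message":"test"}', and the new index should show up in curl http://127.0.0.1:9200/_cat/indices?v.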

Chinese localization for Kibana:

https://github.com/anbai-inc/Kibana_Hanization/

python main.py /usr/share/kibana
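
The script patches files under the Kibana install directory, so restart Kibana for the change to take effect:

systemctl restart kibana.service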

Chrome extensions:

https://chrome.google.com/webstore/search/elasticsearch