*NOTE* We now have an OpenSearch integration available for Atomic Enterprise OSSEC. Please reach out to support@atomicorp.com to inquire about this option.
Configure OSSEC Syslog Output
To configure OSSEC to send alerts to another system via syslog, follow these steps:
- Login as root to the OSSEC server.
- Open /var/ossec/etc/ossec.conf in an editor.
- Let’s assume you want to send the alerts to a syslog server at 10.0.0.1 listening on UDP port 9000. Add these lines to ossec.conf at the end of the ossec_config section:
<syslog_output>
  <server>10.0.0.1</server>
  <port>9000</port>
  <format>default</format>
</syslog_output>
- Enable syslog output:
/var/ossec/bin/ossec-control enable client-syslog
- Restart the OSSEC server:
/var/ossec/bin/ossec-control restart
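After the restart, each alert goes out as a single syslog line. Here is a hypothetical example of what one looks like in the default format (the timestamp, hosts, and rule are illustrative); this is the shape the Logstash grok pattern in the next section expects:
Dec 10 10:15:22 ossec-server ossec: Alert Level: 7; Rule: 5402 - Successful sudo to ROOT executed; Location: (web01) 10.0.0.5->/var/log/secure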
Install and Configure Logstash
- Download the Logstash RPM.
- Login as root.
- Run rpm -Uvh logstash-version.rpm, where version is the version you want to download.
Now Logstash needs to be configured to receive OSSEC syslog output on UDP port 9000 (or whatever port you decide to use):
 1  input {
 2    # stdin{}
 3    udp {
 4      port => 9000
 5      type => "syslog"
 6    }
 7  }
 8
 9  filter {
10    if [type] == "syslog" {
11      grok {
12        match => { "message" => "%{SYSLOGTIMESTAMP:syslog_timestamp} %{SYSLOGHOST:syslog_host} %{DATA:syslog_program}: Alert Level: %{BASE10NUM:Alert_Level}; Rule: %{BASE10NUM:Rule} - %{GREEDYDATA:Description}; Location: %{GREEDYDATA:Details}" }
13        add_field => [ "ossec_server", "%{host}" ]
14      }
15      mutate {
16        remove_field => [ "syslog_hostname", "syslog_message", "syslog_pid", "message", "@version", "type", "host" ]
17      }
18    }
19  }
20
21  output {
22    # stdout {
23    #   codec => rubydebug
24    # }
25    elasticsearch_http {
26      host => "10.0.0.1"
27    }
28  }
Lines [1–7] Every Logstash syslog configuration file contains input, filter, and output sections. The input section in this case tells Logstash to listen for syslog UDP packets on any IP address and port 9000. For debugging you can uncomment line 2 to get input from stdin, which is handy when testing your parsing code in the filter section. (Note that the filter only runs on events whose type is "syslog", so if you test via stdin you will likely also want to write it as stdin { type => "syslog" }.)
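To exercise the UDP input end to end, you can hand-craft an alert line and send it to the listener with netcat. This is a hypothetical smoke test; adjust the address and port to your setup:
echo 'Dec 10 10:15:22 ossec-server ossec: Alert Level: 7; Rule: 5402 - Successful sudo to ROOT executed; Location: (web01) 10.0.0.5->/var/log/secure' | nc -u -w1 10.0.0.1 9000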
Lines [9–11] The filter section divides up the incoming syslog lines, which are placed in the Logstash input field called message, using the match directive. Logstash grok filters do the basic pattern matching and parsing. You can get a detailed explanation of how grok works on the Logstash grok documentation page. The syntax for parsing fields is %{<pattern>:<field>}, where <pattern> is what will be searched for and <field> is the name of the field that is found.
Line [12] The syslog_timestamp, syslog_host, and syslog_program fields are parsed first. The next three fields are specific to OSSEC: Alert_Level, Rule, and Description. The remainder of the message is placed into Details. Here is the parsing sequence for these fields:
- Alert_Level – skip past the "Alert Level: " string, then extract the numeric characters up to the next space.
- Rule – skip past the "Rule: " string, then extract the numeric characters up to the " - " string.
- Description – skip past the " - " string, then extract any characters, including spaces, up to the "; Location: " string.
- Details – skip past the "; Location: " string, then extract the remaining characters, including spaces, from the original "message" field.
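Run against the hypothetical alert line shown earlier, the grok pattern would yield fields roughly like this (rubydebug-style rendering, values illustrative):
"syslog_timestamp" => "Dec 10 10:15:22",
"syslog_host" => "ossec-server",
"syslog_program" => "ossec",
"Alert_Level" => "7",
"Rule" => "5402",
"Description" => "Successful sudo to ROOT executed",
"Details" => "(web01) 10.0.0.5->/var/log/secure"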
Line [13] The host field, which contains the address of the system that sent the syslog packet (the OSSEC server), is copied into the ossec_server field with the add_field directive in grok.
Lines [15–17] Once all the fields are parsed, the extraneous fields are trimmed from the output with the remove_field directive in the mutate section.
Lines [21–24] The output section sends the parsed output to Elasticsearch or to stdout. You can uncomment the stdout block with the codec => rubydebug statement to output the parsed fields in JSON format for debugging.
Lines [25–26] The elasticsearch_http directive sends the Logstash output to the Elasticsearch instance running at the IP address specified by the host setting. In this case Elasticsearch is running at IP address 10.0.0.1.
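Once events are flowing, you can confirm they are being indexed by querying Elasticsearch directly. This is a hypothetical check; by default the elasticsearch_http output writes to daily logstash-YYYY.MM.dd indices:
curl 'http://10.0.0.1:9200/logstash-*/_search?q=Rule:5402&pretty=true'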
If you store the Logstash configuration in your home directory in a file called logstash.conf and Logstash is installed in /usr/share/logstash, then you can start Logstash like this:
/usr/share/logstash/bin/logstash -f ${HOME}/logstash.conf
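You can also ask Logstash to syntax-check the configuration before starting it. Flag spellings vary by release; --configtest is the 1.x-era form, while newer versions use -t or --config.test_and_exit:
/usr/share/logstash/bin/logstash -f ${HOME}/logstash.conf --configtest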
Install and Configure Elasticsearch
Single node installation
- Download the Elasticsearch RPM.
- Login as root.
- Run rpm -Uvh elasticsearch-version.rpm, where version is the version you want to download.
By default, Elasticsearch keeps its data files in /var/lib/elasticsearch and its logs in /var/log/elasticsearch. You can change these locations in elasticsearch.yml.
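The relevant settings in /etc/elasticsearch/elasticsearch.yml are path.data and path.logs; the values below are the defaults just described:
path.data: /var/lib/elasticsearch
path.logs: /var/log/elasticsearch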
Set the name of the Elasticsearch cluster to mycluster. (The elasticsearch_http output used in the Logstash configuration above connects over HTTP, so this name does not have to match any Logstash setting; it simply identifies your cluster on the network.) To do that, open /etc/elasticsearch/elasticsearch.yml and set the following line as shown:
# Cluster name identifies your cluster for auto-discovery. If you're running
# multiple clusters on the same network, make sure you're using unique names.
#
cluster.name: mycluster
The RPM will install Elasticsearch in /usr/share/elasticsearch and the configuration files /etc/elasticsearch/elasticsearch.yml and /etc/sysconfig/elasticsearch. It also creates a service script to start, stop, and check the status of Elasticsearch. You can start and stop Elasticsearch with the service command:
service elasticsearch start|stop|status
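To verify that Elasticsearch is running, query its HTTP port; it responds with a short JSON document that includes the cluster name and version:
curl http://localhost:9200/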
Install and Configure Kibana
Elasticsearch provides a web console called Kibana, which enables you to build dashboards that post queries automatically to your Elasticsearch backend. To install and configure Kibana, follow this procedure:
- Download the Kibana RPM.
- Login as root.
- Run rpm -Uvh kibana-version.rpm, where version is the version you want to download.
- Open the /opt/kibana/config.js file in an editor.
- Change the URL in the elasticsearch field value to the IP address and TCP port of your Elasticsearch system. For example, if Elasticsearch is running on address 10.0.0.1 and port 9200, the URL would be http://10.0.0.1:9200.
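The edited line in config.js would then look something like this (a sketch; leave the rest of the file unchanged):
elasticsearch: "http://10.0.0.1:9200",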
To test the installation, open the Kibana URL http://10.0.0.1/kibana/ in a browser. You should see the default Kibana welcome page.
Query Elasticsearch with Kibana
After going to the Logstash Dashboard, you’ll see a screen that has some panels on it. The top panel queries Elasticsearch for all alerts by default.
To get specific alerts, enter a query string for one of the OSSEC fields, such as Rule:70001; the results appear in the panel called EVENTS OVER TIME, which shows counts of the events returned from Elasticsearch over time. You can run additional queries by clicking the plus icon next to the most recent query, entering the new query string, and clicking the magnifying glass icon.
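Kibana query strings use Lucene syntax, so field searches take the form field:value. A few illustrative examples built on the fields extracted by the grok pattern above:
Rule:70001
Alert_Level:[7 TO 15]
Description:"sudo"
The first matches a single OSSEC rule, the second matches a range of alert levels, and the third matches alerts whose description mentions sudo.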