
LogRhythm Metrics App v1.0.1

User Guide
July 9, 2018 – Revision A

LogRhythm_MetricsApp_1.0.1_UserGuide_revA

© LogRhythm, Inc. All rights reserved


This document contains proprietary and confidential information of LogRhythm, Inc., which is protected by
copyright and possible non-disclosure agreements. The Software described in this Guide is furnished under the
End User License Agreement or the applicable Terms and Conditions (“Agreement”) which governs the use of the
Software. This Software may be used or copied only in accordance with the Agreement. No part of this Guide may
be reproduced or transmitted in any form or by any means, electronic or mechanical, including photocopying and
recording for any purpose other than what is permitted in the Agreement.
Disclaimer
The information contained in this document is subject to change without notice. LogRhythm, Inc. makes no
warranty of any kind with respect to this information. LogRhythm, Inc. specifically disclaims the implied warranty
of merchantability and fitness for a particular purpose. LogRhythm, Inc. shall not be liable for any direct, indirect,
incidental, consequential, or other damages alleged in connection with the furnishing or use of this information.
Trademark
LogRhythm is a registered trademark of LogRhythm, Inc. All other company or product names mentioned may be
trademarks, registered trademarks, or service marks of their respective holders.

LogRhythm Inc.
4780 Pearl East Circle
Boulder, CO 80301
(303) 413-8745
www.logrhythm.com
LogRhythm Customer Support
support@logrhythm.com



Contents
Overview
    Background
    Features
    Solution Architecture
    Prerequisites
Install the ELK Stack and the LogRhythm Metrics App
    Create a Least-Privilege SQL Server User for Querying LogRhythm Databases
        Create User
        Delete User
    Prep the Metrics App Host
    Run the Install Script
    Upgrade the Metrics App
Logstash Configuration
    Run the Queries
    Configuration Changes
    Scheduling and Retention
Kibana Visualizations and Dashboards
    Reporting
Appendix 1: Increase the JVM Memory for Logstash
Appendix 2: Logstash Query Details
    Alarms Query (alarms.conf)
        Elasticsearch Index Name
        Elasticsearch Index Schema
        Use Cases
    Cases Query (cases.conf)
        Elasticsearch Index Name
        Elasticsearch Index Schema
        Definitions
        Use Cases
    Deployment Log Volume Query (log_volume_deployment.conf)
        Elasticsearch Index Name
        Elasticsearch Index Schema
        Use Cases
    Data Source Log Volume Query (log_volume_datasource.conf)
        Elasticsearch Index Name
        Elasticsearch Index Schema
        Use Cases



Overview
The LogRhythm Metrics App is a standalone application that extracts LogRhythm LogMart, Case, and Alarm SQL
Server database data to a standalone Elasticsearch instance for analysis and presentation.

Background
The LogRhythm Metrics App gives system integrators, MSSPs, and large enterprises the capability to customize
reporting on logs, events, alarms, and other metrics, along with the ability to create highly customized dashboards,
reports, and bespoke views of data captured in the LogRhythm platform. This allows users to create content that is
unique to their service offering, combine data from LogRhythm with their managed solutions for broader visibility
across their full technology portfolio, and provide reporting to their end users that demonstrates measurement
across SLAs and other contractual requirements.
The most dynamic, direct, and flexible dashboard and reporting capabilities are concentrated in two areas:
• Security Operations—alarm counts, alarm histograms, MTTD/MTTR metrics, etc.
• Platform—log processing rates, indexing rates, logs by source, logs by type, etc.

Features
The LogRhythm Metrics App provides:
• A certified and supported SecOps metrics application for supporting custom dashboards, reporting, and
analysis via third-party solutions, specifically Kibana.
• The following capabilities as certified and fully supported components of the app:
o Automatic and consistent extraction of specific metrics data through exposed APIs (preferred) or
directly from SQL Server
o Appropriate transformation of data in support of analytics flexibility and to alleviate data
persistence concerns
o Writing metrics to a separate, dedicated Elasticsearch instance. We use our best efforts to ensure
that when updating the app for new features or in support of new LogRhythm versions, Elastic
indices will not be modified in ways that would break existing analytics integrations.
• Reference architectures and documentation as part of the app, in support of:
o Deploying and configuring the Metrics App, including the Elasticsearch/Logstash/Kibana stack
o Integrating Kibana for custom analytics
o Data retention best practices
o App health monitoring and troubleshooting
• Example Kibana dashboards and widget samples.



Solution Architecture

Figure 1 LogRhythm Metrics App solution architecture

The LogRhythm Metrics App uses Logstash to perform the required ETL (Extract, Transform, and Load) on the SQL
Server data. This makes the solution extensible in the field without needing code changes from LogRhythm to add
or modify functionality. The transformed data repository is a standalone Elasticsearch instance (separate from the
LogRhythm Data Indexer).
As part of this architecture, LogRhythm supports:
• Standalone Elastic instance configuration (with a fixed size and config)
• Logstash configuration and setup
• Logstash queries to ETL data from SQL Server databases—EMDB, LogMart, Alarms, and Case—to the
standalone Elastic instance. Users can transform data to de-normalize, add Entities, perform lookups, and
more
• Sample Kibana searches (4), visualizations (25), and dashboards (4)

Prerequisites
• LogRhythm Enterprise 7.2 or later
• VM or appliance with only the base CentOS 7 64-bit image installed—no Elasticsearch or other
components should be installed. It is recommended that the machine has at least four CPU cores, 16 GB
RAM, and 250 GB disk space.

NOTE: Disk size is dependent on how much data is stored in Elasticsearch. The base install of CentOS,
Java, Elasticsearch, Logstash, and Kibana uses less than 3 GB. If you need to increase the
memory on your VM, see Appendix 1: Increase the JVM Memory for Logstash.

• Network connectivity to the LogRhythm Platform Manager (PM) databases (port 1433)
• Internet connectivity to install (via yum) Java and the ELK stack components—Elasticsearch, Logstash, and
Kibana
• (Optional) Network connectivity to and from any remote workstations that will access the data in the
Elasticsearch instance
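As a quick pre-flight check, TCP connectivity to the Platform Manager can be verified from the CentOS host before installing anything. The sketch below is not part of the Metrics App; it uses bash's built-in /dev/tcp redirection, so it works on a base CentOS 7 image with no extra packages:

```shell
#!/usr/bin/env bash
# Hypothetical pre-flight helper: succeeds if a TCP connection to
# host:port can be opened within 3 seconds.
can_reach() {
  local host="$1" port="$2"
  timeout 3 bash -c "exec 3<>/dev/tcp/${host}/${port}" 2>/dev/null
}
```

For example, `can_reach 10.1.1.5 1433 && echo "PM database reachable"` (the IP address is a placeholder for your Platform Manager).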



Install the ELK Stack and the LogRhythm Metrics App
The following installation and configuration steps are automated in an install script. Before you install the
LogRhythm Metrics App, make sure you have satisfied the prerequisites above.

IMPORTANT: Snapshot your VM before beginning the installation, if applicable.

Create a Least-Privilege SQL Server User for Querying LogRhythm Databases


This section explains how to create a least-privilege SQL Server user (read-only) for Logstash to use when
querying the LogRhythm SQL Server databases. For convenience, the SQL to create and delete the user is
provided as scripts included with the Metrics App installer.

Create User
1. On the LogRhythm Platform Manager, open the Microsoft SQL Server Management Studio and log in as
an administrative user (for example, sa).
2. Click File, click Open, click File, and then select the file lrmetrics_create_leastprivuser.sql. This file is
included in the Metrics App installer.
3. Replace the new user’s password (<CHANGE_ME> in the script).
4. Execute the SQL.
The SQL creates a user called “lrmetricsapp” with the password you have specified. Use these credentials
for the Logstash query files during installation (.conf files in /etc/logstash/conf.d/logrhythm).

Delete User
1. On the LogRhythm Platform Manager, open the Microsoft SQL Server Management Studio and log in as
an administrative user (for example, sa).
2. Click File, click Open, click File, and then select the file lrmetrics_delete_leastprivuser.sql. This file is
included in the Metrics App installer.
3. Execute the SQL.
The SQL removes the lrmetricsapp user from SQL Server.

Prep the Metrics App Host


1. Connect to the Metrics App CentOS VM or appliance as root.
2. To get the IP address assigned by DHCP (if not static) so that you can connect to the machine remotely
via SSH or SFTP, run the ifconfig command.
3. (Optional) Configure the hostname:
hostnamectl set-hostname "lrmetrics"

4. (Optional) Add a non-root user with sudo privileges:

useradd lruser
passwd lruser
usermod -aG wheel lruser

5. Log out as root and log back in as lruser.



Run the Install Script

NOTE: The install script is not meant to be run more than once and should not be used to update your
configuration. To change your configuration, see Configuration Changes. To upgrade to a newer
version of the Metrics App, see Upgrade the Metrics App.

1. Connect to the CentOS VM or appliance as root or the user created in the previous section.
2. Use scp or sftp to copy the LogRhythm Metrics App package (.tar.gz file) to a temporary location, such as
/tmp.
3. Un-zip and un-tar the package:
[root@lrmetricshost tmp]# tar -xzf lrmetricsapp_1.0.1.32.tar.gz

4. Run the install.sh script as sudo or root:

[root@lrmetricshost tmp]# cd lrmetricsapp_1.0.1.32
[root@lrmetricshost lrmetricsapp_1.0.1.32]# ./install.sh

LogRhythm Metrics App Installer v1.0.1

Enter the IP address of this machine (LogRhythm Metrics App host): <IP Address>

Enter the IP address of the LogRhythm Platform Manager database: <SQL Server IP Address>

Enter the user for the LogRhythm Platform Manager database.
This should be the lrmetricsapp least privileged user.
User: <SQL Server Login>

Enter the password for the LogRhythm Platform Manager database: <SQL Server Login Password>

Starting installation...

The script downloads and installs the required third-party components along with the LogRhythm Metrics
App files. The script also makes the configuration changes required to run the Metrics App.



Upgrade the Metrics App
To upgrade from a previous version of the Metrics App rather than install a new instance:
1. Download the signed LogRhythm Metrics App installer from the LogRhythm Community.
2. Copy the LogRhythm Metrics App installer to a temporary directory on the CentOS VM (or appliance).
3. Un-zip and un-tar the package using the following command:
[root@lrmetricshost tmp]# tar -xzf lrmetricsapp_1.0.1.32.tar.gz

4. Copy the Logstash query files (alarms.conf, cases.conf, log_volume_datasource.conf, and
log_volume_deployment.conf) from the extracted package to the /etc/logstash/conf.d/logrhythm directory.
IMPORTANT: This step overwrites your existing LogRhythm Logstash query files.

5. Edit the query files to change the IP address, user, and password for the SQL Server connection. For
details on editing these files, see the “Configuration Changes” section of the LogRhythm Metrics App User
Guide.
6. Edit the crontab file used to run the queries automatically:
a. Delete the /var/spool/cron/root file:
[root@lrmetricshost tmp]# rm -f /var/spool/cron/root

b. Recreate the /var/spool/cron/root file by running the following five commands as root or editing
the crontab file directly:
touch /var/spool/cron/root
echo '00 04 * * * /usr/bin/curator /etc/curator/cleanup.yml --config /etc/curator/curator.yml' > /var/spool/cron/root
echo '*/30 * * * * /usr/share/logstash/bin/logstash -f /etc/logstash/conf.d/logrhythm/log_volume_datasource.conf > /tmp/log_volume_datasource_$(date "+\%Y\%m\%d\%H\%M\%S").log' >> /var/spool/cron/root
echo '*/30 * * * * /usr/share/logstash/bin/logstash -f /etc/logstash/conf.d/logrhythm/log_volume_deployment.conf > /tmp/log_volume_deployment_$(date "+\%Y\%m\%d\%H\%M\%S").log' >> /var/spool/cron/root
echo '*/30 * * * * /usr/share/logstash/bin/logstash -f /etc/logstash/conf.d/logrhythm/alarms.conf > /tmp/alarms_$(date "+\%Y\%m\%d\%H\%M\%S").log' >> /var/spool/cron/root
echo '*/30 * * * * /usr/share/logstash/bin/logstash -f /etc/logstash/conf.d/logrhythm/cases.conf > /tmp/cases_$(date "+\%Y\%m\%d\%H\%M\%S").log' >> /var/spool/cron/root

7. Rerun the SQL Server least privilege scripts:


a. To remove the previous lrmetricsapp user, run the lrmetrics_delete_leastprivuser.sql script.
b. To recreate the lrmetricsapp user with updated permissions, run the
lrmetrics_create_leastprivuser.sql script.



Logstash Configuration
There are four Logstash queries provided with the LogRhythm Metrics App. These queries run against the
LogRhythm databases and insert the data into their own Elasticsearch indices, which can then be queried to drive
presentations and dashboards in Kibana, Tableau, and other reporting packages. The queries are delivered as
Logstash .conf files and are executed by Logstash on a scheduled basis—for example, once per hour or once per
day. The number of days to retain data can be configured—the default is 30 days.
For more information on what is contained in the Logstash queries, see Appendix 2: Logstash Query Details.
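For orientation, each .conf file follows the standard three-part Logstash pipeline layout: a jdbc input that runs the SQL query, optional filter transformations, and an elasticsearch output that writes to a dated index. The sketch below shows the general shape only; the connection details, driver settings, and SQL statement are placeholders, not the shipped configuration:

```conf
input {
  jdbc {
    # Placeholders; the installed files contain the real values.
    jdbc_connection_string => "jdbc:sqlserver://<PM IP>:1433;databaseName=LogRhythm_Alarms"
    jdbc_user     => "lrmetricsapp"
    jdbc_password => "<password>"
    statement     => "SELECT ..."       # the shipped SQL query
  }
}
output {
  elasticsearch {
    hosts => ["localhost:9200"]
    index => "alarms-%{+YYYY.MM.dd}"    # one index per day
  }
}
```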

Run the Queries


In addition to being scheduled using cron, the Logstash queries can be run on demand.
1. Run the Logstash queries:
sudo /usr/share/logstash/bin/logstash -f /etc/logstash/conf.d/logrhythm/alarms.conf
sudo /usr/share/logstash/bin/logstash -f /etc/logstash/conf.d/logrhythm/cases.conf
sudo /usr/share/logstash/bin/logstash -f /etc/logstash/conf.d/logrhythm/log_volume_datasource.conf
sudo /usr/share/logstash/bin/logstash -f /etc/logstash/conf.d/logrhythm/log_volume_deployment.conf

The output from the Logstash command is written to standard output by default.
2. To send the output to a file, append "> log_file" to the commands in step 1.
For example, the following command writes the Logstash output to the file /tmp/alarms_conf.log:
sudo /usr/share/logstash/bin/logstash -f /etc/logstash/conf.d/logrhythm/alarms.conf > /tmp/alarms_conf.log

NOTE: If you want to retain the output from the Logstash queries that are scheduled to run by cron,
be sure to update the Logstash commands in the crontab file.

3. Verify that there is now data in your Elasticsearch instance by requesting the index list (replace the
example IP address with that of your Metrics App host):

http://10.3.0.139:9200/_cat/indices?v

This should return output showing the four indices in your Elasticsearch instance:

Figure 2 Sample output displaying indices in Elasticsearch instance



Configuration Changes
If the Logstash queries need to be modified after installation, they can be edited directly. The Logstash .conf files
are in the /etc/logstash/conf.d/logrhythm directory and can be edited as root or with sudo using vi or a similar
text editor.
The following items can be edited: SQL Server host, SQL Server login, and SQL Server password (each property
appears in two places in its .conf file):
• To modify the SQL Server host, change the jdbc_connection_string property.
• To modify the SQL Server login, change the jdbc_user property.
• To modify the SQL Server password, change the jdbc_password property.

NOTE: The password is entered and saved in plain text. For this reason, using a least-privilege,
read-only SQL Server account is recommended. For details, see Create a Least-Privilege SQL Server
User for Querying LogRhythm Databases.

While the query itself can be edited, doing so is not recommended without assistance from LogRhythm Support.
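If you manage several query files, the three property edits above can be scripted instead of made by hand. The helper below is a sketch (the function name is ours, not part of the app); it rewrites a given property in every .conf file in a directory, which covers both occurrences per file:

```shell
#!/usr/bin/env bash
# Hypothetical helper: set a jdbc_* property to a new value in every
# Logstash query file in a directory. All occurrences of the property
# in each file are rewritten.
update_jdbc_property() {
  local dir="$1" prop="$2" value="$3" f
  for f in "$dir"/*.conf; do
    [ -e "$f" ] || continue
    sed -i "s|${prop} => \".*\"|${prop} => \"${value}\"|g" "$f"
  done
}
```

For example, run as root: `update_jdbc_property /etc/logstash/conf.d/logrhythm jdbc_user lrmetricsapp`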

Scheduling and Retention


Logstash queries can be executed manually or configured to run on a recurring, scheduled basis. Upon
installation, the queries are configured as cron jobs to run every 30 minutes, and the curator cleanup job runs
once per day at 04:00. You can change the execution schedule by editing the /var/spool/cron/root file. For a
quick reference on the crontab file format, see http://www.adminschoice.com/crontab-quick-reference.
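For example, to run a query once per hour on the hour instead of every 30 minutes, only the schedule fields at the start of the corresponding line in /var/spool/cron/root change (one entry shown as a sketch; your file contains the output redirection as well):

```
# every 30 minutes (as installed):
*/30 * * * * /usr/share/logstash/bin/logstash -f /etc/logstash/conf.d/logrhythm/alarms.conf
# once per hour, on the hour:
0 * * * * /usr/share/logstash/bin/logstash -f /etc/logstash/conf.d/logrhythm/alarms.conf
```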
You can also configure the number of days to keep data in the Elasticsearch indices. The default retention period
is the previous 30 days. Each day, LogRhythm databases are queried for the previous day’s data, which is then
added to the index for that day. After the queries run, the indices for the oldest data are deleted from
Elasticsearch. For example, if the TTL is 30 days, then after the queries run for July 31, indices for July 1 are deleted.
The TTL for each query index can be specified individually in the /etc/curator/cleanup.yml file. For each of the
indices, change the value of the unit (default is days) and unit count (default is 30) to the desired TTL.
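The cleanup.yml file is a standard Elasticsearch Curator actions file. The fragment below illustrates the structure for one index family; it is a hedged sketch based on Curator's action-file format, so compare it with the shipped file rather than copying it verbatim:

```yaml
actions:
  1:
    action: delete_indices
    description: Delete alarms indices older than the TTL
    filters:
      - filtertype: pattern
        kind: prefix
        value: alarms-
      - filtertype: age
        source: name
        direction: older
        timestring: '%Y.%m.%d'
        unit: days        # change to adjust the TTL unit
        unit_count: 30    # change to adjust the TTL
```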



Kibana Visualizations and Dashboards
The LogRhythm Metrics App comes with 25 Kibana visualizations and 4 dashboards that can be used as is or
customized.

Before importing the Kibana objects, create the required index patterns in Kibana. These index patterns are used
by the provided visualizations.

1. Open the Kibana interface, click the Management tab, and then click Index Patterns.
2. Click +Create Index Pattern.
3. Add the following Index Patterns:
a. Index pattern: alarms*
Time Filter field name: alarm_date
b. Index pattern: cases*
Time Filter field name: date_created
c. Index pattern: logvolume_datasource*
Time Filter field name: stat_date
d. Index pattern: logvolume_deployment*
Time Filter field name: stat_date

4. Import the .json file lrmetricsapp_vis.json, which contains the visualizations, searches, and dashboards.
5. If you see a dialog indicating four Index Pattern conflicts, choose the appropriate Index Pattern from the
dropdown for each and then click Confirm all changes.

Reporting
Elastic X-Pack is a set of features that, among other things, enables exporting Kibana visualizations and
dashboards as PDF files. Installing X-Pack on the Metrics App host lets you export reports with a few clicks in
the Kibana UI. X-Pack also supports the scheduled execution of dashboards as exported reports.
For instructions on installing X-Pack, see https://www.elastic.co/products/x-pack/reporting.



Appendix 1: Increase the JVM Memory for Logstash
If there is a large amount of data in the LogRhythm SQL Server databases (LogMart databases can be large), then
the Logstash Java VM’s default memory settings—min 256 MB, max 1 GB—may not be large enough. When this
happens, you can increase the memory by editing the jvm.options file in the /etc/logstash directory.

To set the min and max memory to 4 GB:

1. On line 5, change -Xms256m to -Xms4g.
2. On line 6, change -Xmx1g to -Xmx4g.

NOTE: This value can be set higher or lower based on the available memory on the VM (or appliance).
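The same edit can be made non-interactively. This sketch wraps the two changes in a function (the function name is ours) so they can be tried on a copy of jvm.options before touching the real file:

```shell
#!/usr/bin/env bash
# Hypothetical helper: set both the initial (-Xms) and maximum (-Xmx)
# JVM heap size in a Logstash jvm.options file.
set_logstash_heap() {
  local file="$1" size="$2"
  sed -i "s/^-Xms.*/-Xms${size}/" "$file"
  sed -i "s/^-Xmx.*/-Xmx${size}/" "$file"
}
```

For example: `set_logstash_heap /etc/logstash/jvm.options 4g`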

Appendix 2: Logstash Query Details

Alarms Query (alarms.conf)


Elasticsearch Index Name
alarms-%{+YYYY.MM.dd} (for example, alarms-2018.05.09)

Elasticsearch Index Schema


alarm_id
alarm_date
parent_entity_id
parent_entity
entity_id
entity
alarm_rule_id
alarm_rule
alarm_status
priority
last_person_id
last_person
opened_date
time_to_open
investigated_date
time_to_investigate
closed_date
time_to_close

Use Cases
Alarm query fields can be aggregated to implement the following use cases:
• Alarm Count by Name by Hour
• Alarm Count by Status by Hour
• Alarm Metrics
o Time to Open
o Time to Investigate
o Time to Close
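As an illustration, "Alarm Count by Status by Hour" maps to a straightforward Elasticsearch aggregation over the alarms-* indices. The query body below is a sketch using the schema fields above, not something shipped with the app:

```json
{
  "size": 0,
  "aggs": {
    "per_hour": {
      "date_histogram": { "field": "alarm_date", "interval": "hour" },
      "aggs": {
        "by_status": { "terms": { "field": "alarm_status" } }
      }
    }
  }
}
```

Posting this body to http://<metrics host>:9200/alarms-*/_search returns hourly buckets, each broken down by alarm status.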



Cases Query (cases.conf)
Elasticsearch Index Name
cases-%{+YYYY.MM.dd} (for example, cases-2018.05.09)

Elasticsearch Index Schema


case_id
case
person_id
person
case_status_id
case_status
priority
incident
case_tags
due_date
date_created
date_updated
date_closed
last_updated_by_id
last_updated_by
case_number
date_earliest_evidence
date_incident
date_mitigated
date_resolved
external_id
resolution_date_updated
resolution_date_updated_by_id
resolution_date_updated_by
time_to_qualify_detect
time_to_investigate
time_to_respond

Definitions
Cases
• Time to Qualify. The time between the date of the evidence prompting the creation of a case and the date
of case creation. If there is no evidence, this value is unknown.
• Time to Investigate. The time between the creation of a case and the date the case is closed as a
non-incident.
Incidents
• Time to Detect. The time between the date of the evidence prompting the creation of a case and the date
of case creation. If there is no evidence, this value is unknown.
• Time to Respond. The time between the date of case creation and the date of incident mitigation. This
applies only to cases escalated as incidents.

Use Cases
Cases query fields can be aggregated to implement the following use cases:
• Case Count by Status by Hour
• Case Count by Priority by Hour
• Case Count by Tag by Hour
• Incident Count by Status by Hour
• Incident Count by Priority by Hour



• Incident Count by Tag by Hour
• Case Metrics
o Time to Qualify
o Time to Investigate
• Incident Metrics
o Time to Detect
o Time to Respond

Deployment Log Volume Query (log_volume_deployment.conf)


Elasticsearch Index Name
log_volume_deployment-%{+YYYY.MM.dd} (for example, log_volume_deployment-2018.05.09)

Elasticsearch Index Schema


stats_mediator_counts_id
stat_date (one-hour increments)
parent_entity_id
parent_entity
entity_id
entity
host_id
host
mediator_id
mediator
cluster_id
cluster
count_logs
count_archived_logs
count_events
count_online_logs
count_processed_logs
count_events_forwarded
count_analyzed_logs

Use Cases
Deployment Log Volume query fields can be aggregated to implement the following use cases:
• Messages Collected by Deployment by Hour
• Messages Collected by Entity by Hour
• Messages Collected by Data Processor by Hour
• Messages Collected, Processed, Forwarded for Indexing, and Forwarded as Event by Data Processor by
Hour
• Messages Indexed by Data Indexer (Cluster) by Hour



Data Source Log Volume Query (log_volume_datasource.conf)
Elasticsearch Index Name
log_volume_datasource-%{+YYYY.MM.dd} (for example, log_volume_datasource-2018.05.09)

Elasticsearch Index Schema


stats_msg_source_counts_id
stat_date (one-hour increments)
parent_entity_id
parent_entity
entity_id
entity
host_id
host
msg_source_id
msg_source
msg_source_type_id
msg_source_type
mediator_id
mediator
count_logs
count_archived_logs
count_events
count_online_logs
count_processed_logs
count_events_forwarded
count_analyzed_logs

Use Cases
Data Source Log Volume query fields can be aggregated to implement the following use cases:
• Messages Collected by Data Source by Hour
• Messages Collected by Data Source Host by Hour
• Messages Collected by Data Source Type by Hour
• Messages Collected by Data Collector by Hour

