User Guide
July 9, 2018 – Revision A
LogRhythm_MetricsApp_1.0.1_UserGuide_revA
LogRhythm Inc.
4780 Pearl East Circle
Boulder, CO 80301
(303) 413-8745
www.logrhythm.com
LogRhythm Customer Support
support@logrhythm.com
Background
The LogRhythm Metrics App gives system integrators, MSSPs, and large enterprises the ability to customize
reporting on logs, events, alarms, and other metrics, and to create highly customized dashboards, reports, and
bespoke views of data captured in the LogRhythm platform. Users can create content that is unique to their
service offering, combine LogRhythm data with their managed solutions for broader visibility across their full
technology portfolio, and provide end users with reporting that demonstrates measurement against SLAs and
other contractual requirements.
The most dynamic, direct, and flexible dashboard and reporting capabilities are concentrated in two areas:
• Security Operations—alarm counts, alarm histograms, MTTD/MTTR metrics, etc.
• Platform—log processing rates, indexing rates, logs by source, logs by type, etc.
Features
The LogRhythm Metrics App provides:
• A certified and supported SecOps metrics application that enables custom dashboards, reporting, and
analysis via third-party solutions, specifically Kibana.
• The following capabilities as certified and fully supported components of the app:
o Automatic and consistent extraction of specific metrics data through exposed APIs (preferred) or
directly from SQL Server
o Appropriate transformation of data in support of analytics flexibility and to alleviate data
persistence concerns
o Writing metrics to a separate, dedicated Elasticsearch instance. We use our best efforts to ensure
that when updating the app for new features or in support of new LogRhythm versions, Elastic
indices will not be modified in ways that would break existing analytics integrations.
• Reference architectures and documentation as part of the app, in support of:
o Deploying and configuring the Metrics App, including the Elasticsearch/Logstash/Kibana stack
o Integrating Kibana for custom analytics
o Data retention best practices
o App health monitoring and troubleshooting
• Example Kibana dashboards and widget samples.
The LogRhythm Metrics App uses Logstash to perform the required ETL (Extract, Transform, and Load) on the SQL
Server data. This makes the solution extensible in the field without needing code changes from LogRhythm to add
or modify functionality. The transformed data repository is a standalone Elasticsearch instance (separate from the
LogRhythm Data Indexer).
As part of this architecture, LogRhythm supports:
• Standalone Elastic instance configuration (with a fixed size and config)
• Logstash configuration and setup
• Logstash queries to ETL data from SQL Server databases—EMDB, LogMart, Alarms, and Case—to the
standalone Elastic instance. Users can transform data to de-normalize, add Entities, perform lookups, and
more
• Sample Kibana searches (4), visualizations (25), and dashboards (4)
Prerequisites
• LogRhythm Enterprise 7.2 or later
• VM or appliance with only the base CentOS 7 64-bit image installed—no Elasticsearch or other
components should be installed. It is recommended that the machine have at least four CPU cores, 16 GB
RAM, and 250 GB of disk space.
NOTE: Disk size is dependent on how much data is stored in Elasticsearch. The base install of CentOS,
Java, Elasticsearch, Logstash, and Kibana uses less than 3 GB. If you need to increase the
memory on your VM, see Appendix 1: Increase the JVM Memory for Logstash.
• Network connectivity to the LogRhythm Platform Manager (PM) databases (port 1433)
• Internet connectivity to install (via yum) Java and the ELK stack components—Elasticsearch, Logstash, and
Kibana
• (Optional) Network connectivity to and from any remote workstations that will access the data in the
Elasticsearch instance
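The database connectivity requirement above can be spot-checked from the CentOS host before installation. A minimal sketch using bash's /dev/tcp redirection, where PM_HOST is a placeholder for your Platform Manager's address:

```shell
#!/bin/bash
# Placeholder address for the LogRhythm Platform Manager; substitute your own.
PM_HOST=192.0.2.10
# Attempt a TCP connection to the SQL Server port (1433) with a 3-second timeout.
if timeout 3 bash -c "cat < /dev/null > /dev/tcp/${PM_HOST}/1433" 2>/dev/null; then
    echo "Port 1433 on ${PM_HOST} is reachable"
else
    echo "Cannot reach ${PM_HOST} on port 1433 - check routing and firewall rules"
fi
```

A failure here usually indicates a missing firewall rule or route rather than a problem with the Metrics App itself.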
Create User
1. On the LogRhythm Platform Manager, open the Microsoft SQL Server Management Studio and log in as
an administrative user (for example, sa).
2. Click File, click Open, click File, and then select the file lrmetrics_create_leastprivuser.sql. This file is
included in the Metrics App installer.
3. Replace the new user’s password (<CHANGE_ME> in the script).
4. Execute the SQL.
The SQL creates a user called “lrmetricsapp” with the password you have specified. Use these credentials
for the Logstash query files during installation (.conf files in /etc/logstash/conf.d/logrhythm).
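These credentials appear in the jdbc input section of each query file. The relevant lines look roughly like the following sketch; the exact connection string in the shipped .conf files may differ, and the values shown are placeholders:

```
input {
  jdbc {
    jdbc_connection_string => "jdbc:sqlserver://<SQL Server IP>:1433"
    jdbc_user => "lrmetricsapp"
    jdbc_password => "<password set in lrmetrics_create_leastprivuser.sql>"
    # ...
  }
}
```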
Delete User
1. On the LogRhythm Platform Manager, open the Microsoft SQL Server Management Studio and log in as
an administrative user (for example, sa).
2. Click File, click Open, click File, and then select the file lrmetrics_delete_leastprivuser.sql. This file is
included in the Metrics App installer.
3. Execute the SQL.
The SQL removes the lrmetricsapp user from SQL Server.
NOTE: The install script is not meant to be run more than once and should not be used to update your
configuration. To change your configuration, see Configuration Changes. To upgrade to a newer
version of the Metrics App, see Upgrade the Metrics App.
1. Connect to the CentOS VM or appliance as root or the user created in the previous section.
2. Use scp or sftp to copy the LogRhythm Metrics App package (.tar.gz file) to a temporary location, such as
/tmp.
3. Extract the package:
[root@lrmetricshost tmp]# tar -xzf lrmetricsapp_1.0.1.32.tar.gz
4. Run the install script included in the package. When prompted, enter the following:
Enter the IP address of this machine (LogRhythm Metrics App host): <IP Address>
Enter the IP address of the LogRhythm Platform Manager database: <SQL Server IP Address>
Enter the password for the LogRhythm Platform Manager database: <SQL Server Login Password>
Starting installation...
The script downloads and installs the required third-party components along with the LogRhythm Metrics
App files. The script also makes the configuration changes required to run the Metrics App.
IMPORTANT: This step overwrites your existing LogRhythm Logstash query files.
5. Edit the query files to change the IP address, user, and password for the SQL Server connection. For
details on editing these files, see the “Configuration Changes” section of the LogRhythm Metrics App User
Guide.
6. Edit the crontab file used to run the queries automatically:
a. Delete the /var/spool/cron/root file:
[root@lrmetricshost tmp]# rm -f /var/spool/cron/root
b. Recreate the /var/spool/cron/root file by running the following five commands as root or editing
the crontab file directly:
touch /var/spool/cron/root
echo '00 04 * * * /usr/bin/curator /etc/curator/cleanup.yml --config /etc/curator/curator.yml' > /var/spool/cron/root
echo '*/30 * * * * /usr/share/logstash/bin/logstash -f /etc/logstash/conf.d/logrhythm/log_volume_datasource.conf > /tmp/log_volume_datasource_$(date "+\%Y\%m\%d\%H\%M\%S").log' >> /var/spool/cron/root
echo '*/30 * * * * /usr/share/logstash/bin/logstash -f /etc/logstash/conf.d/logrhythm/log_volume_deployment.conf > /tmp/log_volume_deployment_$(date "+\%Y\%m\%d\%H\%M\%S").log' >> /var/spool/cron/root
echo '*/30 * * * * /usr/share/logstash/bin/logstash -f /etc/logstash/conf.d/logrhythm/alarms.conf > /tmp/alarms_$(date "+\%Y\%m\%d\%H\%M\%S").log' >> /var/spool/cron/root
echo '*/30 * * * * /usr/share/logstash/bin/logstash -f /etc/logstash/conf.d/logrhythm/cases.conf > /tmp/cases_$(date "+\%Y\%m\%d\%H\%M\%S").log' >> /var/spool/cron/root
The output from the Logstash command is written to standard output by default.
2. To send the output to a file, append “> log_file” to the commands in step 1.
For example, the following command will write the Logstash output to the file /tmp/alarms_conf.log:
sudo /usr/share/logstash/bin/logstash -f /etc/logstash/conf.d/logrhythm/alarms.conf > /tmp/alarms_conf.log
NOTE: If you want to retain the output from the Logstash queries that are scheduled to run by cron,
be sure to update the Logstash commands in the crontab file.
This should return output showing the four indices in your Elasticsearch instance:
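One way to run this check, sketched with curl and assuming Elasticsearch is listening on its default port (9200) on the Metrics App host:

```shell
# List all indices in the local Elasticsearch instance. The four Metrics App
# indices (alarms, cases, logvolume_datasource, logvolume_deployment) should
# appear in the output.
curl -s 'http://localhost:9200/_cat/indices?v' \
    || echo 'Elasticsearch is not reachable on localhost:9200'
```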
NOTE: The password is entered and saved in plain text. For this reason, using a least privilege, read-
only SQL Server account is recommended. For details, see Create a Least-Privilege SQL Server
User for Querying LogRhythm Databases.
While the query itself can be edited, doing so is not recommended without assistance from LogRhythm Support.
Before importing the Kibana objects, create the required index patterns in Kibana. These index patterns are used
by the provided visualizations.
1. Open the Kibana interface, click the Management tab, and then click Index Patterns.
2. Click +Create Index Pattern.
3. Add the following Index Patterns:
a. Index pattern: alarms*
Time Filter field name: alarm_date
b. Index pattern: cases*
Time Filter field name: date_created
c. Index pattern: logvolume_datasource*
Time Filter field name: stat_date
d. Index pattern: logvolume_deployment*
Time Filter field name: stat_date
4. Import the .json file lrmetricsapp_vis.json, which contains the visualizations, searches, and dashboards.
5. If you see a dialog indicating four Index Pattern conflicts, choose the appropriate Index Pattern from the
dropdown for each and then click Confirm all changes.
Reporting
Elastic X-Pack is a set of features that, among other things, lets you export Kibana visualizations and
dashboards as PDF files. Installing X-Pack on the Metrics App host allows you to export reports with a few clicks
in the Kibana UI. X-Pack also supports scheduling dashboards to run and be exported as reports.
For instructions on installing X-Pack, see https://www.elastic.co/products/x-pack/reporting.
NOTE: This value can be set higher or lower based on the available memory on the VM (or appliance).
Use Cases
Alarm query fields can be aggregated to implement the following use cases:
• Alarm Count by Name by Hour
• Alarm Count by Status by Hour
• Alarm Metrics
o Time to Open
o Time to Investigate
o Time to Close
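A use case such as Alarm Count by Status by Hour corresponds to a date-histogram aggregation over the alarms* index pattern, roughly sketched below. The alarm_date field comes from the index-pattern setup in this guide; the status field name is an assumption and should be verified against the actual index mapping:

```
GET alarms*/_search
{
  "size": 0,
  "aggs": {
    "per_hour": {
      "date_histogram": { "field": "alarm_date", "interval": "hour" },
      "aggs": {
        "by_status": { "terms": { "field": "alarm_status" } }
      }
    }
  }
}
```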
Definitions
Cases
• Time to Qualify. The time between the date of the evidence prompting the creation of a case and the date
of case creation. If there is no evidence, this value is unknown.
• Time to Investigate. The time between the date of case creation and the date of case closure as a non-
incident.
Incidents
• Time to Detect. The time between the date of the evidence prompting the creation of a case and the date
of case creation. If there is no evidence, this value is unknown.
• Time to Respond. The time between the date of case creation and the date of incident mitigation. This
metric only applies to cases escalated as incidents.
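As a worked example of the Time to Detect definition, the metric is simply the difference between the two dates. A sketch using GNU date with hypothetical timestamps:

```shell
# Hypothetical dates: evidence first observed, then a case created from it.
EVIDENCE_DATE='2018-07-09 02:15:00'
CASE_CREATED='2018-07-09 03:45:00'
# Time to Detect = case creation time minus evidence time, shown in minutes.
SECONDS_TO_DETECT=$(( $(date -d "$CASE_CREATED" +%s) - $(date -d "$EVIDENCE_DATE" +%s) ))
echo "Time to Detect: $(( SECONDS_TO_DETECT / 60 )) minutes"
# Prints: Time to Detect: 90 minutes
```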
Use Cases
Cases query fields can be aggregated to implement the following use cases:
• Case Count by Status by Hour
• Case Count by Priority by Hour
• Case Count by Tag by Hour
• Incident Count by Status by Hour
• Incident Count by Priority by Hour
Use Cases
Deployment Log Volume query fields can be aggregated to implement the following use cases:
• Messages Collected by Deployment by Hour
• Messages Collected by Entity by Hour
• Messages Collected by Data Processor by Hour
• Messages Collected, Processed, Forwarded for Indexing, and Forwarded as Event by Data Processor by
Hour
• Messages Indexed by Data Indexer (Cluster) by Hour
Use Cases
Data Source Log Volume query fields can be aggregated to implement the following use cases:
• Messages Collected by Data Source by Hour
• Messages Collected by Data Source Host by Hour
• Messages Collected by Data Source Type by Hour
• Messages Collected by Data Collector by Hour