NXLog User Guide

NXLog Ltd.

2020-06-15 11:36:36 UTC


Table of Contents
Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1  

1. About This Guide. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2  

2. About NXLog . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3  

2.1. NXLog Features . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3  

2.2. Enterprise Edition Features . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5  

2.3. What NXLog is Not . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7  

3. System Architecture . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8  

3.1. Event Records and Fields . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8  

3.2. Modules and Routes. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 10  

3.3. Buffering and Flow Control . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 14  

3.4. Log Processing Modes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 15  

4. Available Modules. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 17  

4.1. Extension Modules . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 17  

4.2. Input Modules . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 18  

4.3. Processor Modules . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 20  

4.4. Output Modules . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 20  

4.5. Modules by Platform . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 21  

4.6. Modules by Package . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 42  

Deployment . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 87 

5. Supported Platforms . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 88  

6. Product Life Cycle . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 90  

7. System Requirements . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 91  

8. Digital Signature Verification . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 92  

8.1. Signature Verification for RPM Packages . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 92  

8.2. Signature Verification for Windows . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 92  

8.3. Signature Verification on macOS . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 92  

9. Red Hat Enterprise Linux & CentOS . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 94  

9.1. Installing . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 94

9.2. Upgrading . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 95  

9.3. Uninstalling . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 95  

10. Debian & Ubuntu . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 96  

10.1. Installing. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 96  

10.2. Upgrading . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 97  

10.3. Uninstalling . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 97  

11. SUSE Linux Enterprise Server . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 99  

11.1. Installing. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 99  

11.2. Upgrading . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 100  

11.3. Uninstalling . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 100  

12. FreeBSD . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 101

12.1. Installing . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 101

12.2. Upgrading . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 102

12.3. Uninstalling . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 102

13. OpenBSD . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 104

13.1. Installing . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 104

13.2. Upgrading . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 105

13.3. Uninstalling . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 105

14. Microsoft Windows . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 107

14.1. Installing . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 107

14.2. Upgrading . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 110

14.3. Uninstalling . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 110

14.4. Configure With a Custom MSI . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 112

15. Microsoft Nano Server . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 113

15.1. Installing . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 113

15.2. Upgrading . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 113

15.3. Uninstalling . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 114

15.4. Custom Installation Options . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 114

16. Apple macOS . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 116

16.1. Installing . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 116

16.2. Upgrading . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 118

16.3. Uninstalling . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 118

17. Docker . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 119

17.1. Installing . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 119

17.2. Upgrading . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 119

17.3. Uninstalling . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 120

18. IBM AIX . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 121

18.1. Installing . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 121

18.2. Upgrading . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 121

18.3. Uninstalling . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 121

19. Oracle Solaris . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 123

19.1. Installing . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 123

19.2. Upgrading . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 124

19.3. Uninstalling . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 125

20. Hardening NXLog . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 126

20.1. Running Under a Non-Root User on Linux . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 126

20.2. Configuring SELinux . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 127

20.3. Running Under a Custom Account on Windows . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 132

21. Relocating NXLog . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 137

21.1. System V Init File . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 137

21.2. Systemd Unit File . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 137

21.3. NXLog Configuration File . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 138

21.4. Modify rpath . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 138

22. Monitoring and Recovery . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 141

22.1. Monitoring on Unix Platforms . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 141

22.2. Monitoring on Windows . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 141

Configuration . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 143

23. Configuration Overview . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 144

23.1. Global Directives . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 144

23.2. Modules . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 145

23.3. Routes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 145

23.4. Constant and Macro Definitions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 146

23.5. Environment Variables . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 147

23.6. File Inclusion . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 148

24. NXLog Language . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 150

24.1. Types . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 151

24.2. Expressions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 152

24.3. Statements . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 157

24.4. Variables . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 161

24.5. Statistical Counters . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 162

25. Reading and Receiving Logs . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 164

25.1. Receiving over the Network . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 164

25.2. Reading from a Database . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 165

25.3. Reading from Files and Sockets . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 166

25.4. Receiving from an Executable . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 167

26. Processing Logs . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 168

26.1. Parsing Various Formats . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 168

26.2. Alerting . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 176

26.3. Using Buffers . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 178

26.4. Character Set Conversion . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 195

26.5. Detecting a Dead Agent or Log Source . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 195

26.6. Event Correlation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 196

26.7. Extracting Data . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 197

26.8. Filtering Messages . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 205

26.9. Format Conversion . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 207

26.10. Log Rotation and Retention . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 208

26.11. Message Classification . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 218

26.12. Parsing Multi-Line Messages . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 222

26.13. Rate Limiting and Traffic Shaping . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 224

26.14. Rewriting and Modifying Messages . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 225

26.15. Timestamps . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 228

27. Forwarding and Storing Logs . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 234

27.1. Generating Various Formats . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 234

27.2. Forwarding Over the Network . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 236

27.3. Sending to Files and Sockets . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 240

27.4. Storing in Databases . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 241

27.5. Sending to Executables . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 242

28. Centralized Log Collection . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 243

28.1. Architecture . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 243

28.2. Collection Modes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 244

28.3. Requirements . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 245

28.4. Data Formats . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 246

29. Encrypted Transfer . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 247

29.1. SSL/TLS Encryption in NXLog . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 247

29.2. OpenSSL Certificate Creation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 249

30. Reducing Bandwidth and Data Size . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 250

30.1. Filtering Events . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 250

30.2. Trimming Events . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 251

30.3. Compressing During Transport . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 254

31. Reliable Message Delivery . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 258

31.1. Crash-Safe Operation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 258

31.2. Reliable Network Delivery . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 259

31.3. Protection Against Duplication . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 260

OS Support . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 263

32. IBM AIX . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 264

33. FreeBSD . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 267

34. OpenBSD . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 270

35. GNU/Linux . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 273

36. Apple macOS . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 274

37. Oracle Solaris . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 277

38. Microsoft Windows . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 280

Integration . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 282

39. Amazon Web Services (AWS) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 283

39.1. Amazon CloudWatch . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 283

39.2. Amazon EC2 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 288

39.3. Amazon Simple Storage Service (S3) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 288

40. Apache HTTP Server . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 290

40.1. Error Log . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 290

40.2. Access Log . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 291

41. Apache Tomcat . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 293

42. APC Automatic Transfer Switch . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 294

42.1. Configuring via the Web Interface . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 295

42.2. Configuring via the Command Line . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 296

43. Apple macOS Kernel . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 298

44. ArcSight Common Event Format (CEF) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 300

44.1. Collecting and Parsing CEF . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 300

44.2. Generating and Forwarding CEF . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 301

44.3. Using xm_csv and xm_kvp . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 302

45. Box . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 304

46. Brocade Switches . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 305

47. Check Point . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 307

48. Cisco ACS . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 308

49. Cisco ASA . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 309

49.1. Forwarding Cisco ASA Logs Over TCP . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 309

49.2. NetFlow From Cisco ASA . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 316

50. Cisco FireSIGHT . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 321

51. Cisco IPS . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 322

52. Cloud Instance Metadata . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 323

52.1. Amazon Web Services . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 323

52.2. Azure Cloud . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 324

52.3. Google Compute Engine . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 325

53. Common Event Expression (CEE) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 327

53.1. Collecting and Parsing CEE . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 327

53.2. Generating and Forwarding CEE . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 328

54. Dell EqualLogic . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 330

54.1. Configuring via the Group Manager . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 332

54.2. Configuring via the Command Line . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 333

55. Dell iDRAC . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 334

55.1. Configuring via the Web Interface . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 336

55.2. Configuring via the Command Line . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 337

56. Dell PowerVault MD Series . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 339

57. DHCP Logs . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 343

57.1. ISC DHCP Server (DHCPd) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 343

57.2. ISC DHCP Client (dhclient) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 343

57.3. Windows DHCP Server . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 344

57.4. Windows DHCP Client . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 349

58. DNS Monitoring . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 351


 

58.1. DNS Logging and Monitoring . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 351


 

58.2. BIND 9 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 352


 

58.3. Windows DNS Server . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 357


 

58.4. Passive DNS Monitoring . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 372


 

59. Docker . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 375


 

59.1. Configuring Logging in Docker . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 375


 

59.2. Receiving Logs From Docker . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 375


 

60. Elasticsearch and Kibana. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 378


 

60.1. Sending Logs to Elasticsearch. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 378


 

60.2. Forwarding Logs to Logstash . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 381


 

61. F5 BIG-IP. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 383


 

61.1. Collecting BIG-IP Logs via TCP. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 383


 

61.2. Collecting BIG-IP Logs via UDP . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 387


 

61.3. Using SNMP Traps . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 390


 

61.4. BIG-IP High Speed Logging . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 395


 

62. File Integrity Monitoring . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 401


 

62.1. Monitoring on Linux. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 401


 

62.2. Monitoring on Windows . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 402
63. FreeRADIUS . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 405
64. Graylog . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 410
64.1. Configuring GELF UDP Collection . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 410
64.2. Configuring GELF TCP or TCP/TLS Collection . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 411
64.3. Collector Sidecar Configuration . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 414
65. HP ProCurve . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 417
66. IBM QRadar SIEM . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 419
66.1. Setting up the QRadar Appliance . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 419
66.2. Sending Generic Structured Logs to QRadar . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 422
66.3. Sending Specific Log Types for QRadar to Parse . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 423
66.4. Forwarding Logs . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 431
67. Linux Audit System . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 433
67.1. Audit Rules . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 433
67.2. Logging Audit Messages to Local Syslog . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 435
67.3. Using im_linuxaudit . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 437
67.4. Using auditd Userspace . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 438
68. Linux System Logs . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 441
68.1. Replacing Rsyslog . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 441
68.2. Forwarding Messages via Socket . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 442
68.3. Reading Rsyslog Log Files . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 444
69. Log Event Extended Format (LEEF) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 446
69.1. Collecting LEEF Logs . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 446
69.2. Generating LEEF Logs . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 447
70. McAfee Enterprise Security Manager (ESM) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 449
70.1. Configuring McAfee ESM . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 449
70.2. Sending Specific Log Types for ESM to Parse . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 451
70.3. Forwarding Logs . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 453
71. McAfee ePolicy Orchestrator . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 455
71.1. Collecting ePO Audit Logs . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 455
71.2. Collecting VirusScan Enterprise (VSE) Events . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 456
71.3. Collecting Data Loss Prevention (DLP) Events . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 458
72. Microsoft Active Directory Domain Controller . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 460
72.1. Active Directory Security Events . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 460
72.2. Advanced Security Audit Policy . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 461
72.3. Troubleshooting Domain Controller Promotions and Installations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 464
73. Microsoft Azure . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 466
73.1. Azure Active Directory and Office 365 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 466
73.2. Azure Operations Management Suite (OMS) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 466
73.3. Azure SQL Database . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 470
74. Microsoft Exchange . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 474
74.1. Transport Logs . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 474
74.2. EventLog . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 480
74.3. IIS Logs . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 481
74.4. Audit Logs (nxlog-xchg) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 481
75. Microsoft IIS . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 482
75.1. Configuring Logging . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 482
75.2. W3C Extended Log File Format . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 483
75.3. Configuring IIS HTTP API Error Logs . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 485
75.4. IIS Log File Format . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 486
75.5. NCSA Common Log File Format . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 487
75.6. SMTP Server . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 488
75.7. Automatic Retrieval of IIS Site Log Locations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 490
76. Microsoft SharePoint . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 492
76.1. Diagnostic Logs . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 492
76.2. Usage and Health Data Logs . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 496
76.3. Audit Logs . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 500
76.4. Windows EventLog . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 503
76.5. IIS Logs . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 503
77. Microsoft SQL Server . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 504
77.1. Error Log . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 504
77.2. Audit Log . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 505
77.3. Reading Logs From a Database . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 510
77.4. Writing Logs to a Database . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 514
77.5. Setting up ODBC . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 515
78. Microsoft System Center Endpoint Protection . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 518
78.1. EventData Field from Event Log . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 518
78.2. Collecting and Parsing SCEP Data from Log Files . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 521
78.3. Collecting and Parsing SCEP Data from an SQL Database . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 523
79. Microsoft System Center Configuration Manager . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 525
79.1. SCCM Log Types . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 525
79.2. Collecting from Log Files . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 525
79.3. Collecting from a Microsoft SQL Database . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 526
80. Microsoft System Center Operations Manager . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 529
80.1. Log Types . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 529
80.2. Collecting Logs . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 529
81. MongoDB . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 532
82. Nagios Log Server . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 534
82.1. Installation and Configuration of Nagios Log Server . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 534
82.2. NXLog Configuration . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 534
82.3. Verifying Data Collection . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 538
83. Nessus Vulnerability Scanner . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 540
84. NetApp . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 543
84.1. Sending Logs in Syslog Format . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 543
84.2. Sending Logs to a Remote File Share . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 546
85. .NET Application Logs . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 548
86. Nginx . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 552
86.1. Error Log . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 552
86.2. Access Log . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 554
87. Okta . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 557
88. Osquery . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 558
88.1. Using Osquery . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 558
88.2. Configuring Osquery . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 558
88.3. Log Samples . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 559
88.4. Configuring NXLog . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 561
89. Postfix . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 563
89.1. Configuring Postfix Logging . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 563
89.2. Collecting and Processing Postfix Logs . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 563
90. Promise . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 566
90.1. Configuring via Web Interface . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 567
90.2. Configuring via Command Line . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 568
91. Rapid7 InsightIDR SIEM . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 569
91.1. Configuring InsightIDR for Log Collection . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 569
91.2. Configuring NXLog for Log Processing . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 569
91.3. Verifying Data Collection . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 577
92. RSA NetWitness . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 578
92.1. Configuring NetWitness . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 578
92.2. Configuring NXLog . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 582
92.3. Verifying Collection on NetWitness . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 583
93. SafeNet KeySecure . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 585
93.1. Configuring via the Web Interface . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 587
93.2. Configuring via the Command Line . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 588
94. Salesforce . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 590
95. Snare . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 591
95.1. Collecting Snare . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 591
95.2. Generating Snare . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 592
96. Snort . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 594
97. Splunk . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 596
97.1. An Alternative to the Splunk Universal Forwarder . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 596
97.2. Configuring Splunk . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 605
97.3. Sending Generic Structured Logs . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 609
97.4. Sending Specific Log Types for Splunk to Parse . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 612
98. Symantec Endpoint Protection . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 616
98.1. MSSQL Server Database . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 616
98.2. Embedded Database . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 617
99. Synology DiskStation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 620
100. Syslog . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 622
100.1. BSD Syslog (RFC 3164) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 622
100.2. IETF Syslog (RFCs 5424-5426) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 624
100.3. Collecting and Parsing Syslog . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 625
100.4. Filtering Syslog . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 629
100.5. Generating Syslog . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 631
100.6. Extending Syslog . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 634
101. Sysmon . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 638
101.1. Setting up Sysmon . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 638
101.2. Collecting Sysmon Events . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 638
101.3. Filtering Sysmon Events . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 640
102. Ubiquiti UniFi . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 642
103. VMware vCenter . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 645
103.1. Local vCenter Logging . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 645
103.2. Remote vCenter Logging . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 648
104. Windows AppLocker . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 650
105. Windows Command Line Auditing . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 652
105.1. Enabling Command Line Auditing . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 652
106. Windows Event Log . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 656
106.1. About Windows Event Log . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 656
106.2. Collecting Event Log Data . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 658
106.3. Filtering Events . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 661
106.4. Event IDs to Monitor . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 664
106.5. Forwarding Event Log Data . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 667
107. Windows Firewall . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 672
107.1. Traffic Logging . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 672
107.2. Change Auditing . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 672
107.3. Event Tracing . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 673
108. Windows Group Policy . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 675
109. Windows Management Instrumentation (WMI) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 677
109.1. Reading WMI Events From the EventLog . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 677
109.2. Reading WMI Events via ETW . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 678
109.3. Reading From WMI Log Files . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 679
110. Windows PowerShell . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 681
110.1. Using PowerShell Scripts . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 681
110.2. Logging PowerShell Activity . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 685
111. Microsoft Windows Update . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 694
112. Windows USB Auditing . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 696
112.1. USB Events in Windows Event Log . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 696
112.2. USB Events Available via ETW . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 699
112.3. USB Events in Windows Registry . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 700
112.4. USB Events logged into a file . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 701


 

113. Zeek (formerly Bro) Network Security Monitor . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 703


 

113.1. About Zeek Logs . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 703


 

113.2. Parsing Zeek Logs. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 704


 

Troubleshooting . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 708
114. Internal Logs . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 709
114.1. Default Settings . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 709
114.2. Enable Internal Logging . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 709
114.3. Raise the Severity Level of Logged Events . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 709
114.4. Send Customized Log Messages to the Internal Log . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 709
114.5. Send All Fields to the Internal Log . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 710
114.6. Send Debug Dump to the Internal Log . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 711
114.7. Send Internal Log to STDOUT . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 712
114.8. Send Internal Log to an Existing Route . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 712
114.9. Send Information to an External File . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 712
115. Common Issues . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 714
115.1. NXLog Fails to Start . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 714
115.2. Permission Errors . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 714
115.3. Connection Errors . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 714
115.4. Log Format Errors . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 715
115.5. Data Missing Errors . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 715
115.6. Processing Unexpectedly Paused or Stopped . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 715
116. Debugging NXLog . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 718
116.1. Generate Core Dumps . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 718
116.2. Inspect Memory Leaks . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 718
Enterprise Edition Reference Manual . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 720
117. Man Pages . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 721
117.1. nxlog(8) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 721
117.2. nxlog-processor(8) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 723
118. Configuration . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 725
118.1. General Directives . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 725
118.2. Global Directives . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 727
118.3. Common Module Directives . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 730
118.4. Route Directives . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 738
119. Language . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 741
119.1. Types . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 741
119.2. Expressions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 741
119.3. Statements . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 751
119.4. Variables . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 753
119.5. Statistical Counters . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 753
119.6. Fields . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 753
119.7. Functions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 754
119.8. Procedures . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 759
120. Extension Modules . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 762
120.1. Remote Management (xm_admin) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 762
120.2. AIX Auditing (xm_aixaudit) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 769
120.3. Apple System Logs (xm_asl) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 771
120.4. Basic Security Module Auditing (xm_bsm) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 773
120.5. Common Event Format (xm_cef) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 778
120.6. Character Set Conversion (xm_charconv) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 782
120.7. Delimiter-Separated Values (xm_csv) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 784
120.8. Encryption (xm_crypto) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 788
120.9. External Programs (xm_exec) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 792
120.10. File Lists (xm_filelist) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 794
120.11. File Operations (xm_fileop) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 795
120.12. GELF (xm_gelf) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 798
120.13. Go (xm_go) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 803
120.14. Grok (xm_grok) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 806
120.15. Java (xm_java) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 808
120.16. JSON (xm_json) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 814
120.17. Key-Value Pairs (xm_kvp) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 817
120.18. LEEF (xm_leef) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 825
120.19. Microsoft DNS Server (xm_msdns) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 828
120.20. Multi-Line Parser (xm_multiline) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 831
120.21. NetFlow (xm_netflow) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 839
120.22. Radius NPS (xm_nps) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 841
120.23. Pattern Matcher (xm_pattern) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 842
120.24. Perl (xm_perl) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 846
120.25. Python (xm_python) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 849
120.26. Resolver (xm_resolver) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 852
120.27. Rewrite (xm_rewrite) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 854
120.28. Ruby (xm_ruby) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 857
120.29. SNMP Traps (xm_snmp) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 859
120.30. Remote Management (xm_soapadmin) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 863
120.31. Syslog (xm_syslog) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 863
120.32. W3C (xm_w3c) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 875
120.33. WTMP (xm_wtmp) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 879
120.34. XML (xm_xml) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 880
120.35. Compression (xm_zlib) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 884
121. Input Modules . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 890
121.1. Process Accounting (im_acct) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 890
121.2. AIX Auditing (im_aixaudit) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 892
121.3. Azure (im_azure) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 893
121.4. Batched Compression (im_batchcompress) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 896
121.5. Basic Security Module Auditing (im_bsm) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 898
121.6. Check Point OPSEC LEA (im_checkpoint) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 899
121.7. DBI (im_dbi) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 903
121.8. Event Tracing for Windows (im_etw) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 905
121.9. External Programs (im_exec) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 908
121.10. Files (im_file) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 909
121.11. File Integrity Monitoring (im_fim) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 913
121.12. Go (im_go) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 916
121.13. HTTP(s) (im_http) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 919
121.14. Internal (im_internal) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 922
121.15. Java (im_java) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 924
121.16. Kafka (im_kafka) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 929
121.17. Kernel (im_kernel) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 930
121.18. Linux Audit System (im_linuxaudit) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 931
121.19. Mark (im_mark) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 938
121.20. EventLog for Windows XP/2000/2003 (im_mseventlog) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 940
121.21. EventLog for Windows 2008/Vista and Later (im_msvistalog) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 943
121.22. Null (im_null) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 950
121.23. Oracle OCI (im_oci) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 950
121.24. ODBC (im_odbc) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 951
121.25. Packet Capture (im_pcap) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 954
121.26. Perl (im_perl) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 972
121.27. Named Pipes (im_pipe) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 975
121.28. Python (im_python) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 976
121.29. Redis (im_redis) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 978
121.30. Windows Registry Monitoring (im_regmon) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 979
121.31. Ruby (im_ruby) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 983
121.32. TLS/SSL (im_ssl) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 984
121.33. Systemd (im_systemd) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 987
121.34. TCP (im_tcp) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 991
121.35. Test Generator (im_testgen) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 993
121.36. UDP (im_udp) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 994
121.37. Unix Domain Sockets (im_uds) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 996
121.38. Windows Performance Counters (im_winperfcount) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 997
121.39. Windows Event Collector (im_wseventing) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 999
121.40. ZeroMQ (im_zmq) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1014
122. Processor Modules . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1016
122.1. Blocker (pm_blocker) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1016
122.2. Buffer (pm_buffer) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1017
122.3. Event Correlator (pm_evcorr) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1019
122.4. Filter (pm_filter) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1027
122.5. HMAC Message Integrity (pm_hmac) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1028
122.6. HMAC Message Integrity Checker (pm_hmac_check) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1030
122.7. De-Duplicator (pm_norepeat) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1032
122.8. Null (pm_null) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1034
122.9. Pattern Matcher (pm_pattern) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1034
122.10. Format Converter (pm_transformer) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1037
122.11. Timestamping (pm_ts) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1039
123. Output Modules . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1043
123.1. Batched Compression (om_batchcompress) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1043
123.2. Blocker (om_blocker) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1045
123.3. DBI (om_dbi) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1046
123.4. Elasticsearch (om_elasticsearch) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1049
123.5. EventDB (om_eventdb) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1053
123.6. Program (om_exec) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1055
123.7. Files (om_file) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1056
123.8. Go (om_go) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1059
123.9. HTTP(s) (om_http) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1062
123.10. Java (om_java) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1066
123.11. Kafka (om_kafka) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1069
123.12. Null (om_null) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1071
123.13. Oracle OCI (om_oci) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1071
123.14. ODBC (om_odbc) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1072
123.15. Perl (om_perl) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1074
123.16. Named Pipes (om_pipe) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1076
123.17. Python (om_python) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1077
123.18. Raijin (om_raijin) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1079
123.19. Redis (om_redis) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1082
123.20. Ruby (om_ruby) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1083
123.21. TLS/SSL (om_ssl) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1084
123.22. TCP (om_tcp) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1088
123.23. UDP (om_udp) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1090
123.24. UDP with IP Spoofing (om_udpspoof) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1092
123.25. Unix Domain Sockets (om_uds) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1095
123.26. WebHDFS (om_webhdfs) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1096
123.27. ZeroMQ (om_zmq) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1099
NXLog Manager . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1101
124. Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1102
124.1. Requirements . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1102
124.2. Architecture . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1102
125. System Requirements . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1103
126. Supported Platforms . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1104
127. Installation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1105
127.1. Installing on Debian Wheezy . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1105
127.2. Installing on RHEL 6 & 7 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1105
127.3. Installing as Docker Application . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1105
127.4. Deploying on AWS . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1106
127.5. Configuring NXLog Manager for Standalone Mode . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1110
127.6. Configuring NXLog Manager for Cluster Mode . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1111
127.7. Database Initialization . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1114
127.8. Starting NXLog Manager . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1115
127.9. NXLog Agent Installation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1115
127.10. NXLog Manager Configuration . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1116
127.11. Enabling HTTPS for NXLog Manager . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1126
127.12. Increasing the Open File Limit for NXLog Manager Using systemd . . . . . . . . . . . . . . . . . . . . . . . . . . 1131
127.13. Upgrading NXLog Manager . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1132
127.14. Host Setup Common Issues . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1133
128. Dashboard and Menu . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1135
128.1. Logging in . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1135
128.2. The Menu Bar . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1135
128.3. Dashboard . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1136
129. Fields . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1139
130. Patterns . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1142
130.1. Pattern Groups . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1142
130.2. Creating a Pattern . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1144
130.3. Message Classification with Patterns . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1146
130.4. Searching Patterns . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1147
130.5. Exporting and Importing Patterns . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1148
130.6. Using Patterns . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1148
131. Correlation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1149
131.1. Correlation Rulesets . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1149


 

131.2. Correlation Rules . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1149


 

131.3. Exporting and Importing Correlation Rules . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1151


 

131.4. Using Correlation Rules. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1152


 

132. Agents . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1153


 

132.1. Managing Agents . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1153


 

133. Templates . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1168


 

133.1. Templates . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1168


 

133.2. Template Configuration . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1169


 

134. Agent Groups . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1172


 

134.1. Agent Groups . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1172


 

134.2. Agent List in a Group . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1172


 

135. Certificates . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1174


 

135.1. Listing Certificates . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1174


 

135.2. Creating a CA . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1174


 

135.3. Creating a Certificate . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1175


 

135.4. Exporting . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1176


 

135.5. Importing. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1177


 

135.6. Revoking and Deleting Certificates . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1177


 
135.7. Renewing a Certificate. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1177
 

135.8. Certificates Encryption . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1177


 

135.9. Reset Certificates and Encryption Key . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1178


 

136. Settings . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1179


 

136.1. Agent Manager . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1179


 

136.2. Certificates . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1181


 

136.3. Mail. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1183


 

136.4. Config Backup. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1183


 

136.5. License . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1184


 

136.6. User Settings. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1185


 

137. Users, Roles, and Access Control . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1186


 

137.1. Users . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1186


 

137.2. Roles. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1188


 

137.3. Audit Trail . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1191


 

138. RESTful Web Services. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1193


 

138.1. agentmanager . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1193


 

138.2. appinfo. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1193


 

138.3. agentinfo . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1194


 

138.4. addagent . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1195


 

138.5. modifyagent . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1196


 

138.6. deleteagent . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1196


 

138.7. certificateinfo . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1196


 

138.8. createfield . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1197


 

NXLog Add-Ons . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1198


 

139. Amazon S3. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1199


 

139.1. Setting Up Boto3 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1199


 

139.2. AWS S3 Buckets, Objects, Keys, and Structure . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1199


 

139.3. Sending Events to S3 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1200


 

139.4. Retrieving Events From S3 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1201


 

140. Box . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1203


 

141. Cisco FireSIGHT eStreamer . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1205


 

141.1. Configuring the Cisco Defense Center . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1205


 

141.2. Configuring the eStreamer Script . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1206


 

141.3. Configuring NXLog . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1206


 

142. Cisco Intrusion Prevention Systems (CIDEE) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1208


 

142.1. Setup . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1208


 

142.2. NXLog Configuration . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1208


 

143. Exchange (nxlog-xchg) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1211


 

143.1. Requirements . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1211


 

143.2. Exchange Server Setup . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1211


 

143.3. nxlog-xchg (Client) Setup. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1212


 

143.4. Performance . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1215


 

143.5. Troubleshooting . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1215


 

144. Microsoft Azure and Office 365 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1216


 

144.1. Prerequisites . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1216


 

144.2. Setup Procedure. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1217


 

144.3. Parameters . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1221


 

144.4. NXLog Configuration Examples . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1223


 

144.5. Running in Standalone Mode . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1225


 

145. MSI for NXLog Agent Setup. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1227


 

146. Okta . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1228


 

147. Perlfcount . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1229


 

148. Salesforce . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1230


 

148.1. General Usage . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1230


 
148.2. Authentication and Data Retrieval. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1231
 

148.3. Local Storage and Processing. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1231


 

148.4. Data Format and Transport . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1232


 

148.5. Configuring NXLog . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1232


 
Introduction

Chapter 1. About This Guide
This Guide is designed to give you all the information and skills you need to successfully deploy and configure
NXLog in your organization. The following chapters provide detailed information about NXLog, including
features, architecture, configuration, and integration with other software and devices. An NXLog Enterprise
Edition Reference Manual is included, as well as documentation for the NXLog Manager.

NXLog is available in two versions, the Community Edition and the Enterprise Edition. Features that are unique to
the Enterprise Edition are noted as such, except in the Reference Manual (the Community Edition Reference
Manual is published separately). For more details about the functionality provided by these two NXLog editions,
see the following chapters (in particular, About NXLog and Available Modules).

This Guide was last updated at 2020-06-15 11:36:36 UTC.

WARNING: Though most of the content applies to all versions of NXLog Community Edition and NXLog
Enterprise Edition, this Guide was written specifically for NXLog Enterprise Edition version 5.0.5876. Some
features covered by this book may not be available in earlier versions of NXLog, and earlier versions of NXLog
may behave differently than documented here.

WARNING: If you would like to copy/paste configuration content from the Guide, please do so using the HTML
format. It is not possible to guarantee appropriate selection behavior with the PDF format.

Chapter 2. About NXLog
Modern IT infrastructure produces large volumes of event logging data. In a single organization, hundreds or
thousands of different devices, applications, and appliances generate event log messages. These messages
require many log processing tasks, including filtration, classification, correlation, forwarding, and storage. In most
organizations these requirements are met with a collection of scripts and programs, each with its custom format
and configuration. NXLog provides a single, high-performance, multi-platform product for solving all of these
tasks and achieving consistent results.

At NXLog’s inception, there were various logging solutions available, but none with the required features. Most
were single-threaded and Syslog-oriented, without native support for Windows. Work on NXLog began with the
goal of building a modern logger with a multi-threaded design, a clear configuration syntax, multi-platform
support, and clean source code. NXLog was born in 2009 as a closed source product heavily used in several
production deployments. The source code of NXLog Community Edition was released in November 2011.

NXLog can process event logs from thousands of different sources with volumes over 100,000 events per second.
It can accept event logs over TCP, TLS/SSL, and UDP; from files and databases; and in Syslog, Windows EventLog,
and JSON formats. NXLog can also perform advanced processing on log messages, such as rewriting, correlating,
alerting, pattern matching, scheduling, and log file rotation. It supports prioritized processing of certain log
messages, and can buffer messages on disk or in memory to work around problems with input latency or
network congestion. After processing, NXLog can store or forward event logs in any of many supported formats.
Inputs, outputs, log formats, and complex processing are implemented with a modular architecture and a
powerful configuration language.
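The concepts above can be sketched with a minimal configuration (the file path, host, and port are illustrative) that tails a log file and forwards each line over TCP:

```
# Read lines from a local file
<Input messages>
    Module  im_file
    File    "/var/log/messages"
</Input>

# Forward each event over TCP
<Output forward>
    Module  om_tcp
    Host    192.168.1.100
    Port    1514
</Output>

# Wire the input to the output
<Route messages_to_forward>
    Path    messages => forward
</Route>
```

Module instances are declared in named blocks, and routes connect them into processing pipelines.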

2.1. NXLog Features


This section gives an overview of some of the key advantages of NXLog over alternative systems.

Multi-platform deployment
Installer packages are provided for multiple platforms, including Linux, Windows, and Android. You can use
NXLog across your entire infrastructure, without resorting to different tools for different platforms.

Client or server operation


Depending on the configuration, NXLog will run as a server, a client, or a combination of both. You have the
freedom to choose the deployment architecture that best meets your needs. For example, NXLog can collect
local log data and forward it, relay data without storing it locally, or save incoming log data to disk.

Many input and output types and formats


NXLog can accept data from many different sources, convert the data internally, and output it to other
destinations. You can use NXLog as a single tool to process all of the different types of logs in your
organization. For example, logs can be collected from files, databases, Unix domain sockets, network
connections, and other sources. BSD Syslog, IETF Syslog, the Snare Agent format, Windows EventLog, JSON,
and other formats are supported. NXLog can likely be configured to read or write logs in your custom
application format, using the NXLog language and provided extension modules.

High performance, scalable architecture


With an event-based architecture for processing tasks in parallel, non-blocking input and output where
possible, and a worker thread pool for incoming log messages, NXLog is designed for high performance on
modern multi-core and multi-processor systems. The input/output readiness notifications provided by most
operating systems are used to efficiently handle large numbers of open files and network connections.

Security
NXLog provides features throughout the application to maintain the security of your log data and systems.
The core can be configured to run as an unprivileged user, and special privileges (such as binding to ports
below 1024) are accessed through Linux capabilities rather than requiring the application to run as root.
TLS/SSL is supported for encrypted, authenticated communications and to prevent data interception or
alteration during transmission.

Modular architecture
NXLog has a lightweight, modular architecture, providing a reduced memory footprint and increased
flexibility for different uses. The core handles files, events, and sockets, and provides the configuration
language; modules provide the input, output, and processing capabilities. Because modules use a common
API, you can write new modules to extend the features of NXLog.

Message buffering
Log messages can be buffered in memory or on disk. This increases reliability by holding messages in a
temporary cache when a network connectivity issue or dropout occurs. Conditional buffering can be
configured by using the NXLog language to define relevant conditions. For example, UDP messages may
arrive faster than they can be processed, and NXLog can buffer the messages to disk for processing when the
system is under less load. Conditional buffering can be used to explicitly buffer log messages during certain
hours of the day or when the system load is high.
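As a sketch of disk-based buffering (the sizes and instance names are illustrative), a pm_buffer processor can be placed between an input and an output in a route; MaxSize and WarnLimit are given in kilobytes:

```
# Buffer up to 100 MB on disk; emit a warning once 50 MB is queued
<Processor buffer>
    Module     pm_buffer
    Type       disk
    MaxSize    102400
    WarnLimit  51200
</Processor>

<Route buffered>
    Path    udp_in => buffer => out
</Route>
```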

Prioritized processing
NXLog can be configured to separate high-priority log processing from low-priority log processing, ensuring
that it processes the most important data first. When the system is experiencing high load, NXLog will avoid
dropping important incoming messages. For example, incoming UDP messages can be prioritized to prevent
dropped logs if a high volume of TCP messages overloads the system.
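For example (the route and instance names are illustrative), each route can be assigned a Priority value, where a lower number means earlier processing, so the lossy UDP source is serviced first:

```
# Routes with a lower Priority value are processed first
<Route udp_first>
    Priority  1
    Path      udp_in => out
</Route>

<Route tcp_second>
    Priority  2
    Path      tcp_in => out
</Route>
```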

Message durability
Built-in flow control ensures that a blocked output does not cause dropped log messages when buffers are
full. In combination with the previously mentioned parallel processing, buffering, and prioritization, the
possibility of message loss is greatly reduced.

Familiar and powerful configuration syntax


NXLog uses an Apache-style configuration syntax that is easy to read and can be parsed or generated with
scripts. The NXLog language supports advanced scripting and processing capabilities that are usually only
found in full-fledged scripting languages. The syntax is similar to Perl, so users familiar with that language can
learn it easily. It supports polymorphic functions and procedures and regular expressions with captured
substrings. Modules can register additional functions and procedures to further extend the capabilities of the
language.
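A brief sketch of the language in use (the field name and pattern are hypothetical): an Exec directive applies a regular expression and copies a captured substring into a custom field:

```
<Input messages>
    Module  im_file
    File    "/var/log/messages"
    # $1 holds the first captured substring of the regular expression
    Exec    if $raw_event =~ /from (\d+\.\d+\.\d+\.\d+)/ { $SourceIP = $1; }
</Input>
```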

Scheduled tasks and log rotation


NXLog includes a scheduler service. A task can be scheduled from any module without requiring an external
tool such as Cron. Log files can be rotated automatically, on a time-based schedule or according to file size.
The file reader and writer modules in NXLog can detect when an input or output file moves or changes its
name, and re-open the file automatically.
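A hedged sketch of time-based rotation (the file name and retention count are illustrative): a Schedule block inside an om_file instance invokes file_cycle() from the xm_fileop module, then reopens the output file:

```
<Extension fileop>
    Module  xm_fileop
</Extension>

<Output out>
    Module  om_file
    File    "/var/log/out.log"
    <Schedule>
        # Crontab format: every Sunday at midnight
        When  0 0 * * 0
        # Keep at most 7 rotated copies, then reopen the active file
        Exec  file_cycle('/var/log/out.log', 7); out->reopen();
    </Schedule>
</Output>
```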

Advanced message processing


NXLog can perform advanced processing actions on log messages, in addition to the core features already
mentioned. By using additional modules, NXLog can solve many related tasks such as message classification,
event correlation, pattern matching, message filtering, message rewriting, and conditional alerting. You can
use a single tool for all log processing functionality.

Offline processing
Sometimes log messages need to be processed in batches for conversion, filtering, or analysis. NXLog
provides an offline mode in which it processes all input and then exits. Because NXLog does not assume that
the event time and processing time are identical, time-based correlation features can be used even during
offline log processing.

International character and encoding support


NXLog supports explicit character set conversion and automatic character set detection. Log messages
received in different character sets can be automatically normalized to a common standard, allowing
messages to be compared across different sources.

2.2. Enterprise Edition Features
While the NXLog Community Edition provides all the flexibility and performance of the NXLog engine, the NXLog
Enterprise Edition provides additional enhancements, including modules and core features, as well as regular
hot-fixes and updates. The Enterprise Edition provides the following enhancements.

Additional platform support


In addition to Linux, Windows, and Android, installer packages are provided for the BSDs and the major
variants of Unix (AIX, Solaris, and macOS).

Signed installer packages


Installer packages are certificate signed to ensure that the binaries are not corrupted or compromised.

On-the-wire compression
Log data can be transferred in compressed batches with the im_batchcompress and om_batchcompress
input/output modules. This can help in limited bandwidth scenarios.

UDP source IP address spoofing


Some SIEM and log collection systems use the IP address of the UDP Syslog packet sent by the client. When
used as a server or relay, the om_udpspoof output module can be configured to retain the original IP address
of the sender.

Better control over SSL and TLS


Due to vulnerabilities discovered in the SSL protocols, some protocols may need to be disabled. The various
SSL/TLS networking modules in NXLog Enterprise Edition can be configured to allow only specific protocols
via the SSLProtocol directive. On Windows, NXLog Enterprise Edition can utilize TLSv1.2 while NXLog
Community Edition supports TLSv1.0 only.
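For instance (the host name and certificate paths are hypothetical), an om_ssl output can be restricted to a specific protocol version with the SSLProtocol directive:

```
<Output secure>
    Module       om_ssl
    Host         logserver.example.com
    Port         6514
    CAFile       /opt/nxlog/cert/ca.pem
    CertFile     /opt/nxlog/cert/client-cert.pem
    CertKeyFile  /opt/nxlog/cert/client-key.pem
    # Allow only TLSv1.2 connections
    SSLProtocol  TLSv1.2
</Output>
```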

ODBC input and output


The ODBC input and output modules, im_odbc and om_odbc, allow log data to be read from or inserted into
any ODBC compliant database. The primary purpose of the im_odbc module is native Windows MSSQL
support to enable log collection from Windows applications that write logs to MSSQL. The om_odbc output
module can be used to insert data into an ODBC database. These modules are available on Windows as well
as Linux.
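As an illustrative sketch (the DSN, table, and column names are hypothetical), an im_odbc input selects new records by a monotonically increasing key; the column aliased as id serves as the bookmark substituted for the ? placeholder on each poll:

```
<Input mssql>
    Module            im_odbc
    ConnectionString  DSN=mssql;database=logdb;
    # The "id" alias tracks the last record read between polls
    SQL               SELECT RecordNumber AS id, EventTime, Message FROM logtable WHERE RecordNumber > ?
</Input>
```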

Remote management
The dedicated xm_admin extension module enables NXLog agents to be managed remotely over a secure
SOAP/JSON SSL connection or to be integrated with existing monitoring and management tools. The
configuration, correlation rules, patterns, and certificates can all be updated remotely from the NXLog
Manager web interface or from scripts. In addition, the NXLog agent and the individual modules can be
stopped/started and log collection statistics can be queried for real-time statistics.

Crash recovery
Additional functionality is provided to guarantee a clean recovery in the case of a system crash, ensuring that
no messages are lost or duplicated.

Event correlation
The pm_evcorr processor module can efficiently solve complex event correlation tasks, with capabilities
similar to what the open-source SEC tool provides.

HTTP(S) protocol support


RESTful services are becoming increasingly popular, even for logging. The im_http and om_http input and
output modules make it possible to send or receive log message data over HTTP or HTTPS.

File integrity and registry monitoring


Several compliance standards mandate file integrity monitoring. With the im_fim input module, NXLog
Enterprise Edition can be used to detect modifications to files or directories. This module is available on
Windows as well as Linux. The im_regmon module provides monitoring of the Windows Registry.

Structured data formats


The xm_xml extension module can parse nested XML and data stored in XML attributes. Parsing of nested
JSON has also been implemented in xm_json, and UTF-8 validation can be enforced to avoid parser failures
caused by invalid UTF-8 from other tools.

Native W3C parser


The W3C format is widely used in various Microsoft products, with IIS perhaps being the best-known
producer. Parsing of W3C is possible with the xm_csv extension module, but that requires defining the fields
in the configuration and adjustment when the IIS configuration is changed. The xm_w3c extension module
can automatically parse the logs using the field information stored in the headers. It also supports automatic
parsing of the data format produced by BRO.

More support for SIEM products


The xm_cef and xm_leef modules provide parsing and generation of CEF and LEEF formatted data. CEF
(Common Event Format) was introduced by HP ArcSight and LEEF (Log Event Extended Format) is used by IBM
Security QRadar.

Simplified data processing configuration


Two extension modules help simplify the configuration. The xm_rewrite module allows fields to be renamed,
kept (whitelisted), or deleted (blacklisted). It also supports the Exec directive, so log processing logic can be
centralized and duplicated statements avoided. The xm_filelist module provides two functions, contains() and
matches(), which can be invoked to check whether a string is present in a text file. This can be a username, IP
address, or similar. The files are cached in memory and any changes are automatically picked up without the
need to reload NXLog.
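As a sketch (the field names are illustrative), an xm_rewrite instance can whitelist a few fields and rename one; it is applied from an input by calling its process() procedure:

```
<Extension rewrite>
    Module  xm_rewrite
    # Drop every field except these four
    Keep    EventTime, Severity, Hostname, Message
    # Rename the Hostname field to Host
    Rename  Hostname, Host
</Extension>

<Input in>
    Module  im_file
    File    "/var/log/app.log"
    Exec    rewrite->process();
</Input>
```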

Custom input and output modules in Perl


Perl has a vast number of libraries that can be used to easily implement integration with various APIs,
formats, and protocols. The im_perl and om_perl input and output modules make it possible to utilize Perl to
collect and output data without the need to run the code as an external script.

Name resolution
The xm_resolver extension module provides cached DNS lookup functions for translating between IP
addresses and host names. User and group names can also be mapped to/from user and group ids.

Elasticsearch integration
The om_elasticsearch output module allows log data to be loaded directly into an Elasticsearch server without
requiring Logstash.

Check Point LEA input


The im_checkpoint input module enables the remote collection of Check Point firewall logs over the
OPSEC/LEA protocol. This feature is only available in the Linux version.

Redis Support
Redis is often used as an intermediate queue for log data. Two native modules, im_redis and om_redis, are
available to push data to and pull data from Redis servers.

SNMP input
The xm_snmp extension module can be used to parse SNMP traps. The traps can then be handled like regular
log messages: converted to Syslog, stored, forwarded, etc.

Multi-platform support for Windows Event Forwarding


The im_wseventing input module can be used to collect forwarded events from Windows hosts. The Windows
clients can be configured from Group Policy to send Windows EventLog using Windows Event Forwarding.
While NXLog Enterprise Edition can collect Windows EventLog remotely over WMI and MSRPC, this module
provides improved security for collecting from Windows machines in agent-less mode, with support for both
Kerberos and HTTPS data transfer. The im_wseventing module is platform independent and available on Linux
as well as Windows.

HDFS output
The om_webhdfs output module is available to support the Hadoop ecosystem.

Windows Performance Counters


The im_winperfcount input module can collect metrics from Windows Performance Counters such as CPU,
disk, and memory statistics.

Reading Windows EventLog files directly


The im_msvistalog module can read .evt, .evtx, and .etl EventLog files directly; this is particularly useful
for forensics purposes.

Additional Windows EventLog data


The im_msvistalog module retrieves the EventData and UserData parts which can contain important data in
some log sources. In addition, SID values in the EventLog Message can be resolved to account names to
produce the same output that EventViewer gives.

Netflow support
The xm_netflow extension module can parse Netflow packets received over UDP. It supports Netflow v1, v5,
v7, v9, and IPFIX.
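As a sketch, NetFlow parsing is typically attached to a UDP input by referencing the extension instance name as the InputType; the port number below is an arbitrary choice:

```
<Extension netflow>
    Module  xm_netflow
</Extension>

<Input netflow_udp>
    Module     im_udp
    Host       0.0.0.0
    Port       2162
    # Use the xm_netflow instance above to decode incoming packets
    InputType  netflow
</Input>
```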

ZeroMQ support
ZeroMQ is a popular high performance messaging library. The im_zmq and om_zmq modules provide input
and output support for the ZeroMQ protocol.

Regular hot fixes


Unlike NXLog Community Edition which is a volunteer effort, NXLog Enterprise Edition receives regular hot
fixes and enhancements.

2.3. What NXLog is Not


NXLog provides a broad range of features for collecting, processing, forwarding, and storing log data. However,
NXLog is not a SIEM product and does not provide:

• a graphical interface (or "dashboard") for searching logs and displaying reports,
• vulnerability detection or integration with external threat data,
• automatic analysis and correlation algorithms, or
• pre-configured compliance and retention policies.

NXLog does provide processing features that can be used to set up analysis, correlation, retention, and alerting;
NXLog can be integrated with many other products to provide a complete solution for aggregation, analysis, and
storage of log data.

Chapter 3. System Architecture
3.1. Event Records and Fields
In NXLog, a log message is an event, and the data relating to that event is collectively an event record. When NXLog
processes an event record, it stores the various values in fields. The following sections describe event records and
fields in the context of NXLog processing.

3.1.1. Event Records


There are many kinds of event records. A few important ones are listed here.

• The most common event record is a single line. Thus the default is LineBased for the InputType and
OutputType directives.
• It is also common for an event record to use a single UDP datagram. NXLog can send and receive UDP events
with the im_udp and om_udp modules.
• Some event records are generated using multiple lines. These can be joined into a single event record with
the xm_multiline module.
• Event records may be stored in a database. Each row in the database represents an event. In this case the
im_odbc and om_odbc modules can be used.
• It is common for structured event records to be formatted in CSV, JSON, or XML formats. The xm_csv,
xm_json, and xm_xml modules provide functions and procedures for parsing these.
• NXLog provides a Binary InputType and OutputType for use when compatibility with other logging software
is not required. This format preserves parsed fields and their types.

In NXLog, each event record consists of the raw event data (in a field named $raw_event) and additional fields
generated during processing and parsing.
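For example, the Binary format mentioned above can be enabled between two NXLog agents with a configuration sketch like the following (the host and port are placeholders):

```
<Output to_central>
    Module      om_tcp
    Host        10.0.0.1
    Port        1514
    # Preserve parsed fields and their types between NXLog agents
    OutputType  Binary
</Output>
```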

3.1.2. Fields
All event log messages contain important data such as user names, IP addresses, and application names.
Traditionally, these logs have been generated as free form text messages prepended by basic metadata like the
time of the event and a severity value.

While this format is easy for humans to read, it is difficult to perform log analysis and filtering on thousands of
free-form logs. In contrast, structured logging provides a means of matching messages based on key-value pairs.
With structured logging, an event is represented as a list of key-value pairs. The name of the field is the key and
the field data is the value. NXLog’s core design embraces structured logging. Using various features provided by
NXLog, a message can be parsed into a list of key-value pairs for processing or as part of the message sent to the
destination.

When NXLog receives a message, it creates an internal representation of the log message using fields. Each
field is typed and represents a particular attribute of the message. These fields pass through the log route and
are available in each successive module in the chain, until the log message has been sent to its destination.

1. The special $raw_event field contains the raw data received by the input module. Most input and output
modules only transfer $raw_event by default.
2. The core adds a few additional fields by default:
a. $EventReceivedTime (type: datetime) The time when the event is received. The value is not modified if
the field already exists.
b. $SourceModuleName (type: string) The name of the module instance, for input modules. The value is not
modified if the field already exists.

c. $SourceModuleType (type: string) The type of module instance (such as im_file), for input modules.
The value is not modified if the field already exists.
3. The input module may add other fields. For example, the im_udp module adds a $MessageSourceAddress
field.
4. Some input modules, such as im_msvistalog and im_odbc, map fields from the source directly to fields in the
NXLog event record.
5. Parsers such as the parse_syslog() procedure will add more fields.
6. Custom fields can be added by using the NXLog language and an Exec directive.
7. The NXLog language or the pm_pattern module can be used to set fields using regular expressions. See
Extracting Data.

When the configured output module receives the log message, in most cases it will use the contents of the
$raw_event field only. If the event’s fields have been modified, it is therefore important to update $raw_event
from the other fields. This can be done with the NXLog language, perhaps using a procedure like to_syslog_bsd().

A field is denoted and referenced in the configuration by a preceding dollar sign ($). See the Fields section in the
Reference Manual for more information.
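For example, a custom field can be set from an Exec directive; the field name and file path below are arbitrary:

```
<Input in>
    Module  im_file
    File    '/var/log/app.log'
    # Add a custom field to every event record read from this file
    Exec    $Application = 'myapp';
</Input>
```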

Example 1. Processing a Syslog Message

This example shows a Syslog event and its corresponding fields as processed by NXLog. A few fields are
omitted for brevity.

1. NXLog receives an event:

<38>Nov 22 10:30:12 myhost sshd[8459]: Failed password for invalid user linda from
192.168.1.60 port 38176 ssh2↵

2. The raw event data is stored in the $raw_event field when NXLog receives a log message. The NXLog
core and input module add additional fields.

{
  "raw_event": "<38>Nov 22 10:30:12 myhost sshd[8459]: Failed password for invalid user
linda from 192.168.1.60 port 38176 ssh2",
  "EventReceivedTime": "2019-11-22 10:30:13",
  "MessageSourceAddress": "192.168.1.1",

3. The xm_syslog parse_syslog() procedure parses the basic format of the Syslog message, reading from
$raw_event by default. This procedure adds a few more fields:

  "SyslogFacility": "USER",
  "SyslogSeverity": "NOTICE",
  "EventTime": "2019-11-22 10:30:12",
  "Hostname": "myhost",
  "SourceName": "sshd",
  "ProcessID": 8459,
  "Message": "Failed password for invalid user linda from 192.168.1.60 port 38176 ssh2",

4. Further metadata can be extracted from the free-form $Message field with regular expressions or other
methods; see Extracting Data.

  "Status": "failed",
  "AuthenticationMethod": "password",
  "Reason": "invalid user",
  "User": "linda",
  "SourceIPAddress": "192.168.1.60",
  "SourcePort": 38176,
  "Protocol": "ssh2"
}
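The extraction in step 4 could be sketched in the NXLog configuration language roughly as follows. The regular expression is illustrative only, not an exact pattern shipped with NXLog:

```
Exec if $Message =~ /Failed password for invalid user (\S+) from (\S+) port (\d+)/ \
     { $User = $1; $SourceIPAddress = $2; $SourcePort = integer($3); }
```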

3.2. Modules and Routes


While the NXLog core is responsible for managing the overall log processing operation, NXLog’s architecture
utilizes loadable modules that provide input, output, and parsing functionality. Different modules can be used
together to create log data routes that meet the requirements of the logging environment. Each route accepts log
messages in a particular format, processes or transforms them, and then outputs them in one of the supported
formats.

Files and sockets are added to the core by the various modules, and the core delegates events when necessary.
Modules also dispatch log events to the core, which passes each one to the appropriate module. In this way, the
core can centrally control all events and the order of their execution, making prioritized processing possible. Each
event belonging to the same module instance is executed in sequential order, not concurrently. This ensures that
message order is kept and allows modules to be written without concern for concurrency. Yet because the
modules and routes run concurrently, the global log processing flow remains parallelized.

3.2.1. Modules
A module is a shared library (a foo.so or foo.dll file) that can be loaded by the NXLog core to provide a particular capability. A
module instance is a configured module that can be used in the configured data flow. For example, the
configuration block for an input module instance begins with <Input instancename>. See the Instance examples
below. A single module can be used in multiple instances. With regard to configuration, a module instance is
often referred to as simply a module.

There are four types of modules.

Input
Functionality for accepting or retrieving log data is provided by input modules. An input module instance is a
source or producer. It accepts log data from a source and produces event records.

An Input Module Instance


1 <Input foo_in>
2 Module im_foo
3 ...
4 </Input>

Output
Output modules provide functionality for sending log data to a local or remote destination. An output module
instance is a sink, destination, or consumer. It is responsible for consuming event records produced by one or
more input module instances.

An Output Module Instance


1 <Output foo_out>
2 Module om_foo
3 ...
4 </Output>

Extension
The NXLog language can be extended with extension modules. Extension module instances do not process
log data directly. Instead, they provide features (usually functions and procedures) that can be used from
other parts of the processing pipeline. Many extension module instances require no directives other than the
Module directive.

Example 2. Using an Extension Module

In this example, the xm_syslog module is loaded by the Extension block. This module provides the
parse_syslog() procedure, in addition to other functions and procedures. In the following Input
instance, the Exec directive calls parse_syslog() to parse the Syslog-formatted event.

nxlog.conf
1 <Extension _syslog>
2 Module xm_syslog
3 </Extension>
4
5 <Input in>
6 Module im_file
7 File '/var/log/messages'
8 Exec parse_syslog();
9 </Input>

Processor
Processor modules offer features for transforming, filtering, or converting log messages. One or more
processor module instances can be used in a route between input and output module instances.

A Processor Module Instance


1 <Processor foo>
2 Module pm_foo
3 ...
4 </Processor>

NOTE: Many processing functions and procedures are available through the NXLog language and can be
accessed through the Exec directive in an Input or Output block without using a separate processor
module instance. However, a separate processor module (pm_null, perhaps) will use a separate worker
thread, providing additional processing parallelization.

For a list of available modules, see Available Modules.

3.2.2. Routes
Most log processing solutions are built around the same concept. The input is read from a source, log messages
are processed, and then log data is written to a destination. In NXLog, this path is called a "route" and is
configured with a Route block.

Routes are made up of one or more inputs, zero or more processors, and one or more outputs.

Example 3. A Simple Route

This route accepts input with the in module and sends it to the out module. This is the simplest functional
route.

nxlog.conf
1 <Route r1>
2 Path in => out
3 </Route>
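Put together with its module instances, a complete minimal configuration for a route like this might look as follows (file paths are placeholders):

```
<Input in>
    Module  im_file
    File    '/var/log/messages'
</Input>

<Output out>
    Module  om_file
    File    '/var/log/collected.log'
</Output>

<Route r1>
    Path    in => out
</Route>
```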

Example 4. A Route With a Processor

This route extends the previous example by adding an intermediate processing module proc.

nxlog.conf
1 <Route r2>
2 Path in => proc => out
3 </Route>

Example 5. Advanced Route With Multiple Input/Output Modules

This route uses two input modules and two output modules. Input from in1 and in2 will be combined and
sent to both out1 and out2.

nxlog.conf
1 <Route r3>
2 Path in1, in2 => out1, out2
3 </Route>

Example 6. Branching: Two Routes Using One Input Module

A module can be used by multiple routes simultaneously, as in this example. The in module instance is
only declared once, but is used by both routes.

nxlog.conf
1 <Route r1>
2 Path in => out1
3 </Route>
4
5 <Route r2>
6 Path in => proc => out2
7 </Route>

3.3. Buffering and Flow Control


NXLog implements several buffering features. Of these, two are particularly important and are enabled by
default: log queues and flow control.

Log Queues
Every processor and output module instance has an input log queue for events that have not yet been
processed by that module instance. When the preceding module has processed an event, it is placed in this
queue. Log queues are enabled by default for all processor and output module instances; adjusting log
queue sizes is the preferred way to control buffering behavior.
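For instance, the input log queue of an output instance can be enlarged with the LogqueueSize directive; the value below is arbitrary:

```
<Output out>
    Module        om_tcp
    Host          10.0.0.1
    Port          1514
    # Buffer up to 2048 unprocessed events for this instance
    LogqueueSize  2048
</Output>
```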

Flow Control
NXLog’s flow control functionality provides automatic, zero-configuration handling of many cases where
buffering would otherwise be required. Flow control takes effect when the following sequence of events
occurs in a route:

1. a processor or output module instance is not able to process log data at the incoming rate,
2. that module instance’s log queue becomes full, and

3. the preceding input or processor module instance has flow control enabled (which is the default).

In this case, flow control will cause the input or processor module instance to suspend processing until the
succeeding module instance is ready to accept more log data.
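Flow control can be disabled per module instance with the FlowControl directive. As a sketch, this is sometimes done for UDP inputs, where suspending the reader would cause packet loss anyway:

```
<Input udp_in>
    Module       im_udp
    Host         0.0.0.0
    Port         514
    # Keep reading even if the route is blocked (events may be dropped)
    FlowControl  FALSE
</Input>
```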

For more information about these and other buffering features, including log queue persistence, disabling flow
control, read/write buffers, and examples for specific scenarios, see Using Buffers.

3.4. Log Processing Modes


NXLog can process logs in three modes. Each mode has different characteristics, and you can use any
combination of modes for your overall logging infrastructure.

• Agent-Based Collection: NXLog runs on the system that is generating the log data.
• Agent-Less Collection: Hosts or devices generate log data and send it over the network to NXLog.
• Offline Log Processing: The nxlog-processor(8) tool performs batch log processing.

3.4.1. Agent-Based Collection


With agent-based collection, NXLog runs as an agent on the system that is generating the log data. It collects the
log data and sends it to another NXLog instance over the network.

NOTE: We recommend agent-based log collection for most use cases. In particular, we recommend this
mode if you need strong security and reliability or need to transform log data before it leaves the
system on which it was generated.

Agent-based log collection offers several important advantages over agent-less collection.

• Log data can be collected from more sources. For example, you can collect logs directly from files, instead of
relying on a logging process to send log data across the network.
• NXLog’s processing features are available. You can filter, normalize, and rewrite log data before sending it to
a destination, whether an NXLog instance or a log aggregation system. This includes the ability to send
structured log data, such as JSON and key-value pairs.
• You have full control over the transfer of the log data. Messages can be sent using a variety of protocols,
including over TLS/SSL encrypted connections for security. Log data can be sent in compressed batches and
can be buffered if necessary.
• Log collection in this mode is more reliable. NXLog includes delivery guarantees and flow control systems
which ensure your log data reaches its destination. You can monitor the health of the NXLog agent to verify
its operational integrity.

Although agent-based collection has many compelling advantages, it is not well suited to some use cases.

• Many network and embedded systems, such as routers and firewalls, do not support installing third-party
software. In this case it would not be possible to install the NXLog agent.
• Installing the NXLog agent on each system in a large-scale deployment may not be practical compared to
reading from the existing logging daemon on each system.

3.4.2. Agent-Less Collection
With this mode of log collection, a server or device sends log data to an NXLog instance over the network, using
its native protocols. NXLog collects and processes the information that it receives.

NOTE: We recommend agent-less log collection in cases where agent-based log collection is not feasible,
for example from legacy or embedded systems that do not support installing the NXLog agent.

Agent-less log collection has the following advantages.

• It is not necessary to install an NXLog agent application on the target system to collect log data from it.
• Generally, a device or system requires only minimal configuration to send log data over the network to an
NXLog instance in its native format.

Agent-less log collection has some disadvantages that should be taken into consideration.

• Agent-less log collection may provide lower performance than agent-based collection. On Windows systems,
the Windows Management Instrumentation process can consume more system resources than the NXLog
agent.
• Reliability is also a potential issue. Since most Syslog log forwarders use UDP to transfer log data, some data
could be lost if the server restarts or becomes unreachable over the network. Unlike agent-based log
collection, you often cannot monitor the health of the logging source.
• Data transfers are less secure when using agent-less collection since most Syslog sources transfer data over
unencrypted UDP.

Agent-less collection is commonly used with the following protocols.

• BSD Syslog (RFC 3164) and IETF Syslog (RFC 5424) sources (see Collecting and Parsing Syslog)
• Windows EventLog sources (with NXLog Enterprise Edition):
◦ The MSRPC protocol, using the im_msvistalog module (see Remote Collection With im_msvistalog)

◦ Windows Event Forwarding, using the im_wseventing module (see Remote Collection With
im_wseventing)
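A typical agent-less Syslog listener can be sketched as follows (port 514 is the conventional Syslog port):

```
<Extension _syslog>
    Module  xm_syslog
</Extension>

<Input syslog_udp>
    Module  im_udp
    Host    0.0.0.0
    Port    514
    # Parse incoming datagrams as BSD Syslog into fields
    Exec    parse_syslog();
</Input>
```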

3.4.3. Offline Log Processing


While the other modes process log data in real-time, NXLog can also be used to perform batch log processing.
This is provided by the nxlog-processor(8) tool, which is similar to the NXLog daemon and uses the same
configuration file. However, it runs in the foreground and exits after all input log data has been processed.

Common input sources are files and databases. This tool is useful for log processing tasks such as:

• loading a group of files into a database,


• converting between different formats,
• testing patterns,
• doing offline event correlation, or
• checking HMAC message integrity.
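As an illustration, a batch format-conversion job for nxlog-processor(8) uses an ordinary configuration file, typically run with a command like nxlog-processor -c batch.conf. The paths below are placeholders:

```
<Extension _json>
    Module  xm_json
</Extension>

<Input in>
    Module  im_file
    File    '/tmp/logs/*.log'
</Input>

<Output out>
    Module  om_file
    File    '/tmp/converted.json'
    # Rewrite each record as JSON before it is written
    Exec    to_json();
</Output>

<Route r>
    Path    in => out
</Route>
```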

Chapter 4. Available Modules
The following modules are provided with NXLog. Modules which are only available in NXLog Enterprise Edition
are noted. For detailed information about which modules are available for specific platforms, see the Modules by
Platform and Modules by Package sections.

4.1. Extension Modules


The following extension (xm_*) modules are available.

Table 1. Available Extension Modules

Module Description
xm_admin — Remote Management Adds secure remote administration capabilities to NXLog using
(Enterprise Edition only) SOAP or JSON over HTTP/HTTPS.

xm_aixaudit — AIX Auditing (Enterprise Parses AIX audit events that have been written to file.
Edition only)

xm_asl — Apple System Logs (Enterprise Parses events in the Apple System Log (ASL) format.
Edition only)

xm_bsm — Basic Security Module Auditing Supports parsing of events written to file in Sun’s Basic Security
(Enterprise Edition only) Module (BSM) Auditing binary format.

xm_cef — CEF (Enterprise Edition only) Provides functions for generating and parsing data in the
Common Event Format (CEF) used by HP ArcSight™ products.

xm_charconv — Character Set Conversion Provides functions and procedures to help you convert strings
between different character sets (code pages).

xm_csv — CSV Provides functions and procedures to help you process data


formatted as comma-separated values (CSV), and to convert CSV
data into fields.

xm_exec — External Program Execution Passes log data through a custom external program for
processing, either synchronously or asynchronously.

xm_filelist — File Lists (Enterprise Edition Implements file-based blacklisting or whitelisting.


only)

xm_fileop — File Operations Provides functions and procedures to manipulate files.

xm_gelf — GELF Provides an output writer function which can be used to generate


output in Graylog Extended Log Format (GELF) for Graylog2 or
GELF compliant tools.

xm_grok — Grok Patterns (Enterprise Edition Provides support for parsing events with Grok patterns.
only)

xm_json — JSON Provides functions and procedures to generate data in JSON


(JavaScript Object Notation) format or to parse JSON data.

xm_kvp — Key-Value Pairs Provides functions and procedures to parse and generate data
that is formatted as key-value pairs.

xm_leef — LEEF (Enterprise Edition only) Provides functions for parsing and generating data in the Log
Event Extended Format (LEEF), which is used by IBM Security
QRadar products.

xm_msdns — DNS Server Debug Log Parses Microsoft Windows DNS Server debug logs.
Parsing (Enterprise Edition only)

xm_multiline — Multi-Line Message Parser Parses log entries that span multiple lines.

xm_netflow — NetFlow (Enterprise Edition Provides a parser for NetFlow payload collected over UDP.
only)

xm_nps — NPS (Enterprise Edition only) Provides functions and procedures for processing data in NPS
Database Format stored in files by Microsoft Radius services.

xm_pattern — Pattern Matcher (Enterprise Applies advanced pattern matching logic to log data, which can
Edition only) give greater performance than normal regular expression
statements. Replaces pm_pattern.

xm_perl — Perl Processes log data using Perl.

xm_python — Python (Enterprise Edition Processes log data using Python.


only)

xm_resolver — Resolver (Enterprise Edition Resolves key identifiers that appear in log messages into more
only) meaningful equivalents, including IP addresses to host names, and
group/user IDs to friendly names.

xm_rewrite — Rewrite (Enterprise Edition Transforms event records by modifying or discarding specific


only) fields.

xm_ruby — Ruby (Enterprise Edition only) Processes log data using Ruby.

xm_snmp — SNMP Traps (Enterprise Edition Parses SNMPv1 and SNMPv2c trap messages.
only)

xm_stdinpw — Passwords on standard Reads passwords on standard input.


input (Enterprise Edition only)

xm_syslog — Syslog Provides helpers that let you parse and output the BSD Syslog
protocol as defined by RFC 3164.

xm_w3c — W3C (Enterprise Edition only) Parses data in the W3C Extended Log File Format, the BRO format,
and Microsoft Exchange Message Tracking logs.

xm_wtmp — WTMP Provides a parser function to process binary wtmp files.

xm_xml — XML Provides functions and procedures to process data that is


formatted as XML.

4.2. Input Modules


The following input (im_*) modules are available.

Table 2. Available Input Modules

Module Description
im_acct — BSD/Linux Process Accounting Collects process accounting logs from a Linux or BSD kernel.
(Enterprise Edition only)

im_aixaudit — AIX Auditing (Enterprise Collects AIX audit events directly from the kernel.
Edition only)

im_azure — Azure (Enterprise Edition only) Collects logs from Microsoft Azure applications.

im_batchcompress — Batched Compression Provides a compressed network transport for incoming messages


over TCP or SSL (Enterprise Edition only) with optional SSL/TLS encryption. Pairs with the
om_batchcompress output module.

im_bsm — Basic Security Module Auditing Collects audit events directly from the kernel using Sun’s Basic
(Enterprise Edition only) Security Module (BSM) Auditing API.

im_checkpoint — Check Point OPSEC Provides support for collecting logs remotely from Check Point
(Enterprise Edition only) devices over the OPSEC LEA protocol.

im_dbi — DBI Collects log data by reading data from an SQL database using the
libdbi library.

im_etw — Event Tracing for Windows (ETW) Implements ETW controller and consumer functionality in order to
(Enterprise Edition only) collect events from the ETW system.

im_exec — Program Collects log data by executing a custom external program. The


standard output of the command forms the log data.

im_file — File Collects log data from a file on the local file system.

im_fim — File Integrity Monitoring Scans files and directories and reports detected changes.
(Enterprise Edition only)

im_http — HTTP/HTTPS (Enterprise Edition Accepts incoming HTTP or HTTPS connections and collects log
only) events from client POST requests.

im_internal — Internal Collects log messages from NXLog.

im_kafka — Apache Kafka (Enterprise Edition Implements a consumer for collecting from a Kafka cluster.
only)

im_kernel — Kernel (Enterprise Edition only Collects log data from the kernel log buffer.
for some platforms)

im_linuxaudit — Linux Audit System Configures and collects events from the Linux Audit System.
(Enterprise Edition only)

im_mark — Mark Outputs 'boilerplate' log data periodically to indicate that the


logger is still running.

im_mseventlog — Windows EventLog for Collects EventLog messages on the Windows platform.


Windows XP/2000/2003

im_msvistalog — Windows EventLog for Collects EventLog messages on the Windows platform.


Windows 2008/Vista and later

im_null — Null Acts as a dummy log input module, which generates no log data.
You can use this for testing purposes.

im_oci — OCI (Enterprise Edition only) Reads log messages from an Oracle database.

im_odbc — ODBC (Enterprise Edition only) Uses the ODBC API to read log messages from database tables.

im_perl — Perl (Enterprise Edition only) Captures event data directly into NXLog using Perl code.

im_pipe — Named Pipes (Enterprise Edition This module can be used to read log messages from named pipes
only) on UNIX-like operating systems.

im_python — Python (Enterprise Edition only) Captures event data directly into NXLog using Python code.

im_redis — Redis (Enterprise Edition only) Retrieves data stored in a Redis server.

im_regmon — Windows Registry Monitoring Periodically scans the Windows registry and generates event
(Enterprise Edition only) records if a change in the monitored registry entries is detected.

im_ruby — Ruby (Enterprise Edition only) Captures event data directly into NXLog using Ruby code.

im_ssl — SSL/TLS Collects log data over a TCP connection that is secured with
Transport Layer Security (TLS) or Secure Sockets Layer (SSL).

im_tcp — TCP Collects log data over a TCP network connection.

im_testgen — Test Generator Generates log data for testing purposes.

im_udp — UDP Collects log data over a UDP network connection.

im_uds — Unix Domain Socket Collects log data over a Unix domain socket (typically /dev/log).

im_winperfcount — Windows Performance Periodically retrieves the values of the specified Windows
Counters (Enterprise Edition only) Performance Counters to create an event record.

im_wseventing — Windows Event Collects EventLog from Windows clients that have Windows Event
Forwarding (Enterprise Edition only) Forwarding configured.

im_zmq — ZeroMQ (Enterprise Edition only) Provides incoming message transport over ZeroMQ, a scalable
high-throughput messaging library.

4.3. Processor Modules


The following processor (pm_*) modules are available.

Table 3. Available Processor Modules

Module Description
pm_blocker — Blocker Blocks log data from progressing through a route. You can use this
module for testing purposes, to simulate when a route is blocked.

pm_buffer — Buffer Caches messages in an in-memory or disk-based buffer before


forwarding it. This module is useful in combination with UDP data
inputs.

pm_evcorr — Event Correlator Performs log actions based on relationships between events.

pm_filter — Filter Forwards the log data only if the condition specified in the Filter
module configuration evaluates to true. This module has been
deprecated. Use the NXLog language drop() procedure
instead.

pm_hmac — HMAC Message Integrity Protects messages with HMAC cryptographic checksumming. This
(Enterprise Edition only) module has been deprecated.

pm_hmac_check — HMAC Message Integrity Checks HMAC cryptographic checksums on messages. This module
Checker (Enterprise Edition only) has been deprecated.

pm_norepeat — Message De-Duplicator Drops messages that are identical to previously-received


messages. This module has been deprecated. This
functionality can be implemented with module variables.

pm_null — Null Acts as a dummy log processing module, which does not


transform the log data in any way. You can use this module for
testing purposes.

pm_pattern — Pattern Matcher Applies advanced pattern matching logic to log data, which can
give greater performance than normal regular expression
statements in Exec directives. This module has been
deprecated. Use the xm_pattern module instead.

pm_transformer — Message Format Provides parsers for various log formats, and converts between
Converter them. This module has been deprecated. Use the xm_syslog,
xm_csv, xm_json, and xm_xml modules instead.

pm_ts — Timestamping (Enterprise Edition Adds cryptographic Time-Stamp signatures to messages. This


only) module has been deprecated.

4.4. Output Modules


The following output (om_*) modules are available.

Table 4. Available Output Modules

Module Description
om_batchcompress — Batched Provides a compressed network transport for outgoing messages
Compression over TCP or SSL (Enterprise with optional SSL/TLS encryption. Pairs with the
Edition only) im_batchcompress input module.

om_blocker — Blocker Blocks log data from being written. You can use this module for
testing purposes, to simulate when a route is blocked.

om_dbi — DBI Stores log data in an SQL database using the libdbi library.

om_elasticsearch — Elasticsearch (Enterprise Stores logs in an Elasticsearch server.


Edition only)

om_eventdb — EventDB (Enterprise Edition Uses libdrizzle to insert log message data into a MySQL database
only) with a special schema.

om_exec — Program Writes log data to the standard input of a custom external


program.

om_file — File Writes log data to a file on the file system.

om_http — HTTP/HTTPS Sends events over HTTP or HTTPS using POST requests.

om_kafka — Apache Kafka (Enterprise Implements a producer for publishing to a Kafka cluster.


Edition only)

om_null — Null Acts as a dummy log output module. The output is not written or
sent anywhere. You can use this module for testing purposes.

om_oci — OCI (Enterprise Edition only) Writes log messages to an Oracle database.

om_odbc — ODBC (Enterprise Edition only) Uses the ODBC API to write log messages to database tables.

om_perl — Perl (Enterprise Edition only) Uses Perl code to handle output log messages from NXLog.

om_pipe — Named Pipes (Enterprise Edition This module allows log messages to be sent to named pipes on
only) UNIX-like operating systems.

om_python — Python (Enterprise Edition Uses Python code to handle output log messages from NXLog.
only)

om_raijin — Raijin (Enterprise Edition only) Stores log messages in a Raijin server.

om_redis — Redis (Enterprise Edition only) Stores log messages in a Redis server.

om_ruby — Ruby (Enterprise Edition only) Uses Ruby code to handle output log messages from NXLog.

om_ssl — SSL/TLS Sends log data over a TCP connection that is secured with
Transport Layer Security (TLS) or Secure Sockets Layer (SSL).

om_tcp — TCP Sends log data over a TCP connection to a remote host.

om_udp — UDP Sends log data over a UDP connection to a remote host.

om_udpspoof — UDP with IP Spoofing Sends log data over a UDP connection, and spoofs the source IP
(Enterprise Edition only) address to make packets appear as if they were sent from another
host.

om_uds — UDS Sends log data to a Unix domain socket.

om_webhdfs — WebHDFS (Enterprise Edition Stores log data in Hadoop HDFS using the WebHDFS protocol.
only)

om_zmq — ZeroMQ (Enterprise Edition only) Provides outgoing message transport over ZeroMQ, a scalable
high-throughput messaging library.
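
The following configuration sketch shows how an output module is wired into a route. It reads syslog over TCP with im_tcp and writes it to a file with om_file; the instance names, port number, and file path are example values, not defaults:

```
<Input in_syslog>
    Module  im_tcp
    Host    0.0.0.0
    Port    1514
</Input>

<Output out_file>
    Module  om_file
    File    "/var/log/nxlog/output.log"
</Output>

<Route tcp_to_file>
    Path    in_syslog => out_file
</Route>
```

Any other output module can be substituted in the <Output> block, subject to that module's own directives.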

4.5. Modules by Platform

4.5.1. AIX 7.1
Table 5. Available Modules in nxlog-5.0.5874-1.aix7.1.ppc.rpm

nxlog-5.0.5874-1.aix7.1.ppc.rpm
  Input:     im_acct, im_aixaudit, im_azure, im_batchcompress, im_exec, im_file, im_fim, im_go, im_http, im_internal, im_java, im_mark, im_null, im_pipe, im_redis, im_ssl, im_tcp, im_testgen, im_udp, im_uds, im_wseventing
  Output:    om_batchcompress, om_blocker, om_elasticsearch, om_exec, om_file, om_go, om_http, om_java, om_null, om_pipe, om_raijin, om_redis, om_ssl, om_tcp, om_udp, om_udpspoof, om_uds, om_webhdfs
  Processor: pm_blocker, pm_buffer, pm_evcorr, pm_filter, pm_hmac, pm_hmac_check, pm_norepeat, pm_null, pm_pattern, pm_transformer, pm_ts
  Extension: xm_admin, xm_aixaudit, xm_asl, xm_cef, xm_charconv, xm_crypto, xm_csv, xm_exec, xm_filelist, xm_fileop, xm_gelf, xm_go, xm_grok, xm_java, xm_json, xm_kvp, xm_leef, xm_msdns, xm_multiline, xm_netflow, xm_nps, xm_pattern, xm_resolver, xm_rewrite, xm_snmp, xm_syslog, xm_w3c, xm_xml, xm_zlib

4.5.2. AmazonLinux 2
Table 6. Available Modules in nxlog-5.0.5874_amzn2_aarch64.tar.bz2

nxlog-5.0.5874_amzn2_aarch64.rpm
  Input:     im_acct, im_azure, im_batchcompress, im_exec, im_file, im_fim, im_go, im_http, im_internal, im_kernel, im_linuxaudit, im_mark, im_null, im_pipe, im_redis, im_ssl, im_tcp, im_testgen, im_udp, im_uds
  Output:    om_batchcompress, om_blocker, om_elasticsearch, om_eventdb, om_exec, om_file, om_go, om_http, om_null, om_pipe, om_raijin, om_redis, om_ssl, om_tcp, om_udp, om_udpspoof, om_uds, om_webhdfs
  Processor: pm_blocker, pm_buffer, pm_evcorr, pm_filter, pm_hmac, pm_hmac_check, pm_norepeat, pm_null, pm_pattern, pm_transformer, pm_ts
  Extension: xm_admin, xm_aixaudit, xm_asl, xm_cef, xm_charconv, xm_crypto, xm_csv, xm_exec, xm_filelist, xm_fileop, xm_gelf, xm_go, xm_grok, xm_json, xm_kvp, xm_leef, xm_msdns, xm_multiline, xm_netflow, xm_nps, xm_pattern, xm_resolver, xm_rewrite, xm_snmp, xm_soapadmin, xm_syslog, xm_w3c, xm_wtmp, xm_xml, xm_zlib

nxlog-dbi-5.0.5874_amzn2_aarch64.rpm
  Input: im_dbi
  Output: om_dbi

nxlog-java-5.0.5874_amzn2_aarch64.rpm
  Input: im_java
  Output: om_java
  Extension: xm_java

nxlog-kafka-5.0.5874_amzn2_aarch64.rpm
  Input: im_kafka
  Output: om_kafka

nxlog-odbc-5.0.5874_amzn2_aarch64.rpm
  Input: im_odbc
  Output: om_odbc

nxlog-pcap-5.0.5874_amzn2_aarch64.rpm
  Input: im_pcap

nxlog-perl-5.0.5874_amzn2_aarch64.rpm
  Input: im_perl
  Output: om_perl
  Extension: xm_perl

nxlog-python-5.0.5874_amzn2_aarch64.rpm
  Input: im_python
  Output: om_python
  Extension: xm_python

nxlog-ruby-5.0.5874_amzn2_aarch64.rpm
  Input: im_ruby
  Output: om_ruby
  Extension: xm_ruby

nxlog-systemd-5.0.5874_amzn2_aarch64.rpm
  Input: im_systemd

nxlog-wseventing-5.0.5874_amzn2_aarch64.rpm
  Input: im_wseventing

nxlog-zmq-5.0.5874_amzn2_aarch64.rpm
  Input: im_zmq
  Output: om_zmq

4.5.3. CentOS 6, RHEL 6


Table 7. Available Modules in nxlog-5.0.5874_rhel6_x86_64.tar.bz2

nxlog-5.0.5874_rhel6_x86_64.rpm
  Input:     im_acct, im_azure, im_batchcompress, im_exec, im_file, im_fim, im_go, im_http, im_internal, im_kernel, im_linuxaudit, im_mark, im_null, im_pipe, im_redis, im_ssl, im_tcp, im_testgen, im_udp, im_uds
  Output:    om_batchcompress, om_blocker, om_elasticsearch, om_eventdb, om_exec, om_file, om_go, om_http, om_null, om_pipe, om_raijin, om_redis, om_ssl, om_tcp, om_udp, om_udpspoof, om_uds, om_webhdfs
  Processor: pm_blocker, pm_buffer, pm_evcorr, pm_filter, pm_hmac, pm_hmac_check, pm_norepeat, pm_null, pm_pattern, pm_transformer, pm_ts
  Extension: xm_admin, xm_aixaudit, xm_asl, xm_cef, xm_charconv, xm_crypto, xm_csv, xm_exec, xm_filelist, xm_fileop, xm_gelf, xm_go, xm_grok, xm_json, xm_kvp, xm_leef, xm_msdns, xm_multiline, xm_netflow, xm_nps, xm_pattern, xm_resolver, xm_rewrite, xm_snmp, xm_soapadmin, xm_syslog, xm_w3c, xm_wtmp, xm_xml, xm_zlib

nxlog-checkpoint-5.0.5874_rhel6_x86_64.rpm
  Input: im_checkpoint

nxlog-dbi-5.0.5874_rhel6_x86_64.rpm
  Input: im_dbi
  Output: om_dbi

nxlog-java-5.0.5874_rhel6_x86_64.rpm
  Input: im_java
  Output: om_java
  Extension: xm_java

nxlog-kafka-5.0.5874_rhel6_x86_64.rpm
  Input: im_kafka
  Output: om_kafka

nxlog-odbc-5.0.5874_rhel6_x86_64.rpm
  Input: im_odbc
  Output: om_odbc

nxlog-perl-5.0.5874_rhel6_x86_64.rpm
  Input: im_perl
  Output: om_perl
  Extension: xm_perl

nxlog-wseventing-5.0.5874_rhel6_x86_64.rpm
  Input: im_wseventing

4.5.4. CentOS 7, RHEL 7


Table 8. Available Modules in nxlog-5.0.5874_rhel7_x86_64.tar.bz2

nxlog-5.0.5874_rhel7_x86_64.rpm
  Input:     im_acct, im_azure, im_batchcompress, im_exec, im_file, im_fim, im_go, im_http, im_internal, im_kernel, im_linuxaudit, im_mark, im_null, im_pipe, im_redis, im_ssl, im_tcp, im_testgen, im_udp, im_uds
  Output:    om_batchcompress, om_blocker, om_elasticsearch, om_eventdb, om_exec, om_file, om_go, om_http, om_null, om_pipe, om_raijin, om_redis, om_ssl, om_tcp, om_udp, om_udpspoof, om_uds, om_webhdfs
  Processor: pm_blocker, pm_buffer, pm_evcorr, pm_filter, pm_hmac, pm_hmac_check, pm_norepeat, pm_null, pm_pattern, pm_transformer, pm_ts
  Extension: xm_admin, xm_aixaudit, xm_asl, xm_cef, xm_charconv, xm_crypto, xm_csv, xm_exec, xm_filelist, xm_fileop, xm_gelf, xm_go, xm_grok, xm_json, xm_kvp, xm_leef, xm_msdns, xm_multiline, xm_netflow, xm_nps, xm_pattern, xm_resolver, xm_rewrite, xm_snmp, xm_soapadmin, xm_syslog, xm_w3c, xm_wtmp, xm_xml, xm_zlib

nxlog-checkpoint-5.0.5874_rhel7_x86_64.rpm
  Input: im_checkpoint

nxlog-dbi-5.0.5874_rhel7_x86_64.rpm
  Input: im_dbi
  Output: om_dbi

nxlog-java-5.0.5874_rhel7_x86_64.rpm
  Input: im_java
  Output: om_java
  Extension: xm_java

nxlog-kafka-5.0.5874_rhel7_x86_64.rpm
  Input: im_kafka
  Output: om_kafka

nxlog-odbc-5.0.5874_rhel7_x86_64.rpm
  Input: im_odbc
  Output: om_odbc

nxlog-pcap-5.0.5874_rhel7_x86_64.rpm
  Input: im_pcap

nxlog-perl-5.0.5874_rhel7_x86_64.rpm
  Input: im_perl
  Output: om_perl
  Extension: xm_perl

nxlog-python-5.0.5874_rhel7_x86_64.rpm
  Input: im_python
  Output: om_python
  Extension: xm_python

nxlog-ruby-5.0.5874_rhel7_x86_64.rpm
  Input: im_ruby
  Output: om_ruby
  Extension: xm_ruby

nxlog-systemd-5.0.5874_rhel7_x86_64.rpm
  Input: im_systemd

nxlog-wseventing-5.0.5874_rhel7_x86_64.rpm
  Input: im_wseventing

nxlog-zmq-5.0.5874_rhel7_x86_64.rpm
  Input: im_zmq
  Output: om_zmq

4.5.5. CentOS 8, RHEL 8


Table 9. Available Modules in nxlog-5.0.5874_rhel8_x86_64.tar.bz2

nxlog-5.0.5874_rhel8_x86_64.rpm
  Input:     im_acct, im_azure, im_batchcompress, im_exec, im_file, im_fim, im_go, im_http, im_internal, im_kernel, im_linuxaudit, im_mark, im_null, im_pipe, im_redis, im_ssl, im_tcp, im_testgen, im_udp, im_uds
  Output:    om_batchcompress, om_blocker, om_elasticsearch, om_eventdb, om_exec, om_file, om_go, om_http, om_null, om_pipe, om_raijin, om_redis, om_ssl, om_tcp, om_udp, om_udpspoof, om_uds, om_webhdfs
  Processor: pm_blocker, pm_buffer, pm_evcorr, pm_filter, pm_hmac, pm_hmac_check, pm_norepeat, pm_null, pm_pattern, pm_transformer, pm_ts
  Extension: xm_admin, xm_aixaudit, xm_asl, xm_cef, xm_charconv, xm_crypto, xm_csv, xm_exec, xm_filelist, xm_fileop, xm_gelf, xm_go, xm_grok, xm_json, xm_kvp, xm_leef, xm_msdns, xm_multiline, xm_netflow, xm_nps, xm_pattern, xm_resolver, xm_rewrite, xm_snmp, xm_soapadmin, xm_syslog, xm_w3c, xm_wtmp, xm_xml, xm_zlib

nxlog-checkpoint-5.0.5874_rhel8_x86_64.rpm
  Input: im_checkpoint

nxlog-java-5.0.5874_rhel8_x86_64.rpm
  Input: im_java
  Output: om_java
  Extension: xm_java

nxlog-kafka-5.0.5874_rhel8_x86_64.rpm
  Input: im_kafka
  Output: om_kafka

nxlog-odbc-5.0.5874_rhel8_x86_64.rpm
  Input: im_odbc
  Output: om_odbc

nxlog-pcap-5.0.5874_rhel8_x86_64.rpm
  Input: im_pcap

nxlog-perl-5.0.5874_rhel8_x86_64.rpm
  Input: im_perl
  Output: om_perl
  Extension: xm_perl

nxlog-python-5.0.5874_rhel8_x86_64.rpm
  Input: im_python
  Output: om_python
  Extension: xm_python

nxlog-ruby-5.0.5874_rhel8_x86_64.rpm
  Input: im_ruby
  Output: om_ruby
  Extension: xm_ruby

nxlog-systemd-5.0.5874_rhel8_x86_64.rpm
  Input: im_systemd

nxlog-wseventing-5.0.5874_rhel8_x86_64.rpm
  Input: im_wseventing

nxlog-zmq-5.0.5874_rhel8_x86_64.rpm
  Input: im_zmq
  Output: om_zmq

4.5.6. DEB Generic


Table 10. Available Modules in nxlog-5.0.5874_generic_deb_amd64.deb

nxlog-5.0.5874_generic_deb_amd64.deb
  Input:     im_acct, im_azure, im_batchcompress, im_exec, im_file, im_fim, im_go, im_http, im_internal, im_java, im_kafka, im_kernel, im_linuxaudit, im_mark, im_null, im_pipe, im_redis, im_ssl, im_tcp, im_testgen, im_udp, im_uds, im_wseventing
  Output:    om_batchcompress, om_blocker, om_elasticsearch, om_eventdb, om_exec, om_file, om_go, om_http, om_java, om_kafka, om_null, om_pipe, om_raijin, om_redis, om_ssl, om_tcp, om_udp, om_udpspoof, om_uds, om_webhdfs
  Processor: pm_blocker, pm_buffer, pm_evcorr, pm_filter, pm_hmac, pm_hmac_check, pm_norepeat, pm_null, pm_pattern, pm_transformer, pm_ts
  Extension: xm_admin, xm_aixaudit, xm_asl, xm_cef, xm_charconv, xm_crypto, xm_csv, xm_exec, xm_filelist, xm_fileop, xm_gelf, xm_go, xm_grok, xm_java, xm_json, xm_kvp, xm_leef, xm_msdns, xm_multiline, xm_netflow, xm_nps, xm_pattern, xm_resolver, xm_rewrite, xm_snmp, xm_soapadmin, xm_syslog, xm_w3c, xm_wtmp, xm_xml, xm_zlib

4.5.7. Debian Buster


Table 11. Available Modules in nxlog-5.0.5874_debian10_amd64.tar.bz2

nxlog-5.0.5874_debian10_amd64.deb
  Input:     im_acct, im_azure, im_batchcompress, im_exec, im_file, im_fim, im_go, im_http, im_internal, im_kernel, im_linuxaudit, im_mark, im_null, im_pipe, im_redis, im_ssl, im_tcp, im_testgen, im_udp, im_uds
  Output:    om_batchcompress, om_blocker, om_elasticsearch, om_eventdb, om_exec, om_file, om_go, om_http, om_null, om_pipe, om_raijin, om_redis, om_ssl, om_tcp, om_udp, om_udpspoof, om_uds, om_webhdfs
  Processor: pm_blocker, pm_buffer, pm_evcorr, pm_filter, pm_hmac, pm_hmac_check, pm_norepeat, pm_null, pm_pattern, pm_transformer, pm_ts
  Extension: xm_admin, xm_aixaudit, xm_asl, xm_cef, xm_charconv, xm_crypto, xm_csv, xm_exec, xm_filelist, xm_fileop, xm_gelf, xm_go, xm_grok, xm_json, xm_kvp, xm_leef, xm_msdns, xm_multiline, xm_netflow, xm_nps, xm_pattern, xm_resolver, xm_rewrite, xm_snmp, xm_soapadmin, xm_syslog, xm_w3c, xm_wtmp, xm_xml, xm_zlib

nxlog-checkpoint_5.0.5874_debian10_amd64.deb
  Input: im_checkpoint

nxlog-dbi_5.0.5874_debian10_amd64.deb
  Input: im_dbi
  Output: om_dbi

nxlog-java_5.0.5874_debian10_amd64.deb
  Input: im_java
  Output: om_java
  Extension: xm_java

nxlog-kafka_5.0.5874_debian10_amd64.deb
  Input: im_kafka
  Output: om_kafka

nxlog-odbc_5.0.5874_debian10_amd64.deb
  Input: im_odbc
  Output: om_odbc

nxlog-pcap_5.0.5874_debian10_amd64.deb
  Input: im_pcap

nxlog-perl_5.0.5874_debian10_amd64.deb
  Input: im_perl
  Output: om_perl
  Extension: xm_perl

nxlog-python_5.0.5874_debian10_amd64.deb
  Input: im_python
  Output: om_python
  Extension: xm_python

nxlog-ruby_5.0.5874_debian10_amd64.deb
  Input: im_ruby
  Output: om_ruby
  Extension: xm_ruby

nxlog-systemd_5.0.5874_debian10_amd64.deb
  Input: im_systemd

nxlog-wseventing_5.0.5874_debian10_amd64.deb
  Input: im_wseventing

nxlog-zmq_5.0.5874_debian10_amd64.deb
  Input: im_zmq
  Output: om_zmq

4.5.8. Debian Jessie


Table 12. Available Modules in nxlog-5.0.5874_debian8_amd64.tar.bz2

nxlog-5.0.5874_debian8_amd64.deb
  Input:     im_acct, im_azure, im_batchcompress, im_exec, im_file, im_fim, im_go, im_http, im_internal, im_kernel, im_linuxaudit, im_mark, im_null, im_pipe, im_redis, im_ssl, im_tcp, im_testgen, im_udp, im_uds
  Output:    om_batchcompress, om_blocker, om_elasticsearch, om_eventdb, om_exec, om_file, om_go, om_http, om_null, om_pipe, om_raijin, om_redis, om_ssl, om_tcp, om_udp, om_udpspoof, om_uds, om_webhdfs
  Processor: pm_blocker, pm_buffer, pm_evcorr, pm_filter, pm_hmac, pm_hmac_check, pm_norepeat, pm_null, pm_pattern, pm_transformer, pm_ts
  Extension: xm_admin, xm_aixaudit, xm_asl, xm_cef, xm_charconv, xm_crypto, xm_csv, xm_exec, xm_filelist, xm_fileop, xm_gelf, xm_go, xm_grok, xm_json, xm_kvp, xm_leef, xm_msdns, xm_multiline, xm_netflow, xm_nps, xm_pattern, xm_resolver, xm_rewrite, xm_snmp, xm_soapadmin, xm_syslog, xm_w3c, xm_wtmp, xm_xml, xm_zlib

nxlog-checkpoint_5.0.5874_debian8_amd64.deb
  Input: im_checkpoint

nxlog-dbi_5.0.5874_debian8_amd64.deb
  Input: im_dbi
  Output: om_dbi

nxlog-java_5.0.5874_debian8_amd64.deb
  Input: im_java
  Output: om_java
  Extension: xm_java

nxlog-kafka_5.0.5874_debian8_amd64.deb
  Input: im_kafka
  Output: om_kafka

nxlog-odbc_5.0.5874_debian8_amd64.deb
  Input: im_odbc
  Output: om_odbc

nxlog-pcap_5.0.5874_debian8_amd64.deb
  Input: im_pcap

nxlog-perl_5.0.5874_debian8_amd64.deb
  Input: im_perl
  Output: om_perl
  Extension: xm_perl

nxlog-python_5.0.5874_debian8_amd64.deb
  Input: im_python
  Output: om_python
  Extension: xm_python

nxlog-ruby_5.0.5874_debian8_amd64.deb
  Input: im_ruby
  Output: om_ruby
  Extension: xm_ruby

nxlog-systemd_5.0.5874_debian8_amd64.deb
  Input: im_systemd

nxlog-wseventing_5.0.5874_debian8_amd64.deb
  Input: im_wseventing

nxlog-zmq_5.0.5874_debian8_amd64.deb
  Input: im_zmq
  Output: om_zmq

4.5.9. Debian Stretch


Table 13. Available Modules in nxlog-5.0.5874_debian9_amd64.tar.bz2

nxlog-5.0.5874_debian9_amd64.deb
  Input:     im_acct, im_azure, im_batchcompress, im_exec, im_file, im_fim, im_go, im_http, im_internal, im_kernel, im_linuxaudit, im_mark, im_null, im_pipe, im_redis, im_ssl, im_tcp, im_testgen, im_udp, im_uds
  Output:    om_batchcompress, om_blocker, om_elasticsearch, om_eventdb, om_exec, om_file, om_go, om_http, om_null, om_pipe, om_raijin, om_redis, om_ssl, om_tcp, om_udp, om_udpspoof, om_uds, om_webhdfs
  Processor: pm_blocker, pm_buffer, pm_evcorr, pm_filter, pm_hmac, pm_hmac_check, pm_norepeat, pm_null, pm_pattern, pm_transformer, pm_ts
  Extension: xm_admin, xm_aixaudit, xm_asl, xm_cef, xm_charconv, xm_crypto, xm_csv, xm_exec, xm_filelist, xm_fileop, xm_gelf, xm_go, xm_grok, xm_json, xm_kvp, xm_leef, xm_msdns, xm_multiline, xm_netflow, xm_nps, xm_pattern, xm_resolver, xm_rewrite, xm_snmp, xm_soapadmin, xm_syslog, xm_w3c, xm_wtmp, xm_xml, xm_zlib

nxlog-checkpoint_5.0.5874_debian9_amd64.deb
  Input: im_checkpoint

nxlog-dbi_5.0.5874_debian9_amd64.deb
  Input: im_dbi
  Output: om_dbi

nxlog-java_5.0.5874_debian9_amd64.deb
  Input: im_java
  Output: om_java
  Extension: xm_java

nxlog-kafka_5.0.5874_debian9_amd64.deb
  Input: im_kafka
  Output: om_kafka

nxlog-odbc_5.0.5874_debian9_amd64.deb
  Input: im_odbc
  Output: om_odbc

nxlog-pcap_5.0.5874_debian9_amd64.deb
  Input: im_pcap

nxlog-perl_5.0.5874_debian9_amd64.deb
  Input: im_perl
  Output: om_perl
  Extension: xm_perl

nxlog-python_5.0.5874_debian9_amd64.deb
  Input: im_python
  Output: om_python
  Extension: xm_python

nxlog-ruby_5.0.5874_debian9_amd64.deb
  Input: im_ruby
  Output: om_ruby
  Extension: xm_ruby

nxlog-systemd_5.0.5874_debian9_amd64.deb
  Input: im_systemd

nxlog-wseventing_5.0.5874_debian9_amd64.deb
  Input: im_wseventing

nxlog-zmq_5.0.5874_debian9_amd64.deb
  Input: im_zmq
  Output: om_zmq

4.5.10. FreeBSD 11
Table 14. Available Modules in nxlog-5.0.5874_fbsd_x86_64.tgz

nxlog-5.0.5874_fbsd_x86_64.tgz
  Input:     im_acct, im_azure, im_batchcompress, im_bsm, im_exec, im_file, im_fim, im_go, im_http, im_internal, im_java, im_kernel, im_mark, im_null, im_pcap, im_pipe, im_redis, im_ssl, im_tcp, im_testgen, im_udp, im_uds, im_wseventing
  Output:    om_batchcompress, om_blocker, om_elasticsearch, om_exec, om_file, om_go, om_http, om_java, om_null, om_pipe, om_raijin, om_redis, om_ssl, om_tcp, om_udp, om_udpspoof, om_uds, om_webhdfs
  Processor: pm_blocker, pm_buffer, pm_evcorr, pm_filter, pm_hmac, pm_hmac_check, pm_norepeat, pm_null, pm_pattern, pm_transformer, pm_ts
  Extension: xm_admin, xm_aixaudit, xm_asl, xm_bsm, xm_cef, xm_charconv, xm_crypto, xm_csv, xm_exec, xm_filelist, xm_fileop, xm_gelf, xm_go, xm_grok, xm_java, xm_json, xm_kvp, xm_leef, xm_msdns, xm_multiline, xm_netflow, xm_nps, xm_pattern, xm_resolver, xm_rewrite, xm_snmp, xm_soapadmin, xm_syslog, xm_w3c, xm_xml, xm_zlib

4.5.11. MacOS
Table 15. Available Modules in nxlog-5.0.5874_macos.pkg

nxlog-5.0.5874_macos.pkg
  Input:     im_acct, im_azure, im_batchcompress, im_bsm, im_exec, im_file, im_fim, im_go, im_http, im_internal, im_java, im_kafka, im_kernel, im_mark, im_null, im_pcap, im_pipe, im_redis, im_ssl, im_tcp, im_testgen, im_udp, im_uds, im_wseventing
  Output:    om_batchcompress, om_blocker, om_elasticsearch, om_exec, om_file, om_go, om_http, om_java, om_kafka, om_null, om_pipe, om_raijin, om_redis, om_ssl, om_tcp, om_udp, om_udpspoof, om_uds, om_webhdfs
  Processor: pm_blocker, pm_buffer, pm_evcorr, pm_filter, pm_hmac, pm_hmac_check, pm_norepeat, pm_null, pm_pattern, pm_transformer, pm_ts
  Extension: xm_admin, xm_aixaudit, xm_asl, xm_bsm, xm_cef, xm_charconv, xm_crypto, xm_csv, xm_exec, xm_filelist, xm_fileop, xm_gelf, xm_go, xm_grok, xm_java, xm_json, xm_kvp, xm_leef, xm_msdns, xm_multiline, xm_netflow, xm_nps, xm_pattern, xm_resolver, xm_rewrite, xm_snmp, xm_soapadmin, xm_syslog, xm_w3c, xm_xml, xm_zlib

4.5.12. Microsoft Windows 64bit


Table 16. Available Modules in nxlog-5.0.5874_windows_x64.msi

nxlog-5.0.5874_windows_x64.msi
  Input:     im_azure, im_batchcompress, im_etw, im_exec, im_file, im_fim, im_go, im_http, im_internal, im_java, im_kafka, im_mark, im_mseventlog, im_msvistalog, im_null, im_odbc, im_perl, im_redis, im_regmon, im_ssl, im_tcp, im_testgen, im_udp, im_winperfcount, im_wseventing
  Output:    om_batchcompress, om_blocker, om_elasticsearch, om_exec, om_file, om_go, om_http, om_java, om_kafka, om_null, om_odbc, om_perl, om_raijin, om_redis, om_ssl, om_tcp, om_udp, om_udpspoof, om_webhdfs
  Processor: pm_blocker, pm_buffer, pm_evcorr, pm_filter, pm_hmac, pm_hmac_check, pm_norepeat, pm_null, pm_pattern, pm_transformer, pm_ts
  Extension: xm_admin, xm_aixaudit, xm_asl, xm_cef, xm_charconv, xm_crypto, xm_csv, xm_exec, xm_filelist, xm_fileop, xm_gelf, xm_go, xm_grok, xm_java, xm_json, xm_kvp, xm_leef, xm_msdns, xm_multiline, xm_netflow, xm_nps, xm_pattern, xm_perl, xm_resolver, xm_rewrite, xm_snmp, xm_soapadmin, xm_syslog, xm_w3c, xm_xml, xm_zlib

4.5.13. Microsoft Windows Nano


Table 17. Available Modules in nxlog-5.0.5874_nano.zip

nxlog-5.0.5874_nano.zip
  Input:     im_azure, im_batchcompress, im_etw, im_exec, im_file, im_fim, im_go, im_http, im_internal, im_java, im_kafka, im_mark, im_mseventlog, im_msvistalog, im_null, im_odbc, im_perl, im_redis, im_regmon, im_ssl, im_tcp, im_testgen, im_udp, im_winperfcount, im_wseventing
  Output:    om_batchcompress, om_blocker, om_elasticsearch, om_exec, om_file, om_go, om_http, om_java, om_kafka, om_null, om_odbc, om_perl, om_raijin, om_redis, om_ssl, om_tcp, om_udp, om_udpspoof, om_webhdfs
  Processor: pm_blocker, pm_buffer, pm_evcorr, pm_filter, pm_hmac, pm_hmac_check, pm_norepeat, pm_null, pm_pattern, pm_transformer, pm_ts
  Extension: java/jni/libjavanxlog, xm_admin, xm_aixaudit, xm_asl, xm_cef, xm_charconv, xm_crypto, xm_csv, xm_exec, xm_filelist, xm_fileop, xm_gelf, xm_go, xm_grok, xm_java, xm_json, xm_kvp, xm_leef, xm_msdns, xm_multiline, xm_netflow, xm_nps, xm_pattern, xm_perl, xm_resolver, xm_rewrite, xm_snmp, xm_soapadmin, xm_syslog, xm_w3c, xm_xml, xm_zlib

4.5.14. RPM Generic


Table 18. Available Modules in nxlog-5.0.5874_generic_rpm_x86_64.rpm

nxlog-5.0.5874_generic_x86_64.rpm
  Input:     im_acct, im_azure, im_batchcompress, im_exec, im_file, im_fim, im_go, im_http, im_internal, im_java, im_kafka, im_kernel, im_linuxaudit, im_mark, im_null, im_pipe, im_redis, im_ssl, im_tcp, im_testgen, im_udp, im_uds, im_wseventing
  Output:    om_batchcompress, om_blocker, om_elasticsearch, om_eventdb, om_exec, om_file, om_go, om_http, om_java, om_kafka, om_null, om_pipe, om_raijin, om_redis, om_ssl, om_tcp, om_udp, om_udpspoof, om_uds, om_webhdfs
  Processor: pm_blocker, pm_buffer, pm_evcorr, pm_filter, pm_hmac, pm_hmac_check, pm_norepeat, pm_null, pm_pattern, pm_transformer, pm_ts
  Extension: xm_admin, xm_aixaudit, xm_asl, xm_cef, xm_charconv, xm_crypto, xm_csv, xm_exec, xm_filelist, xm_fileop, xm_gelf, xm_go, xm_grok, xm_java, xm_json, xm_kvp, xm_leef, xm_msdns, xm_multiline, xm_netflow, xm_nps, xm_pattern, xm_resolver, xm_rewrite, xm_snmp, xm_soapadmin, xm_syslog, xm_w3c, xm_wtmp, xm_xml, xm_zlib

4.5.15. SLES 12
Table 19. Available Modules in nxlog-5.0.5874_sles12_x86_64.tar.bz2

nxlog-5.0.5874_sles12_x86_64.rpm
  Input:     im_acct, im_azure, im_batchcompress, im_exec, im_file, im_fim, im_go, im_http, im_internal, im_kernel, im_linuxaudit, im_mark, im_null, im_pipe, im_redis, im_ssl, im_tcp, im_testgen, im_udp, im_uds
  Output:    om_batchcompress, om_blocker, om_elasticsearch, om_eventdb, om_exec, om_file, om_go, om_http, om_null, om_pipe, om_raijin, om_redis, om_ssl, om_tcp, om_udp, om_udpspoof, om_uds, om_webhdfs
  Processor: pm_blocker, pm_buffer, pm_evcorr, pm_filter, pm_hmac, pm_hmac_check, pm_norepeat, pm_null, pm_pattern, pm_transformer, pm_ts
  Extension: xm_admin, xm_aixaudit, xm_asl, xm_cef, xm_charconv, xm_crypto, xm_csv, xm_exec, xm_filelist, xm_fileop, xm_gelf, xm_go, xm_grok, xm_json, xm_kvp, xm_leef, xm_msdns, xm_multiline, xm_netflow, xm_nps, xm_pattern, xm_resolver, xm_rewrite, xm_snmp, xm_soapadmin, xm_syslog, xm_w3c, xm_wtmp, xm_xml, xm_zlib

nxlog-checkpoint-5.0.5874_sles12_x86_64.rpm
  Input: im_checkpoint

nxlog-dbi-5.0.5874_sles12_x86_64.rpm
  Input: im_dbi
  Output: om_dbi

nxlog-java-5.0.5874_sles12_x86_64.rpm
  Input: im_java
  Output: om_java
  Extension: xm_java

nxlog-kafka-5.0.5874_sles12_x86_64.rpm
  Input: im_kafka
  Output: om_kafka

nxlog-odbc-5.0.5874_sles12_x86_64.rpm
  Input: im_odbc
  Output: om_odbc

nxlog-pcap-5.0.5874_sles12_x86_64.rpm
  Input: im_pcap

nxlog-perl-5.0.5874_sles12_x86_64.rpm
  Input: im_perl
  Output: om_perl
  Extension: xm_perl

nxlog-python-5.0.5874_sles12_x86_64.rpm
  Input: im_python
  Output: om_python
  Extension: xm_python

nxlog-systemd-5.0.5874_sles12_x86_64.rpm
  Input: im_systemd

nxlog-wseventing-5.0.5874_sles12_x86_64.rpm
  Input: im_wseventing

nxlog-zmq-5.0.5874_sles12_x86_64.rpm
  Input: im_zmq
  Output: om_zmq

4.5.16. SLES 15
Table 20. Available Modules in nxlog-5.0.5874_sles15_x86_64.tar.bz2

nxlog-5.0.5874_sles15_x86_64.rpm
  Input:     im_acct, im_azure, im_batchcompress, im_exec, im_file, im_fim, im_go, im_http, im_internal, im_kernel, im_linuxaudit, im_mark, im_null, im_pipe, im_redis, im_ssl, im_tcp, im_testgen, im_udp, im_uds
  Output:    om_batchcompress, om_blocker, om_elasticsearch, om_eventdb, om_exec, om_file, om_go, om_http, om_null, om_pipe, om_raijin, om_redis, om_ssl, om_tcp, om_udp, om_udpspoof, om_uds, om_webhdfs
  Processor: pm_blocker, pm_buffer, pm_evcorr, pm_filter, pm_hmac, pm_hmac_check, pm_norepeat, pm_null, pm_pattern, pm_transformer, pm_ts
  Extension: xm_admin, xm_aixaudit, xm_asl, xm_cef, xm_charconv, xm_crypto, xm_csv, xm_exec, xm_filelist, xm_fileop, xm_gelf, xm_go, xm_grok, xm_json, xm_kvp, xm_leef, xm_msdns, xm_multiline, xm_netflow, xm_nps, xm_pattern, xm_resolver, xm_rewrite, xm_snmp, xm_soapadmin, xm_syslog, xm_w3c, xm_wtmp, xm_xml, xm_zlib

nxlog-checkpoint-5.0.5874_sles15_x86_64.rpm
  Input: im_checkpoint

nxlog-dbi-5.0.5874_sles15_x86_64.rpm
  Input: im_dbi
  Output: om_dbi

nxlog-java-5.0.5874_sles15_x86_64.rpm
  Input: im_java
  Output: om_java
  Extension: xm_java

nxlog-kafka-5.0.5874_sles15_x86_64.rpm
  Input: im_kafka
  Output: om_kafka

nxlog-odbc-5.0.5874_sles15_x86_64.rpm
  Input: im_odbc
  Output: om_odbc

nxlog-pcap-5.0.5874_sles15_x86_64.rpm
  Input: im_pcap

nxlog-perl-5.0.5874_sles15_x86_64.rpm
  Input: im_perl
  Output: om_perl
  Extension: xm_perl

nxlog-python-5.0.5874_sles15_x86_64.rpm
  Input: im_python
  Output: om_python
  Extension: xm_python

nxlog-ruby-5.0.5874_sles15_x86_64.rpm
  Input: im_ruby
  Output: om_ruby
  Extension: xm_ruby

nxlog-systemd-5.0.5874_sles15_x86_64.rpm
  Input: im_systemd

nxlog-wseventing-5.0.5874_sles15_x86_64.rpm
  Input: im_wseventing

nxlog-zmq-5.0.5874_sles15_x86_64.rpm
  Input: im_zmq
  Output: om_zmq

4.5.17. Solaris 10 i386


Table 21. Available Modules in nxlog-5.0.5874_solaris_x86.pkg.gz

nxlog-5.0.5874_solaris_x86.pkg.gz
  Input:     im_acct, im_azure, im_batchcompress, im_bsm, im_exec, im_file, im_fim, im_go, im_http, im_internal, im_java, im_mark, im_null, im_pipe, im_redis, im_ssl, im_tcp, im_testgen, im_udp, im_uds, im_wseventing
  Output:    om_batchcompress, om_blocker, om_elasticsearch, om_exec, om_file, om_go, om_http, om_java, om_null, om_pipe, om_raijin, om_redis, om_ssl, om_tcp, om_udp, om_udpspoof, om_uds, om_webhdfs
  Processor: pm_blocker, pm_buffer, pm_evcorr, pm_filter, pm_hmac, pm_hmac_check, pm_norepeat, pm_null, pm_pattern, pm_transformer, pm_ts
  Extension: xm_admin, xm_aixaudit, xm_asl, xm_bsm, xm_cef, xm_charconv, xm_crypto, xm_csv, xm_exec, xm_filelist, xm_fileop, xm_gelf, xm_go, xm_grok, xm_java, xm_json, xm_kvp, xm_leef, xm_msdns, xm_multiline, xm_netflow, xm_nps, xm_pattern, xm_resolver, xm_rewrite, xm_snmp, xm_soapadmin, xm_syslog, xm_w3c, xm_xml, xm_zlib

4.5.18. Solaris 10 sparc


Table 22. Available Modules in nxlog-5.0.5874_solaris_sparc.pkg.gz

nxlog-5.0.5874_solaris_sparc.pkg.gz
  Input:     im_acct, im_azure, im_batchcompress, im_bsm, im_exec, im_file, im_fim, im_go, im_http, im_internal, im_java, im_mark, im_null, im_pipe, im_redis, im_ssl, im_tcp, im_testgen, im_udp, im_uds, im_wseventing
  Output:    om_batchcompress, om_blocker, om_elasticsearch, om_exec, om_file, om_go, om_http, om_java, om_null, om_pipe, om_raijin, om_redis, om_ssl, om_tcp, om_udp, om_udpspoof, om_uds, om_webhdfs
  Processor: pm_blocker, pm_buffer, pm_evcorr, pm_filter, pm_hmac, pm_hmac_check, pm_norepeat, pm_null, pm_pattern, pm_transformer, pm_ts
  Extension: xm_admin, xm_aixaudit, xm_asl, xm_bsm, xm_cef, xm_charconv, xm_crypto, xm_csv, xm_exec, xm_filelist, xm_fileop, xm_gelf, xm_go, xm_grok, xm_java, xm_json, xm_kvp, xm_leef, xm_msdns, xm_multiline, xm_netflow, xm_nps, xm_pattern, xm_resolver, xm_rewrite, xm_snmp, xm_soapadmin, xm_syslog, xm_w3c, xm_xml, xm_zlib

4.5.19. Ubuntu 16.04


Table 23. Available Modules in nxlog-5.0.5874_ubuntu16_amd64.tar.bz2

nxlog-5.0.5874_ubuntu16_amd64.deb
  Input:     im_acct, im_azure, im_batchcompress, im_exec, im_file, im_fim, im_go, im_http, im_internal, im_kernel, im_linuxaudit, im_mark, im_null, im_pipe, im_redis, im_ssl, im_tcp, im_testgen, im_udp, im_uds
  Output:    om_batchcompress, om_blocker, om_elasticsearch, om_eventdb, om_exec, om_file, om_go, om_http, om_null, om_pipe, om_raijin, om_redis, om_ssl, om_tcp, om_udp, om_udpspoof, om_uds, om_webhdfs
  Processor: pm_blocker, pm_buffer, pm_evcorr, pm_filter, pm_hmac, pm_hmac_check, pm_norepeat, pm_null, pm_pattern, pm_transformer, pm_ts
  Extension: xm_admin, xm_aixaudit, xm_asl, xm_cef, xm_charconv, xm_crypto, xm_csv, xm_exec, xm_filelist, xm_fileop, xm_gelf, xm_go, xm_grok, xm_json, xm_kvp, xm_leef, xm_msdns, xm_multiline, xm_netflow, xm_nps, xm_pattern, xm_resolver, xm_rewrite, xm_snmp, xm_soapadmin, xm_syslog, xm_w3c, xm_wtmp, xm_xml, xm_zlib

nxlog-checkpoint_5.0.5874_ubuntu16_amd64.deb
  Input: im_checkpoint

nxlog-dbi_5.0.5874_ubuntu16_amd64.deb
  Input: im_dbi
  Output: om_dbi

nxlog-java_5.0.5874_ubuntu16_amd64.deb
  Input: im_java
  Output: om_java
  Extension: xm_java

nxlog-kafka_5.0.5874_ubuntu16_amd64.deb
  Input: im_kafka
  Output: om_kafka

nxlog-odbc_5.0.5874_ubuntu16_amd64.deb
  Input: im_odbc
  Output: om_odbc

nxlog-pcap_5.0.5874_ubuntu16_amd64.deb
  Input: im_pcap

nxlog-perl_5.0.5874_ubuntu16_amd64.deb
  Input: im_perl
  Output: om_perl
  Extension: xm_perl

nxlog-python_5.0.5874_ubuntu16_amd64.deb
  Input: im_python
  Output: om_python
  Extension: xm_python

nxlog-ruby_5.0.5874_ubuntu16_amd64.deb
  Input: im_ruby
  Output: om_ruby
  Extension: xm_ruby

nxlog-systemd_5.0.5874_ubuntu16_amd64.deb
  Input: im_systemd

nxlog-wseventing_5.0.5874_ubuntu16_amd64.deb
  Input: im_wseventing

nxlog-zmq_5.0.5874_ubuntu16_amd64.deb
  Input: im_zmq
  Output: om_zmq

4.5.20. Ubuntu 18.04


Table 24. Available Modules in nxlog-5.0.5874_ubuntu18_amd64.tar.bz2

nxlog-5.0.5874_ubuntu18_amd64.deb
  Input:     im_acct, im_azure, im_batchcompress, im_exec, im_file, im_fim, im_go, im_http, im_internal, im_kernel, im_linuxaudit, im_mark, im_null, im_pipe, im_redis, im_ssl, im_tcp, im_testgen, im_udp, im_uds
  Output:    om_batchcompress, om_blocker, om_elasticsearch, om_eventdb, om_exec, om_file, om_go, om_http, om_null, om_pipe, om_raijin, om_redis, om_ssl, om_tcp, om_udp, om_udpspoof, om_uds, om_webhdfs
  Processor: pm_blocker, pm_buffer, pm_evcorr, pm_filter, pm_hmac, pm_hmac_check, pm_norepeat, pm_null, pm_pattern, pm_transformer, pm_ts
  Extension: xm_admin, xm_aixaudit, xm_asl, xm_cef, xm_charconv, xm_crypto, xm_csv, xm_exec, xm_filelist, xm_fileop, xm_gelf, xm_go, xm_grok, xm_json, xm_kvp, xm_leef, xm_msdns, xm_multiline, xm_netflow, xm_nps, xm_pattern, xm_resolver, xm_rewrite, xm_snmp, xm_soapadmin, xm_syslog, xm_w3c, xm_wtmp, xm_xml, xm_zlib

nxlog-checkpoint_5.0.5874_ubuntu18_amd64.deb
  Input: im_checkpoint

nxlog-dbi_5.0.5874_ubuntu18_amd64.deb
  Input: im_dbi
  Output: om_dbi

nxlog-java_5.0.5874_ubuntu18_amd64.deb
  Input: im_java
  Output: om_java
  Extension: xm_java

nxlog-kafka_5.0.5874_ubuntu18_amd64.deb
  Input: im_kafka
  Output: om_kafka

nxlog-odbc_5.0.5874_ubuntu18_amd64.deb
  Input: im_odbc
  Output: om_odbc

nxlog-pcap_5.0.5874_ubuntu18_amd64.deb
  Input: im_pcap

nxlog-perl_5.0.5874_ubuntu18_amd64.deb
  Input: im_perl
  Output: om_perl
  Extension: xm_perl

nxlog-python_5.0.5874_ubuntu18_amd64.deb
  Input: im_python
  Output: om_python
  Extension: xm_python

nxlog-ruby_5.0.5874_ubuntu18_amd64.deb
  Input: im_ruby
  Output: om_ruby
  Extension: xm_ruby

nxlog-systemd_5.0.5874_ubuntu18_amd64.deb
  Input: im_systemd

nxlog-wseventing_5.0.5874_ubuntu18_amd64.deb
  Input: im_wseventing

nxlog-zmq_5.0.5874_ubuntu18_amd64.deb
  Input: im_zmq
  Output: om_zmq

4.5.21. Ubuntu 20.04


Table 25. Available Modules in nxlog-5.0.5874_ubuntu20_amd64.tar.bz2

nxlog-5.0.5874_ubuntu20_amd64.deb
  Input:     im_acct, im_azure, im_batchcompress, im_exec, im_file, im_fim, im_go, im_http, im_internal, im_kernel, im_linuxaudit, im_mark, im_null, im_pipe, im_redis, im_ssl, im_tcp, im_testgen, im_udp, im_uds
  Output:    om_batchcompress, om_blocker, om_elasticsearch, om_eventdb, om_exec, om_file, om_go, om_http, om_null, om_pipe, om_raijin, om_redis, om_ssl, om_tcp, om_udp, om_udpspoof, om_uds, om_webhdfs
  Processor: pm_blocker, pm_buffer, pm_evcorr, pm_filter, pm_hmac, pm_hmac_check, pm_norepeat, pm_null, pm_pattern, pm_transformer, pm_ts
  Extension: xm_admin, xm_aixaudit, xm_asl, xm_cef, xm_charconv, xm_crypto, xm_csv, xm_exec, xm_filelist, xm_fileop, xm_gelf, xm_go, xm_grok, xm_json, xm_kvp, xm_leef, xm_msdns, xm_multiline, xm_netflow, xm_nps, xm_pattern, xm_resolver, xm_rewrite, xm_snmp, xm_soapadmin, xm_syslog, xm_w3c, xm_wtmp, xm_xml, xm_zlib

nxlog-checkpoint_5.0.5874_ubuntu20_amd64.deb
  Input: im_checkpoint

nxlog-dbi_5.0.5874_ubuntu20_amd64.deb
  Input: im_dbi
  Output: om_dbi

nxlog-java_5.0.5874_ubuntu20_amd64.deb
  Input: im_java
  Output: om_java
  Extension: xm_java

nxlog-kafka_5.0.5874_ubuntu20_amd64.deb
  Input: im_kafka
  Output: om_kafka

nxlog-odbc_5.0.5874_ubuntu20_amd64.deb
  Input: im_odbc
  Output: om_odbc

nxlog-pcap_5.0.5874_ubuntu20_amd64.deb
  Input: im_pcap

nxlog-perl_5.0.5874_ubuntu20_amd64.deb
  Input: im_perl
  Output: om_perl
  Extension: xm_perl

nxlog-python_5.0.5874_ubuntu20_amd64.deb
  Input: im_python
  Output: om_python
  Extension: xm_python

nxlog-ruby_5.0.5874_ubuntu20_amd64.deb
  Input: im_ruby
  Output: om_ruby
  Extension: xm_ruby

nxlog-systemd_5.0.5874_ubuntu20_amd64.deb
  Input: im_systemd

nxlog-wseventing_5.0.5874_ubuntu20_amd64.deb
  Input: im_wseventing

nxlog-zmq_5.0.5874_ubuntu20_amd64.deb
  Input: im_zmq
  Output: om_zmq
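
Modules shipped in a separate add-on package (for example om_kafka in the nxlog-kafka package) are loaded with the Module directive exactly like modules from the core package, once the add-on package for the platform is installed. A minimal sketch, with example broker and topic values rather than defaults:

```
<Output out_kafka>
    # Requires the platform's nxlog-kafka package (Enterprise Edition only)
    Module     om_kafka
    BrokerList localhost:9092
    Topic      nxlog
</Output>
```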

4.6. Modules by Package


Table 26. Input Modules

im_acct nxlog-5.0.5874-1.aix7.1.ppc.rpm (AIX 7.1)
nxlog-5.0.5874_amzn2_aarch64.rpm (AmazonLinux 2)
nxlog-5.0.5874_rhel6_x86_64.rpm (CentOS 6, RHEL 6)
nxlog-5.0.5874_rhel7_x86_64.rpm (CentOS 7, RHEL 7)
nxlog-5.0.5874_rhel8_x86_64.rpm (CentOS 8, RHEL 8)
nxlog-5.0.5874_generic_deb_amd64.deb (DEB Generic)
nxlog-5.0.5874_debian10_amd64.deb (Debian Buster)
nxlog-5.0.5874_debian8_amd64.deb (Debian Jessie)
nxlog-5.0.5874_debian9_amd64.deb (Debian Stretch)
nxlog-5.0.5874_fbsd_x86_64.tgz (FreeBSD 11)
nxlog-5.0.5874_macos.pkg (MacOS)
nxlog-5.0.5874_generic_x86_64.rpm (RPM Generic)
nxlog-5.0.5874_sles12_x86_64.rpm (SLES 12)
nxlog-5.0.5874_sles15_x86_64.rpm (SLES 15)
nxlog-5.0.5874_solaris_x86.pkg.gz (Solaris 10 i386)
nxlog-5.0.5874_solaris_sparc.pkg.gz (Solaris 10 sparc)
nxlog-5.0.5874_ubuntu16_amd64.deb (Ubuntu 16.04)
nxlog-5.0.5874_ubuntu18_amd64.deb (Ubuntu 18.04)
nxlog-5.0.5874_ubuntu20_amd64.deb (Ubuntu 20.04)

im_aixaudit nxlog-5.0.5874-1.aix7.1.ppc.rpm (AIX 7.1)

im_azure nxlog-5.0.5874-1.aix7.1.ppc.rpm (AIX 7.1)


nxlog-5.0.5874_amzn2_aarch64.rpm (AmazonLinux 2)
nxlog-5.0.5874_rhel6_x86_64.rpm (CentOS 6, RHEL 6)
nxlog-5.0.5874_rhel7_x86_64.rpm (CentOS 7, RHEL 7)
nxlog-5.0.5874_rhel8_x86_64.rpm (CentOS 8, RHEL 8)
nxlog-5.0.5874_generic_deb_amd64.deb (DEB Generic)
nxlog-5.0.5874_debian10_amd64.deb (Debian Buster)
nxlog-5.0.5874_debian8_amd64.deb (Debian Jessie)
nxlog-5.0.5874_debian9_amd64.deb (Debian Stretch)
nxlog-5.0.5874_fbsd_x86_64.tgz (FreeBSD 11)
nxlog-5.0.5874_macos.pkg (MacOS)
nxlog-5.0.5874_windows_x64.msi (Microsoft Windows 64bit)
nxlog-5.0.5874_nano.zip (Microsoft Windows Nano)
nxlog-5.0.5874_generic_x86_64.rpm (RPM Generic)
nxlog-5.0.5874_sles12_x86_64.rpm (SLES 12)
nxlog-5.0.5874_sles15_x86_64.rpm (SLES 15)
nxlog-5.0.5874_solaris_x86.pkg.gz (Solaris 10 i386)
nxlog-5.0.5874_solaris_sparc.pkg.gz (Solaris 10 sparc)
nxlog-5.0.5874_ubuntu16_amd64.deb (Ubuntu 16.04)
nxlog-5.0.5874_ubuntu18_amd64.deb (Ubuntu 18.04)
nxlog-5.0.5874_ubuntu20_amd64.deb (Ubuntu 20.04)

im_batchcompress nxlog-5.0.5874-1.aix7.1.ppc.rpm (AIX 7.1)
nxlog-5.0.5874_amzn2_aarch64.rpm (AmazonLinux 2)
nxlog-5.0.5874_rhel6_x86_64.rpm (CentOS 6, RHEL 6)
nxlog-5.0.5874_rhel7_x86_64.rpm (CentOS 7, RHEL 7)
nxlog-5.0.5874_rhel8_x86_64.rpm (CentOS 8, RHEL 8)
nxlog-5.0.5874_generic_deb_amd64.deb (DEB Generic)
nxlog-5.0.5874_debian10_amd64.deb (Debian Buster)
nxlog-5.0.5874_debian8_amd64.deb (Debian Jessie)
nxlog-5.0.5874_debian9_amd64.deb (Debian Stretch)
nxlog-5.0.5874_fbsd_x86_64.tgz (FreeBSD 11)
nxlog-5.0.5874_macos.pkg (MacOS)
nxlog-5.0.5874_windows_x64.msi (Microsoft Windows 64bit)
nxlog-5.0.5874_nano.zip (Microsoft Windows Nano)
nxlog-5.0.5874_generic_x86_64.rpm (RPM Generic)
nxlog-5.0.5874_sles12_x86_64.rpm (SLES 12)
nxlog-5.0.5874_sles15_x86_64.rpm (SLES 15)
nxlog-5.0.5874_solaris_x86.pkg.gz (Solaris 10 i386)
nxlog-5.0.5874_solaris_sparc.pkg.gz (Solaris 10 sparc)
nxlog-5.0.5874_ubuntu16_amd64.deb (Ubuntu 16.04)
nxlog-5.0.5874_ubuntu18_amd64.deb (Ubuntu 18.04)
nxlog-5.0.5874_ubuntu20_amd64.deb (Ubuntu 20.04)

im_bsm nxlog-5.0.5874_fbsd_x86_64.tgz (FreeBSD 11)


nxlog-5.0.5874_macos.pkg (MacOS)
nxlog-5.0.5874_solaris_x86.pkg.gz (Solaris 10 i386)
nxlog-5.0.5874_solaris_sparc.pkg.gz (Solaris 10 sparc)

im_checkpoint nxlog-checkpoint-5.0.5874_rhel6_x86_64.rpm (CentOS 6, RHEL 6)


nxlog-checkpoint-5.0.5874_rhel7_x86_64.rpm (CentOS 7, RHEL 7)
nxlog-checkpoint-5.0.5874_rhel8_x86_64.rpm (CentOS 8, RHEL 8)
nxlog-checkpoint_5.0.5874_debian10_amd64.deb (Debian Buster)
nxlog-checkpoint_5.0.5874_debian8_amd64.deb (Debian Jessie)
nxlog-checkpoint_5.0.5874_debian9_amd64.deb (Debian Stretch)
nxlog-checkpoint-5.0.5874_sles12_x86_64.rpm (SLES 12)
nxlog-checkpoint-5.0.5874_sles15_x86_64.rpm (SLES 15)
nxlog-checkpoint_5.0.5874_ubuntu16_amd64.deb (Ubuntu 16.04)
nxlog-checkpoint_5.0.5874_ubuntu18_amd64.deb (Ubuntu 18.04)
nxlog-checkpoint_5.0.5874_ubuntu20_amd64.deb (Ubuntu 20.04)

im_dbi nxlog-dbi-5.0.5874_amzn2_aarch64.rpm (AmazonLinux 2)


nxlog-dbi-5.0.5874_rhel6_x86_64.rpm (CentOS 6, RHEL 6)
nxlog-dbi-5.0.5874_rhel7_x86_64.rpm (CentOS 7, RHEL 7)
nxlog-dbi_5.0.5874_debian10_amd64.deb (Debian Buster)
nxlog-dbi_5.0.5874_debian8_amd64.deb (Debian Jessie)
nxlog-dbi_5.0.5874_debian9_amd64.deb (Debian Stretch)
nxlog-dbi-5.0.5874_sles12_x86_64.rpm (SLES 12)
nxlog-dbi-5.0.5874_sles15_x86_64.rpm (SLES 15)
nxlog-dbi_5.0.5874_ubuntu16_amd64.deb (Ubuntu 16.04)
nxlog-dbi_5.0.5874_ubuntu18_amd64.deb (Ubuntu 18.04)
nxlog-dbi_5.0.5874_ubuntu20_amd64.deb (Ubuntu 20.04)

im_etw nxlog-5.0.5874_windows_x64.msi (Microsoft Windows 64bit)
nxlog-5.0.5874_nano.zip (Microsoft Windows Nano)

im_exec nxlog-5.0.5874-1.aix7.1.ppc.rpm (AIX 7.1)
nxlog-5.0.5874_amzn2_aarch64.rpm (AmazonLinux 2)
nxlog-5.0.5874_rhel6_x86_64.rpm (CentOS 6, RHEL 6)
nxlog-5.0.5874_rhel7_x86_64.rpm (CentOS 7, RHEL 7)
nxlog-5.0.5874_rhel8_x86_64.rpm (CentOS 8, RHEL 8)
nxlog-5.0.5874_generic_deb_amd64.deb (DEB Generic)
nxlog-5.0.5874_debian10_amd64.deb (Debian Buster)
nxlog-5.0.5874_debian8_amd64.deb (Debian Jessie)
nxlog-5.0.5874_debian9_amd64.deb (Debian Stretch)
nxlog-5.0.5874_fbsd_x86_64.tgz (FreeBSD 11)
nxlog-5.0.5874_macos.pkg (MacOS)
nxlog-5.0.5874_windows_x64.msi (Microsoft Windows 64bit)
nxlog-5.0.5874_nano.zip (Microsoft Windows Nano)
nxlog-5.0.5874_generic_x86_64.rpm (RPM Generic)
nxlog-5.0.5874_sles12_x86_64.rpm (SLES 12)
nxlog-5.0.5874_sles15_x86_64.rpm (SLES 15)
nxlog-5.0.5874_solaris_x86.pkg.gz (Solaris 10 i386)
nxlog-5.0.5874_solaris_sparc.pkg.gz (Solaris 10 sparc)
nxlog-5.0.5874_ubuntu16_amd64.deb (Ubuntu 16.04)
nxlog-5.0.5874_ubuntu18_amd64.deb (Ubuntu 18.04)
nxlog-5.0.5874_ubuntu20_amd64.deb (Ubuntu 20.04)

im_file nxlog-5.0.5874-1.aix7.1.ppc.rpm (AIX 7.1)
nxlog-5.0.5874_amzn2_aarch64.rpm (AmazonLinux 2)
nxlog-5.0.5874_rhel6_x86_64.rpm (CentOS 6, RHEL 6)
nxlog-5.0.5874_rhel7_x86_64.rpm (CentOS 7, RHEL 7)
nxlog-5.0.5874_rhel8_x86_64.rpm (CentOS 8, RHEL 8)
nxlog-5.0.5874_generic_deb_amd64.deb (DEB Generic)
nxlog-5.0.5874_debian10_amd64.deb (Debian Buster)
nxlog-5.0.5874_debian8_amd64.deb (Debian Jessie)
nxlog-5.0.5874_debian9_amd64.deb (Debian Stretch)
nxlog-5.0.5874_fbsd_x86_64.tgz (FreeBSD 11)
nxlog-5.0.5874_macos.pkg (MacOS)
nxlog-5.0.5874_windows_x64.msi (Microsoft Windows 64bit)
nxlog-5.0.5874_nano.zip (Microsoft Windows Nano)
nxlog-5.0.5874_generic_x86_64.rpm (RPM Generic)
nxlog-5.0.5874_sles12_x86_64.rpm (SLES 12)
nxlog-5.0.5874_sles15_x86_64.rpm (SLES 15)
nxlog-5.0.5874_solaris_x86.pkg.gz (Solaris 10 i386)
nxlog-5.0.5874_solaris_sparc.pkg.gz (Solaris 10 sparc)
nxlog-5.0.5874_ubuntu16_amd64.deb (Ubuntu 16.04)
nxlog-5.0.5874_ubuntu18_amd64.deb (Ubuntu 18.04)
nxlog-5.0.5874_ubuntu20_amd64.deb (Ubuntu 20.04)

im_fim nxlog-5.0.5874-1.aix7.1.ppc.rpm (AIX 7.1)
nxlog-5.0.5874_amzn2_aarch64.rpm (AmazonLinux 2)
nxlog-5.0.5874_rhel6_x86_64.rpm (CentOS 6, RHEL 6)
nxlog-5.0.5874_rhel7_x86_64.rpm (CentOS 7, RHEL 7)
nxlog-5.0.5874_rhel8_x86_64.rpm (CentOS 8, RHEL 8)
nxlog-5.0.5874_generic_deb_amd64.deb (DEB Generic)
nxlog-5.0.5874_debian10_amd64.deb (Debian Buster)
nxlog-5.0.5874_debian8_amd64.deb (Debian Jessie)
nxlog-5.0.5874_debian9_amd64.deb (Debian Stretch)
nxlog-5.0.5874_fbsd_x86_64.tgz (FreeBSD 11)
nxlog-5.0.5874_macos.pkg (MacOS)
nxlog-5.0.5874_windows_x64.msi (Microsoft Windows 64bit)
nxlog-5.0.5874_nano.zip (Microsoft Windows Nano)
nxlog-5.0.5874_generic_x86_64.rpm (RPM Generic)
nxlog-5.0.5874_sles12_x86_64.rpm (SLES 12)
nxlog-5.0.5874_sles15_x86_64.rpm (SLES 15)
nxlog-5.0.5874_solaris_x86.pkg.gz (Solaris 10 i386)
nxlog-5.0.5874_solaris_sparc.pkg.gz (Solaris 10 sparc)
nxlog-5.0.5874_ubuntu16_amd64.deb (Ubuntu 16.04)
nxlog-5.0.5874_ubuntu18_amd64.deb (Ubuntu 18.04)
nxlog-5.0.5874_ubuntu20_amd64.deb (Ubuntu 20.04)

im_go nxlog-5.0.5874-1.aix7.1.ppc.rpm (AIX 7.1)
nxlog-5.0.5874_amzn2_aarch64.rpm (AmazonLinux 2)
nxlog-5.0.5874_rhel6_x86_64.rpm (CentOS 6, RHEL 6)
nxlog-5.0.5874_rhel7_x86_64.rpm (CentOS 7, RHEL 7)
nxlog-5.0.5874_rhel8_x86_64.rpm (CentOS 8, RHEL 8)
nxlog-5.0.5874_generic_deb_amd64.deb (DEB Generic)
nxlog-5.0.5874_debian10_amd64.deb (Debian Buster)
nxlog-5.0.5874_debian8_amd64.deb (Debian Jessie)
nxlog-5.0.5874_debian9_amd64.deb (Debian Stretch)
nxlog-5.0.5874_fbsd_x86_64.tgz (FreeBSD 11)
nxlog-5.0.5874_macos.pkg (MacOS)
nxlog-5.0.5874_windows_x64.msi (Microsoft Windows 64bit)
nxlog-5.0.5874_nano.zip (Microsoft Windows Nano)
nxlog-5.0.5874_generic_x86_64.rpm (RPM Generic)
nxlog-5.0.5874_sles12_x86_64.rpm (SLES 12)
nxlog-5.0.5874_sles15_x86_64.rpm (SLES 15)
nxlog-5.0.5874_solaris_x86.pkg.gz (Solaris 10 i386)
nxlog-5.0.5874_solaris_sparc.pkg.gz (Solaris 10 sparc)
nxlog-5.0.5874_ubuntu16_amd64.deb (Ubuntu 16.04)
nxlog-5.0.5874_ubuntu18_amd64.deb (Ubuntu 18.04)
nxlog-5.0.5874_ubuntu20_amd64.deb (Ubuntu 20.04)

im_http nxlog-5.0.5874-1.aix7.1.ppc.rpm (AIX 7.1)
nxlog-5.0.5874_amzn2_aarch64.rpm (AmazonLinux 2)
nxlog-5.0.5874_rhel6_x86_64.rpm (CentOS 6, RHEL 6)
nxlog-5.0.5874_rhel7_x86_64.rpm (CentOS 7, RHEL 7)
nxlog-5.0.5874_rhel8_x86_64.rpm (CentOS 8, RHEL 8)
nxlog-5.0.5874_generic_deb_amd64.deb (DEB Generic)
nxlog-5.0.5874_debian10_amd64.deb (Debian Buster)
nxlog-5.0.5874_debian8_amd64.deb (Debian Jessie)
nxlog-5.0.5874_debian9_amd64.deb (Debian Stretch)
nxlog-5.0.5874_fbsd_x86_64.tgz (FreeBSD 11)
nxlog-5.0.5874_macos.pkg (MacOS)
nxlog-5.0.5874_windows_x64.msi (Microsoft Windows 64bit)
nxlog-5.0.5874_nano.zip (Microsoft Windows Nano)
nxlog-5.0.5874_generic_x86_64.rpm (RPM Generic)
nxlog-5.0.5874_sles12_x86_64.rpm (SLES 12)
nxlog-5.0.5874_sles15_x86_64.rpm (SLES 15)
nxlog-5.0.5874_solaris_x86.pkg.gz (Solaris 10 i386)
nxlog-5.0.5874_solaris_sparc.pkg.gz (Solaris 10 sparc)
nxlog-5.0.5874_ubuntu16_amd64.deb (Ubuntu 16.04)
nxlog-5.0.5874_ubuntu18_amd64.deb (Ubuntu 18.04)
nxlog-5.0.5874_ubuntu20_amd64.deb (Ubuntu 20.04)

im_internal nxlog-5.0.5874-1.aix7.1.ppc.rpm (AIX 7.1)
nxlog-5.0.5874_amzn2_aarch64.rpm (AmazonLinux 2)
nxlog-5.0.5874_rhel6_x86_64.rpm (CentOS 6, RHEL 6)
nxlog-5.0.5874_rhel7_x86_64.rpm (CentOS 7, RHEL 7)
nxlog-5.0.5874_rhel8_x86_64.rpm (CentOS 8, RHEL 8)
nxlog-5.0.5874_generic_deb_amd64.deb (DEB Generic)
nxlog-5.0.5874_debian10_amd64.deb (Debian Buster)
nxlog-5.0.5874_debian8_amd64.deb (Debian Jessie)
nxlog-5.0.5874_debian9_amd64.deb (Debian Stretch)
nxlog-5.0.5874_fbsd_x86_64.tgz (FreeBSD 11)
nxlog-5.0.5874_macos.pkg (MacOS)
nxlog-5.0.5874_windows_x64.msi (Microsoft Windows 64bit)
nxlog-5.0.5874_nano.zip (Microsoft Windows Nano)
nxlog-5.0.5874_generic_x86_64.rpm (RPM Generic)
nxlog-5.0.5874_sles12_x86_64.rpm (SLES 12)
nxlog-5.0.5874_sles15_x86_64.rpm (SLES 15)
nxlog-5.0.5874_solaris_x86.pkg.gz (Solaris 10 i386)
nxlog-5.0.5874_solaris_sparc.pkg.gz (Solaris 10 sparc)
nxlog-5.0.5874_ubuntu16_amd64.deb (Ubuntu 16.04)
nxlog-5.0.5874_ubuntu18_amd64.deb (Ubuntu 18.04)
nxlog-5.0.5874_ubuntu20_amd64.deb (Ubuntu 20.04)

im_java nxlog-5.0.5874-1.aix7.1.ppc.rpm (AIX 7.1)
nxlog-java-5.0.5874_amzn2_aarch64.rpm (AmazonLinux 2)
nxlog-java-5.0.5874_rhel6_x86_64.rpm (CentOS 6, RHEL 6)
nxlog-java-5.0.5874_rhel7_x86_64.rpm (CentOS 7, RHEL 7)
nxlog-java-5.0.5874_rhel8_x86_64.rpm (CentOS 8, RHEL 8)
nxlog-5.0.5874_generic_deb_amd64.deb (DEB Generic)
nxlog-java_5.0.5874_debian10_amd64.deb (Debian Buster)
nxlog-java_5.0.5874_debian8_amd64.deb (Debian Jessie)
nxlog-java_5.0.5874_debian9_amd64.deb (Debian Stretch)
nxlog-5.0.5874_fbsd_x86_64.tgz (FreeBSD 11)
nxlog-5.0.5874_macos.pkg (MacOS)
nxlog-5.0.5874_windows_x64.msi (Microsoft Windows 64bit)
nxlog-5.0.5874_nano.zip (Microsoft Windows Nano)
nxlog-5.0.5874_generic_x86_64.rpm (RPM Generic)
nxlog-java-5.0.5874_sles12_x86_64.rpm (SLES 12)
nxlog-java-5.0.5874_sles15_x86_64.rpm (SLES 15)
nxlog-5.0.5874_solaris_x86.pkg.gz (Solaris 10 i386)
nxlog-5.0.5874_solaris_sparc.pkg.gz (Solaris 10 sparc)
nxlog-java_5.0.5874_ubuntu16_amd64.deb (Ubuntu 16.04)
nxlog-java_5.0.5874_ubuntu18_amd64.deb (Ubuntu 18.04)
nxlog-java_5.0.5874_ubuntu20_amd64.deb (Ubuntu 20.04)

im_kafka nxlog-kafka-5.0.5874_amzn2_aarch64.rpm (AmazonLinux 2)
nxlog-kafka-5.0.5874_rhel6_x86_64.rpm (CentOS 6, RHEL 6)
nxlog-kafka-5.0.5874_rhel7_x86_64.rpm (CentOS 7, RHEL 7)
nxlog-kafka-5.0.5874_rhel8_x86_64.rpm (CentOS 8, RHEL 8)
nxlog-5.0.5874_generic_deb_amd64.deb (DEB Generic)
nxlog-kafka_5.0.5874_debian10_amd64.deb (Debian Buster)
nxlog-kafka_5.0.5874_debian8_amd64.deb (Debian Jessie)
nxlog-kafka_5.0.5874_debian9_amd64.deb (Debian Stretch)
nxlog-5.0.5874_macos.pkg (MacOS)
nxlog-5.0.5874_windows_x64.msi (Microsoft Windows 64bit)
nxlog-5.0.5874_nano.zip (Microsoft Windows Nano)
nxlog-5.0.5874_generic_x86_64.rpm (RPM Generic)
nxlog-kafka-5.0.5874_sles12_x86_64.rpm (SLES 12)
nxlog-kafka-5.0.5874_sles15_x86_64.rpm (SLES 15)
nxlog-kafka_5.0.5874_ubuntu16_amd64.deb (Ubuntu 16.04)
nxlog-kafka_5.0.5874_ubuntu18_amd64.deb (Ubuntu 18.04)
nxlog-kafka_5.0.5874_ubuntu20_amd64.deb (Ubuntu 20.04)

im_kernel nxlog-5.0.5874_amzn2_aarch64.rpm (AmazonLinux 2)
nxlog-5.0.5874_rhel6_x86_64.rpm (CentOS 6, RHEL 6)
nxlog-5.0.5874_rhel7_x86_64.rpm (CentOS 7, RHEL 7)
nxlog-5.0.5874_rhel8_x86_64.rpm (CentOS 8, RHEL 8)
nxlog-5.0.5874_generic_deb_amd64.deb (DEB Generic)
nxlog-5.0.5874_debian10_amd64.deb (Debian Buster)
nxlog-5.0.5874_debian8_amd64.deb (Debian Jessie)
nxlog-5.0.5874_debian9_amd64.deb (Debian Stretch)
nxlog-5.0.5874_fbsd_x86_64.tgz (FreeBSD 11)
nxlog-5.0.5874_macos.pkg (MacOS)
nxlog-5.0.5874_generic_x86_64.rpm (RPM Generic)
nxlog-5.0.5874_sles12_x86_64.rpm (SLES 12)
nxlog-5.0.5874_sles15_x86_64.rpm (SLES 15)
nxlog-5.0.5874_ubuntu16_amd64.deb (Ubuntu 16.04)
nxlog-5.0.5874_ubuntu18_amd64.deb (Ubuntu 18.04)
nxlog-5.0.5874_ubuntu20_amd64.deb (Ubuntu 20.04)

im_linuxaudit nxlog-5.0.5874_amzn2_aarch64.rpm (AmazonLinux 2)
nxlog-5.0.5874_rhel6_x86_64.rpm (CentOS 6, RHEL 6)
nxlog-5.0.5874_rhel7_x86_64.rpm (CentOS 7, RHEL 7)
nxlog-5.0.5874_rhel8_x86_64.rpm (CentOS 8, RHEL 8)
nxlog-5.0.5874_generic_deb_amd64.deb (DEB Generic)
nxlog-5.0.5874_debian10_amd64.deb (Debian Buster)
nxlog-5.0.5874_debian8_amd64.deb (Debian Jessie)
nxlog-5.0.5874_debian9_amd64.deb (Debian Stretch)
nxlog-5.0.5874_generic_x86_64.rpm (RPM Generic)
nxlog-5.0.5874_sles12_x86_64.rpm (SLES 12)
nxlog-5.0.5874_sles15_x86_64.rpm (SLES 15)
nxlog-5.0.5874_ubuntu16_amd64.deb (Ubuntu 16.04)
nxlog-5.0.5874_ubuntu18_amd64.deb (Ubuntu 18.04)
nxlog-5.0.5874_ubuntu20_amd64.deb (Ubuntu 20.04)

im_mark nxlog-5.0.5874-1.aix7.1.ppc.rpm (AIX 7.1)
nxlog-5.0.5874_amzn2_aarch64.rpm (AmazonLinux 2)
nxlog-5.0.5874_rhel6_x86_64.rpm (CentOS 6, RHEL 6)
nxlog-5.0.5874_rhel7_x86_64.rpm (CentOS 7, RHEL 7)
nxlog-5.0.5874_rhel8_x86_64.rpm (CentOS 8, RHEL 8)
nxlog-5.0.5874_generic_deb_amd64.deb (DEB Generic)
nxlog-5.0.5874_debian10_amd64.deb (Debian Buster)
nxlog-5.0.5874_debian8_amd64.deb (Debian Jessie)
nxlog-5.0.5874_debian9_amd64.deb (Debian Stretch)
nxlog-5.0.5874_fbsd_x86_64.tgz (FreeBSD 11)
nxlog-5.0.5874_macos.pkg (MacOS)
nxlog-5.0.5874_windows_x64.msi (Microsoft Windows 64bit)
nxlog-5.0.5874_nano.zip (Microsoft Windows Nano)
nxlog-5.0.5874_generic_x86_64.rpm (RPM Generic)
nxlog-5.0.5874_sles12_x86_64.rpm (SLES 12)
nxlog-5.0.5874_sles15_x86_64.rpm (SLES 15)
nxlog-5.0.5874_solaris_x86.pkg.gz (Solaris 10 i386)
nxlog-5.0.5874_solaris_sparc.pkg.gz (Solaris 10 sparc)
nxlog-5.0.5874_ubuntu16_amd64.deb (Ubuntu 16.04)
nxlog-5.0.5874_ubuntu18_amd64.deb (Ubuntu 18.04)
nxlog-5.0.5874_ubuntu20_amd64.deb (Ubuntu 20.04)

im_mseventlog nxlog-5.0.5874_windows_x64.msi (Microsoft Windows 64bit)
nxlog-5.0.5874_nano.zip (Microsoft Windows Nano)

im_msvistalog nxlog-5.0.5874_windows_x64.msi (Microsoft Windows 64bit)
nxlog-5.0.5874_nano.zip (Microsoft Windows Nano)

im_null nxlog-5.0.5874-1.aix7.1.ppc.rpm (AIX 7.1)
nxlog-5.0.5874_amzn2_aarch64.rpm (AmazonLinux 2)
nxlog-5.0.5874_rhel6_x86_64.rpm (CentOS 6, RHEL 6)
nxlog-5.0.5874_rhel7_x86_64.rpm (CentOS 7, RHEL 7)
nxlog-5.0.5874_rhel8_x86_64.rpm (CentOS 8, RHEL 8)
nxlog-5.0.5874_generic_deb_amd64.deb (DEB Generic)
nxlog-5.0.5874_debian10_amd64.deb (Debian Buster)
nxlog-5.0.5874_debian8_amd64.deb (Debian Jessie)
nxlog-5.0.5874_debian9_amd64.deb (Debian Stretch)
nxlog-5.0.5874_fbsd_x86_64.tgz (FreeBSD 11)
nxlog-5.0.5874_macos.pkg (MacOS)
nxlog-5.0.5874_windows_x64.msi (Microsoft Windows 64bit)
nxlog-5.0.5874_nano.zip (Microsoft Windows Nano)
nxlog-5.0.5874_generic_x86_64.rpm (RPM Generic)
nxlog-5.0.5874_sles12_x86_64.rpm (SLES 12)
nxlog-5.0.5874_sles15_x86_64.rpm (SLES 15)
nxlog-5.0.5874_solaris_x86.pkg.gz (Solaris 10 i386)
nxlog-5.0.5874_solaris_sparc.pkg.gz (Solaris 10 sparc)
nxlog-5.0.5874_ubuntu16_amd64.deb (Ubuntu 16.04)
nxlog-5.0.5874_ubuntu18_amd64.deb (Ubuntu 18.04)
nxlog-5.0.5874_ubuntu20_amd64.deb (Ubuntu 20.04)

im_odbc nxlog-odbc-5.0.5874_amzn2_aarch64.rpm (AmazonLinux 2)
nxlog-odbc-5.0.5874_rhel6_x86_64.rpm (CentOS 6, RHEL 6)
nxlog-odbc-5.0.5874_rhel7_x86_64.rpm (CentOS 7, RHEL 7)
nxlog-odbc-5.0.5874_rhel8_x86_64.rpm (CentOS 8, RHEL 8)
nxlog-odbc_5.0.5874_debian10_amd64.deb (Debian Buster)
nxlog-odbc_5.0.5874_debian8_amd64.deb (Debian Jessie)
nxlog-odbc_5.0.5874_debian9_amd64.deb (Debian Stretch)
nxlog-5.0.5874_windows_x64.msi (Microsoft Windows 64bit)
nxlog-5.0.5874_nano.zip (Microsoft Windows Nano)
nxlog-odbc-5.0.5874_sles12_x86_64.rpm (SLES 12)
nxlog-odbc-5.0.5874_sles15_x86_64.rpm (SLES 15)
nxlog-odbc_5.0.5874_ubuntu16_amd64.deb (Ubuntu 16.04)
nxlog-odbc_5.0.5874_ubuntu18_amd64.deb (Ubuntu 18.04)
nxlog-odbc_5.0.5874_ubuntu20_amd64.deb (Ubuntu 20.04)

im_pcap nxlog-pcap-5.0.5874_amzn2_aarch64.rpm (AmazonLinux 2)
nxlog-pcap-5.0.5874_rhel7_x86_64.rpm (CentOS 7, RHEL 7)
nxlog-pcap-5.0.5874_rhel8_x86_64.rpm (CentOS 8, RHEL 8)
nxlog-pcap_5.0.5874_debian10_amd64.deb (Debian Buster)
nxlog-pcap_5.0.5874_debian8_amd64.deb (Debian Jessie)
nxlog-pcap_5.0.5874_debian9_amd64.deb (Debian Stretch)
nxlog-5.0.5874_fbsd_x86_64.tgz (FreeBSD 11)
nxlog-5.0.5874_macos.pkg (MacOS)
nxlog-pcap-5.0.5874_sles12_x86_64.rpm (SLES 12)
nxlog-pcap-5.0.5874_sles15_x86_64.rpm (SLES 15)
nxlog-pcap_5.0.5874_ubuntu16_amd64.deb (Ubuntu 16.04)
nxlog-pcap_5.0.5874_ubuntu18_amd64.deb (Ubuntu 18.04)
nxlog-pcap_5.0.5874_ubuntu20_amd64.deb (Ubuntu 20.04)

im_perl nxlog-perl-5.0.5874_amzn2_aarch64.rpm (AmazonLinux 2)
nxlog-perl-5.0.5874_rhel6_x86_64.rpm (CentOS 6, RHEL 6)
nxlog-perl-5.0.5874_rhel7_x86_64.rpm (CentOS 7, RHEL 7)
nxlog-perl-5.0.5874_rhel8_x86_64.rpm (CentOS 8, RHEL 8)
nxlog-perl_5.0.5874_debian10_amd64.deb (Debian Buster)
nxlog-perl_5.0.5874_debian8_amd64.deb (Debian Jessie)
nxlog-perl_5.0.5874_debian9_amd64.deb (Debian Stretch)
nxlog-5.0.5874_windows_x64.msi (Microsoft Windows 64bit)
nxlog-5.0.5874_nano.zip (Microsoft Windows Nano)
nxlog-perl-5.0.5874_sles12_x86_64.rpm (SLES 12)
nxlog-perl-5.0.5874_sles15_x86_64.rpm (SLES 15)
nxlog-perl_5.0.5874_ubuntu16_amd64.deb (Ubuntu 16.04)
nxlog-perl_5.0.5874_ubuntu18_amd64.deb (Ubuntu 18.04)
nxlog-perl_5.0.5874_ubuntu20_amd64.deb (Ubuntu 20.04)

im_pipe nxlog-5.0.5874-1.aix7.1.ppc.rpm (AIX 7.1)
nxlog-5.0.5874_amzn2_aarch64.rpm (AmazonLinux 2)
nxlog-5.0.5874_rhel6_x86_64.rpm (CentOS 6, RHEL 6)
nxlog-5.0.5874_rhel7_x86_64.rpm (CentOS 7, RHEL 7)
nxlog-5.0.5874_rhel8_x86_64.rpm (CentOS 8, RHEL 8)
nxlog-5.0.5874_generic_deb_amd64.deb (DEB Generic)
nxlog-5.0.5874_debian10_amd64.deb (Debian Buster)
nxlog-5.0.5874_debian8_amd64.deb (Debian Jessie)
nxlog-5.0.5874_debian9_amd64.deb (Debian Stretch)
nxlog-5.0.5874_fbsd_x86_64.tgz (FreeBSD 11)
nxlog-5.0.5874_macos.pkg (MacOS)
nxlog-5.0.5874_generic_x86_64.rpm (RPM Generic)
nxlog-5.0.5874_sles12_x86_64.rpm (SLES 12)
nxlog-5.0.5874_sles15_x86_64.rpm (SLES 15)
nxlog-5.0.5874_solaris_x86.pkg.gz (Solaris 10 i386)
nxlog-5.0.5874_solaris_sparc.pkg.gz (Solaris 10 sparc)
nxlog-5.0.5874_ubuntu16_amd64.deb (Ubuntu 16.04)
nxlog-5.0.5874_ubuntu18_amd64.deb (Ubuntu 18.04)
nxlog-5.0.5874_ubuntu20_amd64.deb (Ubuntu 20.04)

im_python nxlog-python-5.0.5874_amzn2_aarch64.rpm (AmazonLinux 2)
nxlog-python-5.0.5874_rhel7_x86_64.rpm (CentOS 7, RHEL 7)
nxlog-python-5.0.5874_rhel8_x86_64.rpm (CentOS 8, RHEL 8)
nxlog-python_5.0.5874_debian10_amd64.deb (Debian Buster)
nxlog-python_5.0.5874_debian8_amd64.deb (Debian Jessie)
nxlog-python_5.0.5874_debian9_amd64.deb (Debian Stretch)
nxlog-python-5.0.5874_sles12_x86_64.rpm (SLES 12)
nxlog-python-5.0.5874_sles15_x86_64.rpm (SLES 15)
nxlog-python_5.0.5874_ubuntu16_amd64.deb (Ubuntu 16.04)
nxlog-python_5.0.5874_ubuntu18_amd64.deb (Ubuntu 18.04)
nxlog-python_5.0.5874_ubuntu20_amd64.deb (Ubuntu 20.04)

im_redis nxlog-5.0.5874-1.aix7.1.ppc.rpm (AIX 7.1)
nxlog-5.0.5874_amzn2_aarch64.rpm (AmazonLinux 2)
nxlog-5.0.5874_rhel6_x86_64.rpm (CentOS 6, RHEL 6)
nxlog-5.0.5874_rhel7_x86_64.rpm (CentOS 7, RHEL 7)
nxlog-5.0.5874_rhel8_x86_64.rpm (CentOS 8, RHEL 8)
nxlog-5.0.5874_generic_deb_amd64.deb (DEB Generic)
nxlog-5.0.5874_debian10_amd64.deb (Debian Buster)
nxlog-5.0.5874_debian8_amd64.deb (Debian Jessie)
nxlog-5.0.5874_debian9_amd64.deb (Debian Stretch)
nxlog-5.0.5874_fbsd_x86_64.tgz (FreeBSD 11)
nxlog-5.0.5874_macos.pkg (MacOS)
nxlog-5.0.5874_windows_x64.msi (Microsoft Windows 64bit)
nxlog-5.0.5874_nano.zip (Microsoft Windows Nano)
nxlog-5.0.5874_generic_x86_64.rpm (RPM Generic)
nxlog-5.0.5874_sles12_x86_64.rpm (SLES 12)
nxlog-5.0.5874_sles15_x86_64.rpm (SLES 15)
nxlog-5.0.5874_solaris_x86.pkg.gz (Solaris 10 i386)
nxlog-5.0.5874_solaris_sparc.pkg.gz (Solaris 10 sparc)
nxlog-5.0.5874_ubuntu16_amd64.deb (Ubuntu 16.04)
nxlog-5.0.5874_ubuntu18_amd64.deb (Ubuntu 18.04)
nxlog-5.0.5874_ubuntu20_amd64.deb (Ubuntu 20.04)

im_regmon nxlog-5.0.5874_windows_x64.msi (Microsoft Windows 64bit)
nxlog-5.0.5874_nano.zip (Microsoft Windows Nano)

im_ruby nxlog-ruby-5.0.5874_amzn2_aarch64.rpm (AmazonLinux 2)
nxlog-ruby-5.0.5874_rhel7_x86_64.rpm (CentOS 7, RHEL 7)
nxlog-ruby-5.0.5874_rhel8_x86_64.rpm (CentOS 8, RHEL 8)
nxlog-ruby_5.0.5874_debian10_amd64.deb (Debian Buster)
nxlog-ruby_5.0.5874_debian8_amd64.deb (Debian Jessie)
nxlog-ruby_5.0.5874_debian9_amd64.deb (Debian Stretch)
nxlog-ruby-5.0.5874_sles15_x86_64.rpm (SLES 15)
nxlog-ruby_5.0.5874_ubuntu16_amd64.deb (Ubuntu 16.04)
nxlog-ruby_5.0.5874_ubuntu18_amd64.deb (Ubuntu 18.04)
nxlog-ruby_5.0.5874_ubuntu20_amd64.deb (Ubuntu 20.04)

im_ssl nxlog-5.0.5874-1.aix7.1.ppc.rpm (AIX 7.1)
nxlog-5.0.5874_amzn2_aarch64.rpm (AmazonLinux 2)
nxlog-5.0.5874_rhel6_x86_64.rpm (CentOS 6, RHEL 6)
nxlog-5.0.5874_rhel7_x86_64.rpm (CentOS 7, RHEL 7)
nxlog-5.0.5874_rhel8_x86_64.rpm (CentOS 8, RHEL 8)
nxlog-5.0.5874_generic_deb_amd64.deb (DEB Generic)
nxlog-5.0.5874_debian10_amd64.deb (Debian Buster)
nxlog-5.0.5874_debian8_amd64.deb (Debian Jessie)
nxlog-5.0.5874_debian9_amd64.deb (Debian Stretch)
nxlog-5.0.5874_fbsd_x86_64.tgz (FreeBSD 11)
nxlog-5.0.5874_macos.pkg (MacOS)
nxlog-5.0.5874_windows_x64.msi (Microsoft Windows 64bit)
nxlog-5.0.5874_nano.zip (Microsoft Windows Nano)
nxlog-5.0.5874_generic_x86_64.rpm (RPM Generic)
nxlog-5.0.5874_sles12_x86_64.rpm (SLES 12)
nxlog-5.0.5874_sles15_x86_64.rpm (SLES 15)
nxlog-5.0.5874_solaris_x86.pkg.gz (Solaris 10 i386)
nxlog-5.0.5874_solaris_sparc.pkg.gz (Solaris 10 sparc)
nxlog-5.0.5874_ubuntu16_amd64.deb (Ubuntu 16.04)
nxlog-5.0.5874_ubuntu18_amd64.deb (Ubuntu 18.04)
nxlog-5.0.5874_ubuntu20_amd64.deb (Ubuntu 20.04)

im_systemd nxlog-systemd-5.0.5874_amzn2_aarch64.rpm (AmazonLinux 2)
nxlog-systemd-5.0.5874_rhel7_x86_64.rpm (CentOS 7, RHEL 7)
nxlog-systemd-5.0.5874_rhel8_x86_64.rpm (CentOS 8, RHEL 8)
nxlog-systemd_5.0.5874_debian10_amd64.deb (Debian Buster)
nxlog-systemd_5.0.5874_debian8_amd64.deb (Debian Jessie)
nxlog-systemd_5.0.5874_debian9_amd64.deb (Debian Stretch)
nxlog-systemd-5.0.5874_sles12_x86_64.rpm (SLES 12)
nxlog-systemd-5.0.5874_sles15_x86_64.rpm (SLES 15)
nxlog-systemd_5.0.5874_ubuntu16_amd64.deb (Ubuntu 16.04)
nxlog-systemd_5.0.5874_ubuntu18_amd64.deb (Ubuntu 18.04)
nxlog-systemd_5.0.5874_ubuntu20_amd64.deb (Ubuntu 20.04)

im_tcp nxlog-5.0.5874-1.aix7.1.ppc.rpm (AIX 7.1)
nxlog-5.0.5874_amzn2_aarch64.rpm (AmazonLinux 2)
nxlog-5.0.5874_rhel6_x86_64.rpm (CentOS 6, RHEL 6)
nxlog-5.0.5874_rhel7_x86_64.rpm (CentOS 7, RHEL 7)
nxlog-5.0.5874_rhel8_x86_64.rpm (CentOS 8, RHEL 8)
nxlog-5.0.5874_generic_deb_amd64.deb (DEB Generic)
nxlog-5.0.5874_debian10_amd64.deb (Debian Buster)
nxlog-5.0.5874_debian8_amd64.deb (Debian Jessie)
nxlog-5.0.5874_debian9_amd64.deb (Debian Stretch)
nxlog-5.0.5874_fbsd_x86_64.tgz (FreeBSD 11)
nxlog-5.0.5874_macos.pkg (MacOS)
nxlog-5.0.5874_windows_x64.msi (Microsoft Windows 64bit)
nxlog-5.0.5874_nano.zip (Microsoft Windows Nano)
nxlog-5.0.5874_generic_x86_64.rpm (RPM Generic)
nxlog-5.0.5874_sles12_x86_64.rpm (SLES 12)
nxlog-5.0.5874_sles15_x86_64.rpm (SLES 15)
nxlog-5.0.5874_solaris_x86.pkg.gz (Solaris 10 i386)
nxlog-5.0.5874_solaris_sparc.pkg.gz (Solaris 10 sparc)
nxlog-5.0.5874_ubuntu16_amd64.deb (Ubuntu 16.04)
nxlog-5.0.5874_ubuntu18_amd64.deb (Ubuntu 18.04)
nxlog-5.0.5874_ubuntu20_amd64.deb (Ubuntu 20.04)

im_testgen nxlog-5.0.5874-1.aix7.1.ppc.rpm (AIX 7.1)
nxlog-5.0.5874_amzn2_aarch64.rpm (AmazonLinux 2)
nxlog-5.0.5874_rhel6_x86_64.rpm (CentOS 6, RHEL 6)
nxlog-5.0.5874_rhel7_x86_64.rpm (CentOS 7, RHEL 7)
nxlog-5.0.5874_rhel8_x86_64.rpm (CentOS 8, RHEL 8)
nxlog-5.0.5874_generic_deb_amd64.deb (DEB Generic)
nxlog-5.0.5874_debian10_amd64.deb (Debian Buster)
nxlog-5.0.5874_debian8_amd64.deb (Debian Jessie)
nxlog-5.0.5874_debian9_amd64.deb (Debian Stretch)
nxlog-5.0.5874_fbsd_x86_64.tgz (FreeBSD 11)
nxlog-5.0.5874_macos.pkg (MacOS)
nxlog-5.0.5874_windows_x64.msi (Microsoft Windows 64bit)
nxlog-5.0.5874_nano.zip (Microsoft Windows Nano)
nxlog-5.0.5874_generic_x86_64.rpm (RPM Generic)
nxlog-5.0.5874_sles12_x86_64.rpm (SLES 12)
nxlog-5.0.5874_sles15_x86_64.rpm (SLES 15)
nxlog-5.0.5874_solaris_x86.pkg.gz (Solaris 10 i386)
nxlog-5.0.5874_solaris_sparc.pkg.gz (Solaris 10 sparc)
nxlog-5.0.5874_ubuntu16_amd64.deb (Ubuntu 16.04)
nxlog-5.0.5874_ubuntu18_amd64.deb (Ubuntu 18.04)
nxlog-5.0.5874_ubuntu20_amd64.deb (Ubuntu 20.04)

im_udp nxlog-5.0.5874-1.aix7.1.ppc.rpm (AIX 7.1)
nxlog-5.0.5874_amzn2_aarch64.rpm (AmazonLinux 2)
nxlog-5.0.5874_rhel6_x86_64.rpm (CentOS 6, RHEL 6)
nxlog-5.0.5874_rhel7_x86_64.rpm (CentOS 7, RHEL 7)
nxlog-5.0.5874_rhel8_x86_64.rpm (CentOS 8, RHEL 8)
nxlog-5.0.5874_generic_deb_amd64.deb (DEB Generic)
nxlog-5.0.5874_debian10_amd64.deb (Debian Buster)
nxlog-5.0.5874_debian8_amd64.deb (Debian Jessie)
nxlog-5.0.5874_debian9_amd64.deb (Debian Stretch)
nxlog-5.0.5874_fbsd_x86_64.tgz (FreeBSD 11)
nxlog-5.0.5874_macos.pkg (MacOS)
nxlog-5.0.5874_windows_x64.msi (Microsoft Windows 64bit)
nxlog-5.0.5874_nano.zip (Microsoft Windows Nano)
nxlog-5.0.5874_generic_x86_64.rpm (RPM Generic)
nxlog-5.0.5874_sles12_x86_64.rpm (SLES 12)
nxlog-5.0.5874_sles15_x86_64.rpm (SLES 15)
nxlog-5.0.5874_solaris_x86.pkg.gz (Solaris 10 i386)
nxlog-5.0.5874_solaris_sparc.pkg.gz (Solaris 10 sparc)
nxlog-5.0.5874_ubuntu16_amd64.deb (Ubuntu 16.04)
nxlog-5.0.5874_ubuntu18_amd64.deb (Ubuntu 18.04)
nxlog-5.0.5874_ubuntu20_amd64.deb (Ubuntu 20.04)

im_uds nxlog-5.0.5874-1.aix7.1.ppc.rpm (AIX 7.1)
nxlog-5.0.5874_amzn2_aarch64.rpm (AmazonLinux 2)
nxlog-5.0.5874_rhel6_x86_64.rpm (CentOS 6, RHEL 6)
nxlog-5.0.5874_rhel7_x86_64.rpm (CentOS 7, RHEL 7)
nxlog-5.0.5874_rhel8_x86_64.rpm (CentOS 8, RHEL 8)
nxlog-5.0.5874_generic_deb_amd64.deb (DEB Generic)
nxlog-5.0.5874_debian10_amd64.deb (Debian Buster)
nxlog-5.0.5874_debian8_amd64.deb (Debian Jessie)
nxlog-5.0.5874_debian9_amd64.deb (Debian Stretch)
nxlog-5.0.5874_fbsd_x86_64.tgz (FreeBSD 11)
nxlog-5.0.5874_macos.pkg (MacOS)
nxlog-5.0.5874_generic_x86_64.rpm (RPM Generic)
nxlog-5.0.5874_sles12_x86_64.rpm (SLES 12)
nxlog-5.0.5874_sles15_x86_64.rpm (SLES 15)
nxlog-5.0.5874_solaris_x86.pkg.gz (Solaris 10 i386)
nxlog-5.0.5874_solaris_sparc.pkg.gz (Solaris 10 sparc)
nxlog-5.0.5874_ubuntu16_amd64.deb (Ubuntu 16.04)
nxlog-5.0.5874_ubuntu18_amd64.deb (Ubuntu 18.04)
nxlog-5.0.5874_ubuntu20_amd64.deb (Ubuntu 20.04)

im_winperfcount nxlog-5.0.5874_windows_x64.msi (Microsoft Windows 64bit)
nxlog-5.0.5874_nano.zip (Microsoft Windows Nano)

im_wseventing nxlog-5.0.5874-1.aix7.1.ppc.rpm (AIX 7.1)
nxlog-wseventing-5.0.5874_amzn2_aarch64.rpm (AmazonLinux 2)
nxlog-wseventing-5.0.5874_rhel6_x86_64.rpm (CentOS 6, RHEL 6)
nxlog-wseventing-5.0.5874_rhel7_x86_64.rpm (CentOS 7, RHEL 7)
nxlog-wseventing-5.0.5874_rhel8_x86_64.rpm (CentOS 8, RHEL 8)
nxlog-5.0.5874_generic_deb_amd64.deb (DEB Generic)
nxlog-wseventing_5.0.5874_debian10_amd64.deb (Debian Buster)
nxlog-wseventing_5.0.5874_debian8_amd64.deb (Debian Jessie)
nxlog-wseventing_5.0.5874_debian9_amd64.deb (Debian Stretch)
nxlog-5.0.5874_fbsd_x86_64.tgz (FreeBSD 11)
nxlog-5.0.5874_macos.pkg (MacOS)
nxlog-5.0.5874_windows_x64.msi (Microsoft Windows 64bit)
nxlog-5.0.5874_nano.zip (Microsoft Windows Nano)
nxlog-5.0.5874_generic_x86_64.rpm (RPM Generic)
nxlog-wseventing-5.0.5874_sles12_x86_64.rpm (SLES 12)
nxlog-wseventing-5.0.5874_sles15_x86_64.rpm (SLES 15)
nxlog-5.0.5874_solaris_x86.pkg.gz (Solaris 10 i386)
nxlog-5.0.5874_solaris_sparc.pkg.gz (Solaris 10 sparc)
nxlog-wseventing_5.0.5874_ubuntu16_amd64.deb (Ubuntu 16.04)
nxlog-wseventing_5.0.5874_ubuntu18_amd64.deb (Ubuntu 18.04)
nxlog-wseventing_5.0.5874_ubuntu20_amd64.deb (Ubuntu 20.04)

im_zmq nxlog-zmq-5.0.5874_amzn2_aarch64.rpm (AmazonLinux 2)
nxlog-zmq-5.0.5874_rhel7_x86_64.rpm (CentOS 7, RHEL 7)
nxlog-zmq-5.0.5874_rhel8_x86_64.rpm (CentOS 8, RHEL 8)
nxlog-zmq_5.0.5874_debian10_amd64.deb (Debian Buster)
nxlog-zmq_5.0.5874_debian8_amd64.deb (Debian Jessie)
nxlog-zmq_5.0.5874_debian9_amd64.deb (Debian Stretch)
nxlog-zmq-5.0.5874_sles12_x86_64.rpm (SLES 12)
nxlog-zmq-5.0.5874_sles15_x86_64.rpm (SLES 15)
nxlog-zmq_5.0.5874_ubuntu16_amd64.deb (Ubuntu 16.04)
nxlog-zmq_5.0.5874_ubuntu18_amd64.deb (Ubuntu 18.04)
nxlog-zmq_5.0.5874_ubuntu20_amd64.deb (Ubuntu 20.04)

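Each module listed above is activated by naming it in a Module directive inside the NXLog configuration. As a minimal sketch (the file paths below are illustrative examples, not taken from this guide), a route pairing the widely available im_file input with the om_file output looks like this:

```
<Input messages>
    Module  im_file
    # Illustrative source path; point this at a real log file
    File    "/var/log/messages"
</Input>

<Output outfile>
    Module  om_file
    # Illustrative destination path
    File    "/var/log/collected.log"
</Output>

<Route r1>
    Path    messages => outfile
</Route>
```

Modules that ship in a separate add-on package (for example nxlog-kafka for im_kafka) must have that package installed before the corresponding Module directive can load.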
Table 27. Output Modules

om_batchcompress nxlog-5.0.5874-1.aix7.1.ppc.rpm (AIX 7.1)
nxlog-5.0.5874_amzn2_aarch64.rpm (AmazonLinux 2)
nxlog-5.0.5874_rhel6_x86_64.rpm (CentOS 6, RHEL 6)
nxlog-5.0.5874_rhel7_x86_64.rpm (CentOS 7, RHEL 7)
nxlog-5.0.5874_rhel8_x86_64.rpm (CentOS 8, RHEL 8)
nxlog-5.0.5874_generic_deb_amd64.deb (DEB Generic)
nxlog-5.0.5874_debian10_amd64.deb (Debian Buster)
nxlog-5.0.5874_debian8_amd64.deb (Debian Jessie)
nxlog-5.0.5874_debian9_amd64.deb (Debian Stretch)
nxlog-5.0.5874_fbsd_x86_64.tgz (FreeBSD 11)
nxlog-5.0.5874_macos.pkg (MacOS)
nxlog-5.0.5874_windows_x64.msi (Microsoft Windows 64bit)
nxlog-5.0.5874_nano.zip (Microsoft Windows Nano)
nxlog-5.0.5874_generic_x86_64.rpm (RPM Generic)
nxlog-5.0.5874_sles12_x86_64.rpm (SLES 12)
nxlog-5.0.5874_sles15_x86_64.rpm (SLES 15)
nxlog-5.0.5874_solaris_x86.pkg.gz (Solaris 10 i386)
nxlog-5.0.5874_solaris_sparc.pkg.gz (Solaris 10 sparc)
nxlog-5.0.5874_ubuntu16_amd64.deb (Ubuntu 16.04)
nxlog-5.0.5874_ubuntu18_amd64.deb (Ubuntu 18.04)
nxlog-5.0.5874_ubuntu20_amd64.deb (Ubuntu 20.04)

om_blocker nxlog-5.0.5874-1.aix7.1.ppc.rpm (AIX 7.1)
nxlog-5.0.5874_amzn2_aarch64.rpm (AmazonLinux 2)
nxlog-5.0.5874_rhel6_x86_64.rpm (CentOS 6, RHEL 6)
nxlog-5.0.5874_rhel7_x86_64.rpm (CentOS 7, RHEL 7)
nxlog-5.0.5874_rhel8_x86_64.rpm (CentOS 8, RHEL 8)
nxlog-5.0.5874_generic_deb_amd64.deb (DEB Generic)
nxlog-5.0.5874_debian10_amd64.deb (Debian Buster)
nxlog-5.0.5874_debian8_amd64.deb (Debian Jessie)
nxlog-5.0.5874_debian9_amd64.deb (Debian Stretch)
nxlog-5.0.5874_fbsd_x86_64.tgz (FreeBSD 11)
nxlog-5.0.5874_macos.pkg (MacOS)
nxlog-5.0.5874_windows_x64.msi (Microsoft Windows 64bit)
nxlog-5.0.5874_nano.zip (Microsoft Windows Nano)
nxlog-5.0.5874_generic_x86_64.rpm (RPM Generic)
nxlog-5.0.5874_sles12_x86_64.rpm (SLES 12)
nxlog-5.0.5874_sles15_x86_64.rpm (SLES 15)
nxlog-5.0.5874_solaris_x86.pkg.gz (Solaris 10 i386)
nxlog-5.0.5874_solaris_sparc.pkg.gz (Solaris 10 sparc)
nxlog-5.0.5874_ubuntu16_amd64.deb (Ubuntu 16.04)
nxlog-5.0.5874_ubuntu18_amd64.deb (Ubuntu 18.04)
nxlog-5.0.5874_ubuntu20_amd64.deb (Ubuntu 20.04)

om_dbi nxlog-dbi-5.0.5874_amzn2_aarch64.rpm (AmazonLinux 2)
nxlog-dbi-5.0.5874_rhel6_x86_64.rpm (CentOS 6, RHEL 6)
nxlog-dbi-5.0.5874_rhel7_x86_64.rpm (CentOS 7, RHEL 7)
nxlog-dbi_5.0.5874_debian10_amd64.deb (Debian Buster)
nxlog-dbi_5.0.5874_debian8_amd64.deb (Debian Jessie)
nxlog-dbi_5.0.5874_debian9_amd64.deb (Debian Stretch)
nxlog-dbi-5.0.5874_sles12_x86_64.rpm (SLES 12)
nxlog-dbi-5.0.5874_sles15_x86_64.rpm (SLES 15)
nxlog-dbi_5.0.5874_ubuntu16_amd64.deb (Ubuntu 16.04)
nxlog-dbi_5.0.5874_ubuntu18_amd64.deb (Ubuntu 18.04)
nxlog-dbi_5.0.5874_ubuntu20_amd64.deb (Ubuntu 20.04)

om_elasticsearch nxlog-5.0.5874-1.aix7.1.ppc.rpm (AIX 7.1)
nxlog-5.0.5874_amzn2_aarch64.rpm (AmazonLinux 2)
nxlog-5.0.5874_rhel6_x86_64.rpm (CentOS 6, RHEL 6)
nxlog-5.0.5874_rhel7_x86_64.rpm (CentOS 7, RHEL 7)
nxlog-5.0.5874_rhel8_x86_64.rpm (CentOS 8, RHEL 8)
nxlog-5.0.5874_generic_deb_amd64.deb (DEB Generic)
nxlog-5.0.5874_debian10_amd64.deb (Debian Buster)
nxlog-5.0.5874_debian8_amd64.deb (Debian Jessie)
nxlog-5.0.5874_debian9_amd64.deb (Debian Stretch)
nxlog-5.0.5874_fbsd_x86_64.tgz (FreeBSD 11)
nxlog-5.0.5874_macos.pkg (MacOS)
nxlog-5.0.5874_windows_x64.msi (Microsoft Windows 64bit)
nxlog-5.0.5874_nano.zip (Microsoft Windows Nano)
nxlog-5.0.5874_generic_x86_64.rpm (RPM Generic)
nxlog-5.0.5874_sles12_x86_64.rpm (SLES 12)
nxlog-5.0.5874_sles15_x86_64.rpm (SLES 15)
nxlog-5.0.5874_solaris_x86.pkg.gz (Solaris 10 i386)
nxlog-5.0.5874_solaris_sparc.pkg.gz (Solaris 10 sparc)
nxlog-5.0.5874_ubuntu16_amd64.deb (Ubuntu 16.04)
nxlog-5.0.5874_ubuntu18_amd64.deb (Ubuntu 18.04)
nxlog-5.0.5874_ubuntu20_amd64.deb (Ubuntu 20.04)

om_eventdb nxlog-5.0.5874_amzn2_aarch64.rpm (AmazonLinux 2)
nxlog-5.0.5874_rhel6_x86_64.rpm (CentOS 6, RHEL 6)
nxlog-5.0.5874_rhel7_x86_64.rpm (CentOS 7, RHEL 7)
nxlog-5.0.5874_rhel8_x86_64.rpm (CentOS 8, RHEL 8)
nxlog-5.0.5874_generic_deb_amd64.deb (DEB Generic)
nxlog-5.0.5874_debian10_amd64.deb (Debian Buster)
nxlog-5.0.5874_debian8_amd64.deb (Debian Jessie)
nxlog-5.0.5874_debian9_amd64.deb (Debian Stretch)
nxlog-5.0.5874_generic_x86_64.rpm (RPM Generic)
nxlog-5.0.5874_sles12_x86_64.rpm (SLES 12)
nxlog-5.0.5874_sles15_x86_64.rpm (SLES 15)
nxlog-5.0.5874_ubuntu16_amd64.deb (Ubuntu 16.04)
nxlog-5.0.5874_ubuntu18_amd64.deb (Ubuntu 18.04)
nxlog-5.0.5874_ubuntu20_amd64.deb (Ubuntu 20.04)

om_exec nxlog-5.0.5874-1.aix7.1.ppc.rpm (AIX 7.1)
nxlog-5.0.5874_amzn2_aarch64.rpm (AmazonLinux 2)
nxlog-5.0.5874_rhel6_x86_64.rpm (CentOS 6, RHEL 6)
nxlog-5.0.5874_rhel7_x86_64.rpm (CentOS 7, RHEL 7)
nxlog-5.0.5874_rhel8_x86_64.rpm (CentOS 8, RHEL 8)
nxlog-5.0.5874_generic_deb_amd64.deb (DEB Generic)
nxlog-5.0.5874_debian10_amd64.deb (Debian Buster)
nxlog-5.0.5874_debian8_amd64.deb (Debian Jessie)
nxlog-5.0.5874_debian9_amd64.deb (Debian Stretch)
nxlog-5.0.5874_fbsd_x86_64.tgz (FreeBSD 11)
nxlog-5.0.5874_macos.pkg (MacOS)
nxlog-5.0.5874_windows_x64.msi (Microsoft Windows 64bit)
nxlog-5.0.5874_nano.zip (Microsoft Windows Nano)
nxlog-5.0.5874_generic_x86_64.rpm (RPM Generic)
nxlog-5.0.5874_sles12_x86_64.rpm (SLES 12)
nxlog-5.0.5874_sles15_x86_64.rpm (SLES 15)
nxlog-5.0.5874_solaris_x86.pkg.gz (Solaris 10 i386)
nxlog-5.0.5874_solaris_sparc.pkg.gz (Solaris 10 sparc)
nxlog-5.0.5874_ubuntu16_amd64.deb (Ubuntu 16.04)
nxlog-5.0.5874_ubuntu18_amd64.deb (Ubuntu 18.04)
nxlog-5.0.5874_ubuntu20_amd64.deb (Ubuntu 20.04)

om_file nxlog-5.0.5874-1.aix7.1.ppc.rpm (AIX 7.1)
nxlog-5.0.5874_amzn2_aarch64.rpm (AmazonLinux 2)
nxlog-5.0.5874_rhel6_x86_64.rpm (CentOS 6, RHEL 6)
nxlog-5.0.5874_rhel7_x86_64.rpm (CentOS 7, RHEL 7)
nxlog-5.0.5874_rhel8_x86_64.rpm (CentOS 8, RHEL 8)
nxlog-5.0.5874_generic_deb_amd64.deb (DEB Generic)
nxlog-5.0.5874_debian10_amd64.deb (Debian Buster)
nxlog-5.0.5874_debian8_amd64.deb (Debian Jessie)
nxlog-5.0.5874_debian9_amd64.deb (Debian Stretch)
nxlog-5.0.5874_fbsd_x86_64.tgz (FreeBSD 11)
nxlog-5.0.5874_macos.pkg (MacOS)
nxlog-5.0.5874_windows_x64.msi (Microsoft Windows 64bit)
nxlog-5.0.5874_nano.zip (Microsoft Windows Nano)
nxlog-5.0.5874_generic_x86_64.rpm (RPM Generic)
nxlog-5.0.5874_sles12_x86_64.rpm (SLES 12)
nxlog-5.0.5874_sles15_x86_64.rpm (SLES 15)
nxlog-5.0.5874_solaris_x86.pkg.gz (Solaris 10 i386)
nxlog-5.0.5874_solaris_sparc.pkg.gz (Solaris 10 sparc)
nxlog-5.0.5874_ubuntu16_amd64.deb (Ubuntu 16.04)
nxlog-5.0.5874_ubuntu18_amd64.deb (Ubuntu 18.04)
nxlog-5.0.5874_ubuntu20_amd64.deb (Ubuntu 20.04)

om_go nxlog-5.0.5874-1.aix7.1.ppc.rpm (AIX 7.1)
nxlog-5.0.5874_amzn2_aarch64.rpm (AmazonLinux 2)
nxlog-5.0.5874_rhel6_x86_64.rpm (CentOS 6, RHEL 6)
nxlog-5.0.5874_rhel7_x86_64.rpm (CentOS 7, RHEL 7)
nxlog-5.0.5874_rhel8_x86_64.rpm (CentOS 8, RHEL 8)
nxlog-5.0.5874_generic_deb_amd64.deb (DEB Generic)
nxlog-5.0.5874_debian10_amd64.deb (Debian Buster)
nxlog-5.0.5874_debian8_amd64.deb (Debian Jessie)
nxlog-5.0.5874_debian9_amd64.deb (Debian Stretch)
nxlog-5.0.5874_fbsd_x86_64.tgz (FreeBSD 11)
nxlog-5.0.5874_macos.pkg (MacOS)
nxlog-5.0.5874_windows_x64.msi (Microsoft Windows 64bit)
nxlog-5.0.5874_nano.zip (Microsoft Windows Nano)
nxlog-5.0.5874_generic_x86_64.rpm (RPM Generic)
nxlog-5.0.5874_sles12_x86_64.rpm (SLES 12)
nxlog-5.0.5874_sles15_x86_64.rpm (SLES 15)
nxlog-5.0.5874_solaris_x86.pkg.gz (Solaris 10 i386)
nxlog-5.0.5874_solaris_sparc.pkg.gz (Solaris 10 sparc)
nxlog-5.0.5874_ubuntu16_amd64.deb (Ubuntu 16.04)
nxlog-5.0.5874_ubuntu18_amd64.deb (Ubuntu 18.04)
nxlog-5.0.5874_ubuntu20_amd64.deb (Ubuntu 20.04)

om_http nxlog-5.0.5874-1.aix7.1.ppc.rpm (AIX 7.1)
nxlog-5.0.5874_amzn2_aarch64.rpm (AmazonLinux 2)
nxlog-5.0.5874_rhel6_x86_64.rpm (CentOS 6, RHEL 6)
nxlog-5.0.5874_rhel7_x86_64.rpm (CentOS 7, RHEL 7)
nxlog-5.0.5874_rhel8_x86_64.rpm (CentOS 8, RHEL 8)
nxlog-5.0.5874_generic_deb_amd64.deb (DEB Generic)
nxlog-5.0.5874_debian10_amd64.deb (Debian Buster)
nxlog-5.0.5874_debian8_amd64.deb (Debian Jessie)
nxlog-5.0.5874_debian9_amd64.deb (Debian Stretch)
nxlog-5.0.5874_fbsd_x86_64.tgz (FreeBSD 11)
nxlog-5.0.5874_macos.pkg (MacOS)
nxlog-5.0.5874_windows_x64.msi (Microsoft Windows 64bit)
nxlog-5.0.5874_nano.zip (Microsoft Windows Nano)
nxlog-5.0.5874_generic_x86_64.rpm (RPM Generic)
nxlog-5.0.5874_sles12_x86_64.rpm (SLES 12)
nxlog-5.0.5874_sles15_x86_64.rpm (SLES 15)
nxlog-5.0.5874_solaris_x86.pkg.gz (Solaris 10 i386)
nxlog-5.0.5874_solaris_sparc.pkg.gz (Solaris 10 sparc)
nxlog-5.0.5874_ubuntu16_amd64.deb (Ubuntu 16.04)
nxlog-5.0.5874_ubuntu18_amd64.deb (Ubuntu 18.04)
nxlog-5.0.5874_ubuntu20_amd64.deb (Ubuntu 20.04)

om_java nxlog-5.0.5874-1.aix7.1.ppc.rpm (AIX 7.1)
nxlog-java-5.0.5874_amzn2_aarch64.rpm (AmazonLinux 2)
nxlog-java-5.0.5874_rhel6_x86_64.rpm (CentOS 6, RHEL 6)
nxlog-java-5.0.5874_rhel7_x86_64.rpm (CentOS 7, RHEL 7)
nxlog-java-5.0.5874_rhel8_x86_64.rpm (CentOS 8, RHEL 8)
nxlog-5.0.5874_generic_deb_amd64.deb (DEB Generic)
nxlog-java_5.0.5874_debian10_amd64.deb (Debian Buster)
nxlog-java_5.0.5874_debian8_amd64.deb (Debian Jessie)
nxlog-java_5.0.5874_debian9_amd64.deb (Debian Stretch)
nxlog-5.0.5874_fbsd_x86_64.tgz (FreeBSD 11)
nxlog-5.0.5874_macos.pkg (MacOS)
nxlog-5.0.5874_windows_x64.msi (Microsoft Windows 64bit)
nxlog-5.0.5874_nano.zip (Microsoft Windows Nano)
nxlog-5.0.5874_generic_x86_64.rpm (RPM Generic)
nxlog-java-5.0.5874_sles12_x86_64.rpm (SLES 12)
nxlog-java-5.0.5874_sles15_x86_64.rpm (SLES 15)
nxlog-5.0.5874_solaris_x86.pkg.gz (Solaris 10 i386)
nxlog-5.0.5874_solaris_sparc.pkg.gz (Solaris 10 sparc)
nxlog-java_5.0.5874_ubuntu16_amd64.deb (Ubuntu 16.04)
nxlog-java_5.0.5874_ubuntu18_amd64.deb (Ubuntu 18.04)
nxlog-java_5.0.5874_ubuntu20_amd64.deb (Ubuntu 20.04)

om_kafka nxlog-kafka-5.0.5874_amzn2_aarch64.rpm (AmazonLinux 2)
nxlog-kafka-5.0.5874_rhel6_x86_64.rpm (CentOS 6, RHEL 6)
nxlog-kafka-5.0.5874_rhel7_x86_64.rpm (CentOS 7, RHEL 7)
nxlog-kafka-5.0.5874_rhel8_x86_64.rpm (CentOS 8, RHEL 8)
nxlog-5.0.5874_generic_deb_amd64.deb (DEB Generic)
nxlog-kafka_5.0.5874_debian10_amd64.deb (Debian Buster)
nxlog-kafka_5.0.5874_debian8_amd64.deb (Debian Jessie)
nxlog-kafka_5.0.5874_debian9_amd64.deb (Debian Stretch)
nxlog-5.0.5874_macos.pkg (MacOS)
nxlog-5.0.5874_windows_x64.msi (Microsoft Windows 64bit)
nxlog-5.0.5874_nano.zip (Microsoft Windows Nano)
nxlog-5.0.5874_generic_x86_64.rpm (RPM Generic)
nxlog-kafka-5.0.5874_sles12_x86_64.rpm (SLES 12)
nxlog-kafka-5.0.5874_sles15_x86_64.rpm (SLES 15)
nxlog-kafka_5.0.5874_ubuntu16_amd64.deb (Ubuntu 16.04)
nxlog-kafka_5.0.5874_ubuntu18_amd64.deb (Ubuntu 18.04)
nxlog-kafka_5.0.5874_ubuntu20_amd64.deb (Ubuntu 20.04)

om_null nxlog-5.0.5874-1.aix7.1.ppc.rpm (AIX 7.1)
nxlog-5.0.5874_amzn2_aarch64.rpm (AmazonLinux 2)
nxlog-5.0.5874_rhel6_x86_64.rpm (CentOS 6, RHEL 6)
nxlog-5.0.5874_rhel7_x86_64.rpm (CentOS 7, RHEL 7)
nxlog-5.0.5874_rhel8_x86_64.rpm (CentOS 8, RHEL 8)
nxlog-5.0.5874_generic_deb_amd64.deb (DEB Generic)
nxlog-5.0.5874_debian10_amd64.deb (Debian Buster)
nxlog-5.0.5874_debian8_amd64.deb (Debian Jessie)
nxlog-5.0.5874_debian9_amd64.deb (Debian Stretch)
nxlog-5.0.5874_fbsd_x86_64.tgz (FreeBSD 11)
nxlog-5.0.5874_macos.pkg (MacOS)
nxlog-5.0.5874_windows_x64.msi (Microsoft Windows 64bit)
nxlog-5.0.5874_nano.zip (Microsoft Windows Nano)
nxlog-5.0.5874_generic_x86_64.rpm (RPM Generic)
nxlog-5.0.5874_sles12_x86_64.rpm (SLES 12)
nxlog-5.0.5874_sles15_x86_64.rpm (SLES 15)
nxlog-5.0.5874_solaris_x86.pkg.gz (Solaris 10 i386)
nxlog-5.0.5874_solaris_sparc.pkg.gz (Solaris 10 sparc)
nxlog-5.0.5874_ubuntu16_amd64.deb (Ubuntu 16.04)
nxlog-5.0.5874_ubuntu18_amd64.deb (Ubuntu 18.04)
nxlog-5.0.5874_ubuntu20_amd64.deb (Ubuntu 20.04)

om_odbc nxlog-odbc-5.0.5874_amzn2_aarch64.rpm (AmazonLinux 2)
nxlog-odbc-5.0.5874_rhel6_x86_64.rpm (CentOS 6, RHEL 6)
nxlog-odbc-5.0.5874_rhel7_x86_64.rpm (CentOS 7, RHEL 7)
nxlog-odbc-5.0.5874_rhel8_x86_64.rpm (CentOS 8, RHEL 8)
nxlog-odbc_5.0.5874_debian10_amd64.deb (Debian Buster)
nxlog-odbc_5.0.5874_debian8_amd64.deb (Debian Jessie)
nxlog-odbc_5.0.5874_debian9_amd64.deb (Debian Stretch)
nxlog-5.0.5874_windows_x64.msi (Microsoft Windows 64bit)
nxlog-5.0.5874_nano.zip (Microsoft Windows Nano)
nxlog-odbc-5.0.5874_sles12_x86_64.rpm (SLES 12)
nxlog-odbc-5.0.5874_sles15_x86_64.rpm (SLES 15)
nxlog-odbc_5.0.5874_ubuntu16_amd64.deb (Ubuntu 16.04)
nxlog-odbc_5.0.5874_ubuntu18_amd64.deb (Ubuntu 18.04)
nxlog-odbc_5.0.5874_ubuntu20_amd64.deb (Ubuntu 20.04)

om_perl nxlog-perl-5.0.5874_amzn2_aarch64.rpm (AmazonLinux 2)
nxlog-perl-5.0.5874_rhel6_x86_64.rpm (CentOS 6, RHEL 6)
nxlog-perl-5.0.5874_rhel7_x86_64.rpm (CentOS 7, RHEL 7)
nxlog-perl-5.0.5874_rhel8_x86_64.rpm (CentOS 8, RHEL 8)
nxlog-perl_5.0.5874_debian10_amd64.deb (Debian Buster)
nxlog-perl_5.0.5874_debian8_amd64.deb (Debian Jessie)
nxlog-perl_5.0.5874_debian9_amd64.deb (Debian Stretch)
nxlog-5.0.5874_windows_x64.msi (Microsoft Windows 64bit)
nxlog-5.0.5874_nano.zip (Microsoft Windows Nano)
nxlog-perl-5.0.5874_sles12_x86_64.rpm (SLES 12)
nxlog-perl-5.0.5874_sles15_x86_64.rpm (SLES 15)
nxlog-perl_5.0.5874_ubuntu16_amd64.deb (Ubuntu 16.04)
nxlog-perl_5.0.5874_ubuntu18_amd64.deb (Ubuntu 18.04)
nxlog-perl_5.0.5874_ubuntu20_amd64.deb (Ubuntu 20.04)

om_pipe nxlog-5.0.5874-1.aix7.1.ppc.rpm (AIX 7.1)
nxlog-5.0.5874_amzn2_aarch64.rpm (AmazonLinux 2)
nxlog-5.0.5874_rhel6_x86_64.rpm (CentOS 6, RHEL 6)
nxlog-5.0.5874_rhel7_x86_64.rpm (CentOS 7, RHEL 7)
nxlog-5.0.5874_rhel8_x86_64.rpm (CentOS 8, RHEL 8)
nxlog-5.0.5874_generic_deb_amd64.deb (DEB Generic)
nxlog-5.0.5874_debian10_amd64.deb (Debian Buster)
nxlog-5.0.5874_debian8_amd64.deb (Debian Jessie)
nxlog-5.0.5874_debian9_amd64.deb (Debian Stretch)
nxlog-5.0.5874_fbsd_x86_64.tgz (FreeBSD 11)
nxlog-5.0.5874_macos.pkg (MacOS)
nxlog-5.0.5874_generic_x86_64.rpm (RPM Generic)
nxlog-5.0.5874_sles12_x86_64.rpm (SLES 12)
nxlog-5.0.5874_sles15_x86_64.rpm (SLES 15)
nxlog-5.0.5874_solaris_x86.pkg.gz (Solaris 10 i386)
nxlog-5.0.5874_solaris_sparc.pkg.gz (Solaris 10 sparc)
nxlog-5.0.5874_ubuntu16_amd64.deb (Ubuntu 16.04)
nxlog-5.0.5874_ubuntu18_amd64.deb (Ubuntu 18.04)
nxlog-5.0.5874_ubuntu20_amd64.deb (Ubuntu 20.04)

om_python nxlog-python-5.0.5874_amzn2_aarch64.rpm (AmazonLinux 2)
nxlog-python-5.0.5874_rhel7_x86_64.rpm (CentOS 7, RHEL 7)
nxlog-python-5.0.5874_rhel8_x86_64.rpm (CentOS 8, RHEL 8)
nxlog-python_5.0.5874_debian10_amd64.deb (Debian Buster)
nxlog-python_5.0.5874_debian8_amd64.deb (Debian Jessie)
nxlog-python_5.0.5874_debian9_amd64.deb (Debian Stretch)
nxlog-python-5.0.5874_sles12_x86_64.rpm (SLES 12)
nxlog-python-5.0.5874_sles15_x86_64.rpm (SLES 15)
nxlog-python_5.0.5874_ubuntu16_amd64.deb (Ubuntu 16.04)
nxlog-python_5.0.5874_ubuntu18_amd64.deb (Ubuntu 18.04)
nxlog-python_5.0.5874_ubuntu20_amd64.deb (Ubuntu 20.04)

om_raijin nxlog-5.0.5874-1.aix7.1.ppc.rpm (AIX 7.1)
nxlog-5.0.5874_amzn2_aarch64.rpm (AmazonLinux 2)
nxlog-5.0.5874_rhel6_x86_64.rpm (CentOS 6, RHEL 6)
nxlog-5.0.5874_rhel7_x86_64.rpm (CentOS 7, RHEL 7)
nxlog-5.0.5874_rhel8_x86_64.rpm (CentOS 8, RHEL 8)
nxlog-5.0.5874_generic_deb_amd64.deb (DEB Generic)
nxlog-5.0.5874_debian10_amd64.deb (Debian Buster)
nxlog-5.0.5874_debian8_amd64.deb (Debian Jessie)
nxlog-5.0.5874_debian9_amd64.deb (Debian Stretch)
nxlog-5.0.5874_fbsd_x86_64.tgz (FreeBSD 11)
nxlog-5.0.5874_macos.pkg (MacOS)
nxlog-5.0.5874_windows_x64.msi (Microsoft Windows 64bit)
nxlog-5.0.5874_nano.zip (Microsoft Windows Nano)
nxlog-5.0.5874_generic_x86_64.rpm (RPM Generic)
nxlog-5.0.5874_sles12_x86_64.rpm (SLES 12)
nxlog-5.0.5874_sles15_x86_64.rpm (SLES 15)
nxlog-5.0.5874_solaris_x86.pkg.gz (Solaris 10 i386)
nxlog-5.0.5874_solaris_sparc.pkg.gz (Solaris 10 sparc)
nxlog-5.0.5874_ubuntu16_amd64.deb (Ubuntu 16.04)
nxlog-5.0.5874_ubuntu18_amd64.deb (Ubuntu 18.04)
nxlog-5.0.5874_ubuntu20_amd64.deb (Ubuntu 20.04)

om_redis nxlog-5.0.5874-1.aix7.1.ppc.rpm (AIX 7.1)
nxlog-5.0.5874_amzn2_aarch64.rpm (AmazonLinux 2)
nxlog-5.0.5874_rhel6_x86_64.rpm (CentOS 6, RHEL 6)
nxlog-5.0.5874_rhel7_x86_64.rpm (CentOS 7, RHEL 7)
nxlog-5.0.5874_rhel8_x86_64.rpm (CentOS 8, RHEL 8)
nxlog-5.0.5874_generic_deb_amd64.deb (DEB Generic)
nxlog-5.0.5874_debian10_amd64.deb (Debian Buster)
nxlog-5.0.5874_debian8_amd64.deb (Debian Jessie)
nxlog-5.0.5874_debian9_amd64.deb (Debian Stretch)
nxlog-5.0.5874_fbsd_x86_64.tgz (FreeBSD 11)
nxlog-5.0.5874_macos.pkg (MacOS)
nxlog-5.0.5874_windows_x64.msi (Microsoft Windows 64bit)
nxlog-5.0.5874_nano.zip (Microsoft Windows Nano)
nxlog-5.0.5874_generic_x86_64.rpm (RPM Generic)
nxlog-5.0.5874_sles12_x86_64.rpm (SLES 12)
nxlog-5.0.5874_sles15_x86_64.rpm (SLES 15)
nxlog-5.0.5874_solaris_x86.pkg.gz (Solaris 10 i386)
nxlog-5.0.5874_solaris_sparc.pkg.gz (Solaris 10 sparc)
nxlog-5.0.5874_ubuntu16_amd64.deb (Ubuntu 16.04)
nxlog-5.0.5874_ubuntu18_amd64.deb (Ubuntu 18.04)
nxlog-5.0.5874_ubuntu20_amd64.deb (Ubuntu 20.04)

om_ruby nxlog-ruby-5.0.5874_amzn2_aarch64.rpm (AmazonLinux 2)
nxlog-ruby-5.0.5874_rhel7_x86_64.rpm (CentOS 7, RHEL 7)
nxlog-ruby-5.0.5874_rhel8_x86_64.rpm (CentOS 8, RHEL 8)
nxlog-ruby_5.0.5874_debian10_amd64.deb (Debian Buster)
nxlog-ruby_5.0.5874_debian8_amd64.deb (Debian Jessie)
nxlog-ruby_5.0.5874_debian9_amd64.deb (Debian Stretch)
nxlog-ruby-5.0.5874_sles15_x86_64.rpm (SLES 15)
nxlog-ruby_5.0.5874_ubuntu16_amd64.deb (Ubuntu 16.04)
nxlog-ruby_5.0.5874_ubuntu18_amd64.deb (Ubuntu 18.04)
nxlog-ruby_5.0.5874_ubuntu20_amd64.deb (Ubuntu 20.04)

om_ssl nxlog-5.0.5874-1.aix7.1.ppc.rpm (AIX 7.1)
nxlog-5.0.5874_amzn2_aarch64.rpm (AmazonLinux 2)
nxlog-5.0.5874_rhel6_x86_64.rpm (CentOS 6, RHEL 6)
nxlog-5.0.5874_rhel7_x86_64.rpm (CentOS 7, RHEL 7)
nxlog-5.0.5874_rhel8_x86_64.rpm (CentOS 8, RHEL 8)
nxlog-5.0.5874_generic_deb_amd64.deb (DEB Generic)
nxlog-5.0.5874_debian10_amd64.deb (Debian Buster)
nxlog-5.0.5874_debian8_amd64.deb (Debian Jessie)
nxlog-5.0.5874_debian9_amd64.deb (Debian Stretch)
nxlog-5.0.5874_fbsd_x86_64.tgz (FreeBSD 11)
nxlog-5.0.5874_macos.pkg (MacOS)
nxlog-5.0.5874_windows_x64.msi (Microsoft Windows 64bit)
nxlog-5.0.5874_nano.zip (Microsoft Windows Nano)
nxlog-5.0.5874_generic_x86_64.rpm (RPM Generic)
nxlog-5.0.5874_sles12_x86_64.rpm (SLES 12)
nxlog-5.0.5874_sles15_x86_64.rpm (SLES 15)
nxlog-5.0.5874_solaris_x86.pkg.gz (Solaris 10 i386)
nxlog-5.0.5874_solaris_sparc.pkg.gz (Solaris 10 sparc)
nxlog-5.0.5874_ubuntu16_amd64.deb (Ubuntu 16.04)
nxlog-5.0.5874_ubuntu18_amd64.deb (Ubuntu 18.04)
nxlog-5.0.5874_ubuntu20_amd64.deb (Ubuntu 20.04)

om_tcp nxlog-5.0.5874-1.aix7.1.ppc.rpm (AIX 7.1)
nxlog-5.0.5874_amzn2_aarch64.rpm (AmazonLinux 2)
nxlog-5.0.5874_rhel6_x86_64.rpm (CentOS 6, RHEL 6)
nxlog-5.0.5874_rhel7_x86_64.rpm (CentOS 7, RHEL 7)
nxlog-5.0.5874_rhel8_x86_64.rpm (CentOS 8, RHEL 8)
nxlog-5.0.5874_generic_deb_amd64.deb (DEB Generic)
nxlog-5.0.5874_debian10_amd64.deb (Debian Buster)
nxlog-5.0.5874_debian8_amd64.deb (Debian Jessie)
nxlog-5.0.5874_debian9_amd64.deb (Debian Stretch)
nxlog-5.0.5874_fbsd_x86_64.tgz (FreeBSD 11)
nxlog-5.0.5874_macos.pkg (MacOS)
nxlog-5.0.5874_windows_x64.msi (Microsoft Windows 64bit)
nxlog-5.0.5874_nano.zip (Microsoft Windows Nano)
nxlog-5.0.5874_generic_x86_64.rpm (RPM Generic)
nxlog-5.0.5874_sles12_x86_64.rpm (SLES 12)
nxlog-5.0.5874_sles15_x86_64.rpm (SLES 15)
nxlog-5.0.5874_solaris_x86.pkg.gz (Solaris 10 i386)
nxlog-5.0.5874_solaris_sparc.pkg.gz (Solaris 10 sparc)
nxlog-5.0.5874_ubuntu16_amd64.deb (Ubuntu 16.04)
nxlog-5.0.5874_ubuntu18_amd64.deb (Ubuntu 18.04)
nxlog-5.0.5874_ubuntu20_amd64.deb (Ubuntu 20.04)

om_udp nxlog-5.0.5874-1.aix7.1.ppc.rpm (AIX 7.1)
nxlog-5.0.5874_amzn2_aarch64.rpm (AmazonLinux 2)
nxlog-5.0.5874_rhel6_x86_64.rpm (CentOS 6, RHEL 6)
nxlog-5.0.5874_rhel7_x86_64.rpm (CentOS 7, RHEL 7)
nxlog-5.0.5874_rhel8_x86_64.rpm (CentOS 8, RHEL 8)
nxlog-5.0.5874_generic_deb_amd64.deb (DEB Generic)
nxlog-5.0.5874_debian10_amd64.deb (Debian Buster)
nxlog-5.0.5874_debian8_amd64.deb (Debian Jessie)
nxlog-5.0.5874_debian9_amd64.deb (Debian Stretch)
nxlog-5.0.5874_fbsd_x86_64.tgz (FreeBSD 11)
nxlog-5.0.5874_macos.pkg (MacOS)
nxlog-5.0.5874_windows_x64.msi (Microsoft Windows 64bit)
nxlog-5.0.5874_nano.zip (Microsoft Windows Nano)
nxlog-5.0.5874_generic_x86_64.rpm (RPM Generic)
nxlog-5.0.5874_sles12_x86_64.rpm (SLES 12)
nxlog-5.0.5874_sles15_x86_64.rpm (SLES 15)
nxlog-5.0.5874_solaris_x86.pkg.gz (Solaris 10 i386)
nxlog-5.0.5874_solaris_sparc.pkg.gz (Solaris 10 sparc)
nxlog-5.0.5874_ubuntu16_amd64.deb (Ubuntu 16.04)
nxlog-5.0.5874_ubuntu18_amd64.deb (Ubuntu 18.04)
nxlog-5.0.5874_ubuntu20_amd64.deb (Ubuntu 20.04)

om_udpspoof nxlog-5.0.5874-1.aix7.1.ppc.rpm (AIX 7.1)
nxlog-5.0.5874_amzn2_aarch64.rpm (AmazonLinux 2)
nxlog-5.0.5874_rhel6_x86_64.rpm (CentOS 6, RHEL 6)
nxlog-5.0.5874_rhel7_x86_64.rpm (CentOS 7, RHEL 7)
nxlog-5.0.5874_rhel8_x86_64.rpm (CentOS 8, RHEL 8)
nxlog-5.0.5874_generic_deb_amd64.deb (DEB Generic)
nxlog-5.0.5874_debian10_amd64.deb (Debian Buster)
nxlog-5.0.5874_debian8_amd64.deb (Debian Jessie)
nxlog-5.0.5874_debian9_amd64.deb (Debian Stretch)
nxlog-5.0.5874_fbsd_x86_64.tgz (FreeBSD 11)
nxlog-5.0.5874_macos.pkg (MacOS)
nxlog-5.0.5874_windows_x64.msi (Microsoft Windows 64bit)
nxlog-5.0.5874_nano.zip (Microsoft Windows Nano)
nxlog-5.0.5874_generic_x86_64.rpm (RPM Generic)
nxlog-5.0.5874_sles12_x86_64.rpm (SLES 12)
nxlog-5.0.5874_sles15_x86_64.rpm (SLES 15)
nxlog-5.0.5874_solaris_x86.pkg.gz (Solaris 10 i386)
nxlog-5.0.5874_solaris_sparc.pkg.gz (Solaris 10 sparc)
nxlog-5.0.5874_ubuntu16_amd64.deb (Ubuntu 16.04)
nxlog-5.0.5874_ubuntu18_amd64.deb (Ubuntu 18.04)
nxlog-5.0.5874_ubuntu20_amd64.deb (Ubuntu 20.04)

om_uds nxlog-5.0.5874-1.aix7.1.ppc.rpm (AIX 7.1)
nxlog-5.0.5874_amzn2_aarch64.rpm (AmazonLinux 2)
nxlog-5.0.5874_rhel6_x86_64.rpm (CentOS 6, RHEL 6)
nxlog-5.0.5874_rhel7_x86_64.rpm (CentOS 7, RHEL 7)
nxlog-5.0.5874_rhel8_x86_64.rpm (CentOS 8, RHEL 8)
nxlog-5.0.5874_generic_deb_amd64.deb (DEB Generic)
nxlog-5.0.5874_debian10_amd64.deb (Debian Buster)
nxlog-5.0.5874_debian8_amd64.deb (Debian Jessie)
nxlog-5.0.5874_debian9_amd64.deb (Debian Stretch)
nxlog-5.0.5874_fbsd_x86_64.tgz (FreeBSD 11)
nxlog-5.0.5874_macos.pkg (MacOS)
nxlog-5.0.5874_generic_x86_64.rpm (RPM Generic)
nxlog-5.0.5874_sles12_x86_64.rpm (SLES 12)
nxlog-5.0.5874_sles15_x86_64.rpm (SLES 15)
nxlog-5.0.5874_solaris_x86.pkg.gz (Solaris 10 i386)
nxlog-5.0.5874_solaris_sparc.pkg.gz (Solaris 10 sparc)
nxlog-5.0.5874_ubuntu16_amd64.deb (Ubuntu 16.04)
nxlog-5.0.5874_ubuntu18_amd64.deb (Ubuntu 18.04)
nxlog-5.0.5874_ubuntu20_amd64.deb (Ubuntu 20.04)

om_webhdfs nxlog-5.0.5874-1.aix7.1.ppc.rpm (AIX 7.1)
nxlog-5.0.5874_amzn2_aarch64.rpm (AmazonLinux 2)
nxlog-5.0.5874_rhel6_x86_64.rpm (CentOS 6, RHEL 6)
nxlog-5.0.5874_rhel7_x86_64.rpm (CentOS 7, RHEL 7)
nxlog-5.0.5874_rhel8_x86_64.rpm (CentOS 8, RHEL 8)
nxlog-5.0.5874_generic_deb_amd64.deb (DEB Generic)
nxlog-5.0.5874_debian10_amd64.deb (Debian Buster)
nxlog-5.0.5874_debian8_amd64.deb (Debian Jessie)
nxlog-5.0.5874_debian9_amd64.deb (Debian Stretch)
nxlog-5.0.5874_fbsd_x86_64.tgz (FreeBSD 11)
nxlog-5.0.5874_macos.pkg (MacOS)
nxlog-5.0.5874_windows_x64.msi (Microsoft Windows 64bit)
nxlog-5.0.5874_nano.zip (Microsoft Windows Nano)
nxlog-5.0.5874_generic_x86_64.rpm (RPM Generic)
nxlog-5.0.5874_sles12_x86_64.rpm (SLES 12)
nxlog-5.0.5874_sles15_x86_64.rpm (SLES 15)
nxlog-5.0.5874_solaris_x86.pkg.gz (Solaris 10 i386)
nxlog-5.0.5874_solaris_sparc.pkg.gz (Solaris 10 sparc)
nxlog-5.0.5874_ubuntu16_amd64.deb (Ubuntu 16.04)
nxlog-5.0.5874_ubuntu18_amd64.deb (Ubuntu 18.04)
nxlog-5.0.5874_ubuntu20_amd64.deb (Ubuntu 20.04)

om_zmq nxlog-zmq-5.0.5874_amzn2_aarch64.rpm (AmazonLinux 2)
nxlog-zmq-5.0.5874_rhel7_x86_64.rpm (CentOS 7, RHEL 7)
nxlog-zmq-5.0.5874_rhel8_x86_64.rpm (CentOS 8, RHEL 8)
nxlog-zmq_5.0.5874_debian10_amd64.deb (Debian Buster)
nxlog-zmq_5.0.5874_debian8_amd64.deb (Debian Jessie)
nxlog-zmq_5.0.5874_debian9_amd64.deb (Debian Stretch)
nxlog-zmq-5.0.5874_sles12_x86_64.rpm (SLES 12)
nxlog-zmq-5.0.5874_sles15_x86_64.rpm (SLES 15)
nxlog-zmq_5.0.5874_ubuntu16_amd64.deb (Ubuntu 16.04)
nxlog-zmq_5.0.5874_ubuntu18_amd64.deb (Ubuntu 18.04)
nxlog-zmq_5.0.5874_ubuntu20_amd64.deb (Ubuntu 20.04)
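The tables above map each module to the installer packages that ship it. Rows with a module-specific prefix (for example nxlog-kafka or nxlog-odbc) correspond to separate add-on packages, while rows listing plain nxlog packages indicate the module is bundled in the core installer. As a rough illustration only, the mapping can be read as a lookup keyed by module and platform; the package file names below are copied from the table, but the helper function itself is hypothetical and not part of NXLog:

```python
from typing import Optional

# Hypothetical lookup over a small excerpt of the package table above.
# Keys are (module, platform); values are the shipping package file names.
PACKAGE_FOR = {
    ("om_kafka", "Debian Buster"): "nxlog-kafka_5.0.5874_debian10_amd64.deb",
    ("om_tcp",   "Debian Buster"): "nxlog-5.0.5874_debian10_amd64.deb",
    ("om_zmq",   "SLES 15"):       "nxlog-zmq-5.0.5874_sles15_x86_64.rpm",
}

def package_for(module: str, platform: str) -> Optional[str]:
    """Return the package that ships `module` on `platform`, or None if the
    table has no entry for that combination."""
    return PACKAGE_FOR.get((module, platform))
```

A missing entry (for instance, om_kafka on FreeBSD 11, which has no row in the table) returns None, mirroring the fact that some modules are not available on every platform.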

Table 28. Extension Modules


java/jni/libjavanxlog nxlog-5.0.5874_nano.zip (Microsoft Windows Nano)

xm_admin nxlog-5.0.5874-1.aix7.1.ppc.rpm (AIX 7.1)
nxlog-5.0.5874_amzn2_aarch64.rpm (AmazonLinux 2)
nxlog-5.0.5874_rhel6_x86_64.rpm (CentOS 6, RHEL 6)
nxlog-5.0.5874_rhel7_x86_64.rpm (CentOS 7, RHEL 7)
nxlog-5.0.5874_rhel8_x86_64.rpm (CentOS 8, RHEL 8)
nxlog-5.0.5874_generic_deb_amd64.deb (DEB Generic)
nxlog-5.0.5874_debian10_amd64.deb (Debian Buster)
nxlog-5.0.5874_debian8_amd64.deb (Debian Jessie)
nxlog-5.0.5874_debian9_amd64.deb (Debian Stretch)
nxlog-5.0.5874_fbsd_x86_64.tgz (FreeBSD 11)
nxlog-5.0.5874_macos.pkg (MacOS)
nxlog-5.0.5874_windows_x64.msi (Microsoft Windows 64bit)
nxlog-5.0.5874_nano.zip (Microsoft Windows Nano)
nxlog-5.0.5874_generic_x86_64.rpm (RPM Generic)
nxlog-5.0.5874_sles12_x86_64.rpm (SLES 12)
nxlog-5.0.5874_sles15_x86_64.rpm (SLES 15)
nxlog-5.0.5874_solaris_x86.pkg.gz (Solaris 10 i386)
nxlog-5.0.5874_solaris_sparc.pkg.gz (Solaris 10 sparc)
nxlog-5.0.5874_ubuntu16_amd64.deb (Ubuntu 16.04)
nxlog-5.0.5874_ubuntu18_amd64.deb (Ubuntu 18.04)
nxlog-5.0.5874_ubuntu20_amd64.deb (Ubuntu 20.04)

xm_aixaudit nxlog-5.0.5874-1.aix7.1.ppc.rpm (AIX 7.1)
nxlog-5.0.5874_amzn2_aarch64.rpm (AmazonLinux 2)
nxlog-5.0.5874_rhel6_x86_64.rpm (CentOS 6, RHEL 6)
nxlog-5.0.5874_rhel7_x86_64.rpm (CentOS 7, RHEL 7)
nxlog-5.0.5874_rhel8_x86_64.rpm (CentOS 8, RHEL 8)
nxlog-5.0.5874_generic_deb_amd64.deb (DEB Generic)
nxlog-5.0.5874_debian10_amd64.deb (Debian Buster)
nxlog-5.0.5874_debian8_amd64.deb (Debian Jessie)
nxlog-5.0.5874_debian9_amd64.deb (Debian Stretch)
nxlog-5.0.5874_fbsd_x86_64.tgz (FreeBSD 11)
nxlog-5.0.5874_macos.pkg (MacOS)
nxlog-5.0.5874_windows_x64.msi (Microsoft Windows 64bit)
nxlog-5.0.5874_nano.zip (Microsoft Windows Nano)
nxlog-5.0.5874_generic_x86_64.rpm (RPM Generic)
nxlog-5.0.5874_sles12_x86_64.rpm (SLES 12)
nxlog-5.0.5874_sles15_x86_64.rpm (SLES 15)
nxlog-5.0.5874_solaris_x86.pkg.gz (Solaris 10 i386)
nxlog-5.0.5874_solaris_sparc.pkg.gz (Solaris 10 sparc)
nxlog-5.0.5874_ubuntu16_amd64.deb (Ubuntu 16.04)
nxlog-5.0.5874_ubuntu18_amd64.deb (Ubuntu 18.04)
nxlog-5.0.5874_ubuntu20_amd64.deb (Ubuntu 20.04)

xm_asl nxlog-5.0.5874-1.aix7.1.ppc.rpm (AIX 7.1)
nxlog-5.0.5874_amzn2_aarch64.rpm (AmazonLinux 2)
nxlog-5.0.5874_rhel6_x86_64.rpm (CentOS 6, RHEL 6)
nxlog-5.0.5874_rhel7_x86_64.rpm (CentOS 7, RHEL 7)
nxlog-5.0.5874_rhel8_x86_64.rpm (CentOS 8, RHEL 8)
nxlog-5.0.5874_generic_deb_amd64.deb (DEB Generic)
nxlog-5.0.5874_debian10_amd64.deb (Debian Buster)
nxlog-5.0.5874_debian8_amd64.deb (Debian Jessie)
nxlog-5.0.5874_debian9_amd64.deb (Debian Stretch)
nxlog-5.0.5874_fbsd_x86_64.tgz (FreeBSD 11)
nxlog-5.0.5874_macos.pkg (MacOS)
nxlog-5.0.5874_windows_x64.msi (Microsoft Windows 64bit)
nxlog-5.0.5874_nano.zip (Microsoft Windows Nano)
nxlog-5.0.5874_generic_x86_64.rpm (RPM Generic)
nxlog-5.0.5874_sles12_x86_64.rpm (SLES 12)
nxlog-5.0.5874_sles15_x86_64.rpm (SLES 15)
nxlog-5.0.5874_solaris_x86.pkg.gz (Solaris 10 i386)
nxlog-5.0.5874_solaris_sparc.pkg.gz (Solaris 10 sparc)
nxlog-5.0.5874_ubuntu16_amd64.deb (Ubuntu 16.04)
nxlog-5.0.5874_ubuntu18_amd64.deb (Ubuntu 18.04)
nxlog-5.0.5874_ubuntu20_amd64.deb (Ubuntu 20.04)

xm_bsm nxlog-5.0.5874_fbsd_x86_64.tgz (FreeBSD 11)
nxlog-5.0.5874_macos.pkg (MacOS)
nxlog-5.0.5874_solaris_x86.pkg.gz (Solaris 10 i386)
nxlog-5.0.5874_solaris_sparc.pkg.gz (Solaris 10 sparc)

xm_cef nxlog-5.0.5874-1.aix7.1.ppc.rpm (AIX 7.1)
nxlog-5.0.5874_amzn2_aarch64.rpm (AmazonLinux 2)
nxlog-5.0.5874_rhel6_x86_64.rpm (CentOS 6, RHEL 6)
nxlog-5.0.5874_rhel7_x86_64.rpm (CentOS 7, RHEL 7)
nxlog-5.0.5874_rhel8_x86_64.rpm (CentOS 8, RHEL 8)
nxlog-5.0.5874_generic_deb_amd64.deb (DEB Generic)
nxlog-5.0.5874_debian10_amd64.deb (Debian Buster)
nxlog-5.0.5874_debian8_amd64.deb (Debian Jessie)
nxlog-5.0.5874_debian9_amd64.deb (Debian Stretch)
nxlog-5.0.5874_fbsd_x86_64.tgz (FreeBSD 11)
nxlog-5.0.5874_macos.pkg (MacOS)
nxlog-5.0.5874_windows_x64.msi (Microsoft Windows 64bit)
nxlog-5.0.5874_nano.zip (Microsoft Windows Nano)
nxlog-5.0.5874_generic_x86_64.rpm (RPM Generic)
nxlog-5.0.5874_sles12_x86_64.rpm (SLES 12)
nxlog-5.0.5874_sles15_x86_64.rpm (SLES 15)
nxlog-5.0.5874_solaris_x86.pkg.gz (Solaris 10 i386)
nxlog-5.0.5874_solaris_sparc.pkg.gz (Solaris 10 sparc)
nxlog-5.0.5874_ubuntu16_amd64.deb (Ubuntu 16.04)
nxlog-5.0.5874_ubuntu18_amd64.deb (Ubuntu 18.04)
nxlog-5.0.5874_ubuntu20_amd64.deb (Ubuntu 20.04)

xm_charconv nxlog-5.0.5874-1.aix7.1.ppc.rpm (AIX 7.1)
nxlog-5.0.5874_amzn2_aarch64.rpm (AmazonLinux 2)
nxlog-5.0.5874_rhel6_x86_64.rpm (CentOS 6, RHEL 6)
nxlog-5.0.5874_rhel7_x86_64.rpm (CentOS 7, RHEL 7)
nxlog-5.0.5874_rhel8_x86_64.rpm (CentOS 8, RHEL 8)
nxlog-5.0.5874_generic_deb_amd64.deb (DEB Generic)
nxlog-5.0.5874_debian10_amd64.deb (Debian Buster)
nxlog-5.0.5874_debian8_amd64.deb (Debian Jessie)
nxlog-5.0.5874_debian9_amd64.deb (Debian Stretch)
nxlog-5.0.5874_fbsd_x86_64.tgz (FreeBSD 11)
nxlog-5.0.5874_macos.pkg (MacOS)
nxlog-5.0.5874_windows_x64.msi (Microsoft Windows 64bit)
nxlog-5.0.5874_nano.zip (Microsoft Windows Nano)
nxlog-5.0.5874_generic_x86_64.rpm (RPM Generic)
nxlog-5.0.5874_sles12_x86_64.rpm (SLES 12)
nxlog-5.0.5874_sles15_x86_64.rpm (SLES 15)
nxlog-5.0.5874_solaris_x86.pkg.gz (Solaris 10 i386)
nxlog-5.0.5874_solaris_sparc.pkg.gz (Solaris 10 sparc)
nxlog-5.0.5874_ubuntu16_amd64.deb (Ubuntu 16.04)
nxlog-5.0.5874_ubuntu18_amd64.deb (Ubuntu 18.04)
nxlog-5.0.5874_ubuntu20_amd64.deb (Ubuntu 20.04)

xm_crypto nxlog-5.0.5874-1.aix7.1.ppc.rpm (AIX 7.1)
nxlog-5.0.5874_amzn2_aarch64.rpm (AmazonLinux 2)
nxlog-5.0.5874_rhel6_x86_64.rpm (CentOS 6, RHEL 6)
nxlog-5.0.5874_rhel7_x86_64.rpm (CentOS 7, RHEL 7)
nxlog-5.0.5874_rhel8_x86_64.rpm (CentOS 8, RHEL 8)
nxlog-5.0.5874_generic_deb_amd64.deb (DEB Generic)
nxlog-5.0.5874_debian10_amd64.deb (Debian Buster)
nxlog-5.0.5874_debian8_amd64.deb (Debian Jessie)
nxlog-5.0.5874_debian9_amd64.deb (Debian Stretch)
nxlog-5.0.5874_fbsd_x86_64.tgz (FreeBSD 11)
nxlog-5.0.5874_macos.pkg (MacOS)
nxlog-5.0.5874_windows_x64.msi (Microsoft Windows 64bit)
nxlog-5.0.5874_nano.zip (Microsoft Windows Nano)
nxlog-5.0.5874_generic_x86_64.rpm (RPM Generic)
nxlog-5.0.5874_sles12_x86_64.rpm (SLES 12)
nxlog-5.0.5874_sles15_x86_64.rpm (SLES 15)
nxlog-5.0.5874_solaris_x86.pkg.gz (Solaris 10 i386)
nxlog-5.0.5874_solaris_sparc.pkg.gz (Solaris 10 sparc)
nxlog-5.0.5874_ubuntu16_amd64.deb (Ubuntu 16.04)
nxlog-5.0.5874_ubuntu18_amd64.deb (Ubuntu 18.04)
nxlog-5.0.5874_ubuntu20_amd64.deb (Ubuntu 20.04)

xm_csv nxlog-5.0.5874-1.aix7.1.ppc.rpm (AIX 7.1)
nxlog-5.0.5874_amzn2_aarch64.rpm (AmazonLinux 2)
nxlog-5.0.5874_rhel6_x86_64.rpm (CentOS 6, RHEL 6)
nxlog-5.0.5874_rhel7_x86_64.rpm (CentOS 7, RHEL 7)
nxlog-5.0.5874_rhel8_x86_64.rpm (CentOS 8, RHEL 8)
nxlog-5.0.5874_generic_deb_amd64.deb (DEB Generic)
nxlog-5.0.5874_debian10_amd64.deb (Debian Buster)
nxlog-5.0.5874_debian8_amd64.deb (Debian Jessie)
nxlog-5.0.5874_debian9_amd64.deb (Debian Stretch)
nxlog-5.0.5874_fbsd_x86_64.tgz (FreeBSD 11)
nxlog-5.0.5874_macos.pkg (MacOS)
nxlog-5.0.5874_windows_x64.msi (Microsoft Windows 64bit)
nxlog-5.0.5874_nano.zip (Microsoft Windows Nano)
nxlog-5.0.5874_generic_x86_64.rpm (RPM Generic)
nxlog-5.0.5874_sles12_x86_64.rpm (SLES 12)
nxlog-5.0.5874_sles15_x86_64.rpm (SLES 15)
nxlog-5.0.5874_solaris_x86.pkg.gz (Solaris 10 i386)
nxlog-5.0.5874_solaris_sparc.pkg.gz (Solaris 10 sparc)
nxlog-5.0.5874_ubuntu16_amd64.deb (Ubuntu 16.04)
nxlog-5.0.5874_ubuntu18_amd64.deb (Ubuntu 18.04)
nxlog-5.0.5874_ubuntu20_amd64.deb (Ubuntu 20.04)

xm_exec nxlog-5.0.5874-1.aix7.1.ppc.rpm (AIX 7.1)
nxlog-5.0.5874_amzn2_aarch64.rpm (AmazonLinux 2)
nxlog-5.0.5874_rhel6_x86_64.rpm (CentOS 6, RHEL 6)
nxlog-5.0.5874_rhel7_x86_64.rpm (CentOS 7, RHEL 7)
nxlog-5.0.5874_rhel8_x86_64.rpm (CentOS 8, RHEL 8)
nxlog-5.0.5874_generic_deb_amd64.deb (DEB Generic)
nxlog-5.0.5874_debian10_amd64.deb (Debian Buster)
nxlog-5.0.5874_debian8_amd64.deb (Debian Jessie)
nxlog-5.0.5874_debian9_amd64.deb (Debian Stretch)
nxlog-5.0.5874_fbsd_x86_64.tgz (FreeBSD 11)
nxlog-5.0.5874_macos.pkg (MacOS)
nxlog-5.0.5874_windows_x64.msi (Microsoft Windows 64bit)
nxlog-5.0.5874_nano.zip (Microsoft Windows Nano)
nxlog-5.0.5874_generic_x86_64.rpm (RPM Generic)
nxlog-5.0.5874_sles12_x86_64.rpm (SLES 12)
nxlog-5.0.5874_sles15_x86_64.rpm (SLES 15)
nxlog-5.0.5874_solaris_x86.pkg.gz (Solaris 10 i386)
nxlog-5.0.5874_solaris_sparc.pkg.gz (Solaris 10 sparc)
nxlog-5.0.5874_ubuntu16_amd64.deb (Ubuntu 16.04)
nxlog-5.0.5874_ubuntu18_amd64.deb (Ubuntu 18.04)
nxlog-5.0.5874_ubuntu20_amd64.deb (Ubuntu 20.04)

xm_filelist nxlog-5.0.5874-1.aix7.1.ppc.rpm (AIX 7.1)
nxlog-5.0.5874_amzn2_aarch64.rpm (AmazonLinux 2)
nxlog-5.0.5874_rhel6_x86_64.rpm (CentOS 6, RHEL 6)
nxlog-5.0.5874_rhel7_x86_64.rpm (CentOS 7, RHEL 7)
nxlog-5.0.5874_rhel8_x86_64.rpm (CentOS 8, RHEL 8)
nxlog-5.0.5874_generic_deb_amd64.deb (DEB Generic)
nxlog-5.0.5874_debian10_amd64.deb (Debian Buster)
nxlog-5.0.5874_debian8_amd64.deb (Debian Jessie)
nxlog-5.0.5874_debian9_amd64.deb (Debian Stretch)
nxlog-5.0.5874_fbsd_x86_64.tgz (FreeBSD 11)
nxlog-5.0.5874_macos.pkg (MacOS)
nxlog-5.0.5874_windows_x64.msi (Microsoft Windows 64bit)
nxlog-5.0.5874_nano.zip (Microsoft Windows Nano)
nxlog-5.0.5874_generic_x86_64.rpm (RPM Generic)
nxlog-5.0.5874_sles12_x86_64.rpm (SLES 12)
nxlog-5.0.5874_sles15_x86_64.rpm (SLES 15)
nxlog-5.0.5874_solaris_x86.pkg.gz (Solaris 10 i386)
nxlog-5.0.5874_solaris_sparc.pkg.gz (Solaris 10 sparc)
nxlog-5.0.5874_ubuntu16_amd64.deb (Ubuntu 16.04)
nxlog-5.0.5874_ubuntu18_amd64.deb (Ubuntu 18.04)
nxlog-5.0.5874_ubuntu20_amd64.deb (Ubuntu 20.04)

xm_fileop nxlog-5.0.5874-1.aix7.1.ppc.rpm (AIX 7.1)


nxlog-5.0.5874_amzn2_aarch64.rpm (AmazonLinux 2)
nxlog-5.0.5874_rhel6_x86_64.rpm (CentOS 6, RHEL 6)
nxlog-5.0.5874_rhel7_x86_64.rpm (CentOS 7, RHEL 7)
nxlog-5.0.5874_rhel8_x86_64.rpm (CentOS 8, RHEL 8)
nxlog-5.0.5874_generic_deb_amd64.deb (DEB Generic)
nxlog-5.0.5874_debian10_amd64.deb (Debian Buster)
nxlog-5.0.5874_debian8_amd64.deb (Debian Jessie)
nxlog-5.0.5874_debian9_amd64.deb (Debian Stretch)
nxlog-5.0.5874_fbsd_x86_64.tgz (FreeBSD 11)
nxlog-5.0.5874_macos.pkg (MacOS)
nxlog-5.0.5874_windows_x64.msi (Microsoft Windows 64bit)
nxlog-5.0.5874_nano.zip (Microsoft Windows Nano)
nxlog-5.0.5874_generic_x86_64.rpm (RPM Generic)
nxlog-5.0.5874_sles12_x86_64.rpm (SLES 12)
nxlog-5.0.5874_sles15_x86_64.rpm (SLES 15)
nxlog-5.0.5874_solaris_x86.pkg.gz (Solaris 10 i386)
nxlog-5.0.5874_solaris_sparc.pkg.gz (Solaris 10 sparc)
nxlog-5.0.5874_ubuntu16_amd64.deb (Ubuntu 16.04)
nxlog-5.0.5874_ubuntu18_amd64.deb (Ubuntu 18.04)
nxlog-5.0.5874_ubuntu20_amd64.deb (Ubuntu 20.04)

xm_gelf nxlog-5.0.5874-1.aix7.1.ppc.rpm (AIX 7.1)
nxlog-5.0.5874_amzn2_aarch64.rpm (AmazonLinux 2)
nxlog-5.0.5874_rhel6_x86_64.rpm (CentOS 6, RHEL 6)
nxlog-5.0.5874_rhel7_x86_64.rpm (CentOS 7, RHEL 7)
nxlog-5.0.5874_rhel8_x86_64.rpm (CentOS 8, RHEL 8)
nxlog-5.0.5874_generic_deb_amd64.deb (DEB Generic)
nxlog-5.0.5874_debian10_amd64.deb (Debian Buster)
nxlog-5.0.5874_debian8_amd64.deb (Debian Jessie)
nxlog-5.0.5874_debian9_amd64.deb (Debian Stretch)
nxlog-5.0.5874_fbsd_x86_64.tgz (FreeBSD 11)
nxlog-5.0.5874_macos.pkg (MacOS)
nxlog-5.0.5874_windows_x64.msi (Microsoft Windows 64bit)
nxlog-5.0.5874_nano.zip (Microsoft Windows Nano)
nxlog-5.0.5874_generic_x86_64.rpm (RPM Generic)
nxlog-5.0.5874_sles12_x86_64.rpm (SLES 12)
nxlog-5.0.5874_sles15_x86_64.rpm (SLES 15)
nxlog-5.0.5874_solaris_x86.pkg.gz (Solaris 10 i386)
nxlog-5.0.5874_solaris_sparc.pkg.gz (Solaris 10 sparc)
nxlog-5.0.5874_ubuntu16_amd64.deb (Ubuntu 16.04)
nxlog-5.0.5874_ubuntu18_amd64.deb (Ubuntu 18.04)
nxlog-5.0.5874_ubuntu20_amd64.deb (Ubuntu 20.04)

xm_go nxlog-5.0.5874-1.aix7.1.ppc.rpm (AIX 7.1)
nxlog-5.0.5874_amzn2_aarch64.rpm (AmazonLinux 2)
nxlog-5.0.5874_rhel6_x86_64.rpm (CentOS 6, RHEL 6)
nxlog-5.0.5874_rhel7_x86_64.rpm (CentOS 7, RHEL 7)
nxlog-5.0.5874_rhel8_x86_64.rpm (CentOS 8, RHEL 8)
nxlog-5.0.5874_generic_deb_amd64.deb (DEB Generic)
nxlog-5.0.5874_debian10_amd64.deb (Debian Buster)
nxlog-5.0.5874_debian8_amd64.deb (Debian Jessie)
nxlog-5.0.5874_debian9_amd64.deb (Debian Stretch)
nxlog-5.0.5874_fbsd_x86_64.tgz (FreeBSD 11)
nxlog-5.0.5874_macos.pkg (MacOS)
nxlog-5.0.5874_windows_x64.msi (Microsoft Windows 64bit)
nxlog-5.0.5874_nano.zip (Microsoft Windows Nano)
nxlog-5.0.5874_generic_x86_64.rpm (RPM Generic)
nxlog-5.0.5874_sles12_x86_64.rpm (SLES 12)
nxlog-5.0.5874_sles15_x86_64.rpm (SLES 15)
nxlog-5.0.5874_solaris_x86.pkg.gz (Solaris 10 i386)
nxlog-5.0.5874_solaris_sparc.pkg.gz (Solaris 10 sparc)
nxlog-5.0.5874_ubuntu16_amd64.deb (Ubuntu 16.04)
nxlog-5.0.5874_ubuntu18_amd64.deb (Ubuntu 18.04)
nxlog-5.0.5874_ubuntu20_amd64.deb (Ubuntu 20.04)

xm_grok nxlog-5.0.5874-1.aix7.1.ppc.rpm (AIX 7.1)
nxlog-5.0.5874_amzn2_aarch64.rpm (AmazonLinux 2)
nxlog-5.0.5874_rhel6_x86_64.rpm (CentOS 6, RHEL 6)
nxlog-5.0.5874_rhel7_x86_64.rpm (CentOS 7, RHEL 7)
nxlog-5.0.5874_rhel8_x86_64.rpm (CentOS 8, RHEL 8)
nxlog-5.0.5874_generic_deb_amd64.deb (DEB Generic)
nxlog-5.0.5874_debian10_amd64.deb (Debian Buster)
nxlog-5.0.5874_debian8_amd64.deb (Debian Jessie)
nxlog-5.0.5874_debian9_amd64.deb (Debian Stretch)
nxlog-5.0.5874_fbsd_x86_64.tgz (FreeBSD 11)
nxlog-5.0.5874_macos.pkg (MacOS)
nxlog-5.0.5874_windows_x64.msi (Microsoft Windows 64bit)
nxlog-5.0.5874_nano.zip (Microsoft Windows Nano)
nxlog-5.0.5874_generic_x86_64.rpm (RPM Generic)
nxlog-5.0.5874_sles12_x86_64.rpm (SLES 12)
nxlog-5.0.5874_sles15_x86_64.rpm (SLES 15)
nxlog-5.0.5874_solaris_x86.pkg.gz (Solaris 10 i386)
nxlog-5.0.5874_solaris_sparc.pkg.gz (Solaris 10 sparc)
nxlog-5.0.5874_ubuntu16_amd64.deb (Ubuntu 16.04)
nxlog-5.0.5874_ubuntu18_amd64.deb (Ubuntu 18.04)
nxlog-5.0.5874_ubuntu20_amd64.deb (Ubuntu 20.04)

xm_java nxlog-5.0.5874-1.aix7.1.ppc.rpm (AIX 7.1)
nxlog-java-5.0.5874_amzn2_aarch64.rpm (AmazonLinux 2)
nxlog-java-5.0.5874_rhel6_x86_64.rpm (CentOS 6, RHEL 6)
nxlog-java-5.0.5874_rhel7_x86_64.rpm (CentOS 7, RHEL 7)
nxlog-java-5.0.5874_rhel8_x86_64.rpm (CentOS 8, RHEL 8)
nxlog-5.0.5874_generic_deb_amd64.deb (DEB Generic)
nxlog-java_5.0.5874_debian10_amd64.deb (Debian Buster)
nxlog-java_5.0.5874_debian8_amd64.deb (Debian Jessie)
nxlog-java_5.0.5874_debian9_amd64.deb (Debian Stretch)
nxlog-5.0.5874_fbsd_x86_64.tgz (FreeBSD 11)
nxlog-5.0.5874_macos.pkg (MacOS)
nxlog-5.0.5874_windows_x64.msi (Microsoft Windows 64bit)
nxlog-5.0.5874_nano.zip (Microsoft Windows Nano)
nxlog-5.0.5874_generic_x86_64.rpm (RPM Generic)
nxlog-java-5.0.5874_sles12_x86_64.rpm (SLES 12)
nxlog-java-5.0.5874_sles15_x86_64.rpm (SLES 15)
nxlog-5.0.5874_solaris_x86.pkg.gz (Solaris 10 i386)
nxlog-5.0.5874_solaris_sparc.pkg.gz (Solaris 10 sparc)
nxlog-java_5.0.5874_ubuntu16_amd64.deb (Ubuntu 16.04)
nxlog-java_5.0.5874_ubuntu18_amd64.deb (Ubuntu 18.04)
nxlog-java_5.0.5874_ubuntu20_amd64.deb (Ubuntu 20.04)

xm_json nxlog-5.0.5874-1.aix7.1.ppc.rpm (AIX 7.1)
nxlog-5.0.5874_amzn2_aarch64.rpm (AmazonLinux 2)
nxlog-5.0.5874_rhel6_x86_64.rpm (CentOS 6, RHEL 6)
nxlog-5.0.5874_rhel7_x86_64.rpm (CentOS 7, RHEL 7)
nxlog-5.0.5874_rhel8_x86_64.rpm (CentOS 8, RHEL 8)
nxlog-5.0.5874_generic_deb_amd64.deb (DEB Generic)
nxlog-5.0.5874_debian10_amd64.deb (Debian Buster)
nxlog-5.0.5874_debian8_amd64.deb (Debian Jessie)
nxlog-5.0.5874_debian9_amd64.deb (Debian Stretch)
nxlog-5.0.5874_fbsd_x86_64.tgz (FreeBSD 11)
nxlog-5.0.5874_macos.pkg (MacOS)
nxlog-5.0.5874_windows_x64.msi (Microsoft Windows 64bit)
nxlog-5.0.5874_nano.zip (Microsoft Windows Nano)
nxlog-5.0.5874_generic_x86_64.rpm (RPM Generic)
nxlog-5.0.5874_sles12_x86_64.rpm (SLES 12)
nxlog-5.0.5874_sles15_x86_64.rpm (SLES 15)
nxlog-5.0.5874_solaris_x86.pkg.gz (Solaris 10 i386)
nxlog-5.0.5874_solaris_sparc.pkg.gz (Solaris 10 sparc)
nxlog-5.0.5874_ubuntu16_amd64.deb (Ubuntu 16.04)
nxlog-5.0.5874_ubuntu18_amd64.deb (Ubuntu 18.04)
nxlog-5.0.5874_ubuntu20_amd64.deb (Ubuntu 20.04)

xm_kvp nxlog-5.0.5874-1.aix7.1.ppc.rpm (AIX 7.1)
nxlog-5.0.5874_amzn2_aarch64.rpm (AmazonLinux 2)
nxlog-5.0.5874_rhel6_x86_64.rpm (CentOS 6, RHEL 6)
nxlog-5.0.5874_rhel7_x86_64.rpm (CentOS 7, RHEL 7)
nxlog-5.0.5874_rhel8_x86_64.rpm (CentOS 8, RHEL 8)
nxlog-5.0.5874_generic_deb_amd64.deb (DEB Generic)
nxlog-5.0.5874_debian10_amd64.deb (Debian Buster)
nxlog-5.0.5874_debian8_amd64.deb (Debian Jessie)
nxlog-5.0.5874_debian9_amd64.deb (Debian Stretch)
nxlog-5.0.5874_fbsd_x86_64.tgz (FreeBSD 11)
nxlog-5.0.5874_macos.pkg (MacOS)
nxlog-5.0.5874_windows_x64.msi (Microsoft Windows 64bit)
nxlog-5.0.5874_nano.zip (Microsoft Windows Nano)
nxlog-5.0.5874_generic_x86_64.rpm (RPM Generic)
nxlog-5.0.5874_sles12_x86_64.rpm (SLES 12)
nxlog-5.0.5874_sles15_x86_64.rpm (SLES 15)
nxlog-5.0.5874_solaris_x86.pkg.gz (Solaris 10 i386)
nxlog-5.0.5874_solaris_sparc.pkg.gz (Solaris 10 sparc)
nxlog-5.0.5874_ubuntu16_amd64.deb (Ubuntu 16.04)
nxlog-5.0.5874_ubuntu18_amd64.deb (Ubuntu 18.04)
nxlog-5.0.5874_ubuntu20_amd64.deb (Ubuntu 20.04)

xm_leef nxlog-5.0.5874-1.aix7.1.ppc.rpm (AIX 7.1)
nxlog-5.0.5874_amzn2_aarch64.rpm (AmazonLinux 2)
nxlog-5.0.5874_rhel6_x86_64.rpm (CentOS 6, RHEL 6)
nxlog-5.0.5874_rhel7_x86_64.rpm (CentOS 7, RHEL 7)
nxlog-5.0.5874_rhel8_x86_64.rpm (CentOS 8, RHEL 8)
nxlog-5.0.5874_generic_deb_amd64.deb (DEB Generic)
nxlog-5.0.5874_debian10_amd64.deb (Debian Buster)
nxlog-5.0.5874_debian8_amd64.deb (Debian Jessie)
nxlog-5.0.5874_debian9_amd64.deb (Debian Stretch)
nxlog-5.0.5874_fbsd_x86_64.tgz (FreeBSD 11)
nxlog-5.0.5874_macos.pkg (MacOS)
nxlog-5.0.5874_windows_x64.msi (Microsoft Windows 64bit)
nxlog-5.0.5874_nano.zip (Microsoft Windows Nano)
nxlog-5.0.5874_generic_x86_64.rpm (RPM Generic)
nxlog-5.0.5874_sles12_x86_64.rpm (SLES 12)
nxlog-5.0.5874_sles15_x86_64.rpm (SLES 15)
nxlog-5.0.5874_solaris_x86.pkg.gz (Solaris 10 i386)
nxlog-5.0.5874_solaris_sparc.pkg.gz (Solaris 10 sparc)
nxlog-5.0.5874_ubuntu16_amd64.deb (Ubuntu 16.04)
nxlog-5.0.5874_ubuntu18_amd64.deb (Ubuntu 18.04)
nxlog-5.0.5874_ubuntu20_amd64.deb (Ubuntu 20.04)

xm_msdns nxlog-5.0.5874-1.aix7.1.ppc.rpm (AIX 7.1)
nxlog-5.0.5874_amzn2_aarch64.rpm (AmazonLinux 2)
nxlog-5.0.5874_rhel6_x86_64.rpm (CentOS 6, RHEL 6)
nxlog-5.0.5874_rhel7_x86_64.rpm (CentOS 7, RHEL 7)
nxlog-5.0.5874_rhel8_x86_64.rpm (CentOS 8, RHEL 8)
nxlog-5.0.5874_generic_deb_amd64.deb (DEB Generic)
nxlog-5.0.5874_debian10_amd64.deb (Debian Buster)
nxlog-5.0.5874_debian8_amd64.deb (Debian Jessie)
nxlog-5.0.5874_debian9_amd64.deb (Debian Stretch)
nxlog-5.0.5874_fbsd_x86_64.tgz (FreeBSD 11)
nxlog-5.0.5874_macos.pkg (MacOS)
nxlog-5.0.5874_windows_x64.msi (Microsoft Windows 64bit)
nxlog-5.0.5874_nano.zip (Microsoft Windows Nano)
nxlog-5.0.5874_generic_x86_64.rpm (RPM Generic)
nxlog-5.0.5874_sles12_x86_64.rpm (SLES 12)
nxlog-5.0.5874_sles15_x86_64.rpm (SLES 15)
nxlog-5.0.5874_solaris_x86.pkg.gz (Solaris 10 i386)
nxlog-5.0.5874_solaris_sparc.pkg.gz (Solaris 10 sparc)
nxlog-5.0.5874_ubuntu16_amd64.deb (Ubuntu 16.04)
nxlog-5.0.5874_ubuntu18_amd64.deb (Ubuntu 18.04)
nxlog-5.0.5874_ubuntu20_amd64.deb (Ubuntu 20.04)

xm_multiline nxlog-5.0.5874-1.aix7.1.ppc.rpm (AIX 7.1)
nxlog-5.0.5874_amzn2_aarch64.rpm (AmazonLinux 2)
nxlog-5.0.5874_rhel6_x86_64.rpm (CentOS 6, RHEL 6)
nxlog-5.0.5874_rhel7_x86_64.rpm (CentOS 7, RHEL 7)
nxlog-5.0.5874_rhel8_x86_64.rpm (CentOS 8, RHEL 8)
nxlog-5.0.5874_generic_deb_amd64.deb (DEB Generic)
nxlog-5.0.5874_debian10_amd64.deb (Debian Buster)
nxlog-5.0.5874_debian8_amd64.deb (Debian Jessie)
nxlog-5.0.5874_debian9_amd64.deb (Debian Stretch)
nxlog-5.0.5874_fbsd_x86_64.tgz (FreeBSD 11)
nxlog-5.0.5874_macos.pkg (MacOS)
nxlog-5.0.5874_windows_x64.msi (Microsoft Windows 64bit)
nxlog-5.0.5874_nano.zip (Microsoft Windows Nano)
nxlog-5.0.5874_generic_x86_64.rpm (RPM Generic)
nxlog-5.0.5874_sles12_x86_64.rpm (SLES 12)
nxlog-5.0.5874_sles15_x86_64.rpm (SLES 15)
nxlog-5.0.5874_solaris_x86.pkg.gz (Solaris 10 i386)
nxlog-5.0.5874_solaris_sparc.pkg.gz (Solaris 10 sparc)
nxlog-5.0.5874_ubuntu16_amd64.deb (Ubuntu 16.04)
nxlog-5.0.5874_ubuntu18_amd64.deb (Ubuntu 18.04)
nxlog-5.0.5874_ubuntu20_amd64.deb (Ubuntu 20.04)

xm_netflow nxlog-5.0.5874-1.aix7.1.ppc.rpm (AIX 7.1)
nxlog-5.0.5874_amzn2_aarch64.rpm (AmazonLinux 2)
nxlog-5.0.5874_rhel6_x86_64.rpm (CentOS 6, RHEL 6)
nxlog-5.0.5874_rhel7_x86_64.rpm (CentOS 7, RHEL 7)
nxlog-5.0.5874_rhel8_x86_64.rpm (CentOS 8, RHEL 8)
nxlog-5.0.5874_generic_deb_amd64.deb (DEB Generic)
nxlog-5.0.5874_debian10_amd64.deb (Debian Buster)
nxlog-5.0.5874_debian8_amd64.deb (Debian Jessie)
nxlog-5.0.5874_debian9_amd64.deb (Debian Stretch)
nxlog-5.0.5874_fbsd_x86_64.tgz (FreeBSD 11)
nxlog-5.0.5874_macos.pkg (MacOS)
nxlog-5.0.5874_windows_x64.msi (Microsoft Windows 64bit)
nxlog-5.0.5874_nano.zip (Microsoft Windows Nano)
nxlog-5.0.5874_generic_x86_64.rpm (RPM Generic)
nxlog-5.0.5874_sles12_x86_64.rpm (SLES 12)
nxlog-5.0.5874_sles15_x86_64.rpm (SLES 15)
nxlog-5.0.5874_solaris_x86.pkg.gz (Solaris 10 i386)
nxlog-5.0.5874_solaris_sparc.pkg.gz (Solaris 10 sparc)
nxlog-5.0.5874_ubuntu16_amd64.deb (Ubuntu 16.04)
nxlog-5.0.5874_ubuntu18_amd64.deb (Ubuntu 18.04)
nxlog-5.0.5874_ubuntu20_amd64.deb (Ubuntu 20.04)

xm_nps nxlog-5.0.5874-1.aix7.1.ppc.rpm (AIX 7.1)
nxlog-5.0.5874_amzn2_aarch64.rpm (AmazonLinux 2)
nxlog-5.0.5874_rhel6_x86_64.rpm (CentOS 6, RHEL 6)
nxlog-5.0.5874_rhel7_x86_64.rpm (CentOS 7, RHEL 7)
nxlog-5.0.5874_rhel8_x86_64.rpm (CentOS 8, RHEL 8)
nxlog-5.0.5874_generic_deb_amd64.deb (DEB Generic)
nxlog-5.0.5874_debian10_amd64.deb (Debian Buster)
nxlog-5.0.5874_debian8_amd64.deb (Debian Jessie)
nxlog-5.0.5874_debian9_amd64.deb (Debian Stretch)
nxlog-5.0.5874_fbsd_x86_64.tgz (FreeBSD 11)
nxlog-5.0.5874_macos.pkg (MacOS)
nxlog-5.0.5874_windows_x64.msi (Microsoft Windows 64bit)
nxlog-5.0.5874_nano.zip (Microsoft Windows Nano)
nxlog-5.0.5874_generic_x86_64.rpm (RPM Generic)
nxlog-5.0.5874_sles12_x86_64.rpm (SLES 12)
nxlog-5.0.5874_sles15_x86_64.rpm (SLES 15)
nxlog-5.0.5874_solaris_x86.pkg.gz (Solaris 10 i386)
nxlog-5.0.5874_solaris_sparc.pkg.gz (Solaris 10 sparc)
nxlog-5.0.5874_ubuntu16_amd64.deb (Ubuntu 16.04)
nxlog-5.0.5874_ubuntu18_amd64.deb (Ubuntu 18.04)
nxlog-5.0.5874_ubuntu20_amd64.deb (Ubuntu 20.04)

xm_pattern nxlog-5.0.5874-1.aix7.1.ppc.rpm (AIX 7.1)
nxlog-5.0.5874_amzn2_aarch64.rpm (AmazonLinux 2)
nxlog-5.0.5874_rhel6_x86_64.rpm (CentOS 6, RHEL 6)
nxlog-5.0.5874_rhel7_x86_64.rpm (CentOS 7, RHEL 7)
nxlog-5.0.5874_rhel8_x86_64.rpm (CentOS 8, RHEL 8)
nxlog-5.0.5874_generic_deb_amd64.deb (DEB Generic)
nxlog-5.0.5874_debian10_amd64.deb (Debian Buster)
nxlog-5.0.5874_debian8_amd64.deb (Debian Jessie)
nxlog-5.0.5874_debian9_amd64.deb (Debian Stretch)
nxlog-5.0.5874_fbsd_x86_64.tgz (FreeBSD 11)
nxlog-5.0.5874_macos.pkg (MacOS)
nxlog-5.0.5874_windows_x64.msi (Microsoft Windows 64bit)
nxlog-5.0.5874_nano.zip (Microsoft Windows Nano)
nxlog-5.0.5874_generic_x86_64.rpm (RPM Generic)
nxlog-5.0.5874_sles12_x86_64.rpm (SLES 12)
nxlog-5.0.5874_sles15_x86_64.rpm (SLES 15)
nxlog-5.0.5874_solaris_x86.pkg.gz (Solaris 10 i386)
nxlog-5.0.5874_solaris_sparc.pkg.gz (Solaris 10 sparc)
nxlog-5.0.5874_ubuntu16_amd64.deb (Ubuntu 16.04)
nxlog-5.0.5874_ubuntu18_amd64.deb (Ubuntu 18.04)
nxlog-5.0.5874_ubuntu20_amd64.deb (Ubuntu 20.04)

xm_perl nxlog-perl-5.0.5874_amzn2_aarch64.rpm (AmazonLinux 2)
nxlog-perl-5.0.5874_rhel6_x86_64.rpm (CentOS 6, RHEL 6)
nxlog-perl-5.0.5874_rhel7_x86_64.rpm (CentOS 7, RHEL 7)
nxlog-perl-5.0.5874_rhel8_x86_64.rpm (CentOS 8, RHEL 8)
nxlog-perl_5.0.5874_debian10_amd64.deb (Debian Buster)
nxlog-perl_5.0.5874_debian8_amd64.deb (Debian Jessie)
nxlog-perl_5.0.5874_debian9_amd64.deb (Debian Stretch)
nxlog-5.0.5874_windows_x64.msi (Microsoft Windows 64bit)
nxlog-5.0.5874_nano.zip (Microsoft Windows Nano)
nxlog-perl-5.0.5874_sles12_x86_64.rpm (SLES 12)
nxlog-perl-5.0.5874_sles15_x86_64.rpm (SLES 15)
nxlog-perl_5.0.5874_ubuntu16_amd64.deb (Ubuntu 16.04)
nxlog-perl_5.0.5874_ubuntu18_amd64.deb (Ubuntu 18.04)
nxlog-perl_5.0.5874_ubuntu20_amd64.deb (Ubuntu 20.04)

xm_python nxlog-python-5.0.5874_amzn2_aarch64.rpm (AmazonLinux 2)
nxlog-python-5.0.5874_rhel7_x86_64.rpm (CentOS 7, RHEL 7)
nxlog-python-5.0.5874_rhel8_x86_64.rpm (CentOS 8, RHEL 8)
nxlog-python_5.0.5874_debian10_amd64.deb (Debian Buster)
nxlog-python_5.0.5874_debian8_amd64.deb (Debian Jessie)
nxlog-python_5.0.5874_debian9_amd64.deb (Debian Stretch)
nxlog-python-5.0.5874_sles12_x86_64.rpm (SLES 12)
nxlog-python-5.0.5874_sles15_x86_64.rpm (SLES 15)
nxlog-python_5.0.5874_ubuntu16_amd64.deb (Ubuntu 16.04)
nxlog-python_5.0.5874_ubuntu18_amd64.deb (Ubuntu 18.04)
nxlog-python_5.0.5874_ubuntu20_amd64.deb (Ubuntu 20.04)

xm_resolver nxlog-5.0.5874-1.aix7.1.ppc.rpm (AIX 7.1)
nxlog-5.0.5874_amzn2_aarch64.rpm (AmazonLinux 2)
nxlog-5.0.5874_rhel6_x86_64.rpm (CentOS 6, RHEL 6)
nxlog-5.0.5874_rhel7_x86_64.rpm (CentOS 7, RHEL 7)
nxlog-5.0.5874_rhel8_x86_64.rpm (CentOS 8, RHEL 8)
nxlog-5.0.5874_generic_deb_amd64.deb (DEB Generic)
nxlog-5.0.5874_debian10_amd64.deb (Debian Buster)
nxlog-5.0.5874_debian8_amd64.deb (Debian Jessie)
nxlog-5.0.5874_debian9_amd64.deb (Debian Stretch)
nxlog-5.0.5874_fbsd_x86_64.tgz (FreeBSD 11)
nxlog-5.0.5874_macos.pkg (MacOS)
nxlog-5.0.5874_windows_x64.msi (Microsoft Windows 64bit)
nxlog-5.0.5874_nano.zip (Microsoft Windows Nano)
nxlog-5.0.5874_generic_x86_64.rpm (RPM Generic)
nxlog-5.0.5874_sles12_x86_64.rpm (SLES 12)
nxlog-5.0.5874_sles15_x86_64.rpm (SLES 15)
nxlog-5.0.5874_solaris_x86.pkg.gz (Solaris 10 i386)
nxlog-5.0.5874_solaris_sparc.pkg.gz (Solaris 10 sparc)
nxlog-5.0.5874_ubuntu16_amd64.deb (Ubuntu 16.04)
nxlog-5.0.5874_ubuntu18_amd64.deb (Ubuntu 18.04)
nxlog-5.0.5874_ubuntu20_amd64.deb (Ubuntu 20.04)

xm_rewrite nxlog-5.0.5874-1.aix7.1.ppc.rpm (AIX 7.1)
nxlog-5.0.5874_amzn2_aarch64.rpm (AmazonLinux 2)
nxlog-5.0.5874_rhel6_x86_64.rpm (CentOS 6, RHEL 6)
nxlog-5.0.5874_rhel7_x86_64.rpm (CentOS 7, RHEL 7)
nxlog-5.0.5874_rhel8_x86_64.rpm (CentOS 8, RHEL 8)
nxlog-5.0.5874_generic_deb_amd64.deb (DEB Generic)
nxlog-5.0.5874_debian10_amd64.deb (Debian Buster)
nxlog-5.0.5874_debian8_amd64.deb (Debian Jessie)
nxlog-5.0.5874_debian9_amd64.deb (Debian Stretch)
nxlog-5.0.5874_fbsd_x86_64.tgz (FreeBSD 11)
nxlog-5.0.5874_macos.pkg (MacOS)
nxlog-5.0.5874_windows_x64.msi (Microsoft Windows 64bit)
nxlog-5.0.5874_nano.zip (Microsoft Windows Nano)
nxlog-5.0.5874_generic_x86_64.rpm (RPM Generic)
nxlog-5.0.5874_sles12_x86_64.rpm (SLES 12)
nxlog-5.0.5874_sles15_x86_64.rpm (SLES 15)
nxlog-5.0.5874_solaris_x86.pkg.gz (Solaris 10 i386)
nxlog-5.0.5874_solaris_sparc.pkg.gz (Solaris 10 sparc)
nxlog-5.0.5874_ubuntu16_amd64.deb (Ubuntu 16.04)
nxlog-5.0.5874_ubuntu18_amd64.deb (Ubuntu 18.04)
nxlog-5.0.5874_ubuntu20_amd64.deb (Ubuntu 20.04)

xm_ruby nxlog-ruby-5.0.5874_amzn2_aarch64.rpm (AmazonLinux 2)
nxlog-ruby-5.0.5874_rhel7_x86_64.rpm (CentOS 7, RHEL 7)
nxlog-ruby-5.0.5874_rhel8_x86_64.rpm (CentOS 8, RHEL 8)
nxlog-ruby_5.0.5874_debian10_amd64.deb (Debian Buster)
nxlog-ruby_5.0.5874_debian8_amd64.deb (Debian Jessie)
nxlog-ruby_5.0.5874_debian9_amd64.deb (Debian Stretch)
nxlog-ruby-5.0.5874_sles15_x86_64.rpm (SLES 15)
nxlog-ruby_5.0.5874_ubuntu16_amd64.deb (Ubuntu 16.04)
nxlog-ruby_5.0.5874_ubuntu18_amd64.deb (Ubuntu 18.04)
nxlog-ruby_5.0.5874_ubuntu20_amd64.deb (Ubuntu 20.04)

xm_snmp nxlog-5.0.5874-1.aix7.1.ppc.rpm (AIX 7.1)
nxlog-5.0.5874_amzn2_aarch64.rpm (AmazonLinux 2)
nxlog-5.0.5874_rhel6_x86_64.rpm (CentOS 6, RHEL 6)
nxlog-5.0.5874_rhel7_x86_64.rpm (CentOS 7, RHEL 7)
nxlog-5.0.5874_rhel8_x86_64.rpm (CentOS 8, RHEL 8)
nxlog-5.0.5874_generic_deb_amd64.deb (DEB Generic)
nxlog-5.0.5874_debian10_amd64.deb (Debian Buster)
nxlog-5.0.5874_debian8_amd64.deb (Debian Jessie)
nxlog-5.0.5874_debian9_amd64.deb (Debian Stretch)
nxlog-5.0.5874_fbsd_x86_64.tgz (FreeBSD 11)
nxlog-5.0.5874_macos.pkg (MacOS)
nxlog-5.0.5874_windows_x64.msi (Microsoft Windows 64bit)
nxlog-5.0.5874_nano.zip (Microsoft Windows Nano)
nxlog-5.0.5874_generic_x86_64.rpm (RPM Generic)
nxlog-5.0.5874_sles12_x86_64.rpm (SLES 12)
nxlog-5.0.5874_sles15_x86_64.rpm (SLES 15)
nxlog-5.0.5874_solaris_x86.pkg.gz (Solaris 10 i386)
nxlog-5.0.5874_solaris_sparc.pkg.gz (Solaris 10 sparc)
nxlog-5.0.5874_ubuntu16_amd64.deb (Ubuntu 16.04)
nxlog-5.0.5874_ubuntu18_amd64.deb (Ubuntu 18.04)
nxlog-5.0.5874_ubuntu20_amd64.deb (Ubuntu 20.04)

xm_soapadmin nxlog-5.0.5874_amzn2_aarch64.rpm (AmazonLinux 2)
nxlog-5.0.5874_rhel6_x86_64.rpm (CentOS 6, RHEL 6)
nxlog-5.0.5874_rhel7_x86_64.rpm (CentOS 7, RHEL 7)
nxlog-5.0.5874_rhel8_x86_64.rpm (CentOS 8, RHEL 8)
nxlog-5.0.5874_generic_deb_amd64.deb (DEB Generic)
nxlog-5.0.5874_debian10_amd64.deb (Debian Buster)
nxlog-5.0.5874_debian8_amd64.deb (Debian Jessie)
nxlog-5.0.5874_debian9_amd64.deb (Debian Stretch)
nxlog-5.0.5874_fbsd_x86_64.tgz (FreeBSD 11)
nxlog-5.0.5874_macos.pkg (MacOS)
nxlog-5.0.5874_windows_x64.msi (Microsoft Windows 64bit)
nxlog-5.0.5874_nano.zip (Microsoft Windows Nano)
nxlog-5.0.5874_generic_x86_64.rpm (RPM Generic)
nxlog-5.0.5874_sles12_x86_64.rpm (SLES 12)
nxlog-5.0.5874_sles15_x86_64.rpm (SLES 15)
nxlog-5.0.5874_solaris_x86.pkg.gz (Solaris 10 i386)
nxlog-5.0.5874_solaris_sparc.pkg.gz (Solaris 10 sparc)
nxlog-5.0.5874_ubuntu16_amd64.deb (Ubuntu 16.04)
nxlog-5.0.5874_ubuntu18_amd64.deb (Ubuntu 18.04)
nxlog-5.0.5874_ubuntu20_amd64.deb (Ubuntu 20.04)

xm_syslog nxlog-5.0.5874-1.aix7.1.ppc.rpm (AIX 7.1)
nxlog-5.0.5874_amzn2_aarch64.rpm (AmazonLinux 2)
nxlog-5.0.5874_rhel6_x86_64.rpm (CentOS 6, RHEL 6)
nxlog-5.0.5874_rhel7_x86_64.rpm (CentOS 7, RHEL 7)
nxlog-5.0.5874_rhel8_x86_64.rpm (CentOS 8, RHEL 8)
nxlog-5.0.5874_generic_deb_amd64.deb (DEB Generic)
nxlog-5.0.5874_debian10_amd64.deb (Debian Buster)
nxlog-5.0.5874_debian8_amd64.deb (Debian Jessie)
nxlog-5.0.5874_debian9_amd64.deb (Debian Stretch)
nxlog-5.0.5874_fbsd_x86_64.tgz (FreeBSD 11)
nxlog-5.0.5874_macos.pkg (MacOS)
nxlog-5.0.5874_windows_x64.msi (Microsoft Windows 64bit)
nxlog-5.0.5874_nano.zip (Microsoft Windows Nano)
nxlog-5.0.5874_generic_x86_64.rpm (RPM Generic)
nxlog-5.0.5874_sles12_x86_64.rpm (SLES 12)
nxlog-5.0.5874_sles15_x86_64.rpm (SLES 15)
nxlog-5.0.5874_solaris_x86.pkg.gz (Solaris 10 i386)
nxlog-5.0.5874_solaris_sparc.pkg.gz (Solaris 10 sparc)
nxlog-5.0.5874_ubuntu16_amd64.deb (Ubuntu 16.04)
nxlog-5.0.5874_ubuntu18_amd64.deb (Ubuntu 18.04)
nxlog-5.0.5874_ubuntu20_amd64.deb (Ubuntu 20.04)

xm_w3c nxlog-5.0.5874-1.aix7.1.ppc.rpm (AIX 7.1)
nxlog-5.0.5874_amzn2_aarch64.rpm (AmazonLinux 2)
nxlog-5.0.5874_rhel6_x86_64.rpm (CentOS 6, RHEL 6)
nxlog-5.0.5874_rhel7_x86_64.rpm (CentOS 7, RHEL 7)
nxlog-5.0.5874_rhel8_x86_64.rpm (CentOS 8, RHEL 8)
nxlog-5.0.5874_generic_deb_amd64.deb (DEB Generic)
nxlog-5.0.5874_debian10_amd64.deb (Debian Buster)
nxlog-5.0.5874_debian8_amd64.deb (Debian Jessie)
nxlog-5.0.5874_debian9_amd64.deb (Debian Stretch)
nxlog-5.0.5874_fbsd_x86_64.tgz (FreeBSD 11)
nxlog-5.0.5874_macos.pkg (MacOS)
nxlog-5.0.5874_windows_x64.msi (Microsoft Windows 64bit)
nxlog-5.0.5874_nano.zip (Microsoft Windows Nano)
nxlog-5.0.5874_generic_x86_64.rpm (RPM Generic)
nxlog-5.0.5874_sles12_x86_64.rpm (SLES 12)
nxlog-5.0.5874_sles15_x86_64.rpm (SLES 15)
nxlog-5.0.5874_solaris_x86.pkg.gz (Solaris 10 i386)
nxlog-5.0.5874_solaris_sparc.pkg.gz (Solaris 10 sparc)
nxlog-5.0.5874_ubuntu16_amd64.deb (Ubuntu 16.04)
nxlog-5.0.5874_ubuntu18_amd64.deb (Ubuntu 18.04)
nxlog-5.0.5874_ubuntu20_amd64.deb (Ubuntu 20.04)

xm_wtmp nxlog-5.0.5874_amzn2_aarch64.rpm (AmazonLinux 2)
nxlog-5.0.5874_rhel6_x86_64.rpm (CentOS 6, RHEL 6)
nxlog-5.0.5874_rhel7_x86_64.rpm (CentOS 7, RHEL 7)
nxlog-5.0.5874_rhel8_x86_64.rpm (CentOS 8, RHEL 8)
nxlog-5.0.5874_generic_deb_amd64.deb (DEB Generic)
nxlog-5.0.5874_debian10_amd64.deb (Debian Buster)
nxlog-5.0.5874_debian8_amd64.deb (Debian Jessie)
nxlog-5.0.5874_debian9_amd64.deb (Debian Stretch)
nxlog-5.0.5874_generic_x86_64.rpm (RPM Generic)
nxlog-5.0.5874_sles12_x86_64.rpm (SLES 12)
nxlog-5.0.5874_sles15_x86_64.rpm (SLES 15)
nxlog-5.0.5874_ubuntu16_amd64.deb (Ubuntu 16.04)
nxlog-5.0.5874_ubuntu18_amd64.deb (Ubuntu 18.04)
nxlog-5.0.5874_ubuntu20_amd64.deb (Ubuntu 20.04)

xm_xml nxlog-5.0.5874-1.aix7.1.ppc.rpm (AIX 7.1)
nxlog-5.0.5874_amzn2_aarch64.rpm (AmazonLinux 2)
nxlog-5.0.5874_rhel6_x86_64.rpm (CentOS 6, RHEL 6)
nxlog-5.0.5874_rhel7_x86_64.rpm (CentOS 7, RHEL 7)
nxlog-5.0.5874_rhel8_x86_64.rpm (CentOS 8, RHEL 8)
nxlog-5.0.5874_generic_deb_amd64.deb (DEB Generic)
nxlog-5.0.5874_debian10_amd64.deb (Debian Buster)
nxlog-5.0.5874_debian8_amd64.deb (Debian Jessie)
nxlog-5.0.5874_debian9_amd64.deb (Debian Stretch)
nxlog-5.0.5874_fbsd_x86_64.tgz (FreeBSD 11)
nxlog-5.0.5874_macos.pkg (MacOS)
nxlog-5.0.5874_windows_x64.msi (Microsoft Windows 64bit)
nxlog-5.0.5874_nano.zip (Microsoft Windows Nano)
nxlog-5.0.5874_generic_x86_64.rpm (RPM Generic)
nxlog-5.0.5874_sles12_x86_64.rpm (SLES 12)
nxlog-5.0.5874_sles15_x86_64.rpm (SLES 15)
nxlog-5.0.5874_solaris_x86.pkg.gz (Solaris 10 i386)
nxlog-5.0.5874_solaris_sparc.pkg.gz (Solaris 10 sparc)
nxlog-5.0.5874_ubuntu16_amd64.deb (Ubuntu 16.04)
nxlog-5.0.5874_ubuntu18_amd64.deb (Ubuntu 18.04)
nxlog-5.0.5874_ubuntu20_amd64.deb (Ubuntu 20.04)

xm_zlib nxlog-5.0.5874-1.aix7.1.ppc.rpm (AIX 7.1)
nxlog-5.0.5874_amzn2_aarch64.rpm (AmazonLinux 2)
nxlog-5.0.5874_rhel6_x86_64.rpm (CentOS 6, RHEL 6)
nxlog-5.0.5874_rhel7_x86_64.rpm (CentOS 7, RHEL 7)
nxlog-5.0.5874_rhel8_x86_64.rpm (CentOS 8, RHEL 8)
nxlog-5.0.5874_generic_deb_amd64.deb (DEB Generic)
nxlog-5.0.5874_debian10_amd64.deb (Debian Buster)
nxlog-5.0.5874_debian8_amd64.deb (Debian Jessie)
nxlog-5.0.5874_debian9_amd64.deb (Debian Stretch)
nxlog-5.0.5874_fbsd_x86_64.tgz (FreeBSD 11)
nxlog-5.0.5874_macos.pkg (MacOS)
nxlog-5.0.5874_windows_x64.msi (Microsoft Windows 64bit)
nxlog-5.0.5874_nano.zip (Microsoft Windows Nano)
nxlog-5.0.5874_generic_x86_64.rpm (RPM Generic)
nxlog-5.0.5874_sles12_x86_64.rpm (SLES 12)
nxlog-5.0.5874_sles15_x86_64.rpm (SLES 15)
nxlog-5.0.5874_solaris_x86.pkg.gz (Solaris 10 i386)
nxlog-5.0.5874_solaris_sparc.pkg.gz (Solaris 10 sparc)
nxlog-5.0.5874_ubuntu16_amd64.deb (Ubuntu 16.04)
nxlog-5.0.5874_ubuntu18_amd64.deb (Ubuntu 18.04)
nxlog-5.0.5874_ubuntu20_amd64.deb (Ubuntu 20.04)

Table 29. Processor Modules

pm_blocker nxlog-5.0.5874-1.aix7.1.ppc.rpm (AIX 7.1)
nxlog-5.0.5874_amzn2_aarch64.rpm (AmazonLinux 2)
nxlog-5.0.5874_rhel6_x86_64.rpm (CentOS 6, RHEL 6)
nxlog-5.0.5874_rhel7_x86_64.rpm (CentOS 7, RHEL 7)
nxlog-5.0.5874_rhel8_x86_64.rpm (CentOS 8, RHEL 8)
nxlog-5.0.5874_generic_deb_amd64.deb (DEB Generic)
nxlog-5.0.5874_debian10_amd64.deb (Debian Buster)
nxlog-5.0.5874_debian8_amd64.deb (Debian Jessie)
nxlog-5.0.5874_debian9_amd64.deb (Debian Stretch)
nxlog-5.0.5874_fbsd_x86_64.tgz (FreeBSD 11)
nxlog-5.0.5874_macos.pkg (MacOS)
nxlog-5.0.5874_windows_x64.msi (Microsoft Windows 64bit)
nxlog-5.0.5874_nano.zip (Microsoft Windows Nano)
nxlog-5.0.5874_generic_x86_64.rpm (RPM Generic)
nxlog-5.0.5874_sles12_x86_64.rpm (SLES 12)
nxlog-5.0.5874_sles15_x86_64.rpm (SLES 15)
nxlog-5.0.5874_solaris_x86.pkg.gz (Solaris 10 i386)
nxlog-5.0.5874_solaris_sparc.pkg.gz (Solaris 10 sparc)
nxlog-5.0.5874_ubuntu16_amd64.deb (Ubuntu 16.04)
nxlog-5.0.5874_ubuntu18_amd64.deb (Ubuntu 18.04)
nxlog-5.0.5874_ubuntu20_amd64.deb (Ubuntu 20.04)

pm_buffer nxlog-5.0.5874-1.aix7.1.ppc.rpm (AIX 7.1)
nxlog-5.0.5874_amzn2_aarch64.rpm (AmazonLinux 2)
nxlog-5.0.5874_rhel6_x86_64.rpm (CentOS 6, RHEL 6)
nxlog-5.0.5874_rhel7_x86_64.rpm (CentOS 7, RHEL 7)
nxlog-5.0.5874_rhel8_x86_64.rpm (CentOS 8, RHEL 8)
nxlog-5.0.5874_generic_deb_amd64.deb (DEB Generic)
nxlog-5.0.5874_debian10_amd64.deb (Debian Buster)
nxlog-5.0.5874_debian8_amd64.deb (Debian Jessie)
nxlog-5.0.5874_debian9_amd64.deb (Debian Stretch)
nxlog-5.0.5874_fbsd_x86_64.tgz (FreeBSD 11)
nxlog-5.0.5874_macos.pkg (MacOS)
nxlog-5.0.5874_windows_x64.msi (Microsoft Windows 64bit)
nxlog-5.0.5874_nano.zip (Microsoft Windows Nano)
nxlog-5.0.5874_generic_x86_64.rpm (RPM Generic)
nxlog-5.0.5874_sles12_x86_64.rpm (SLES 12)
nxlog-5.0.5874_sles15_x86_64.rpm (SLES 15)
nxlog-5.0.5874_solaris_x86.pkg.gz (Solaris 10 i386)
nxlog-5.0.5874_solaris_sparc.pkg.gz (Solaris 10 sparc)
nxlog-5.0.5874_ubuntu16_amd64.deb (Ubuntu 16.04)
nxlog-5.0.5874_ubuntu18_amd64.deb (Ubuntu 18.04)
nxlog-5.0.5874_ubuntu20_amd64.deb (Ubuntu 20.04)

pm_evcorr nxlog-5.0.5874-1.aix7.1.ppc.rpm (AIX 7.1)
nxlog-5.0.5874_amzn2_aarch64.rpm (AmazonLinux 2)
nxlog-5.0.5874_rhel6_x86_64.rpm (CentOS 6, RHEL 6)
nxlog-5.0.5874_rhel7_x86_64.rpm (CentOS 7, RHEL 7)
nxlog-5.0.5874_rhel8_x86_64.rpm (CentOS 8, RHEL 8)
nxlog-5.0.5874_generic_deb_amd64.deb (DEB Generic)
nxlog-5.0.5874_debian10_amd64.deb (Debian Buster)
nxlog-5.0.5874_debian8_amd64.deb (Debian Jessie)
nxlog-5.0.5874_debian9_amd64.deb (Debian Stretch)
nxlog-5.0.5874_fbsd_x86_64.tgz (FreeBSD 11)
nxlog-5.0.5874_macos.pkg (MacOS)
nxlog-5.0.5874_windows_x64.msi (Microsoft Windows 64bit)
nxlog-5.0.5874_nano.zip (Microsoft Windows Nano)
nxlog-5.0.5874_generic_x86_64.rpm (RPM Generic)
nxlog-5.0.5874_sles12_x86_64.rpm (SLES 12)
nxlog-5.0.5874_sles15_x86_64.rpm (SLES 15)
nxlog-5.0.5874_solaris_x86.pkg.gz (Solaris 10 i386)
nxlog-5.0.5874_solaris_sparc.pkg.gz (Solaris 10 sparc)
nxlog-5.0.5874_ubuntu16_amd64.deb (Ubuntu 16.04)
nxlog-5.0.5874_ubuntu18_amd64.deb (Ubuntu 18.04)
nxlog-5.0.5874_ubuntu20_amd64.deb (Ubuntu 20.04)

pm_filter nxlog-5.0.5874-1.aix7.1.ppc.rpm (AIX 7.1)
nxlog-5.0.5874_amzn2_aarch64.rpm (AmazonLinux 2)
nxlog-5.0.5874_rhel6_x86_64.rpm (CentOS 6, RHEL 6)
nxlog-5.0.5874_rhel7_x86_64.rpm (CentOS 7, RHEL 7)
nxlog-5.0.5874_rhel8_x86_64.rpm (CentOS 8, RHEL 8)
nxlog-5.0.5874_generic_deb_amd64.deb (DEB Generic)
nxlog-5.0.5874_debian10_amd64.deb (Debian Buster)
nxlog-5.0.5874_debian8_amd64.deb (Debian Jessie)
nxlog-5.0.5874_debian9_amd64.deb (Debian Stretch)
nxlog-5.0.5874_fbsd_x86_64.tgz (FreeBSD 11)
nxlog-5.0.5874_macos.pkg (MacOS)
nxlog-5.0.5874_windows_x64.msi (Microsoft Windows 64bit)
nxlog-5.0.5874_nano.zip (Microsoft Windows Nano)
nxlog-5.0.5874_generic_x86_64.rpm (RPM Generic)
nxlog-5.0.5874_sles12_x86_64.rpm (SLES 12)
nxlog-5.0.5874_sles15_x86_64.rpm (SLES 15)
nxlog-5.0.5874_solaris_x86.pkg.gz (Solaris 10 i386)
nxlog-5.0.5874_solaris_sparc.pkg.gz (Solaris 10 sparc)
nxlog-5.0.5874_ubuntu16_amd64.deb (Ubuntu 16.04)
nxlog-5.0.5874_ubuntu18_amd64.deb (Ubuntu 18.04)
nxlog-5.0.5874_ubuntu20_amd64.deb (Ubuntu 20.04)

pm_hmac nxlog-5.0.5874-1.aix7.1.ppc.rpm (AIX 7.1)
nxlog-5.0.5874_amzn2_aarch64.rpm (AmazonLinux 2)
nxlog-5.0.5874_rhel6_x86_64.rpm (CentOS 6, RHEL 6)
nxlog-5.0.5874_rhel7_x86_64.rpm (CentOS 7, RHEL 7)
nxlog-5.0.5874_rhel8_x86_64.rpm (CentOS 8, RHEL 8)
nxlog-5.0.5874_generic_deb_amd64.deb (DEB Generic)
nxlog-5.0.5874_debian10_amd64.deb (Debian Buster)
nxlog-5.0.5874_debian8_amd64.deb (Debian Jessie)
nxlog-5.0.5874_debian9_amd64.deb (Debian Stretch)
nxlog-5.0.5874_fbsd_x86_64.tgz (FreeBSD 11)
nxlog-5.0.5874_macos.pkg (MacOS)
nxlog-5.0.5874_windows_x64.msi (Microsoft Windows 64bit)
nxlog-5.0.5874_nano.zip (Microsoft Windows Nano)
nxlog-5.0.5874_generic_x86_64.rpm (RPM Generic)
nxlog-5.0.5874_sles12_x86_64.rpm (SLES 12)
nxlog-5.0.5874_sles15_x86_64.rpm (SLES 15)
nxlog-5.0.5874_solaris_x86.pkg.gz (Solaris 10 i386)
nxlog-5.0.5874_solaris_sparc.pkg.gz (Solaris 10 sparc)
nxlog-5.0.5874_ubuntu16_amd64.deb (Ubuntu 16.04)
nxlog-5.0.5874_ubuntu18_amd64.deb (Ubuntu 18.04)
nxlog-5.0.5874_ubuntu20_amd64.deb (Ubuntu 20.04)

pm_hmac_check nxlog-5.0.5874-1.aix7.1.ppc.rpm (AIX 7.1)
nxlog-5.0.5874_amzn2_aarch64.rpm (AmazonLinux 2)
nxlog-5.0.5874_rhel6_x86_64.rpm (CentOS 6, RHEL 6)
nxlog-5.0.5874_rhel7_x86_64.rpm (CentOS 7, RHEL 7)
nxlog-5.0.5874_rhel8_x86_64.rpm (CentOS 8, RHEL 8)
nxlog-5.0.5874_generic_deb_amd64.deb (DEB Generic)
nxlog-5.0.5874_debian10_amd64.deb (Debian Buster)
nxlog-5.0.5874_debian8_amd64.deb (Debian Jessie)
nxlog-5.0.5874_debian9_amd64.deb (Debian Stretch)
nxlog-5.0.5874_fbsd_x86_64.tgz (FreeBSD 11)
nxlog-5.0.5874_macos.pkg (MacOS)
nxlog-5.0.5874_windows_x64.msi (Microsoft Windows 64bit)
nxlog-5.0.5874_nano.zip (Microsoft Windows Nano)
nxlog-5.0.5874_generic_x86_64.rpm (RPM Generic)
nxlog-5.0.5874_sles12_x86_64.rpm (SLES 12)
nxlog-5.0.5874_sles15_x86_64.rpm (SLES 15)
nxlog-5.0.5874_solaris_x86.pkg.gz (Solaris 10 i386)
nxlog-5.0.5874_solaris_sparc.pkg.gz (Solaris 10 sparc)
nxlog-5.0.5874_ubuntu16_amd64.deb (Ubuntu 16.04)
nxlog-5.0.5874_ubuntu18_amd64.deb (Ubuntu 18.04)
nxlog-5.0.5874_ubuntu20_amd64.deb (Ubuntu 20.04)

pm_norepeat nxlog-5.0.5874-1.aix7.1.ppc.rpm (AIX 7.1)
nxlog-5.0.5874_amzn2_aarch64.rpm (AmazonLinux 2)
nxlog-5.0.5874_rhel6_x86_64.rpm (CentOS 6, RHEL 6)
nxlog-5.0.5874_rhel7_x86_64.rpm (CentOS 7, RHEL 7)
nxlog-5.0.5874_rhel8_x86_64.rpm (CentOS 8, RHEL 8)
nxlog-5.0.5874_generic_deb_amd64.deb (DEB Generic)
nxlog-5.0.5874_debian10_amd64.deb (Debian Buster)
nxlog-5.0.5874_debian8_amd64.deb (Debian Jessie)
nxlog-5.0.5874_debian9_amd64.deb (Debian Stretch)
nxlog-5.0.5874_fbsd_x86_64.tgz (FreeBSD 11)
nxlog-5.0.5874_macos.pkg (MacOS)
nxlog-5.0.5874_windows_x64.msi (Microsoft Windows 64bit)
nxlog-5.0.5874_nano.zip (Microsoft Windows Nano)
nxlog-5.0.5874_generic_x86_64.rpm (RPM Generic)
nxlog-5.0.5874_sles12_x86_64.rpm (SLES 12)
nxlog-5.0.5874_sles15_x86_64.rpm (SLES 15)
nxlog-5.0.5874_solaris_x86.pkg.gz (Solaris 10 i386)
nxlog-5.0.5874_solaris_sparc.pkg.gz (Solaris 10 sparc)
nxlog-5.0.5874_ubuntu16_amd64.deb (Ubuntu 16.04)
nxlog-5.0.5874_ubuntu18_amd64.deb (Ubuntu 18.04)
nxlog-5.0.5874_ubuntu20_amd64.deb (Ubuntu 20.04)

pm_null nxlog-5.0.5874-1.aix7.1.ppc.rpm (AIX 7.1)
nxlog-5.0.5874_amzn2_aarch64.rpm (AmazonLinux 2)
nxlog-5.0.5874_rhel6_x86_64.rpm (CentOS 6, RHEL 6)
nxlog-5.0.5874_rhel7_x86_64.rpm (CentOS 7, RHEL 7)
nxlog-5.0.5874_rhel8_x86_64.rpm (CentOS 8, RHEL 8)
nxlog-5.0.5874_generic_deb_amd64.deb (DEB Generic)
nxlog-5.0.5874_debian10_amd64.deb (Debian Buster)
nxlog-5.0.5874_debian8_amd64.deb (Debian Jessie)
nxlog-5.0.5874_debian9_amd64.deb (Debian Stretch)
nxlog-5.0.5874_fbsd_x86_64.tgz (FreeBSD 11)
nxlog-5.0.5874_macos.pkg (MacOS)
nxlog-5.0.5874_windows_x64.msi (Microsoft Windows 64bit)
nxlog-5.0.5874_nano.zip (Microsoft Windows Nano)
nxlog-5.0.5874_generic_x86_64.rpm (RPM Generic)
nxlog-5.0.5874_sles12_x86_64.rpm (SLES 12)
nxlog-5.0.5874_sles15_x86_64.rpm (SLES 15)
nxlog-5.0.5874_solaris_x86.pkg.gz (Solaris 10 i386)
nxlog-5.0.5874_solaris_sparc.pkg.gz (Solaris 10 sparc)
nxlog-5.0.5874_ubuntu16_amd64.deb (Ubuntu 16.04)
nxlog-5.0.5874_ubuntu18_amd64.deb (Ubuntu 18.04)
nxlog-5.0.5874_ubuntu20_amd64.deb (Ubuntu 20.04)

pm_pattern nxlog-5.0.5874-1.aix7.1.ppc.rpm (AIX 7.1)
nxlog-5.0.5874_amzn2_aarch64.rpm (AmazonLinux 2)
nxlog-5.0.5874_rhel6_x86_64.rpm (CentOS 6, RHEL 6)
nxlog-5.0.5874_rhel7_x86_64.rpm (CentOS 7, RHEL 7)
nxlog-5.0.5874_rhel8_x86_64.rpm (CentOS 8, RHEL 8)
nxlog-5.0.5874_generic_deb_amd64.deb (DEB Generic)
nxlog-5.0.5874_debian10_amd64.deb (Debian Buster)
nxlog-5.0.5874_debian8_amd64.deb (Debian Jessie)
nxlog-5.0.5874_debian9_amd64.deb (Debian Stretch)
nxlog-5.0.5874_fbsd_x86_64.tgz (FreeBSD 11)
nxlog-5.0.5874_macos.pkg (MacOS)
nxlog-5.0.5874_windows_x64.msi (Microsoft Windows 64bit)
nxlog-5.0.5874_nano.zip (Microsoft Windows Nano)
nxlog-5.0.5874_generic_x86_64.rpm (RPM Generic)
nxlog-5.0.5874_sles12_x86_64.rpm (SLES 12)
nxlog-5.0.5874_sles15_x86_64.rpm (SLES 15)
nxlog-5.0.5874_solaris_x86.pkg.gz (Solaris 10 i386)
nxlog-5.0.5874_solaris_sparc.pkg.gz (Solaris 10 sparc)
nxlog-5.0.5874_ubuntu16_amd64.deb (Ubuntu 16.04)
nxlog-5.0.5874_ubuntu18_amd64.deb (Ubuntu 18.04)
nxlog-5.0.5874_ubuntu20_amd64.deb (Ubuntu 20.04)

pm_transformer nxlog-5.0.5874-1.aix7.1.ppc.rpm (AIX 7.1)
nxlog-5.0.5874_amzn2_aarch64.rpm (AmazonLinux 2)
nxlog-5.0.5874_rhel6_x86_64.rpm (CentOS 6, RHEL 6)
nxlog-5.0.5874_rhel7_x86_64.rpm (CentOS 7, RHEL 7)
nxlog-5.0.5874_rhel8_x86_64.rpm (CentOS 8, RHEL 8)
nxlog-5.0.5874_generic_deb_amd64.deb (DEB Generic)
nxlog-5.0.5874_debian10_amd64.deb (Debian Buster)
nxlog-5.0.5874_debian8_amd64.deb (Debian Jessie)
nxlog-5.0.5874_debian9_amd64.deb (Debian Stretch)
nxlog-5.0.5874_fbsd_x86_64.tgz (FreeBSD 11)
nxlog-5.0.5874_macos.pkg (MacOS)
nxlog-5.0.5874_windows_x64.msi (Microsoft Windows 64bit)
nxlog-5.0.5874_nano.zip (Microsoft Windows Nano)
nxlog-5.0.5874_generic_x86_64.rpm (RPM Generic)
nxlog-5.0.5874_sles12_x86_64.rpm (SLES 12)
nxlog-5.0.5874_sles15_x86_64.rpm (SLES 15)
nxlog-5.0.5874_solaris_x86.pkg.gz (Solaris 10 i386)
nxlog-5.0.5874_solaris_sparc.pkg.gz (Solaris 10 sparc)
nxlog-5.0.5874_ubuntu16_amd64.deb (Ubuntu 16.04)
nxlog-5.0.5874_ubuntu18_amd64.deb (Ubuntu 18.04)
nxlog-5.0.5874_ubuntu20_amd64.deb (Ubuntu 20.04)

pm_ts nxlog-5.0.5874-1.aix7.1.ppc.rpm (AIX 7.1)
nxlog-5.0.5874_amzn2_aarch64.rpm (AmazonLinux 2)
nxlog-5.0.5874_rhel6_x86_64.rpm (CentOS 6, RHEL 6)
nxlog-5.0.5874_rhel7_x86_64.rpm (CentOS 7, RHEL 7)
nxlog-5.0.5874_rhel8_x86_64.rpm (CentOS 8, RHEL 8)
nxlog-5.0.5874_generic_deb_amd64.deb (DEB Generic)
nxlog-5.0.5874_debian10_amd64.deb (Debian Buster)
nxlog-5.0.5874_debian8_amd64.deb (Debian Jessie)
nxlog-5.0.5874_debian9_amd64.deb (Debian Stretch)
nxlog-5.0.5874_fbsd_x86_64.tgz (FreeBSD 11)
nxlog-5.0.5874_macos.pkg (MacOS)
nxlog-5.0.5874_windows_x64.msi (Microsoft Windows 64bit)
nxlog-5.0.5874_nano.zip (Microsoft Windows Nano)
nxlog-5.0.5874_generic_x86_64.rpm (RPM Generic)
nxlog-5.0.5874_sles12_x86_64.rpm (SLES 12)
nxlog-5.0.5874_sles15_x86_64.rpm (SLES 15)
nxlog-5.0.5874_solaris_x86.pkg.gz (Solaris 10 i386)
nxlog-5.0.5874_solaris_sparc.pkg.gz (Solaris 10 sparc)
nxlog-5.0.5874_ubuntu16_amd64.deb (Ubuntu 16.04)
nxlog-5.0.5874_ubuntu18_amd64.deb (Ubuntu 18.04)
nxlog-5.0.5874_ubuntu20_amd64.deb (Ubuntu 20.04)

Deployment

Chapter 5. Supported Platforms
The following operating systems and architectures are fully supported, except as noted. For more information
about types of log collection that are available for specific platforms, see the corresponding chapter in OS
Support.

Table 30. Supported GNU/Linux Platforms

Operating System Architectures


Red Hat Enterprise Linux 6 x86 (see note), x86_64

Red Hat Enterprise Linux 7 x86 (see note), x86_64

CentOS Linux 6 x86 (see note), x86_64

CentOS Linux 7 x86 (see note), x86_64

Debian GNU/Linux 8 (Jessie) x86 (see note), x86_64

Debian GNU/Linux 9 (Stretch) x86 (see note), x86_64

Ubuntu 14.04 (Trusty Tahr) x86 (see note), x86_64

Ubuntu 16.04 (Xenial Xerus) x86 (see note), x86_64

Ubuntu 18.04 (Bionic Beaver) x86 (see note), x86_64

SUSE Linux Enterprise Server 11 x86 (see note), x86_64

SUSE Linux Enterprise Server 12 x86 (see note), x86_64

SUSE Linux Enterprise Server 15 x86 (see note), x86_64

Other distributions (see note)

NOTE
NXLog also provides generic packages compiled against glibc 2.5 to support RPM-based legacy
distributions such as Red Hat 5.11 and SLES 11 on both 32-bit and 64-bit hardware. The packages
are named nxlog-X.XX.XXXX_generic_glibc2.5_rpm_x86_64.rpm and
nxlog-X.XX.XXXX_generic_glibc2.5_rpm_i386.rpm respectively, and are also available in the beta
version.
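Whether a host can run the generic glibc 2.5 build can be checked before installing. The sketch below reads the host's glibc version from the first line of ldd --version (a common but not universal output format) and compares it against the 2.5 baseline with sort -V; the version values in the comments are only illustrative.

```shell
# Sketch: decide whether the generic glibc 2.5 RPM can run on this host.
# The 2.5 baseline comes from the note above; "have" is parsed from ldd,
# whose first output line typically ends with the glibc version number.
required=2.5
have=$(ldd --version | awk 'NR==1 {print $NF}')

# sort -V orders version strings numerically; if the baseline sorts first
# (or equal), the installed glibc is new enough for the generic package.
lowest=$(printf '%s\n' "$required" "$have" | sort -V | head -n 1)
if [ "$lowest" = "$required" ]; then
    echo "glibc $have >= $required: generic package is usable"
else
    echo "glibc $have is older than $required: generic package will not run"
fi
```

On a typical modern system (for example glibc 2.17 or later) the first branch is taken; the second branch flags hosts where even the generic build cannot run.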

For a listing of GNU/Linux-related log sources, see GNU/Linux.

Table 31. Supported BSD Platforms

Operating System Architectures


FreeBSD 11 x86 (see note), x86_64

OpenBSD 6.0 x86 (see note), x86_64

OpenBSD 6.2 x86 (see note), x86_64

For listings of BSD-related log sources, see FreeBSD and OpenBSD.

NOTE
Under the Technical Support Services Agreement, Linux and BSD binary packages may be provided
upon request for operating systems that have reached their end-of-life date (like RHEL 5), for
legacy 32-bit hardware, or for less common distributions (such as Linux Mint).

Table 32. Supported Windows Platforms

Operating System Architectures


Microsoft Windows Server 2008 x86_64

Microsoft Windows Server 2012 x86_64

Microsoft Windows Server 2016 (Certified) x86_64

Microsoft Windows Server 2019 (Certified) x86_64

Microsoft Windows Nano Server x86_64 (see note)

Microsoft Windows Vista x86_64

Microsoft Windows 7 x86_64

Microsoft Windows 8 x86_64

Microsoft Windows 10 x86_64

For a listing of Windows-related log sources, see Microsoft Windows.

NOTE: While the im_odbc input module is included in the Windows Nano Server package, Microsoft currently does not provide a reverse forwarder to support the ODBC API.

Table 33. Other Supported Platforms

Operating System Architectures


Apple OS X 10.11 (El Capitan) x86_64

Apple macOS 10.12 (Sierra) x86_64

Apple macOS 10.13 (High Sierra) x86_64

Apple macOS 10.14 (Mojave) x86_64

Docker x86_64

IBM AIX 7.1 PowerPC

IBM AIX 7.2 PowerPC

Oracle Solaris 10 x86, SPARC

Oracle Solaris 11 x86, SPARC

For log sources of the above platforms, see Apple macOS, IBM AIX, and Oracle Solaris.

The following Microsoft Windows operating systems are unsupported due to having reached end-of-life status,
but are known to work with NXLog.

Table 34. End-of-Life Windows Platforms

Operating System Architectures


Microsoft Windows XP x86, x86_64   

Microsoft Windows 2000 Server x86

Microsoft Windows Server 2003 x86, x86_64

Chapter 6. Product Life Cycle
NXLog Enterprise Edition, NXLog Community Edition, and NXLog Manager all use the versioning scheme X.Y.Z.

• X denotes the MAJOR release version. Long-term support is provided for each major release when applicable.
• Y denotes the MINOR version. Minor releases provide backward compatible enhancements and features
during the lifetime of a major release.
• Z denotes the REVISION NUMBER. Hot-fix revisions may be released within the same minor version.

Upgrades within the same major version are backward compatible. Features and changes that may not be
backward compatible are added to major releases only.
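The three components of the X.Y.Z scheme can be pulled apart with standard shell parameter expansion; this sketch uses a hypothetical version number:

```shell
# Split an X.Y.Z version string into its MAJOR, MINOR, and REVISION parts.
version="5.0.5876"      # hypothetical version number
major=${version%%.*}    # X: major release (long-term support)
rest=${version#*.}
minor=${rest%%.*}       # Y: minor version (backward-compatible additions)
revision=${rest#*.}     # Z: revision number (hot fixes)
echo "major=$major minor=$minor revision=$revision"
```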

For supported products, the end-of-life (EOL) date is at least one year after the next major version is released.

Table 35. End-of-Life for NXLog Products

NXLog Product End-of-Life


NXLog Enterprise Edition 3.x 2019-01-01

NXLog Enterprise Edition 4.x One year after the release of NXLog Enterprise Edition 5.0

NXLog Manager 4.x 2019-01-01

NXLog Manager 5.x One year after the release of NXLog Manager 6.0

NXLog Community Edition No official support
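The one-year EOL rule above can be worked through with GNU date; the release date used here is hypothetical:

```shell
# Given the release date of the next major version, the earliest possible
# EOL date of the previous major version is one year later.
next_major_release="2020-01-06"   # hypothetical release date
earliest_eol=$(date -u -d "$next_major_release + 1 year" +%F)
echo "Earliest EOL: $earliest_eol"
```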

Chapter 7. System Requirements
To function efficiently, each NXLog product requires a certain amount of available system resources on the host system. The table below provides general guidelines for planning an NXLog deployment. Actual system requirements will vary based on the configuration and event rate; therefore, both minimum and recommended requirements are listed. Always test a deployment thoroughly to verify that the desired performance can be achieved with the available system resources.

NOTE: These requirements are in addition to the operating system’s requirements. For systems running both NXLog Enterprise Edition and NXLog Manager, combine these requirements with NXLog Manager’s system requirements.

Table 36. NXLog Enterprise Edition Requirements

                    Minimum     Recommended

Processor cores     1           2

Memory/RAM          60 MB       250 MB

Disk space          50 MB       150 MB

Chapter 8. Digital Signature Verification
Security regulations may require organizations to verify the identity of software sources as well as the integrity of the software obtained from them. To facilitate compliance with such regulations, and to guarantee the authenticity and integrity of downloaded installer files, NXLog installer packages are digitally signed.

In some cases, such as with RPM packages, a public key is required to verify the digital signature. The public PGP key can be downloaded from NXLog’s public contrib repository.

8.1. Signature Verification for RPM Packages


The procedure is the same for SUSE Linux Enterprise Server, Red Hat Enterprise Linux, and CentOS. However,
there is a slight difference in the output messages as noted below.

NOTE: This example uses the generic RPM package. Change the package name to match the package used in your environment.

1. Import the downloaded NXLog public key into the RPM database with the following command:

# rpm --import nxlog-pubkey.asc

2. Verify the package signature with the imported public key using the following command:

# rpm --checksig nxlog-{productVersion}_generic_rpm_x86_64.rpm

3. The output should look similar to the following examples.

On SUSE Linux Enterprise Server:

nxlog-{productVersion}_generic_rpm_x86_64.rpm: digests signatures OK

On Red Hat Enterprise Linux and CentOS:

nxlog-{productVersion}_generic_rpm_x86_64.rpm: rsa sha1 (md5) pgp md5 OK

8.2. Signature Verification for Windows


To verify the installer package for Windows before installing, follow these steps:

1. Right-click the downloaded installer file, then select Properties.


2. Select the Digital Signatures tab.

NXLog is displayed as a signer for the installer. The algorithm used for the signature and the timestamp are also visible.

3. In the Signature list, select NXLog, then click Details to display additional information about the signature.

In the General tab, the signer information and countersignatures are displayed. Click on View Certificate to
display the certificate or select the Advanced tab to display signature details.

8.3. Signature Verification on macOS


To verify the installer package for macOS before installing, follow these steps:

1. Double-click the installer package.
2. Click on the padlock icon in the upper-right corner of the installer window to display information about the
certificate.

For valid packages, a green tick is displayed, indicating that the certificate is valid.

3. Click on the triangle next to Details to display additional information about the certificate.

Chapter 9. Red Hat Enterprise Linux & CentOS
9.1. Installing
1. Download the appropriate NXLog installation file from the NXLog website.
a. Log in to your account, then click My account at the top of the page.
b. Under the Downloads › NXLog Enterprise Edition files tab, choose the correct file for the target
platform.

Table 37. Available RHEL/CentOS Files

Platform Archive
RHEL 6 or CentOS 6 nxlog-5.0.5876_rhel6_x86_64.tar.bz2

RHEL 7 or CentOS 7 nxlog-5.0.5876_rhel7_x86_64.tar.bz2

Generic RPM nxlog-5.0.5876_generic_rpm_x86_64.rpm

NOTE: The RHEL 6 and RHEL 7 archives above each contain several RPMs (see Packages in a RHEL Archive below). These RPMs have dependencies on system-provided RPMs. The generic RPM above contains all the libraries (such as libpcre and libexpat) that are needed by NXLog; its only dependency is libc. However, some modules (im_checkpoint, for example) are not available in it. The advantage of the generic RPM is that it can be installed on most RPM-based Linux distributions.
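Since the generic RPM depends only on libc, it can help to confirm the system’s glibc version before choosing it over a distribution-specific package. A minimal sketch, assuming a glibc-based system where getconf reports the GNU libc version:

```shell
# Query the GNU libc version; the generic NXLog RPM is built against
# glibc 2.5, so any reasonably modern glibc is sufficient.
glibc_version=$(getconf GNU_LIBC_VERSION | awk '{print $2}')
echo "glibc version: $glibc_version"
```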

2. Transfer the file to the target server using SFTP or a similar secure method.
3. Log in to the target server and extract the contents of the archive (unless you are using the generic package):

# tar -xf nxlog-5.0.5876_rhel7_x86_64.tar.bz2

Table 38. Packages in a RHEL Archive

Package Description
nxlog-5.0.5876_rhel7.x86_64.rpm The main NXLog package

nxlog-checkpoint-5.0.5876_rhel7.x86_64.rpm Provides the im_checkpoint module

nxlog-dbi-5.0.5876_rhel7.x86_64.rpm Provides the im_dbi and om_dbi modules

nxlog-odbc-5.0.5876_rhel7.x86_64.rpm Provides the im_odbc and om_odbc modules

nxlog-perl-5.0.5876_rhel7.x86_64.rpm Provides the xm_perl, im_perl, and om_perl modules

nxlog-wseventing-5.0.5876_rhel7.x86_64.rpm Provides the im_wseventing module

nxlog-zmq-5.0.5876_rhel7.x86_64.rpm Provides the im_zmq and om_zmq modules

4. Install the NXLog package(s) and their dependencies.


a. Optional: To change the NXLog user and group for the installation, set the NXLOG_USER and
NXLOG_GROUP environment variables. During installation a new user and a new group will be created
based on these environment variables. They will be used for User and Group directives in nxlog.conf,
and for the ownership of some directories under /opt/nxlog. Specifying an already existing user or
group is not supported. The created user and group will be deleted on NXLog removal.

# export NXLOG_USER=nxlog2
# export NXLOG_GROUP=nxlog2

b. If you are installing the nxlog-zmq package, enable the EPEL repository so ZeroMQ dependencies will be
available:

# yum install -y epel-release

c. Use yum to install the required NXLog packages (or the generic package) and dependencies.

# yum install nxlog-5.0.5876_rhel7.x86_64.rpm

5. Configure NXLog by editing /opt/nxlog/etc/nxlog.conf. General information about configuring NXLog can be found in Configuration. For more details about configuring NXLog to collect logs on Linux, see the GNU/Linux summary.
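As a starting point for this step, a minimal nxlog.conf might look like the sketch below. The input file, output file, and module choices are illustrative only; adjust them to the logs you actually need to collect.

```
# Minimal illustrative configuration: read /var/log/messages
# and write each record to a local output file.
User nxlog
Group nxlog

<Input messages>
    Module  im_file
    File    '/var/log/messages'
</Input>

<Output sink>
    Module  om_file
    File    '/opt/nxlog/var/log/nxlog/output.log'
</Output>

<Route messages_to_sink>
    Path    messages => sink
</Route>
```

After editing, the syntax can be checked with /opt/nxlog/bin/nxlog -v as shown in the next step.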
6. Verify the configuration file syntax.

# /opt/nxlog/bin/nxlog -v
2017-03-17 08:05:06 INFO configuration OK

7. Start the service using the service command:

# service nxlog start

8. Check that the NXLog service is running.

# service nxlog status


nxlog (pid 9218) is running...

9.2. Upgrading
To upgrade an NXLog installation to the latest release, use yum as in the installation instructions above.

# yum install nxlog-5.0.5876_rhel7.x86_64.rpm

To replace a trial installation of NXLog Enterprise Edition with a licensed copy of the same version, follow the
installation instructions.

NOTE: The same user and group will be used for the upgrade as were used for the original installation (see installation step 4 above). Changing to a different user and group during upgrade is not supported.

9.3. Uninstalling
To uninstall NXLog, use yum remove. To remove any packages that were dependencies of NXLog but are not
required by any other packages, include the --setopt=clean_requirements_on_remove=1 option. Verify the
operation before confirming!

# yum remove 'nxlog-*'

NOTE: This procedure may not remove all files that were created while configuring NXLog. Likewise, any files created as a result of NXLog’s logging operations will not be removed. To find these files, examine the configuration files that were used with NXLog and check the installation directory (/opt/nxlog).

Chapter 10. Debian & Ubuntu
10.1. Installing
1. Download the appropriate NXLog installation file from the NXLog website.
a. Log in to your account, then click My account at the top of the page.
b. Under the Downloads › NXLog Enterprise Edition files tab, download the correct file for the target
platform.

Table 39. Available Debian/Ubuntu Archives

Platform Archive
Debian 8 (Jessie) nxlog-5.0.5876_debian8_amd64.tar.bz2

Debian 9 (Stretch) nxlog-5.0.5876_debian9_amd64.tar.bz2

Ubuntu 14.04 (Trusty Tahr) nxlog-5.0.5876_ubuntu14_amd64.tar.bz2

Ubuntu 16.04 (Xenial Xerus) nxlog-5.0.5876_ubuntu16_amd64.tar.bz2

Ubuntu 18.04 (Bionic Beaver) nxlog-5.0.5876_ubuntu18_amd64.tar.bz2

Generic DEB nxlog-5.0.5876_generic_deb_amd64.deb

2. Transfer the file to the target server using SFTP or a similar secure method.
3. Log in to the target server and extract the contents of the archive (unless you are using the generic package):

# tar -xjf nxlog-5.0.5876_debian9_amd64.tar.bz2

Table 40. Packages in a Debian/Ubuntu Archive

Package Description
nxlog-5.0.5876_amd64.deb The main NXLog package

nxlog-checkpoint-5.0.5876_amd64.deb Provides the im_checkpoint module

nxlog-dbi-5.0.5876_amd64.deb Provides the im_dbi and om_dbi modules

nxlog-odbc-5.0.5876_amd64.deb Provides the im_odbc and om_odbc modules

nxlog-perl-5.0.5876_amd64.deb Provides the xm_perl, im_perl, and om_perl modules

nxlog-wseventing-5.0.5876_amd64.deb Provides the im_wseventing module

nxlog-zmq-5.0.5876_amd64.deb Provides the im_zmq and om_zmq modules

4. Install the NXLog package(s) and their dependencies.


a. Optional: To change the NXLog user and group for the installation, set the NXLOG_USER and
NXLOG_GROUP environment variables. During installation a new user and a new group will be created
based on these environment variables. They will be used for User and Group directives in nxlog.conf,
and for the ownership of some directories under /opt/nxlog. Specifying an already existing user or
group is not supported. The created user and group will be deleted on NXLog removal.

# export NXLOG_USER=nxlog2
# export NXLOG_GROUP=nxlog2

b. Use dpkg to install the required NXLog packages (or the generic package, if you are using that).

# dpkg -i nxlog-5.0.5876_amd64.deb

c. If dpkg returned errors about uninstalled dependencies, use apt-get to install them and complete the
NXLog installation.

# apt-get -f install

5. Configure NXLog by editing /opt/nxlog/etc/nxlog.conf. General information about configuring NXLog can be found in Configuration. For more details about configuring NXLog to collect logs on Linux, see the GNU/Linux summary.
6. Verify the configuration file syntax.

# /opt/nxlog/bin/nxlog -v
2017-03-17 08:05:06 INFO configuration OK

7. Start the service using the service command:

# service nxlog start

8. Check that the NXLog service is running with the service command.

# service nxlog status


● nxlog.service - LSB: logging daemon
  Loaded: loaded (/etc/init.d/nxlog)
  Active: active (running) since Wed 2016-10-19 22:21:36 BST; 3h 49min ago
  Process: 518 ExecStart=/etc/init.d/nxlog start (code=exited, status=0/SUCCESS)
  CGroup: /system.slice/nxlog.service
  └─6297 /opt/nxlog/bin/nxlog
[...]

10.2. Upgrading
To upgrade an NXLog installation to the latest release, or to replace a trial installation of NXLog Enterprise Edition
with a licensed copy, use dpkg as explained in the installation instructions above.

# dpkg -i nxlog-5.0.5876_amd64.deb

NOTE: When upgrading to a licensed copy with additional NXLog trial packages installed, such as nxlog-trial-python, use dpkg -i --auto-configure.

# dpkg -i --auto-configure nxlog-5.0.5876_amd64.deb \
  nxlog-python_5.0.5876_amd64.deb

Make sure to edit this example to include all nxlog-trial packages that are actually installed.

If dpkg returns errors about uninstalled dependencies, resolve with apt-get.

# apt-get -f install

NOTE: The same user and group will be used for the upgrade as were used for the original installation (see installation step 4a above). Changing to a different user and group during upgrade is not supported.

10.3. Uninstalling
To uninstall NXLog, use apt-get. To remove any unused dependencies (system-wide), include the --auto-remove option. Verify the operation before confirming!

# apt-get remove '^nxlog*'

NOTE: Use apt-get purge instead to also remove configuration files. In either case, this procedure may not remove all files that were created in order to configure NXLog, or that were created as a result of NXLog’s logging operations. To find these files, consult the configuration files that were used with NXLog and check the installation directory (/opt/nxlog).

Chapter 11. SUSE Linux Enterprise Server
11.1. Installing
1. Download the appropriate NXLog install archive from the NXLog website.
a. Log in to your account, then click My account at the top of the page.
b. Under the Downloads › NXLog Enterprise Edition files tab, choose the correct archive for your system.

Table 41. Available SLES Files

Platform Archive
SUSE Linux Enterprise Server 11 nxlog-5.0.5876_sles11_x86_64.tar.bz2

SUSE Linux Enterprise Server 12 nxlog-5.0.5876_sles12_x86_64.tar.bz2

SUSE Linux Enterprise Server 15 nxlog-5.0.5876_sles15_x86_64.tar.bz2

NOTE: The SLES 11, SLES 12, and SLES 15 archives above each contain several RPMs (see Packages in an SLES Archive below). These RPMs have dependencies on system-provided RPMs.

2. Use SFTP or a similar secure method to transfer the archive to the target server.
3. Log in to the target server and extract the contents of the archive.

# tar xjf nxlog-5.0.5876_sles12_x86_64.tar.bz2

Table 42. Packages in an SLES Archive

Package Description
nxlog-5.0.5876_sles12.x86_64.rpm The main NXLog package

nxlog-dbi-5.0.5876_sles12.x86_64.rpm Provides the im_dbi and om_dbi modules

nxlog-odbc-5.0.5876_sles12.x86_64.rpm Provides the im_odbc and om_odbc modules

nxlog-perl-5.0.5876_sles12.x86_64.rpm Provides the xm_perl, im_perl, and om_perl modules

nxlog-wseventing-5.0.5876_sles12.x86_64.rpm Provides the im_wseventing module

4. Optional: To change the NXLog user and group for the installation, set the NXLOG_USER and NXLOG_GROUP
environment variables. During installation a new user and a new group will be created based on these
environment variables. They will be used for User and Group directives in nxlog.conf, and for the
ownership of some directories under /opt/nxlog. Specifying an already existing user or group is not
supported. The created user and group will be deleted on NXLog removal.

# export NXLOG_USER=nxlog2
# export NXLOG_GROUP=nxlog2

5. Install the required NXLog packages and their dependencies (this example installs the main NXLog package
only).

# zypper install nxlog-5.0.5876_sles12.x86_64.rpm

6. Configure NXLog by editing /opt/nxlog/etc/nxlog.conf. General information about configuring NXLog can be found in Configuration. For more details about configuring NXLog to collect logs on Linux, see the GNU/Linux summary.
7. Verify the configuration file syntax.

# /opt/nxlog/bin/nxlog -v
2017-03-17 08:05:06 INFO configuration OK

8. Start the service using the systemctl command:

# systemctl start nxlog.service

9. Check that the NXLog service is running with the systemctl command.

# systemctl | grep nxlog


  nxlog.service loaded active running LSB: logging daemon

11.2. Upgrading
To update an NXLog installation to the latest release, use zypper as in the installation instructions above.

# zypper install nxlog-5.0.5876_sles12.x86_64.rpm

To replace a trial installation of NXLog Enterprise Edition with a licensed copy of the same version, follow the
installation instructions.

NOTE: The same user and group will be used for the upgrade as were used for the original installation (see installation step 4 above). Changing to a different user and group during upgrade is not supported.

11.3. Uninstalling
To uninstall NXLog, use zypper remove. To remove any packages that were dependencies of NXLog but are not
required by any other packages, include the --clean-deps option. Verify the operation before confirming!

# zypper remove 'nxlog*'

NOTE: This procedure may not remove all files that were created while configuring NXLog. Likewise, any files created as a result of NXLog’s logging operations will not be removed. To find these files, examine the configuration files that were used with NXLog and check the installation directory (/opt/nxlog).

Chapter 12. FreeBSD
12.1. Installing
NXLog is available as a precompiled package for FreeBSD. Follow these steps to install NXLog.

1. Download the appropriate NXLog install archive from the NXLog website.
a. Log in to your account, then click My account at the top of the page.
b. Under the Downloads › NXLog Enterprise Edition files tab, choose the nxlog-
5.0.5876_fbsd_x86_64.tgz package.

2. Use SFTP or a similar secure method to transfer the archive to the target server.
3. Log in to the target server as the root user.
4. Optional: To change the NXLog user and group for the installation, set the NXLOG_USER and NXLOG_GROUP
environment variables. During installation a new user and a new group will be created based on these
environment variables. They will be used for User and Group directives in nxlog.conf, and for the
ownership of some directories under /opt/nxlog. Specifying an already existing user or group is not
supported. The created user and group will be deleted on NXLog removal.

# setenv NXLOG_USER nxlog2


# setenv NXLOG_GROUP nxlog2

5. Install NXLog with the pkg(7) utility.

# pkg add nxlog-5.0.5876_fbsd_x86_64.tgz


Installing nxlog-5.0.5876-fbsd...
Extracting nxlog-5.0.5876-fbsd: 100%

The installation path is /opt/nxlog. Configuration files are located in /opt/nxlog/etc. The rc init script is
placed in /etc/rc.d/ on installation. An nxlog user account is created, and NXLog will run under this user
by default.

6. Edit the configuration file.

# vi /opt/nxlog/etc/nxlog.conf

General information about configuring NXLog can be found in Configuration. For more details about
configuring NXLog to collect logs on BSD, see the FreeBSD summary.

7. Verify the configuration file syntax.

# /opt/nxlog/bin/nxlog -v
2017-03-17 08:05:06 INFO configuration OK

8. To enable NXLog, add the line nxlog_enable="YES" to /etc/rc.conf. Then manage the NXLog service with
the service(8) utility.

# service nxlog start


# service nxlog status
nxlog is running as pid 83708.
# service nxlog stop
process 83708 stopped

12.2. Upgrading
To upgrade NXLog, first remove the old version and then install the new version.

1. Remove the installed version of NXLog with the pkg(7) utility.

# pkg delete nxlog


Checking integrity... done (0 conflicting)
Deinstallation has been requested for the following 1 packages (of 0 packages
in the universe):

Installed packages to be REMOVED:


  nxlog-5.0.5876-fbsd

Number of packages to be removed: 1

The operation will free 39 MiB.

Proceed with deinstalling packages? [y/N]: y


[1/1] Deinstalling nxlog-5.0.5876-fbsd...
[1/1] Deleting files for nxlog-5.0.5876-fbsd: 100%

2. Install the new version as described in the installation instructions above.

# pkg add nxlog-5.0.5876_fbsd_x86_64.tgz


Installing nxlog-5.0.5876-fbsd...
Extracting nxlog-5.0.5876-fbsd: 100%

3. Restart the NXLog service.

# service nxlog restart

12.3. Uninstalling
1. Use the pkg(7) utility to uninstall the NXLog package.

# pkg delete nxlog


Updating database digests format: 100%
Checking integrity... done (0 conflicting)
Deinstallation has been requested for the following 1 packages (of 0 packages
in the universe):

Installed packages to be REMOVED:


  nxlog-5.0.5876-fbsd

Number of packages to be removed: 1

The operation will free 92 MiB.

Proceed with deinstalling packages? [y/N]: y


[1/1] Deinstalling nxlog-5.0.5876-fbsd...
[1/1] Deleting files for nxlog-5.0.5876-fbsd: 100%

The uninstall script will remove NXLog along with the user, group, and installed files. However, the pkg utility will not remove files that were added or modified after installation.

2. Manually remove the base directory. This will remove any new or modified files left behind by the previous
step.

# rm -rf /opt/nxlog

Chapter 13. OpenBSD
13.1. Installing
NXLog is available as precompiled packages for OpenBSD. Follow these steps to install NXLog.

1. Download the appropriate NXLog install archive from the NXLog website.
a. Log in to your account, then click My account at the top of the page.
b. Under the Downloads › NXLog Enterprise Edition files tab, choose the correct package for your
system.

Table 43. Available OpenBSD Packages

Platform Package
OpenBSD 6.0 nxlog-5.0.5876-obsd6_0_x86_64.tgz

OpenBSD 6.2 nxlog-5.0.5876-obsd6_2_x86_64.tgz

2. Use SFTP or a similar secure method to transfer the archive to the target server.
3. Log in to the target server as the root user.
4. Optional: To change the NXLog user and group for the installation, set the NXLOG_USER and NXLOG_GROUP
environment variables. During installation a new user and a new group will be created based on these
environment variables. They will be used for User and Group directives in nxlog.conf, and for the
ownership of some directories under /opt/nxlog. Specifying an already existing user or group is not
supported. The created user and group will be deleted on NXLog removal.

# export NXLOG_USER=nxlog2
# export NXLOG_GROUP=nxlog2

5. Install NXLog with the pkg_add(1) utility. The OpenBSD package is currently unsigned; use the -D unsigned flag to install it.

# pkg_add -D unsigned nxlog-5.0.5876-obsd6_2_x86_64.tgz


nxlog-5.0.5876-obsd6_2: ok
The following new rcscripts were installed: /etc/rc.d/nxlog
See rcctl(8) for details.

The installation prefix is /opt/nxlog. Configuration files are located in /opt/nxlog/etc. The rc init script is
placed in /etc/rc.d on installation.

6. Edit the configuration file.

# vi /opt/nxlog/etc/nxlog.conf

General information about configuring NXLog can be found in Configuration. For more details about
configuring NXLog to collect logs on BSD, see the OpenBSD summary.

7. Verify the configuration file syntax.

# /opt/nxlog/bin/nxlog -v
2017-03-17 08:05:06 INFO configuration OK

8. Manage the service using the rcctl(8) utility.

# rcctl enable nxlog
# rcctl start nxlog
nxlog(ok)
# rcctl stop nxlog
nxlog(ok)
# rcctl disable nxlog

You can also use rcctl(8) to check and set the configuration flags.

# rcctl set nxlog flags -c /tmp/sample-nxlog.conf


# rcctl get nxlog
nxlog_class=daemon
nxlog_flags=-c /tmp/sample-nxlog.conf
nxlog_rtable=0
nxlog_timeout=30
nxlog_user=root
# rcctl reload nxlog

9. Check the NXLog service status using rcctl(8).

# rcctl check nxlog


nxlog(ok)

13.2. Upgrading
To upgrade from a previous NXLog version (whether a licensed copy or trial), use the pkg_add(1) utility. This
example shows an upgrade from version 3.0.1865 to 5.0.5876.

# pkg_add -U nxlog-5.0.5876-obsd6_2_x86_64.tgz
nxlog-3.0.1865-obsd6_2->5.0.5876-obsd6_2: ok
Read shared items: ok

To replace a trial installation of NXLog Enterprise Edition with a licensed copy of the same version, use pkg_add
with the replace flag (-r).

# pkg_add -r nxlog-5.0.5876-obsd6_2_x86_64.tgz

NOTE: The same user and group will be used for the upgrade as were used for the original installation (see installation step 4 above). Changing to a different user and group during upgrade is not supported.

13.3. Uninstalling
To uninstall NXLog, follow these steps.

1. Use the pkg_delete(1) utility to remove the nxlog package.

# pkg_delete nxlog
nxlog-5.0.5876-obsd6_2: ok
Read shared items: ok
--- -nxlog-5.0.5876-obsd6_2 -------------------

The uninstall script will remove NXLog along with the user, group, and files. The pkg_delete utility will not
remove new files or modified configuration files.

2. Manually remove the base directory. This will remove any new or modified files left behind by the previous
step.

# rm -rf /opt/nxlog

Chapter 14. Microsoft Windows
14.1. Installing
First, download the NXLog MSI file from the NXLog website.

1. Log in to your account, then click My account at the top of the page.
2. Under the Downloads › NXLog Enterprise Edition files tab, choose the correct package for your system.

Table 44. Available Windows Installers

Platform Package
Microsoft Windows, 32-bit nxlog-5.0.5876_windows_x86.msi

Microsoft Windows, 64-bit nxlog-5.0.5876_windows_x64.msi

WARNING: Using the 32-bit installer to install NXLog on a 64-bit system is unsupported and not recommended. To override the installer check and proceed anyway, use the SKIP_X64_CHECK=1 property (for example, msiexec /i nxlog-5.0.5876_windows_x86.msi /q SKIP_X64_CHECK=1).

There are several ways that NXLog can be installed on Windows.

• Installing Interactively
• Installing with Msiexec
• Deploying via Group Policy

See also the MSI for NXLog Agent Setup add-on, which provides an example MSI package for bootstrapping
NXLog agents.

NOTE: In newer versions of NXLog, the service Startup type is set to Automatic (Delayed Start) instead of Automatic. To change this option, open the service control manager and alter the Startup type in the General tab.

14.1.1. Installing Interactively


1. Run the installer by double-clicking the MSI file. After accepting the license agreement, an option for choosing an alternate installation directory is presented. Click [ Install ] to start the installation. Once it has completed, click [ Finish ]; the README.txt file will then be opened in Notepad.
2. Configure NXLog by editing nxlog.conf (by default, C:\Program Files\nxlog\conf\nxlog.conf). General
information about configuring NXLog can be found in Configuration. For more details about configuring
NXLog to collect logs on Windows, see the Microsoft Windows summary.
3. The configuration file syntax can be checked by running the NXLog executable with the -v (verify) option.

> "C:\Program Files\nxlog\nxlog.exe" -v


2017-03-17 08:05:06 INFO configuration OK

4. Start NXLog by opening the Service Manager, finding the nxlog service in the list, and starting it. To run it in
the foreground instead, invoke the nxlog.exe executable with the -f command line argument.
5. Open the NXLog log file (by default, C:\Program Files\nxlog\data\nxlog.log) with Notepad and check
for errors.

NOTE: Some text editors (such as WordPad) use exclusive locking and will refuse to open the log file while NXLog is running.

14.1.2. Installing with Msiexec


Msiexec can be used to perform an unattended installation of NXLog. The following command does not prompt the user at all, but it must be run as administrator.

> msiexec /i nxlog-5.0.5876_windows_x64.msi /q

To allow Windows to prompt for administrator privileges, but otherwise install unattended, use /qb instead.

> msiexec /i nxlog-5.0.5876_windows_x64.msi /qb

To specify a non-default installation directory, use the INSTALLDIR property.

> msiexec /i nxlog-5.0.5876_windows_x64.msi /q INSTALLDIR="C:\nxlog"

14.1.3. Deploying via Group Policy


For large deployments, it may be convenient to use Group Policy to manage the NXLog installation.

NOTE: These steps were tested with a Windows Server 2016 domain controller and a Windows 7 client. There are multiple ways to configure NXLog deployment with Group Policy; the required steps for your network may vary from those listed below.

1. Log on to the server as an administrator.


2. Set up an Active Directory group for computers requiring an NXLog installation. NXLog will be automatically
installed and configured on each computer in this group.
a. Open the Active Directory Users and Groups console (dsa.msc).

b. Under the domain, right-click on Computers and click New › Group.

c. Provide a name for the group (for example, nxlog). Use the Security group type and Global context (or
the context suitable for your case).
d. Add computers to the group by selecting one or more, clicking Actions › Add to a group…, and entering
the group name (nxlog).
3. Create a network share for distributing the NXLog files.
a. Create a folder in the desired location (for example, C:\nxlog-dist).

b. Set up the folder as a share: right-click, select Properties, open the Sharing tab, and click [ Share… ].
c. Add the group (nxlog) and click [ Share ]. Take note of the share name provided by the wizard; it will be needed later (for example, \\WINSERV1\nxlog-dist).
d. Copy the required files to the shared folder. If using NXLog Manager, this will include at least three files:
nxlog-5.0.5876_windows_x64.msi, managed.conf, and CA certificate agent-ca.pem. If not using
NXLog Manager, use a custom nxlog.conf instead of managed.conf, omit the CA certificate, and include
any other files required by the configuration.

NOTE: The file managed.conf is located in the C:\Program Files\nxlog\conf\nxlog.d\ directory. Prior to NXLog version 5, it was named log4ensics.conf and was located in the C:\Program Files\nxlog\conf\ directory.

4. Create a Group Policy Object (GPO) for the NXLog deployment.
a. Open the Group Policy Management console (gpmc.msc).

b. In the console tree, under Domains, right-click on your domain and click Create a GPO in this domain,
and Link it here…; this will create a GPO under the Group Policy Objects folder and link it to the
domain.
c. Name the GPO (for example, nxlog) and click [ OK ].

d. Select the newly created GPO in the tree.


e. In the Security Filtering list, add the Active Directory group created in step 2 (nxlog). Remove anything
else.
f. Right-click on the GPO and click Edit. The Group Policy Management Editor console will be opened for
editing the GPO.
5. Add the NXLog MSI to the GPO.

Figure 1. Configured NXLog GPO

a. Under Computer Configuration › Policies › Software Settings, right-click Software installation. Click
New › Package… to create a deployment package for NXLog.
b. Browse to the network share and open the nxlog-5.0.5876_windows_x64.msi package. It is important
to use the Uniform Naming Convention (UNC) path (for example, \\WINSERV1\nxlog-dist) so the file
will be accessible by remote computers.
c. Select the Assigned deployment method.
6. Add the required files to the GPO by following these steps for each file.
a. Under Computer Configuration › Preferences › Windows Settings, right-click on Files. Click New ›
File.
b. Select the Replace action in the drop-down.
c. Choose the source file on the network share (for example, \\WINSERV1\nxlog-dist\managed.conf or
\\WINSERV1\nxlog-dist\agent-ca.pem).

d. Type in the destination path for the file (for example, C:\Program
Files\nxlog\conf\nxlog.d\managed.conf or C:\Program Files\nxlog\cert\agent-ca.pem).

e. Check Apply once and do not reapply under the Common tab for files that should only be deployed
once. This is especially important for managed.conf because NXLog Manager will write configuration
changes to that file.
f. Click [ OK ] to create the File in the GPO.
7. After the Group Policy is updated on the clients and NXLog is installed, one more reboot will be required
before the NXLog service starts automatically.
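
To test the deployment without waiting for the periodic Group Policy refresh, the refresh can be forced on a client. The commands below are a sketch using standard Windows tools (run from an elevated prompt on a client that is a member of the nxlog group); the service name nxlog matches the default installation.

```
> gpupdate /force
> sc query nxlog
```

Software installation policies are applied at computer startup, so a reboot may still be required before the MSI is actually installed.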

For more information about Group Policy, see the following TechNet and MSDN articles:

• Group Policy for Beginners,


• Group Policy Planning and Deployment Guide,
• Step-by-Step Guide to Understanding the Group Policy Feature Set, and
• Step-by-Step Guide to Software Installation and Maintenance.

14.2. Upgrading
To upgrade NXLog to the latest release, or to replace a trial installation of NXLog Enterprise Edition with a
licensed copy, follow these steps.

1. Run the new MSI installer as described in the Installing section (interactively, with Msiexec, or via Group
Policy). The installer will detect the presence of the previous version and perform the upgrade within the
current installation directory.

NOTE
To upgrade from v3.x, uninstall the previous version before installing the new version (see
Uninstalling). This is necessary to transition from a per-user to a per-machine installation.
This check can be skipped by passing the SKIP_PERUSER_CHECK property (such as msiexec
/i nxlog-5.0.5876_windows_x64.msi /q SKIP_PERUSER_CHECK=1). Note that using
SKIP_PERUSER_CHECK is unsupported and not recommended.

NOTE
If the Services console (services.msc) is running, the installer may request that the computer
be rebooted or display a permission denied error. Ensure that the Services console is not
running before attempting an upgrade.

2. Start the upgraded NXLog service via the Services console (services.msc) or by rebooting the system.
Check the log file (by default, C:\Program Files\nxlog\data\nxlog.log) to verify logging is working as
expected.

For Group Policy deployments, follow these steps:

1. Download the new MSI package as described in the Installing introduction.


2. Place the new MSI in the distribution share (see Create a network share).
3. Add this MSI as a new package to the NXLog GPO (follow the steps under Add the NXLog MSI).
4. Right-click on the new package and click Properties. Open the Upgrades tab, click [ Add… ], select the
previous version from the list, and click [ OK ].

NOTE
If you want to downgrade to a previous version of NXLog, you will need to manually uninstall
the current version first. See Uninstalling.

14.3. Uninstalling
NXLog can be uninstalled in several different ways.

• From the Control Panel (not discussed here).

• By using msiexec and the original NXLog MSI.

• Via the GPO it was originally deployed with in an AD Domain environment.


• Via a downloadable batch script.

In addition to the above, NXLog provides a method to remove the Windows Registry traces after uninstalling.

WARNING
NXLog v3.x installers will remove log4ensics.conf and nxlog.conf during the
uninstallation process, even if they have been modified. If these files need to be preserved,
they should be backed up to another location before uninstalling NXLog v3.x.

14.3.1. Uninstalling with msiexec


Uninstall NXLog using msiexec with the following command:

> msiexec /x nxlog-5.0.5876_windows_x64.msi /qb

NOTE
This procedure may not remove all files that were created while configuring NXLog. Likewise,
any files created as a result of NXLog’s logging operations will not be removed (except for v3.x
installers as noted above). You may wish to remove the installation directory (by default,
C:\Program Files\nxlog) once the uninstallation process has completed.
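
The cleanup mentioned in the note can be done from the command line as well. A minimal sketch (the path assumes the default installation directory; adjust it if NXLog was installed elsewhere, and double-check the path before running a recursive delete):

```
> msiexec /x nxlog-5.0.5876_windows_x64.msi /qb
> rmdir /S /Q "C:\Program Files\nxlog"
```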

14.3.2. Uninstalling via Group Policy


For Group Policy deployments, follow these steps:

1. Open the Group Policy Object (GPO) originally created for installation (see Create a Group Policy Object).
2. For each NXLog version that has been deployed, right-click the package and either:
◦ click All Tasks › Remove…, and choose the Immediately uninstall removal method; or

◦ click Properties, open the Deployment tab, and check Uninstall this application when it falls out of
the scope of management.

NOTE
In this case, NXLog will be uninstalled when the GPO is no longer applied to the
computer. An additional action will be required, such as removing the selected
computer(s) from the nxlog group created in Set up an Active Directory group.

14.3.3. Remove the Traces of NXLog


After uninstalling NXLog, there will be some traces left in the Windows Registry. NXLog provides a list of Windows
Registry entries to be removed in the form of a .reg file. Download the reg-entries.reg file from the public contrib
repository of NXLog. It needs to be passed as an argument to the Registry Editor, regedit.exe.

To remove any Windows Registry entries that may have been left behind, use the following command:

> regedit.exe /S reg-entries.reg

14.3.4. Uninstalling With the uninstall-x64.bat Script


The script combines the steps of the Uninstalling with msiexec and Remove the Traces of NXLog procedures,
and also prompts for the removal of the installation directory.

To complete the procedure, the following files need to be present in the same directory:

• uninstall-x64.bat - The main script.

• reg-entries.reg - The list of Windows Registry entries to remove.

• The exact version of the MSI installer with which NXLog was installed.

The necessary files can be downloaded from the windows-uninstall directory of NXLog’s public contrib repository.

To start the automatic uninstall and trace removal procedure, use the following command:

> uninstall-x64.bat nxlog-5.0.5876_windows_x64.msi

The Readme.MD file in the public contrib repository explains the details of the script’s operation.

14.4. Configure With a Custom MSI


NXLog can be configured using a custom-built MSI package. The MSI will install the CA certificate and chosen
custom configuration files. The package can be deployed alongside the NXLog MSI. For more information, see
the MSI for NXLog Agent Setup add-on.

NOTE
Deployment via Group Policy already provides a way to deploy the configuration files. For this
reason, it may be preferable to configure NXLog via GPO instead of creating a custom
MSI as described in this section.

Chapter 15. Microsoft Nano Server
15.1. Installing
Follow these steps to deploy NXLog on a Nano Server system.

NOTE
Microsoft Nano Server does not support the installation of MSI files. In its place, Microsoft
introduced the APPX format. The sandboxing and isolation imposed by the APPX format was
found to be an unnecessary complication when deploying NXLog; therefore, users are provided
with a ZIP file that allows for manual installation instead.

1. Download the NXLog ZIP archive from the NXLog website.


a. Log in to your account, then click My account at the top of the page.
b. Under the Downloads › NXLog Enterprise Edition files tab, download nxlog-5.0.5876_nano.zip.

2. Transfer the NXLog ZIP file to the Microsoft Nano Server. One way to do so is to use WinRM and the Copy-
Item cmdlet. Uncompress the ZIP file at C:\Program Files\nxlog using the Expand-Archive cmdlet as
shown below.

PS C:\tmp> Expand-Archive -Path nxlog-5.0.5876_nano.zip -DestinationPath 'C:\Program Files\nxlog'
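
For the transfer in step 2, one possible approach is a PowerShell remoting session with Copy-Item and its -ToSession parameter (available in PowerShell 5.0 and later). This is only a sketch: the host name NANO1 and the destination path are placeholders for your environment.

```
PS C:\tmp> $session = New-PSSession -ComputerName NANO1 -Credential (Get-Credential)
PS C:\tmp> Copy-Item -Path .\nxlog-5.0.5876_nano.zip -Destination C:\tmp -ToSession $session
```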

3. To register NXLog as a service, navigate to the installation directory and execute the following.

PS C:\Program Files\nxlog> .\nxlog.exe -i

4. Configure NXLog by editing the C:\Program Files\nxlog\nxlog.conf file. General information about
configuring NXLog can be found in Configuration. For more details about configuring NXLog to collect logs on
Windows, see the Microsoft Windows summary.

NOTE
Because Microsoft Nano Server does not have a native console editor, the configuration file
must be edited on a different system and then transferred to the Nano Server. Alternatively,
a third-party editor could be installed.

5. Verify the configuration file syntax.

PS C:\Program Files\nxlog> .\nxlog.exe -v -c nxlog.conf


2018-09-12 19:15:55 INFO configuration OK

NXLog is now installed, registered, and configured. The NXLog service can be started by running Start-Service
nxlog.
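
Once started, the service state can be confirmed with the standard Get-Service cmdlet:

```
PS C:\Program Files\nxlog> Start-Service nxlog
PS C:\Program Files\nxlog> Get-Service nxlog
```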

15.2. Upgrading
To upgrade NXLog to the latest release, follow these steps.

1. Stop the NXLog service by issuing the command Stop-Service nxlog.

2. Back up any configuration files that have been altered, such as nxlog.conf, managed.conf, and any
certificates.
3. Either delete the nxlog directory and follow the installation procedure again or use the -Force parameter
when extracting the NXLog ZIP file. There is no need to register the service again.

PS C:\tmp> Expand-Archive -Force -Path nxlog-5.0.5876_nano.zip -DestinationPath 'C:\Program Files\nxlog'

4. Restore any configuration files and certificates.
5. Start the NXLog service by running Start-Service nxlog.

15.3. Uninstalling
To uninstall NXLog, follow this procedure.

1. Stop the NXLog service by issuing the command Stop-Service nxlog.

2. Unregister the NXLog service by navigating to the NXLog directory and running .\nxlog.exe -u.

3. Delete the NXLog directory.

15.4. Custom Installation Options


This section deals with installation options outside the typical scenario.

NOTE
The following installation options require altering the Windows Registry. Incorrect modifications
could potentially render the system unusable. Always double-check the commands and ensure
it will be possible to revert to a known working state before altering the registry.

15.4.1. Installing in a Custom Directory


NXLog can be installed in a non-default location on Nano Server.

1. Follow the same installation procedure outlined above, but choose a different DestinationPath when
expanding the ZIP file. Also register the NXLog service as shown above.
2. At this point the registry entry for the NXLog service needs to be altered. View the current setting:

PS C:\> Get-ItemProperty -Path "HKLM:\System\CurrentControlSet\Services\nxlog"

Type : 16
Start : 2
ErrorControl : 0
ImagePath : "c:\Program Files\nxlog\nxlog.exe" -c "c:\Program Files\nxlog\nxlog.conf"
DisplayName : nxlog
DependOnService : {eventlog}
ObjectName : LocalSystem
PSPath :
Microsoft.PowerShell.Core\Registry::HKEY_LOCAL_MACHINE\System\CurrentControlSet\Services\nxlog
PSParentPath :
Microsoft.PowerShell.Core\Registry::HKEY_LOCAL_MACHINE\System\CurrentControlSet\Services
PSChildName : nxlog
PSDrive : HKLM
PSProvider : Microsoft.PowerShell.Core\Registry

3. The value of the ImagePath parameter needs to be modified in order to update the location of both the
NXLog executable and the configuration file. For example, if NXLog is installed in C:\nxlog, run the following
command to update the registry key.

PS C:\> Set-ItemProperty -Path "HKLM:\System\CurrentControlSet\Services\nxlog" -Name "ImagePath" -Value '"C:\nxlog\nxlog.exe" -c "C:\nxlog\nxlog.conf"'

4. The configuration file (nxlog.conf) also needs to be edited to reflect this change to a non-default installation
directory. Make sure define ROOT points to the correct location.
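
For example, if NXLog was installed in C:\nxlog, the beginning of nxlog.conf might look like the following sketch. The directives shown mirror the common defaults of the stock Windows configuration; only ROOT changes, and the rest of your configuration should be kept as-is.

```
define ROOT C:\nxlog
Moduledir %ROOT%\modules
CacheDir  %ROOT%\data
LogFile   %ROOT%\data\nxlog.log
```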

15.4.2. Service Startup Type
The service Startup type of newer versions of NXLog defaults to Automatic (Delayed Start) instead of
Automatic. This is controlled by the DelayedAutostart parameter. To revert to the old behavior, run the
following command.

PS C:\> Set-ItemProperty -Path "HKLM:\System\CurrentControlSet\Services\nxlog" -Name "DelayedAutostart" -Value 0

Chapter 16. Apple macOS
16.1. Installing
To install NXLog under macOS, follow the steps below. You will need administrator privileges to complete the
installation process.

1. Download the appropriate NXLog install package from the NXLog website.
a. Log in to your account, then click My account at the top of the page.
b. Under the Downloads › NXLog Enterprise Edition files tab, choose the correct package for your
system.

Table 45. Available macOS Packages

Platform Package
macOS 10.14 and earlier (pre-Catalina) nxlog-5.0.5876_macos-precatalina.pkg

macOS 10.15 and later nxlog-5.0.5876_macos.pkg

2. Optional: To change the NXLog user and group for the installation, create a /tmp/.nxlog file with the
following command. During installation a new user and a new group will be created using the values
specified in this command. They will be used for User and Group directives in nxlog.conf, and for the
ownership of some directories under /opt/nxlog. Specifying an already existing user or group is not
supported. The created user and group will be deleted on NXLog removal.

$ echo 'nxlog2:nxlog2' > /tmp/.nxlog

3. Install the NXLog package. You can do the installation interactively or with the command line installer.

◦ To install interactively, double-click the NXLog package.

As of version 4.5 the installer should be signed with our developer certificate. If you see the following
message with an earlier version, go to System Preferences › Security & Privacy and click [ Open
Anyway ], then follow the instructions shown by the installer.

"nxlog-5.0.5876_macos.pkg" can’t be opened because it is from an unidentified developer.

◦ To install the package using the command line installer, run the following command.

$ sudo installer -pkg nxlog-5.0.5876_macos.pkg -target /


Password:
installer: Package name is nxlog-5.0.5876-macos-x86
installer: Upgrading at base path /
installer: The upgrade was successful.

Upon installation, all NXLog files are placed under /opt/nxlog. The launchd(8) script is installed in
/Library/LaunchDaemons/com.nxlog.plist and has the KeepAlive flag set to true (launchd will
automatically restart NXLog). NXLog log files are managed by launchd and can be found in /var/log/.

4. Configure NXLog by editing /opt/nxlog/etc/nxlog.conf. General information about configuring NXLog


can be found in Configuration. For more details about configuring NXLog to collect logs on macOS, see the
Apple macOS summary.
5. Verify the configuration file syntax.

$ sudo /opt/nxlog/bin/nxlog -v
2017-03-17 08:05:06 INFO configuration OK

6. To apply your changes, stop NXLog with the following command. The launchd manager will restart the
daemon and the new configuration will be loaded.

$ sudo launchctl stop com.nxlog

7. To permanently stop NXLog, the service must be unloaded.

$ sudo launchctl unload /Library/LaunchDaemons/com.nxlog.plist

16.2. Upgrading
To upgrade NXLog, follow the installation instructions.

The installation script will not modify the existing configuration files. After the installation has completed, NXLog
will restart automatically.

NOTE
The same user and group will be used for the upgrade as were used for the original
installation (see installation step 2 above). Changing to a different user and/or group during
upgrade is not supported.

16.3. Uninstalling
To properly uninstall NXLog, follow these steps.

1. Start the uninstaller script as user root.

WARNING
This will remove custom configuration files, certificates, and any other files in the listed
directories. Save these files to another location first if you do not wish to discard them.

$ sudo bash /opt/nxlog/bin/uninstaller -y

NOTE Use the -n switch (instead of -y) if you would like to preserve user data.

2. Delete user data if you are sure it will not be needed anymore.

$ sudo rm -rf /opt/nxlog

To manually uninstall NXLog, follow the steps below.

1. Unload the daemon.

$ sudo launchctl unload /Library/LaunchDaemons/com.nxlog.plist

2. Delete the nxlog user and group that were created during installation. If a non-default user/group were used
during installation (see installation step 2 above), remove those instead.

$ sudo dscl . -delete "/Groups/nxlog"


$ sudo dscl . -delete "/Users/nxlog"

3. Remove NXLog files.

WARNING
This will remove custom configuration files, certificates, and any other files in the listed
directories. Save these files to another location first if you do not wish to discard them.

$ sudo rm -rf /opt/nxlog /Library/LaunchDaemons/com.nxlog.plist \


  /var/log/nxlog.std* && \
  sudo pkgutil --forget com.nxlog.agent

Chapter 17. Docker
17.1. Installing
1. Download the appropriate NXLog install archive from the NXLog website.
a. Log in to your account, then click My account at the top of the page.
b. Under the Downloads › NXLog Enterprise Edition files tab, choose the
nxlog-5.0.5876_docker.tar.gz archive (which is based on CentOS 7).

2. Use SFTP or a similar secure method to transfer the archive to the target server.
3. Log in to the target server and extract the contents of the archive.

$ tar -xzf nxlog-5.0.5876_docker.tar.gz

Table 46. Files in the Docker Archive

Package Description
Dockerfile The main NXLog Docker definition file

README.md Readme for building NXLog Docker image

nxlog-5.0.5876_rhel7_x86_64.tar.bz2 The NXLog RHEL7 package

4. Configure NXLog. Custom configuration files can be placed in the build directory of the NXLog Docker
version, before the build. Every file ending with .conf will be copied into the Docker image and placed in the
/opt/nxlog/etc directory.

NOTE
If there is already a configuration file inside the /opt/nxlog/etc directory, it will be
overwritten with the custom one.
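
As a sketch, a minimal custom configuration file placed in the build directory might look like the following. The module names (im_tcp, om_file) are standard NXLog modules, but the listen address, port, and output path are arbitrary examples; verify the directive names against the reference manual for your NXLog version.

```
# example.conf — copied into /opt/nxlog/etc at image build time
<Input in_tcp>
    Module      im_tcp
    ListenAddr  0.0.0.0:1514
</Input>

<Output out_file>
    Module      om_file
    File        '/opt/nxlog/var/log/nxlog/output.log'
</Output>

<Route r>
    Path        in_tcp => out_file
</Route>
```

With this input, the container would typically be run with -p 1514:1514 so the TCP listener is reachable from the host.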

5. Build the NXLog Docker image.


◦ The standalone version of NXLog Docker image can be built with this command.

$ docker build -t nxlog .

◦ It is also possible to specify the IP address of an NXLog Manager instance at build time. In this case,
NXLog will connect automatically at startup. Before build, the CA certificate file, exported from NXLog
Manager in PEM format and named agent-ca.pem, must be placed in the Docker build directory.

$ docker build -t nxlog --build-arg NXLOG_MANAGER=<NXLOG-MANAGER-IP> .

6. Run the container using the docker command.

$ docker run -p <HostPort>:<ContainerPort> -d nxlog

7. Check that the NXLog container is running with the docker command.

$ docker ps | grep nxlog


a3b4d6240e9d nxlog "/opt/nxlog/bin/nx..." 7 seconds ago Up 6 seconds 0.0.0.0:1514->1514/tcp
cranky_perlman
[...]

17.2. Upgrading
The upgrade process consists of building a new NXLog Docker image and running a new container instance
with the newly built image.

1. Follow steps 1-5 above to build a new Docker image.


2. Get the container ID of the running NXLog instance and stop the running container.

$ docker ps | grep nxlog


$ docker stop <containerID>

3. Run the new container using the docker command.

$ docker run -p <HostPort>:<ContainerPort> -d nxlog

4. Check that the new NXLog container is running.

$ docker ps | grep nxlog


a3b4d6240e9d nxlog "/opt/nxlog/bin/nx..." 7 seconds ago Up 6 seconds 0.0.0.0:1514->1514/tcp
cranky_perlman
[...]

5. Any old containers and images that are no longer needed can be removed with docker rm -v
<containerID> and docker rmi <imageID>, respectively. See Uninstalling below for more information.

17.3. Uninstalling
The uninstallation process for the NXLog Docker version simply consists of removing the running container and the image.

1. Get the container ID of the running NXLog instance and stop the running container.

$ docker ps | grep nxlog


$ docker stop <containerID>

2. Remove the stopped container.

$ docker rm -v <containerID>

3. Any other remaining containers that are not running can be listed with docker ps -a, and removed.

$ docker ps -a | grep nxlog


$ docker rm -v <containerID>

4. Finally, list and remove NXLog Docker images.

$ docker images
$ docker rmi <imageID>

Chapter 18. IBM AIX
18.1. Installing
1. Download the appropriate NXLog installer package from the NXLog website.
a. Log in to your account, then click My account at the top of the page.
b. Under the Downloads › NXLog Enterprise Edition files tab, choose the nxlog-5.0.5876_aix_ppc.rpm
package.
2. Use SFTP or a similar secure method to transfer the archive to the target server.
3. Install the required NXLog package.
a. Optional: To change the NXLog user and group for the installation, set the NXLOG_USER and
NXLOG_GROUP environment variables. During installation a new user and a new group will be created
based on these environment variables. They will be used for User and Group directives in nxlog.conf,
and for the ownership of some directories under /opt/nxlog. Specifying an already existing user or
group is not supported. The created user and group will be deleted on NXLog removal.

# export NXLOG_USER=nxlog2
# export NXLOG_GROUP=nxlog2

b. Use rpm to install the package.

# rpm -ivh nxlog-5.0.5876_aix_ppc.rpm

4. Configure NXLog by editing /opt/nxlog/etc/nxlog.conf. General information about configuring NXLog


can be found in Configuration. For more details about configuring NXLog to collect logs on AIX, see the IBM
AIX summary.
5. Verify the configuration file syntax.

# /opt/nxlog/bin/nxlog -v
2017-03-17 08:05:06 INFO configuration OK

6. Start the service using the init script in /opt/nxlog/etc:

# ./init start

18.2. Upgrading
To update an NXLog installation to the latest release, use rpm as in the installation instructions above.

# rpm -Uvh nxlog-5.0.5876_aix_ppc.rpm

NOTE
The rpm package manager creates a backup of an existing nxlog.conf file as
nxlog.conf.rpmsave under the /opt/nxlog/etc/ directory.

NOTE
The same user and group will be used for the upgrade as was used for the original installation
(see installation step 3a above). Changing to a different user and group during upgrade is not
supported.

18.3. Uninstalling
To uninstall NXLog use rpm with the -e option.

# rpm -e nxlog

NOTE
This procedure may not remove all files that were created while configuring NXLog. Likewise,
any files created as a result of NXLog’s logging operations will not be removed. To find these
files, examine the configuration files that were used with NXLog and check the installation
directory (/opt/nxlog).

Chapter 19. Oracle Solaris
19.1. Installing
1. Download the appropriate NXLog install archive from the NXLog website.
a. Log in to your account, then click My account at the top of the page.
b. Under the Downloads › My downloads tab, choose the correct archive for your system.

Table 47. Available Solaris Files

Platform Archive
Solaris 10/11 x86 archive nxlog-5.0.5876_solaris_x86.pkg.gz

Solaris 10/11 SPARC archive nxlog-5.0.5876_solaris_sparc.pkg.gz

2. Use SFTP or a similar secure method to transfer the archive to the target server.
3. Log in to the target server and extract the contents of the archive.

$ gunzip nxlog-5.0.5876_solaris_sparc.pkg.gz

4. Optional: To change the NXLog user and group for the installation, create a
/var/sadm/install/admin/nxlog-user_group file with the following command. During installation a new
user and a new group will be created based on the names specified. They will be used for User and
Group directives in nxlog.conf, and for the ownership of some directories under /opt/nxlog. Specifying an
already existing user or group is not supported. The created user and group will be deleted on NXLog
removal.

$ echo 'nxlog2:nxlog2' > /var/sadm/install/admin/nxlog-user_group

5. Install the NXLog package.


◦ For interactive installation, issue the following command and answer y (yes) to the questions.

$ sudo pkgadd -d nxlog-5.0.5876.pkg NXnxlog

◦ For a quiet install, use an administration file. Place the file (nxlog-adm in this example) in the
/var/sadm/install/admin/ directory.

$ sudo pkgadd -n -a nxlog-adm -d nxlog-5.0.5876.pkg NXnxlog

nxlog-adm
mail=
instance=overwrite
partial=nocheck
runlevel=nocheck
idepend=nocheck
rdepend=nocheck
space=quit
setuid=nocheck
conflict=nocheck
install
action=nocheck
basedir=/opt/nxlog
networktimeout=60
networkretries=3
authentication=quit
keystore=/var/sadm/security
proxy=

6. Configure NXLog by editing /opt/nxlog/etc/nxlog.conf. General information about configuring NXLog


can be found in Configuration. For more details about configuring NXLog to collect logs on Solaris, see the
Oracle Solaris summary.
7. Verify the configuration file syntax.

$ sudo /opt/nxlog/bin/nxlog -v
2017-03-17 08:05:06 INFO configuration OK

8. Check that the NXLog service is running with the svcs command.

$ svcs nxlog
 online 12:40:37 svc:system/nxlog:default

9. Manage the NXLog service with svcadm (restart the service to load the edited configuration file).

$ sudo svcadm restart nxlog


$ sudo svcadm enable nxlog
$ sudo svcadm disable nxlog

NOTE
To replace a trial installation of NXLog Enterprise Edition with a licensed copy of the same
version, follow the same installation instructions (use instance=overwrite as shown).

19.2. Upgrading
19.2.1. Updating to a Minor Release
To update an NXLog installation to the latest minor release, remove the old version and then install the new
version.

1. Before removing the old version, run the backup script from /opt/nxlog/bin/backup. The backup script will
create a backup directory in /opt (the directory will be named according to this format:
/opt/nxlog-backup-YYYYMMDD_hhmmss).

$ sudo bash /opt/nxlog/bin/backup

2. To uninstall NXLog, use pkgrm as shown in the uninstallation instructions below.

$ sudo pkgrm NXnxlog

3. To install the new NXLog release, use pkgadd as in the installation instructions above.

$ sudo pkgadd -d nxlog-5.0.5876.pkg NXnxlog

4. After reinstalling NXLog, use the restore script from the latest backup directory to restore data to the new
NXLog installation.

$ sudo bash /opt/nxlog-backup-20180101_000001/restore

5. Optional: To discard the backup files, remove the backup directory.

$ sudo rm -rf /opt/nxlog-backup-20180101_000001

19.2.2. Upgrading 4.x to 5.x


To upgrade an NXLog 4.x installation to a 5.x release, remove the old version, install the new version, and
perform additional manual configuration steps.

1. Perform steps 1-3 from Updating to a Minor Release. Do not use restore (step 4).

2. Manually migrate the necessary parts of the backup content to the new installation.

Starting with NXLog version 5.0, the configuration file log4ensics.conf was renamed to managed.conf and
moved to a different location. This file contains NXLog Manager related configuration.

NOTE nxlog.conf shipped with v5.0 has NXLog Manager integration disabled by default.

Table 48. Configuration to migrate

v 4.x                                                  v 5.0
/opt/nxlog-backup-date_time/lib/nxlog/log4ensics.conf  /opt/nxlog/etc/nxlog.d/managed.conf
/opt/nxlog-backup-date_time/nxlog/cert/*               /opt/nxlog/var/lib/nxlog/cert/

3. Optional: To discard the backup files, remove the backup directory.

$ sudo rm -rf /opt/nxlog-backup-20180101_000001

19.3. Uninstalling
To uninstall NXLog, use pkgrm. To remove the package files from the client’s file system, include the -A option.

$ sudo pkgrm NXnxlog

NOTE
This procedure may not remove all files that were created while configuring NXLog. Likewise,
any files created as a result of NXLog’s logging operations will not be removed. To find these
files, examine the configuration files that were used with NXLog and check the installation
directory (/opt/nxlog).

Chapter 20. Hardening NXLog
20.1. Running Under a Non-Root User on Linux
NXLog can be configured to improve security by running as a non-root user. The User and Group global
directives specify the user and group for the NXLog process to run as. On Linux installations, NXLog is configured
by default to run as the nxlog user and nxlog group as shown below.

Running as nxlog:nxlog
1 User nxlog
2 Group nxlog

Some operations require privileges that are normally not available to the nxlog user. In this case, the simplest
solution is to configure NXLog to retain full root privileges by removing the User and Group directives from the
configuration. This is not recommended, however; it is more secure to grant only the required privileges and to
avoid running NXLog as root. See the following sections for more information.

20.1.1. Reading From /var/log


By default, the nxlog user will not have access to files in /var/log. If your Linux distribution uses a group other
than root for the log files, you can use that group with the Group directive. Otherwise, reconfigure your system
logger (Rsyslog for example) to create files with the necessary ownership. See Reading Rsyslog Log Files for more
information.

20.1.2. UDP Spoofing and Binding to Ports Below 1024


NXLog requires special privileges if configured to perform UDP source address spoofing (with om_udpspoof) or
to bind to a port below 1024 (for example to accept incoming Syslog messages on port 514). Consider the
following solutions.

Use built-in capability support


NXLog will automatically set the corresponding Linux capability before dropping root privileges.

Set the capability manually


For binding to ports below 1024, use the CAP_NET_BIND_SERVICE capability. For the UDP source address
spoofing, use the CAP_NET_RAW capability.

Example 7. Setting Linux Capabilities

This command sets the CAP_NET_BIND_SERVICE capability for the NXLog executable.

# setcap cap_net_bind_service+ep /opt/nxlog/bin/nxlog

This command sets both the CAP_NET_BIND_SERVICE and the CAP_NET_RAW capabilities.

# setcap cap_net_bind_service,cap_net_raw=+ep /opt/nxlog/bin/nxlog

Verify with this command, or by adding the -v (verify) flag to the setcap command.

# getcap /opt/nxlog/bin/nxlog

20.1.3. Reading the Kernel Log


NXLog requires special privileges to read from the Linux kernel log with the im_kernel module. Consider the
following solutions.

Use built-in capability support
NXLog will automatically set the Linux CAP_SYS_ADMIN capability before dropping root privileges.

Set the capability manually


Use the CAP_SYS_ADMIN capability or the CAP_SYSLOG capability (since Linux 2.6.37). See Setting Linux
Capabilities.

20.2. Configuring SELinux


To further harden NXLog, SELinux can optionally be used. SELinux improves security by providing mandatory
access controls on Linux. This section provides an overview for creating an SELinux policy for NXLog. The
resulting policy will provide the permissions necessary for the NXLog deployment to operate as configured, with
SELinux enabled on the host.

The process is divided into two parts. First, a base policy is created. Then the policy is deployed and tailored to
the specific requirements of the current NXLog configuration.

20.2.1. Base Policy


The base policy file can be generated with the SELinux Policy Generation Tool (which requires a graphical
environment) or with the SELinux CLI development utilities.

In either case, the following policy files are generated:

nxlog.te
Base policy information; this file defines all the types and rules for a particular domain.

nxlog.fc
File system information; this file defines the security contexts that are applied to files when the policy is
installed.

nxlog.if
Interface information; this file defines the default file context for the system.

nxlog.sh
A helper shell script for compiling and deploying the policy module and fixing the labeling on the system; for
use only on the target system.

nxlog_selinux.spec
A specification file that can be used to generate an RPM package from the policy, useful for deploying the
policy on another system later. This spec file is generated on RPM-based systems only.

20.2.1.1. Base Policy Using Policy Generation Tool (GUI)


1. Install the SELinux Policy Generation Tool package.

On Red Hat based systems run the following command:

$ sudo yum install rpm-build policycoreutils-gui

On Debian based systems run the following command:

$ sudo apt-get install policycoreutils-gui

2. Start the SELinux Policy Generation Tool from the system launcher.
3. In the first screen, select Standard Init Daemon for the policy type, then click [ Forward ].

4. On the second screen, enter the following details for the application and user role, then click [ Forward ].

Name
A custom name for the role (for example, nxlog)

Executable
The path to the NXLog executable (for example, /opt/nxlog/bin/nxlog)

Init script
The path of the NXLog system init script (for example, /etc/rc.d/init.d/nxlog)

5. On the third screen, enter the TCP and UDP ports used by the NXLog deployment, then click [ Forward ]. If the
ports are unknown or not yet determined, then leave these fields blank; they can be customized later.

6. On the fourth screen, select the appropriate application traits for NXLog, then click [ Forward ]. The default
configuration requires only the Interacts with the terminal trait. For collecting Syslog messages or creating
files in /tmp, include the appropriate traits.

7. On the fifth screen, specify all the arbitrary files and directories that the NXLog installation should have
access to, then click [ Forward ]. The default configuration requires only the NXLog system directory,
/opt/nxlog. Include the paths of any custom log files that NXLog needs to access.

8. Additional SELinux configuration values can be set on the sixth screen. None of these are required for NXLog.
Click [ Forward ] to continue.
9. The policy files are generated on the final screen. Click [ Save ] to write the policy to disk.

20.2.1.2. Base Policy Using sepolicy (CLI)


1. Install the SELinux Policy Core Policy Devel Utilities package.

On Red Hat based systems run the following command:

$ sudo yum install rpm-build policycoreutils-devel

On Debian based systems run the following command:

$ sudo apt-get install policycoreutils-dev selinux-policy-default

2. The base policy can be generated with the following command.

$ sepolicy generate -n nxlog --init /opt/nxlog/bin/nxlog -w /opt/nxlog

NOTE
Additional managed directories can be added to the policy by passing the full directory paths, separated by spaces, to the -w parameter (for example, -w /opt/nxlog /var/log).

3. The policy files are generated when the command exits successfully; the policy is written to the current
working directory.

20.2.2. Deploying and Customizing the Policy


In this section, the base policy generated in the previous section will be applied and then customized with
appropriate rules for NXLog operation as configured. To accomplish this, SELinux will be set to permissive mode
and then the audit2allow tool will be used to generate additional SELinux rules based on the resulting audit logs.

WARNING
When set to permissive mode, SELinux generates alerts rather than actively blocking actions as it does in enforcing mode. Because this reduces system security, it is recommended that this be done in a test environment.

1. Make sure that NXLog is correctly configured with all required functionality.

2. Stop the NXLog service.
3. Transfer the files containing your SELinux base policy to the target system. All the files should be in the same
directory.
4. Apply the SELinux base policy by executing the policy script. This script will compile the policy module, set the
appropriate security flags on the directories specified, and install the policy.

$ sudo ./nxlog.sh

NOTE
You may see the error message libsemanage.add_user: user system_u not in password file. This is caused by a bug in the selinux-policy RPM or selinux-policy-default DEB package and does not affect the policy at all. It has been fixed in later releases.

You may see the error message InvalidRBACRuleType: a is not a valid RBAC rule type. This comes from a bug in the policycoreutils package. It only affects man page generation, which is not performed in this case. This has been fixed in later releases.

5. Verify that the new policy is installed.

$ sudo semodule -l | grep nxlog

6. Set SELinux to permissive mode. All events which would have been prevented by SELinux will now be
permitted and logged to /var/log/audit/audit.log (including events not related to NXLog).

$ sudo setenforce 0

7. Start and then stop the NXLog service. Any actions taken by NXLog that are not permitted by the policy will
result in events logged by the Audit system. Run audit2allow -a -l -w to view all policy violations (with
descriptions) since the last policy reload.

$ sudo systemctl start nxlog


$ sudo systemctl stop nxlog

Example 8. Audit Logs

If NXLog has been configured to listen on TCP port 1514, but the appropriate rules are not specified in
the current SELinux policy, then various audit events will be generated when the NXLog process
initializes and binds to that port. These events can be viewed from the Audit log file directly, with
ausearch, or with audit2allow (as shown below).

$ sudo audit2allow -a -l -w
type=AVC msg=audit(1524239322.612:473): avc: denied { listen } for pid=5697 comm="nxlog"
lport=1514 scontext=system_u:system_r:nxlog_t:s0 tcontext=system_u:system_r:nxlog_t:s0
tclass=tcp_socket
  Was caused by:
  Missing type enforcement (TE) allow rule.

  You can use audit2allow to generate a loadable module to allow this access.

Additional log messages will be generated for any other file or network action not permitted by the
SELinux policy. These actions would all be denied by SELinux when set to enforcing mode.

8. Use the helper script's --update option to add rules to the policy based on logged policy violations with the
nxlog context. Review the suggested changes and press y to update the policy. If no changes are
required, the script exits with a status of zero.

$ sudo ./nxlog.sh --update

Example 9. Updating the Policy

The script will offer to add any required rules. The following output corresponds to the example in the
previous step.

$ sudo ./nxlog.sh --update


Found avc's to update policy with

require {
  type nxlog_rw_t;
  type nxlog_t;
  class capability dac_override;
  class tcp_socket { bind create listen setopt };
  class file execute;
  class capability2 block_suspend;
}

#============= nxlog_t ==============


allow nxlog_t nxlog_rw_t:file execute;
allow nxlog_t self:capability dac_override;
allow nxlog_t self:capability2 block_suspend;
allow nxlog_t self:tcp_socket { bind create listen setopt };
corenet_tcp_bind_generic_node(nxlog_t)
corenet_tcp_bind_unreserved_ports(nxlog_t)
Do you want these changes added to policy [y/n]?

9. Set the SELinux policy to enforcing mode. This can be set permanently in /etc/selinux/config.

$ sudo setenforce 1
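To make the change persistent across reboots, the mode is set in /etc/selinux/config (fragment; only the relevant line is shown):

```
SELINUX=enforcing
```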

10. Reboot the system.

20.3. Running Under a Custom Account on Windows


On Windows, the NXLog installer sets up the NXLog service to run under the local system account. This
procedure describes how to configure a system service for NXLog that runs under a dedicated svc-nxlog user
account. This approach can improve security by limiting the privileges that NXLog requires to run.

NOTE
In enterprise environments managed by Group Policy, the dedicated user account and its permissions must be managed by the domain administrator.

1. Create a new user account. Open the Computer Management console (compmgmt.msc), expand Local
Users and Groups and right-click on Users. Select New User… from the context menu.

2. Enter the svc-nxlog user name, description, and password; enable the Password never expires check box;
and click [Create].

3. Open the Services console (services.msc), right-click the nxlog service, and select Properties.

4. Under the Log On tab, select the This Account radio button, click [Browse…], select the svc-nxlog user
account, and enter the password. Then click [OK]. Windows will warn you that the service must be restarted.

5. Open the Local Security Settings console (secpol.msc), expand Local Policies, then select User Rights
Assignment in the left pane.

6. Right-click the Log on as a service policy and click Properties.

7. Click [Add User or Group…] and select the new user. The new user should appear in the list. Click [OK].

8. Also add the new user to the Manage auditing and security log policy.
9. In Windows Explorer, browse to the NXLog installation directory (by default, C:\Program Files
(x86)\nxlog on 64-bit systems), right-click, and select Properties. Under the Security tab, select the new
user from the Group or user names list. Check Allow for the following permissions, and then click [OK].

◦ Modify
◦ Read & Execute
◦ List Folder Contents
◦ Read
◦ Write

10. In the Services console (services.msc), right-click the nxlog service and select Restart.

11. Check the NXLog log files for start-up errors. Successful startup should look like this:

nxlog.log
2016-11-16 16:53:10 INFO nxlog-5.0.5876 started↵
2016-11-16 16:53:10 INFO connecting to 192.168.40.43↵
2016-11-16 16:53:12 INFO successfully connected to 192.168.40.43:1514↵
2016-11-16 16:53:12 INFO successfully connected to agent manager at 192.168.40.43:4041 in SSL
mode↵

NOTE
On some Windows systems, this procedure may result in the following access denied error when attempting to access the Windows EventLog:

WARNING ignoring source as it cannot be subscribed to (error code: 5)

In this case, wevtutil can be used to set ACLs on the Windows EventLog. For more details, see the Giving Non Administrators permission to read Event Logs Windows 2003 and Windows 2008 TechNet article.

Chapter 21. Relocating NXLog
While not officially supported, it is possible to relocate NXLog to a directory other than the one where it was
originally installed. The procedure below assumes that NXLog was installed normally, using the system’s package
manager. It is also possible to manually extract the files from the package and perform a manual installation in a
custom directory; while that is not covered here, the basic principles are the same. This procedure has been
tested on GNU/Linux systems and should work on any system that supports run-time search paths.

WARNING
Both relocation and manual installation can result in a non-functional NXLog agent. Furthermore, subsequent updates and removal using the system’s package manager may not work correctly. Follow this procedure at your own risk. This is not recommended for inexperienced users.

Move the NXLog directory structure to the new location. Though not required, it is best to keep the original
directory structure. Then proceed to the following sections.

NOTE In the examples that follow, NXLog is being relocated from /opt/nxlog to /opt/nxlog_new.

21.1. System V Init File


For systems that manage services with System V, edit the NXLog init file. This file can normally be found at
/etc/init.d/nxlog. Modify the init file so that the $BASE variable reflects the new directory. Update the
$pidfile, $nxlog, and $conf variables to reference $BASE. Finally, reassign the $nxlog variable to include the
configuration file; this must be done after any test of the binary executable. The init file should look similar to
the following.

/etc/init.d/nxlog
BASE=/opt/nxlog_new

pidfile=$BASE/var/run/nxlog/nxlog.pid
nxlog=$BASE/bin/nxlog
conf=$BASE/etc/nxlog.conf

test -f $nxlog || exit 0

nxlog="$nxlog -c $conf"

On systems that use a hybrid of System V and systemd, reload the init files by executing the following command.

# systemctl daemon-reload

21.2. Systemd Unit File


For systems using systemd, the /lib/systemd/system/nxlog.service unit file must be edited and the paths
updated.

nxlog.service
[Service]
Type=simple
User=root
Group=root
PIDFile=/opt/nxlog_new/var/run/nxlog/nxlog.pid
ExecStartPre=/opt/nxlog_new/bin/nxlog -v -c /opt/nxlog_new/etc/nxlog.conf
ExecStart=/opt/nxlog_new/bin/nxlog -f -c /opt/nxlog_new/etc/nxlog.conf
ExecStop=/opt/nxlog_new/bin/nxlog -s -c /opt/nxlog_new/etc/nxlog.conf
ExecReload=/opt/nxlog_new/bin/nxlog -r -c /opt/nxlog_new/etc/nxlog.conf
KillMode=process

Reload the modified unit files by executing the following command.

# systemctl daemon-reload

21.3. NXLog Configuration File


The configuration file of NXLog itself must be modified to reflect the directory relocation, as well as any changes
in the directory structure. For most cases, running the following command will update the configuration file.

# sed -i s,/opt/nxlog,/opt/nxlog_new,g /opt/nxlog_new/etc/nxlog.conf
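The substitution can be previewed on a scratch file before editing the installed configuration (a self-contained sketch; the two directive lines are illustrative):

```shell
# Dry run of the path substitution on a temporary file.
tmp=$(mktemp)
printf 'define BASE /opt/nxlog\nSpoolDir /opt/nxlog/var/spool/nxlog\n' > "$tmp"
sed -i 's,/opt/nxlog,/opt/nxlog_new,g' "$tmp"   # same expression as above
result=$(cat "$tmp")
rm -f "$tmp"
printf '%s\n' "$result"
```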

Alternatively, the file can be manually edited as shown below.

nxlog.conf
define BASE /opt/nxlog_new
define CERTDIR %BASE%/var/lib/nxlog/cert
define CONFDIR %BASE%/etc/nxlog.d
define LOGDIR %BASE%/var/log/nxlog
define LOGFILE "%LOGDIR%/nxlog.log"

SpoolDir %BASE%/var/spool/nxlog

# default values:
PidFile %BASE%/var/run/nxlog/nxlog.pid
CacheDir %BASE%/var/spool/nxlog
ModuleDir %BASE%/lib/nxlog/modules

NOTE
Depending on the architecture and whether system-supplied libraries are used, NXLog may store the modules under a different directory, such as %BASE%/libexec/nxlog/modules.

21.4. Modify rpath


Depending on the NXLog package used, the run-time search path of the binaries must be changed. This is
relevant for the generic versions of NXLog, which bundle their own copies of the libraries alongside the binaries.
To list the shared libraries used by NXLog, run the ldd command with the full path to the nxlog binary:

# ldd /opt/nxlog_new/bin/nxlog

The output should look similar to this:

  linux-vdso.so.1 => (0x00007ffc15d36000)
  libpcre.so.1 => /opt/nxlog/lib/libpcre.so.1 (0x00007ff7f311e000)
  libdl.so.2 => /lib64/libdl.so.2 (0x00007ff7f2f14000)
  libcap.so.2 => /lib64/libcap.so.2 (0x00007ff7f2d0f000)
  libapr-1.so.0 => /opt/nxlog/lib/libapr-1.so.0 (0x00007ff7f2ad9000)
  librt.so.1 => /lib64/librt.so.1 (0x00007ff7f28d0000)
  libcrypt.so.1 => /lib64/libcrypt.so.1 (0x00007ff7f2699000)
  libpthread.so.0 => /lib64/libpthread.so.0 (0x00007ff7f247d000)
  libc.so.6 => /lib64/libc.so.6 (0x00007ff7f20bb000)
  /lib64/ld-linux-x86-64.so.2 (0x00007ff7f336d000)
  libattr.so.1 => /lib64/libattr.so.1 (0x00007ff7f1eb6000)
  libfreebl3.so => /lib64/libfreebl3.so (0x00007ff7f1cb3000)

Notice that libpcre and libapr are pointing to the included libraries in /opt/nxlog/lib/. To change the run-
time search path of the binaries, a tool such as chrpath or patchelf can be used.

NOTE
Depending on the distribution, chrpath may have a limitation on the path length for the -r <path> | --replace <path> option: "The new path must be shorter or the same length as the current path." For example:

# chrpath -r /opt/nxlog_new/lib /opt/nxlog_new/bin/nxlog

returns this error:

  /opt/nxlog_new/bin/nxlog: RUNPATH=/opt/nxlog/lib
  new rpath '/opt/nxlog_new/lib' too large; maximum length 14

If your system has this chrpath limitation, skip to Modifying rpath with patchelf.

21.4.1. Modifying rpath with chrpath


If the new rpath is no longer than the current one, issue the following command (this assumes the nxlog binary
is in the current directory). Otherwise, the final chrpath argument needs to include the appropriate relative or
absolute path to nxlog (as in the example above).

# chrpath -r /opt/nxlog_new/lib nxlog

Upon success, a message similar to this will be output:

nxlog: RPATH=/opt/nxlog/lib:/home/builder/workspace/nxlog3-rpm-generic-amd64/rpmbuild/BUILD/nxlog-
deps/opt/nxlog/lib
nxlog: new RPATH: /opt/nxlog_new/lib

NXLog modules are also linked against statically included libraries. Therefore, if the run-time search path of the
binaries required a change, then the rpath of the modules needs to be updated as well. To change the run-time
search path of all the modules (or binaries) in a directory, use a command like this.

# chrpath -r /opt/nxlog_new/lib *

NXLog is now successfully relocated to a new directory.

21.4.2. Modifying rpath with patchelf


If chrpath is not an option for modifying rpath, using patchelf as follows will achieve the same goal:

# patchelf --set-rpath /opt/nxlog_new/lib /opt/nxlog_new/bin/nxlog

On success the command prompt returns with no message, or if this is the first time patchelf has been run
after installation, the following warning will be shown:

warning: working around a Linux kernel bug by creating a hole of 1748992 bytes in ‘nxlog’

To confirm the modification of rpath, run ldd again on the binary. The new path should be displayed in the output:

# ldd /opt/nxlog_new/bin/nxlog
  linux-vdso.so.1 => (0x00007ffc15d36000)
  libpcre.so.1 => /opt/nxlog_new/lib/libpcre.so.1 (0x00007ff7f311e000)
  libdl.so.2 => /lib64/libdl.so.2 (0x00007ff7f2f14000)
  libcap.so.2 => /lib64/libcap.so.2 (0x00007ff7f2d0f000)
  libapr-1.so.0 => /opt/nxlog_new/lib/libapr-1.so.0 (0x00007ff7f2ad9000)
  librt.so.1 => /lib64/librt.so.1 (0x00007ff7f28d0000)
  libcrypt.so.1 => /lib64/libcrypt.so.1 (0x00007ff7f2699000)
  libpthread.so.0 => /lib64/libpthread.so.0 (0x00007ff7f247d000)
  libc.so.6 => /lib64/libc.so.6 (0x00007ff7f20bb000)
  /lib64/ld-linux-x86-64.so.2 (0x00007ff7f336d000)
  libattr.so.1 => /lib64/libattr.so.1 (0x00007ff7f1eb6000)
  libfreebl3.so => /lib64/libfreebl3.so (0x00007ff7f1cb3000)

NXLog modules are also linked against statically included libraries. Therefore, if the run-time search path of the
binaries required a change, then the rpath of the modules needs to be updated as well. Unlike chrpath, which
accepts a wildcard (*) for all modules (or binaries) in a given directory, patchelf can only be run on a single file.
If NXLog must be relocated regularly, or on more than one installation, a short shell script can automate
changing the rpath on multiple files.
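Such a script might look like the following sketch. It performs a dry run over a throwaway directory created for the demonstration, printing the patchelf command for each module file; on a real installation, drop the echo and point MODDIR at the actual modules directory (all paths here are illustrative).

```shell
#!/bin/sh
# Dry-run sketch: print the patchelf invocation for each module file.
# NEWRPATH and MODDIR are illustrative; adjust them for a real relocation.
NEWRPATH=/opt/nxlog_new/lib
MODDIR=$(mktemp -d)                      # stand-in for the modules directory
touch "$MODDIR/im_file.so" "$MODDIR/om_tcp.so"

patched=0
for so in "$MODDIR"/*.so; do
    [ -e "$so" ] || continue             # skip if the glob matched nothing
    echo patchelf --set-rpath "$NEWRPATH" "$so"
    patched=$((patched + 1))
done
rm -rf "$MODDIR"
echo "$patched modules would be updated"
```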

Chapter 22. Monitoring and Recovery
Considerable resources continue to be invested in maintaining the quality and reliability of NXLog. However, due
to the complexity of modern software, producing bug-free software is practically impossible. This section
describes potential ways to automatically recover from an NXLog crash. Note that there are other monitoring
solutions besides those presented here which may also be of interest.

22.1. Monitoring on Unix Platforms


Monit can both monitor and recover NXLog after a crash. It supports macOS, Solaris, and several Linux
distributions. Monit can be installed directly from your distribution’s package manager—see Installation on the
Monit wiki for more information about the various installation options. Precompiled binaries can also be found
here.

While Monit can monitor and react to several conditions, the configuration presented here instructs Monit to
restart NXLog after a crash. To do so, include the following in the Monit configuration. It may be necessary to edit
the paths to match your installation. Then restart Monit.

/etc/monit/monitrc
check process nxlog with pidfile /opt/nxlog/var/run/nxlog/nxlog.pid
  start program = "/etc/init.d/nxlog start"
  stop program = "/etc/init.d/nxlog stop"

NOTE
On recent Linux distributions employing systemd, the start and stop directives should use systemd calls instead (for example, /bin/systemctl start nxlog).
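On such a system, the Monit check might instead be written like this (a sketch; the pidfile path follows the example above):

```
check process nxlog with pidfile /opt/nxlog/var/run/nxlog/nxlog.pid
  start program = "/bin/systemctl start nxlog"
  stop program = "/bin/systemctl stop nxlog"
```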

To simulate an NXLog crash, terminate the nxlog process by issuing the following command (where <PID>
represents the current nxlog process ID).

# kill -9 <PID>

22.2. Monitoring on Windows


The Service Control Manager (SCM) in Microsoft Windows includes recovery options for each service. In the list of
available services, find nxlog and right-click it. Select Properties and then choose the Recovery tab. As shown
below, there are a number of different recovery options which can be configured.

Figure 2. Recovery settings in the SCM

NOTE
Newer versions of NXLog enable automatic recovery during installation. For older versions, automatic recovery can be enabled by manually editing the values under the Recovery tab of the SCM.

To simulate an NXLog crash, execute the following in PowerShell (where <PID> represents the process ID of
NXLog).

PS> Taskkill /PID <PID> /F

Configuration

Chapter 23. Configuration Overview
NXLog uses Apache-style configuration files. The configuration file is loaded from its default location, or it can be
explicitly specified with the -c command line argument.

The configuration file is comprised of blocks and directives. Blocks are similar to XML tags containing multiple
directives. Directive names are not case-sensitive but arguments sometimes are. A directive and its argument
must be specified on the same line. Values spanning multiple lines must have the newline escaped with a
backslash (\). A typical case for this is the Exec directive. Blank lines and lines starting with the hash mark (#) are
ignored. Configuration directives referring to a file or a path can be quoted with double quotes (") or single
quotes ('). This applies to both global and module directives.
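For example, an Exec directive whose value spans two lines can be written with the newline escaped (an illustrative snippet; the module and file path are examples):

```
<Input in>
    Module  im_file
    File    "/var/log/app.log"
    Exec    if $raw_event =~ /debug/ \
                drop();
</Input>
```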

The configuration file can be logically divided into three parts: global parameters, module instances, and route
instances.

Example 10. Configuration File Structure

This configuration exemplifies the logical structure. The global parameters section contains two directives.
The modules section contains both an input and output instance. The route section contains a single route
with a path directing a single input to a single output.

nxlog.conf
# Global section
User nxlog
Group nxlog

# Modules section
<Input in>
    Module  im_null
</Input>

<Output out>
    Module  om_null
</Output>

# Route section
<Route r>
    Path    in => out
</Route>

23.1. Global Directives


The global section contains directives that control the overall behavior of NXLog.

The LogFile directive sets a destination file for NXLog internal logs. If this directive is unset, the log file is disabled
and internal NXLog logs are not written to file (unless configured via the im_internal module). See also Rotating
the Internal Log File.

With the User and Group directives set, NXLog will drop root privileges after starting and run under the specified
user and group. These directives are ignored if running on Windows.

After starting, NXLog will change its working directory to the directory specified by the SpoolDir directive. Non-
absolute paths in the configuration will be relative to this directory.
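Putting these directives together, a minimal global section might look like this (paths are illustrative):

```
User     nxlog
Group    nxlog
LogFile  /opt/nxlog/var/log/nxlog/nxlog.log
SpoolDir /opt/nxlog/var/spool/nxlog
```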

See the Reference Manual for a complete list of available global directives.

23.2. Modules
NXLog will only load modules which are specified in the configuration file and used in an active route. A module
instance is specified according to its corresponding module type (Extension, Input, Processor, or Output).
Each module instance must have a unique name and a Module directive. The following is a skeleton
configuration block for an input module.

nxlog.conf
<Input instancename>
    Module  im_module
    ...
</Input>

For more details about module instance names, see Configuration in the Reference Manual.

23.3. Routes
Routes define the flow and processing order of the log messages. Each route instance must have a unique name
and a Path.

Example 11. Route Block

This Route instance, named example, takes logs from Input module instances named in1 and in2,
processes the logs with the proc Processor module instance, and sends the resulting logs to both Output
module instances out1 and out2. These named module instances must be defined elsewhere in the
configuration file.

nxlog.conf
<Route example>
    Path    in1, in2 => proc => out1, out2
</Route>

For more details about route instance names, see Configuration in the Reference Manual.

If no Route block is specified in the configuration, NXLog will automatically generate a route, with all the Input
and Output instances specified in a single path.

Example 12. An Automatic Route Block

NXLog can use a configuration with no Route block, such as the following.

nxlog.conf
<Input in1>
    Module  im_null
</Input>

<Input in2>
    Module  im_null
</Input>

<Output out1>
    Module  om_null
</Output>

<Output out2>
    Module  om_null
</Output>

The following Route block will be generated automatically.

nxlog.conf (Generated Route)


<Route r>
    Path    in1, in2 => out1, out2
</Route>

23.4. Constant and Macro Definitions


A define is useful if there are many instances in the configuration where the same value must be used. Typically,
defines are used for directories and hostnames. In such cases the value can be configured with a single
definition. In addition to constants, other strings like code snippets or parser rules can be defined in this way.

An NXLog define works in a similar way to the C language, where the pre-processor substitutes the value in
places where the macro is used. The NXLog configuration parser replaces all occurrences of the defined name
with its value, and then after this substitution the configuration check occurs.

Example 13. Using Defines

This example shows the use of two defines: BASEDIR and IGNORE_DEBUG. The first is a simple constant, and
its value is used in two File directives. The second is an NXLog language statement, it is used in an Exec
directive.

nxlog.conf
define BASEDIR /var/log
define IGNORE_DEBUG if $raw_event =~ /debug/ drop();

<Input messages>
    Module  im_file
    File    '%BASEDIR%/messages'
</Input>

<Input proftpd>
    Module  im_file
    File    '%BASEDIR%/proftpd.log'
    Exec    %IGNORE_DEBUG%
</Input>

The define directive can be used for statements as shown above, but multiple statements should be specified
using a code block, with curly braces ({}), to result in the expected behavior.

Example 14. Incorrect Use of a Define

The following example shows an incorrect use of the define directive. After substitution, the drop()
procedure will always be executed; only the warning message will be logged conditionally.

nxlog.conf (incorrect)
define ACTION log_warning("dropping message"); drop();

<Input in>
    Module  im_file
    File    '/var/log/messages'
    Exec    if $raw_event =~ /dropme/ %ACTION%
</Input>

To avoid this problem, the action should be defined using a code block.

define ACTION { log_warning("dropping message"); drop(); }

23.5. Environment Variables


The envvar directive works like define except that the value is retrieved from the environment. This makes it
possible to reference the environment variable as if it was a define. This directive is only available in NXLog
Enterprise Edition.

Example 15. Using Environment Variables

This is similar to the previous example using a define, but here the value is fetched from the environment.

nxlog.conf
envvar BASEDIR

<Input in>
    Module  im_file
    File    '%BASEDIR%/messages'
</Input>
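For this to work, BASEDIR must be present in NXLog's environment at startup; a variable exported in an interactive shell only reaches processes started from that shell, so a service normally gets it from its init script or unit file. A minimal sketch (the value is an example):

```shell
# Export the variable so that child processes (such as nxlog) inherit it.
export BASEDIR=/var/log
echo "BASEDIR=$BASEDIR"
```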

23.6. File Inclusion


NXLog provides several features for including configuration directives and blocks from separate files or from
executables.

NOTE
The SpoolDir directive does not take effect until after the configuration has been parsed, so relative paths specified with these directives are relative to the working directory where NXLog was started from. Generally, it is recommended to use absolute paths. If desired, define directives can be used to simulate relative paths (see Using Defines to Include a Configuration File).

With the include directive it is possible to specify a file or set of files to be included in the current NXLog
configuration.

Example 16. Including a Configuration File

This example includes the contents of the /opt/nxlog/etc/syslog.conf file in the current configuration.

nxlog.conf
include /opt/nxlog/etc/syslog.conf

Example 17. Using Defines to Include a Configuration File

In this example, two define directives are used to include an eventlog.conf configuration file on Windows
by defining parts of the path to this file.

nxlog.conf
define ROOT C:\Program Files (x86)\nxlog
define CONFDIR %ROOT%\conf
include %CONFDIR%\eventlog.conf

The include directive also supports filenames containing the wildcard character (*). For example, multiple .conf
files could be saved in the nxlog.d directory—or some other custom configuration directory—and then
automatically included in the NXLog configuration in ascending alphabetical order along with the nxlog.conf
file.

Each included file might contain a small set of configuration information focused exclusively on a single log
source. This essentially establishes a modular design for maintaining larger configurations. One benefit of this
modular configuration approach is the ability to add or remove .conf files to or from such a directory for
enabling/disabling specific log sources without ever needing to modify the main nxlog.conf configuration.

This solution could be used to specify OS-specific configuration snippets (like windows2003.conf) or application-
specific snippets (such as syslog.conf).

Including subdirectories inside the configuration directory is not supported, nor are wildcarded directories.

Example 18. Including a Configuration Directory on Linux

This example includes all .conf files located under the /opt/nxlog/etc/nxlog.d path.

nxlog.conf
include /opt/nxlog/etc/nxlog.d/*.conf

The files can also be included using the define directive.

nxlog.conf
define CONFDIR /opt/nxlog/etc/nxlog.d
include %CONFDIR%/*.conf

Example 19. Including a Configuration Directory on Windows

This example includes all .conf files from the nxlog.d folder on Windows.

nxlog.conf
include C:\Program Files\nxlog\conf\nxlog.d\*.conf

The files can also be included using the define directive.

nxlog.conf
define CONFDIR C:\Program Files\nxlog\conf\nxlog.d
include %CONFDIR%\*.conf

With the include_stdout directive, an external command can be used to provide configuration content. There are
many ways this could be used, including fetching, decrypting, and validating a signed configuration from a
remote host, or generating configuration content dynamically.

Example 20. Using an Executable to Generate Configuration

Here, a separate script is responsible for fetching the NXLog configuration.

nxlog.conf
include_stdout /opt/nxlog/etc/fetch_conf.sh

Chapter 24. NXLog Language
The NXLog core has a built-in interpreted language. This language can be used to make complex decisions or
build expressions in the NXLog configuration file. Code written in the NXLog language is similar to Perl, which is
commonly used by developers and administrators for log processing tasks. When NXLog starts and reads its
configuration file, directives containing NXLog language code are parsed and compiled into pseudo-code. If a
syntax error is found, NXLog will print the error. This pseudo-code is then evaluated at run-time, as with other
interpreted languages.

The features of the NXLog language are not limited to those in the NXLog core: modules can register functions
and procedures to supplement built-in functions and procedures (see the xm_syslog functions, for example).

NOTE
Due to the simplicity of the language there is no error handling available to the user, except for function return values. If an error occurs during the execution of the NXLog pseudo-code, the error is usually printed in the NXLog logs. If an error occurs during log message processing, it is also possible that the message will be dropped. If sophisticated error handling or more complex processing is required, additional message processing can be implemented in an external script or program via the xm_exec module, in a dedicated NXLog module, or in Perl via the xm_perl module.

The NXLog language is described in five sections.

Types
All fields and other expressions in the NXLog language are typed.

Expressions
An expression is evaluated to a value at run-time and the value is used in place of the expression. All
expressions have types. Expressions can be used as arguments for some module directives.

Statements
The evaluation of a statement will cause a change in the state of the NXLog engine, the state of a module
instance, or the current event. Statements often contain expressions. Statements are used as arguments for
the Exec module directive, where they are then executed for each event (unless scheduled).

Variables
Variables store data persistently in a module instance, across multiple event records.

Statistical Counters
NXLog provides statistical counters with various algorithms that can be used for real-time analysis.

Example 21. Statements vs. Configurations

While this Guide provides many configuration examples, in some cases only statement examples are given.
Statements must be used with the Exec directive (or Exec block). The following statement example shows
one way to use the parsedate() function.

1 if $raw_event =~ /^(\w{3} \d{2} \d{2}:\d{2}:\d{2})/
2     $EventTime = parsedate($1);

The following configuration example uses the above statement in an Exec block.

nxlog.conf
1 <Input in>
2 Module im_file
3 File '/var/log/app.log'
4 <Exec>
5 if $raw_event =~ /^(\w{3} \d{2} \d{2}:\d{2}:\d{2})/
6 $EventTime = parsedate($1);
7 </Exec>
8 </Input>

24.1. Types
The NXLog language is a typed language. Fields, literals, and other expressions evaluate to values with specific
types. This allows for stricter type-safety syntax checking when parsing the configuration. Note that fields and
some functions can return values with types that can only be determined at run-time.

NOTE The language provides only simple types. Complex types such as arrays and hashes (associative
arrays) are not supported. The language does support the undefined value similar to that in Perl. See
the xm_perl module if you require more complex types.

A log’s format must be parsed before its individual parts can be used for processing (see Fields). But even after
the message has been parsed into its parts, additional processing may still be required, for example, to prepare a
timestamp for comparison with another timestamp. This is a situation where typing is helpful: by converting all
timestamps to the datetime type they can be easily compared—and converted back to strings later if required—
using the functions and procedures provided. The same applies to other types.

Example 22. Typed Fields in a Syslog Event Record

The following illustrates the four steps NXLog performs with this configuration as it manually processes a
Syslog event record using only regular expressions on the core field $raw_event and the core function
parsedate().

nxlog.conf
 1 <Input in>
 2 # 1. New event record created
 3 Module im_udp
 4 Host 0.0.0.0
 5 Port 514
 6 <Exec>
 7 # 2. Timestamp parsed from Syslog header
 8 if $raw_event =~ /^(\w{3} \d{2} \d{2}:\d{2}:\d{2})/
 9 {
10 # 3. parsedate() function converts from string to datetime
11 $EventTime = parsedate($1);
12 # 4. Datetime fields compared
13 if ( $EventReceivedTime - $EventTime ) > 60000000
14 log_warning('Message delayed more than 1 minute');
15 }
16 </Exec>
17 </Input>

1. NXLog creates a new event record for the incoming log message. The new event record contains the
$raw_event string type field, with the contents of the entire Syslog string.

2. A regular expression is used to parse the timestamp from the event. The captured sub-string is a string
type, not a datetime type.
3. The parsedate() function converts the captured string to a datetime type.
4. Two datetime fields are compared to determine if the message was delayed during delivery. The
datetime type $EventReceivedTime field is added by NXLog to each event when it is received.

NOTE Normally the parse_syslog() procedure (provided by the xm_syslog extension module) would be used
to parse a Syslog event. It will create fields with the appropriate types during parsing, eliminating
the need to directly call the parsedate() function. See Collecting and Parsing Syslog.

For a full list of types, see the Reference Manual Types section. For NXLog language core functions that can be
used to work with types, see Functions. For functions and procedures that can work with types related to a
particular format, see the module corresponding to the required format.

24.2. Expressions
An expression is a language element that is dynamically evaluated to a value at run-time. The value is then used
in place of the expression. Each expression evaluates to a type, but not always to the same type.

The following language elements are expressions: literals, regular expressions, fields, operations, and functions.

Expressions can be bracketed by parentheses ( ) to help improve code readability.

Example 23. Using Parentheses (Round Brackets) Around Expressions

There are three statements below, one per line. Each statement contains multiple expressions, with
parentheses added in various ways.

1 if 1 + 1 == (1 + 1) log_info("2");
2 if (1 + 1) == (1 + 1) log_info("2");
3 if ((1 + 1) == (1 + 1)) log_info("2");

Expressions are often used in statements.

Example 24. Using an Expression in a Statement

This simple statement uses the log_info() procedure with an expression as its argument. In this case the
expression is a literal.

1 log_info('This message will be logged.');

Here is a function (also an expression) that is used in the same procedure. It generates an internal event
with the current time when each event is processed.

1 log_info(now());

Expressions can be used with module directives that support them.

Example 25. Expressions for Directives

The File directive of the om_file module supports expressions. This allows the output filename to be set
dynamically for each individual event.

nxlog.conf
1 <Output out>
2 Module om_file
3 File "/var/log/nxlog/out_" + strftime($EventTime, "%Y%m%d")
4 </Output>

See Using Dynamic Filenames for more information.

24.2.1. Literals
A literal is a simple expression that represents a fixed value. Common literals include booleans, integers, and
strings. The type of a literal is determined by the syntax used to declare it.

NOTE This section demonstrates the use of literals by using examples with assignment statements.

Boolean literals can be declared using the constants TRUE or FALSE. Both are case-insensitive.

Setting Boolean Literals
1 $Important = FALSE;
2 $Local = true;

Integer literals are declared with an unquoted integer. Negative integers, hexadecimal notation, and the
K, M, and G multiplier suffixes (Kilo, Mega, and Giga) are supported.

Setting Integer Literals
1 $Count = 42;
2 $NegativeCount = -42;
3 $BigCount = 42M;
4 $HexCount = 0x2A;

String literals are declared by quoting characters with single or double quotes. Escape sequences are available
when using double quotes.

Setting String Literals
1 $Server = 'Alpha';
2 $Message = 'This is a test message.';
3 $NumberAsString = '12';
4 $StringWithNewline = "This is line 1.\nThis is line 2.";

For a list of all available literals, see the Reference Manual Literals section.

24.2.2. Regular Expressions


NXLog supports regular expressions for matching, parsing, and modifying event records. In the context of the
NXLog language, a regular expression is an expression that is evaluated to a boolean value at run-time. Regular
expressions can be used to define complex search and replacement patterns for text matching and substitution.

NOTE Examples in this section use only simple patterns. See Extracting Data and other topic-specific
sections for more extensive examples.

Matching can be used with an if statement to conditionally execute a statement.

Example 26. Matching a Field With a Regular Expression

The event record will be discarded if the $raw_event field matches the regular expression.

1 if $raw_event =~ /TEST: / drop();

Regular expression matching can also be used for extensive parsing, by capturing sub-strings for field
assignment.

Example 27. Parsing Fields With a Regular Expression

If the $raw_event field contains the regular expression, the two fields will be set to the corresponding
captured sub-strings.

1 if $raw_event =~ /TEST(\d): (.+)/
2 {
3     $TestNumber = $1;
4     $TestName = $2;
5 }

Regular expression matching also supports named capturing groups. This can be useful when writing long
regular expressions. Each captured group is automatically added to the event record as a field with the same
name.

Example 28. Named Capturing Groups

This regular expression uses the named groups TestNumber and TestName to add corresponding
$TestNumber and $TestName fields to the event record.

1 if $raw_event =~ /TEST(?<TestNumber>\d): (?<TestName>.+)/
2 {
3     $Message = $TestNumber + ' ' + $TestName;
4 }

Regular expression substitution can be used to modify a string. In this case, the regular expression follows the
form s/pattern/replace/. The result of the expression will be assigned to the field to the left of the operator.

Example 29. Performing Substitution Using a Regular Expression

The first regular expression match will be removed from the $raw_event field.

1 $raw_event =~ s/TEST: //;

Global substitution is supported with the /g modifier. Without the /g modifier, only the first match in the string
will be replaced.

Example 30. Global Regular Expression Substitution

Every whitespace character in the $AlertType field will be replaced with an underscore (_).

1 $AlertType =~ s/\s/_/g;

A statement can be conditionally executed according to the success of a regular expression substitution.

Example 31. Regular Expression Substitution With Conditional Execution

If the substitution succeeds, an internal log message will also be generated.

1 if $Hostname =~ s/myhost/yourhost/ log_info('Updated hostname');

For more information, see the following sections in the Reference Manual: Regular Expressions, =~, and !~.

24.2.3. Fields
When NXLog receives a log message, it creates an event record for it. An event record is a set of fields (see Fields
for more information). A field is an expression which evaluates to a value with a specific type. Each field has a
name, and in the NXLog language it is represented with the dollar sign ($) prepended to the name of the field,
like Perl’s scalar variables.

Fields are only available in an evaluation context which is triggered by a log message. For example, using a value
of a field in the Exec directive of a Schedule block will result in a run-time error because the scheduled execution
is not triggered by a log message.
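
A sketch of this restriction follows (the module and field names are assumptions for illustration). The
commented-out statement in the Schedule block would fail at run time because $Hostname belongs to an event
record and no event record exists during scheduled execution; the active statement is safe because it
references no fields.

```
<Input in>
    Module  im_udp
    Host    0.0.0.0
    Port    514
    <Schedule>
        Every  1 hour
        # Run-time error: there is no current event record here
        #Exec  log_info('Last host seen: ' + $Hostname);
        # Safe: the statement uses no fields
        Exec   log_info('The in instance is alive');
    </Schedule>
</Input>
```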

Because it is through fields that the NXLog language accesses the contents of an event record, they are
frequently referenced. The following examples show some common ways that fields are used in NXLog
configurations.

Example 32. Assigning a Value to a Field

This statement uses assignment to set the $Department field on log messages.

1 $Department = 'customer-service';

Example 33. Testing a Field Value

If the $Hostname field does not match, the message will be discarded with the drop() procedure.

1 if $Hostname != 'webserver' drop();

Example 34. Using a Field in a Procedure

This statement will generate an internal event if $SeverityValue integer field is greater than 2 (NXLog
INFO severity). The generated event will include the contents of the $Message field.

1 if $SeverityValue > 2 log_warning("ALERT: " + $Message);

24.2.4. Operations
Like other programming languages, and Perl in particular, the NXLog language has unary operations, binary
operations, and the conditional ternary operation. These operations are expressions and evaluate to values.

Unary Operations
Unary operations work with a single operand and evaluate to a boolean value.

Example 35. Using a Unary Operation

This statement uses the defined operator to log a message only if the $Hostname field is defined in the
event record.

1 if defined $Hostname log_info('Event received');

Binary Operations
Binary operations work with two operands and evaluate to a value. The type of the evaluated value depends
on the types of the operands. Execution might result in a run-time error if the types of the operands are
unknown at compile time and, when executed, they evaluate to types which are incompatible with the binary
operation.

Example 36. Using Binary Operations

This statement uses the == operator to drop the event if the $Hostname field matches.

1 if $Hostname == 'testbox' drop();

Here, the + operator is used to concatenate two strings.

1 log_info('Event received from ' + $Hostname);

Ternary Operation
The conditional or ternary operation requires three operands. The first is an expression that evaluates to a
boolean. The second is an expression that is evaluated if the first expression is TRUE. The third is an
expression that is evaluated if the first expression is FALSE.

Example 37. Using the Ternary Operation

This statement sets the $Important field to TRUE if $SeverityValue is greater than 2, or FALSE
otherwise. The parentheses are optional and have been added here for clarity.

1 $Important = ( $SeverityValue > 2 ? TRUE : FALSE );

For a full list of supported operations, see the Reference Manual Operations section.

24.2.5. Functions
A function is an expression which always returns a value. A function cannot be used without using its return
value. Functions can be polymorphic: the same function can take different argument types.

Many NXLog language features are provided through functions. As with other types of expressions, and unlike
procedures, a function never modifies the state of the NXLog engine, the state of the module, or the current
event.

See the list of core functions. Modules can provide additional functions for use with the NXLog language.

Example 38. Function Calls

These statements use the now() function (returning the current time) and the hostname() function
(returning the hostname of the system running NXLog) to set fields.

1 $EventTime = now();
2 $Relay = hostname();

Here, any event with a $Message field over 4096 bytes causes an internal log to be generated.

1 if size($Message) > 4096 log_info('Large message received.');

24.3. Statements
The evaluation of a statement will usually result in a change in the state of the NXLog engine, the state of a
module, or the log message.

Statements are used with the Exec module directive. A statement is terminated by a semicolon (;).

Example 39. Using a Statement with Exec

With this input configuration, an internal NXLog log message will be generated for each message received.

nxlog.conf
1 <Input in>
2 Module im_udp
3 Host 0.0.0.0
4 Port 514
5 Exec log_info("Message received on UDP port 514");
6 </Input>

Multiple statements can be specified; these will be evaluated and executed in order. Statements can also be
given on multiple lines by using line continuation or by enclosing the statements in an Exec block.

Example 40. Using Multiple Statements with Exec

This configuration generates an internal log message and sets the $File field.

nxlog.conf
1 <Input in1>
2 Module im_file
3 File '/var/log/app.log'
4 Exec log_info("App message read from log"); $File = file_name();
5 </Input>

This is the same, but the backslash (\) is used to continue the Exec directive to the next line.

nxlog.conf
1 <Input in2>
2 Module im_file
3 File '/var/log/app.log'
4 Exec log_info("App message read from log"); \
5 $File = file_name();
6 </Input>

The following configuration is functionally equivalent to the previous configuration above. However, by
creating an Exec block, multiple statements can be specified without the need for a backslash (\) line
continuation at the end of each line.

nxlog.conf
1 <Input in3>
2 Module im_file
3 File '/var/log/app.log'
4 <Exec>
5 log_info("App message read from log");
6 $File = file_name();
7 </Exec>
8 </Input>

Statements can also be executed based on a schedule by using the Exec directive of a Schedule block. The Exec
directive is slightly different in this example. Because its execution depends solely on a schedule instead of any
incoming log events, there is no event record that can be associated with it. The $File field assignment in the
example above would be impossible.

Example 41. Using a Statement in a Schedule

This input instance will generate an hourly internal log event.

nxlog.conf
1 <Input syslog_udp>
2 Module im_udp
3 Host 0.0.0.0
4 Port 514
5 <Schedule>
6 When @hourly
7 Exec log_info("The syslog_udp input module instance is active.");
8 </Schedule>
9 </Input>

NOTE Similar functionality is implemented by the im_mark module.

24.3.1. Assignment
Each event record is made up of fields, and assignment is the primary way that a value is written to a field in the
NXLog language. The assignment operation is declared with an equal sign (=). This operation loads the value
from the expression evaluated on the right into an event record field on the left.

Example 42. Using Field Assignment

This input instance uses assignment operations to add two fields to each event record.

nxlog.conf
1 <Input in>
2 Module im_file
3 File '/var/log/messages'
4 <Exec>
5 $Department = 'processing';
6 $Tier = 1;
7 </Exec>
8 </Input>

24.3.2. Block
Statements can be declared inside a block by surrounding them with curly braces ({}). A statement block in the
configuration is parsed as if it were a single statement. Blocks are typically used with conditional statements.

Example 43. Using Statement Blocks

This statement uses a block to execute two statements if the $Message field matches.

nxlog.conf
 1 <Input in>
 2 Module im_file
 3 File '/var/log/messages'
 4 <Exec>
 5 if $Message =~ /^br0:/
 6 {
 7 log_warning('br0 interface state changed');
 8 $Tag = 'network';
 9 }
10 </Exec>
11 </Input>

24.3.3. Procedures
While functions are expressions that evaluate to values, procedures are statements that perform actions. Both
functions and procedures can take arguments. Unlike functions, procedures never return values. Instead, a
procedure modifies its argument, the state of the NXLog engine, the state of a module, or the current event.
Procedures can be polymorphic: the same procedure can take different argument types.

Many NXLog language features are provided through procedures. See the list of available procedures. Modules
can provide additional procedures for use with the NXLog language.

Example 44. Using a Procedure

This example uses the parse_syslog() procedure, provided by the xm_syslog module, to parse each Syslog-
formatted event record received via UDP.

nxlog.conf
1 <Input in>
2 Module im_udp
3 Host 0.0.0.0
4 Port 514
5 Exec parse_syslog();
6 </Input>

24.3.4. If-Else
The if or conditional statement allows a statement to be executed based on the boolean value of an expression.
When the boolean is TRUE, the statement is executed. An optional else keyword can be followed by another
statement to be executed if the boolean is FALSE.

Example 45. Using If Statements

This example uses an if statement and the drop() procedure to discard any event that matches the regular
expression.

nxlog.conf
1 <Input in1>
2 Module im_file
3 File '/var/log/messages'
4 Exec if $raw_event =~ /junk/ drop();
5 </Input>

Here, any event not matching the regular expression will be dropped.

nxlog.conf
1 <Input in2>
2 Module im_file
3 File '/var/log/messages'
4 Exec if not ($raw_event =~ /important/) drop();
5 </Input>

Finally, this statement shows more extensive use of the if statement, with an else clause and blocks defined
by curly braces ({}).

nxlog.conf
 1 <Input in3>
 2 Module im_file
 3 File '/var/log/messages'
 4 <Exec>
 5 if $raw_event =~ /alert/
 6 {
 7 log_warning('Detected alert message');
 8 }
 9 else
10 {
11 log_info('Discarding non-alert message');
12 drop();
13 }
14 </Exec>
15 </Input>

24.4. Variables
While NXLog provides fields for storing data during the processing of an event, they are only available for the
duration of that event record and can not be used to store a value across multiple events. For this purpose,
module variables can be used. A variable stores a value for the module instance where it is set. It can only be
accessed from the same module where it was created: a variable with the same name is a different variable
when referenced from another module.

Each module variable can be created with an expiry value or an infinite lifetime. If an expiry is used, the variable
will be destroyed automatically when the lifetime expires. This can be used as a garbage collection method or to
reset variable values automatically.

A module variable is referenced by a string value and can store a value of any type. Module variables are
supported by all modules. See the create_var(), delete_var(), set_var(), and get_var() procedures.

Example 46. Using Module Variables

If the number of login failures exceeds 3 within 45 seconds, then an internal log message is generated.

nxlog.conf
 1 <Input in>
 2 Module im_file
 3 File '/var/log/messages'
 4 <Exec>
 5 if $Message =~ /login failure/
 6 {
 7 if not defined get_var('login_failures')
 8 { # create the variable if it doesn't exist
 9 create_var('login_failures', 45);
10 set_var('login_failures', 1);
11 }
12 else
13 { # increase the variable and check if it is over the limit
14 set_var('login_failures', get_var('login_failures') + 1);
15 if get_var('login_failures') >= 3
16 log_warning(">= 3 login failures within 45 seconds");
17 }
18 }
19 </Exec>
20 </Input>

NOTE The pm_evcorr module is recommended instead for this case. This algorithm does not reliably detect
failures because the lifetime of the variable is not affected by set_var(). For example, consider login
failures at 0, 44, 46, and 47 seconds. The lifetime of the variable will be set when the first failure
occurs, causing the variable to be cleared at 45 seconds. The variable is created with a new expiry at 46
seconds, but then only two failures are noticed. Also, this method can only work in real-time because the
timing is not based on values available in the log message (although the event time could be stored in
another variable).
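
As a sketch of the recommended approach, the same detection can be expressed with the pm_evcorr module's
Thresholded rule. Note that the processor instance must also be wired into a route between an input and an
output, and the TimeField value assumes an $EventTime field has already been parsed — both details are
assumptions here, not shown in the guide.

```
<Processor evcorr>
    Module     pm_evcorr
    # Assumes $EventTime has been set by earlier parsing (e.g. parse_syslog())
    TimeField  EventTime
    <Thresholded>
        Condition  $Message =~ /login failure/
        Interval   45
        Threshold  3
        Exec       log_warning(">= 3 login failures within 45 seconds");
    </Thresholded>
</Processor>
```

The rule executes its Exec statement only when the condition matches at least Threshold times within
Interval seconds, and because the timing can be taken from a field, it also works for offline log
processing.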

24.5. Statistical Counters


Like variables, statistical counters provide data storage for a module instance. Counters only support integers, but
a counter can use an algorithm to recalculate its value every time it is updated or read. With NXLog Enterprise
Edition v4.x and earlier, a statistical counter will only return a value if the time specified in the interval argument
has elapsed since it was created. Statistical counters can also be created with a lifetime. When a counter expires,
it is destroyed, like module variables.

A statistical counter can be created with the create_stat() procedure call. After it is created, it can be updated with
the add_stat() procedure call. The value of the counter can be read with the get_stat() function call. Note that the
value of the statistical counter is only recalculated during these calls, rather than happening automatically. This
can result in some slight distortion of the calculated value if the add and read operations are infrequent.

A time value can also be specified during creation, updating, and reading. This makes it possible for statistical
counters to be used with offline log processing.

Example 47. Using Statistical Counters

This input configuration uses a Schedule block and a statistical counter with the RATEMAX algorithm to
calculate the maximum rate of events over a 1 hour period. An internal log message is generated if the rate
exceeds 500 events/second at any point during the 1 hour period.

nxlog.conf
 1 <Input in>
 2 Module im_tcp
 3 Host 0.0.0.0
 4 Port 1514
 5 <Exec>
 6 parse_syslog();
 7 if defined get_stat('eps') add_stat('eps', 1, $EventReceivedTime);
 8 </Exec>
 9 <Schedule>
10 Every 1 hour
11 <Exec>
12 create_stat('eps', 'RATEMAX', 1, now(), 3600);
13 if get_stat('eps') > 500
14 log_info('Inbound TCP rate peaked at ' + get_stat('eps')
15 + ' events/second during the last hour');
16 </Exec>
17 </Schedule>
18 </Input>

Chapter 25. Reading and Receiving Logs
This chapter discusses log sources that you may need to use with NXLog, including:

• log data received over the network,
• events stored in databases,
• messages read from files, and
• data retrieved using executables.

25.1. Receiving over the Network


This section provides information and examples about receiving log messages from the network over various
protocols.

UDP
The im_udp module handles incoming messages over UDP.

Example 48. Using the im_udp Module

This input module instance shows the im_udp module configured with the default options: localhost
only and port 514.

nxlog.conf
1 <Input udp>
2 Module im_udp
3 Host localhost
4 Port 514
5 </Input>

NOTE The UDP protocol does not guarantee reliable message delivery. It is recommended to use the TCP or
SSL transport modules instead if message loss is a concern. Though NXLog was designed to minimize message
loss even in the case of UDP, adjusting the kernel buffers may reduce the likelihood of UDP message loss
on a system under heavy load. The Priority directive in the Route block can also help.
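
The Priority directive mentioned in the note is set on a Route block; routes with lower values are
processed with higher priority, so a route carrying UDP traffic can be favored over other routes. A
minimal sketch follows — the instance names are assumptions for illustration.

```
<Route udp_route>
    Path      udp => out
    # 1 is the highest priority; events on this route are processed first
    Priority  1
</Route>
```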

TCP
The im_tcp module handles incoming messages over TCP. For TLS/SSL, use the im_ssl module.

Example 49. Using the im_tcp Module

This input module instance accepts TCP connections from any host on port 1514.

nxlog.conf
1 <Input tcp>
2 Module im_tcp
3 Host 0.0.0.0
4 Port 1514
5 </Input>

SSL/TLS
The im_ssl module handles incoming messages over TCP with SSL/TLS security.

Example 50. Using the im_ssl Module

The following input module instance listens for SSL/TLS encrypted incoming logs on port 6514. The
certificate file paths are specified relative to a previously defined CERTDIR.

nxlog.conf
1 <Input in>
2 Module im_ssl
3 Host 0.0.0.0
4 Port 6514
5 CAFile %CERTDIR%/ca.pem
6 CertFile %CERTDIR%/client-cert.pem
7 CertKeyFile %CERTDIR%/client-key.pem
8 </Input>

Syslog
To receive Syslog over the network, use one of the network modules above, coupled with xm_syslog. Syslog
parsing is not required if you only need to forward or store the messages as they are. See also Accepting
Syslog via UDP, TCP, or TLS.

Example 51. Receiving Syslog over TCP with Octet-Framing

With this example configuration, NXLog listens for messages on TCP port 1514. The xm_syslog extension
module provides the Syslog_TLS InputType (for octet-framing) and the parse_syslog() procedure for
parsing Syslog messages.

nxlog.conf
 1 <Extension _syslog>
 2 Module xm_syslog
 3 </Extension>
 4
 5 <Input in>
 6 Module im_tcp
 7 Host 0.0.0.0
 8 Port 1514
 9 # "Syslog_TLS" is for octet framing and may be used with TLS/SSL
10 InputType Syslog_TLS
11 Exec parse_syslog();
12 </Input>

25.2. Reading from a Database


With the im_dbi and im_odbc modules it is possible to read logs directly from database servers. The im_dbi
module can be used on POSIX systems where libdbi is available. The im_odbc module, available in NXLog
Enterprise Edition, can be used with ODBC compatible databases on Windows, Linux, and Unix.

Example 52. Using the im_dbi Module

This example uses libdbi and the MySQL driver to read records from the logdb database.

nxlog.conf
 1 <Input in>
 2 Module im_dbi
 3 Driver mysql
 4 Option host 127.0.0.1
 5 Option username mysql
 6 Option password mysql
 7 Option dbname logdb
 8 SQL SELECT id, facility, severity, hostname, timestamp, application, \
 9 message FROM log
10 </Input>

Example 53. Using the im_odbc Module

Here, the mydb database is accessed via ODBC.

nxlog.conf
1 <Input in>
2 Module im_odbc
3 ConnectionString DSN=mssql;database=mydb;
4 SQL SELECT RecordNumber as id, DateOccured as EventTime, \
5 data as Message from logtable WHERE RecordNumber > ?
6 </Input>

25.3. Reading from Files and Sockets


Files
The im_file module can be used to read logs from files. See also Reading Syslog Log Files.

Example 54. Using the im_file Module

This example reads from the specified file without performing any additional processing.

nxlog.conf
1 <Input in>
2 Module im_file
3 File "/var/log/messages"
4 </Input>

Unix Domain Socket


Use the im_uds module to read from a Unix domain socket. See also Accepting Syslog via /dev/log.

Example 55. Using the im_uds Module

With this configuration, NXLog will read messages from the /dev/log socket. NXLog’s flow control
feature must be disabled in this case (see the FlowControl directive in the Reference Manual).

nxlog.conf
1 <Input in>
2 Module im_uds
3 UDS /dev/log
4 FlowControl FALSE
5 </Input>

25.4. Receiving from an Executable


The im_exec module can be used to read logs from external programs and scripts over a pipe.

Example 56. Using the im_exec Module

This example uses the tail command to read messages from a file.

NOTE The im_file module should be used to read log messages from files. This example only demonstrates
the use of the im_exec module.

nxlog.conf
1 <Input in>
2 Module im_exec
3 Command /usr/bin/tail
4 Arg -f
5 Arg /var/log/messages
6 </Input>

Chapter 26. Processing Logs
This chapter deals with various tasks that might be required after a log message is received by NXLog.

26.1. Parsing Various Formats


After an input module has received a log message and generated an event record for it, there may be additional
parsing required. This parsing can be implemented by a dedicated module, or in the NXLog language with
regular expression and other string manipulation functionality.

The following sections provide configuration examples for parsing log formats commonly used by applications.

26.1.1. Common & Combined Log Formats


The Common Log Format (or NCSA Common Log Format) and Combined Log Format are access log formats used
by web servers. These are the same, except that the Combined Log Format uses two additional fields.

Common Log Format Syntax


host ident authuser [date] "request" status size↵

Combined Log Format Syntax


host ident authuser [date] "request" status size "referer" "user-agent"↵

If a field is not available, a hyphen (-) is used as a placeholder.

Table 49. Fields

Field       Description
host        IP address of the client
ident       RFC 1413 identity of the client
authuser    Username of the user accessing the document (not applicable for public documents)
date        Timestamp of the request
request     Request line received from the client
status      HTTP status code returned to the client
size        Size of the object returned to the client (measured in bytes)
referer     URL from which the user was referred
user-agent  User agent string sent by the client

Example 57. Parsing the Common Log Format

This configuration uses a regular expression to parse the fields in each record. The parsedate() function is
used to convert the timestamp string into a datetime type for later processing or conversion as required.

nxlog.conf
<Input access_log>
  Module im_file
  File "/var/log/apache2/access.log"
  <Exec>
  if $raw_event =~ /(?x)^(\S+)\ \S+\ (\S+)\ \[([^\]]+)\]\ \"(\S+)\ (.+)
  \ HTTP\/\d\.\d\"\ (\S+)\ (\S+)/
  {
  $Hostname = $1;
  if $2 != '-' $AccountName = $2;
  $EventTime = parsedate($3);
  $HTTPMethod = $4;
  $HTTPURL = $5;
  $HTTPResponseStatus = $6;
  if $7 != '-' $FileSize = $7;
  }
  </Exec>
</Input>

Example 58. Parsing the Combined Log Format

This example is like the previous one, except it parses the two additional fields unique to the Combined Log
Format. An om_file instance is also shown here which has been configured to discard all events not related
to the user john and write the remaining events to a file in JSON format.

nxlog.conf
<Extension _json>
  Module xm_json
</Extension>

<Input access_log>
  Module im_file
  File "/var/log/apache2/access.log"
  <Exec>
  if $raw_event =~ /(?x)^(\S+)\ \S+\ (\S+)\ \[([^\]]+)\]\ \"(\S+)\ (.+)
  \ HTTP\/\d\.\d\"\ (\S+)\ (\S+)\ \"([^\"]+)\"
  \ \"([^\"]+)\"/
  {
  $Hostname = $1;
  if $2 != '-' $AccountName = $2;
  $EventTime = parsedate($3);
  $HTTPMethod = $4;
  $HTTPURL = $5;
  $HTTPResponseStatus = $6;
  if $7 != '-' $FileSize = $7;
  if $8 != '-' $HTTPReferer = $8;
  if $9 != '-' $HTTPUserAgent = $9;
  }
  </Exec>
</Input>

<Output out>
  Module om_file
  File '/var/log/john_access.log'
  <Exec>
  if not (defined($AccountName) and ($AccountName == 'john')) drop();
  to_json();
  </Exec>
</Output>

For information about using the Common and Combined Log Formats with the Apache HTTP Server, see Apache
HTTP Server.

26.1.2. Parsing Syslog Events


The xm_syslog module provides the parse_syslog() procedure, which will parse a BSD or IETF Syslog formatted
raw event to create fields in the event record.

Example 59. Parsing a Syslog Event With parse_syslog()

This example shows a Syslog event as it is received via UDP and processed by the parse_syslog() procedure.

Syslog Message
<38>Nov 22 10:30:12 myhost sshd[8459]: Failed password for invalid user linda from 192.168.1.60
port 38176 ssh2↵

The following configuration loads the xm_syslog extension module and then uses the Exec directive to
execute the parse_syslog() procedure for each event.

nxlog.conf
<Extension _syslog>
  Module xm_syslog
</Extension>

<Input udp>
  Module im_udp
  Host 0.0.0.0
  Port 514
  Exec parse_syslog();
</Input>

<Output out>
  Module om_null
</Output>

This results in the following fields being added to the event record by parse_syslog().

Table 50. Syslog Fields Added by parse_syslog()

Field Value
$Message Failed password for invalid user linda from 192.168.1.60 port 38176 ssh2

$SyslogSeverityValue 6

$SyslogSeverity INFO

$SeverityValue 2

$Severity INFO

$SyslogFacilityValue 4

$SyslogFacility AUTH

$EventTime 2016-11-22 10:30:12

$Hostname myhost

$SourceName sshd

$ProcessID 8459

26.1.3. Field Delimited Formats (CSV)


Log data is commonly stored in files with field values delimited by commas, spaces, or semicolons. The xm_csv module can both generate and parse these formats. Multiple xm_csv
instances can be used to reorder, add, remove, or modify fields before outputting to a different CSV format.

Example 60. Complex CSV Format Conversion

This example reads from the input file and parses it with the parse_csv() procedure from the csv1 instance
where the field names, types, and order within the record are defined. The $date field is then set to the
current time and the $number field is set to 0 if it is not already defined. Finally, the to_csv() procedure from
the csv2 instance is used to generate output with the additional date field, a different delimiter, and a
different field order.

nxlog.conf
<Extension csv1>
  Module xm_csv
  Fields $id, $name, $number
  FieldTypes integer, string, integer
  Delimiter ,
</Extension>

<Extension csv2>
  Module xm_csv
  Fields $id, $number, $name, $date
  Delimiter ;
</Extension>

<Input filein>
  Module im_file
  File "/tmp/input"
  <Exec>
    csv1->parse_csv();
    $date = now();
    if not defined $number $number = 0;
    csv2->to_csv();
  </Exec>
</Input>

<Output fileout>
  Module om_file
  File "/tmp/output"
</Output>

Input Sample
1, "John K.", 42
2, "Joe F.", 43

Output Sample
1;42;"John K.";2011-01-15 23:45:20
2;43;"Joe F.";2011-01-15 23:45:20

26.1.4. JSON
The xm_json module provides procedures for generating and parsing log data in JSON format.

Example 61. Using the xm_json Module for Parsing JSON

This example reads JSON-formatted data from file with the im_file module. Then the parse_json() procedure
is used to parse the data, setting each JSON field to a field in the event record.

nxlog.conf
<Extension _json>
  Module xm_json
</Extension>

<Input in>
  Module im_file
  File "/var/log/app.json"
  Exec parse_json();
</Input>

Example 62. Using the xm_json Module for Generating JSON

Here, the to_json() procedure is used to write all the event record fields to $raw_event in JSON format. This
is then written to file using the om_file module.

nxlog.conf
<Extension _json>
  Module xm_json
</Extension>

<Output out>
  Module om_file
  File "/var/log/json.log"
  Exec to_json();
</Output>

26.1.5. W3C Extended Log File Format


The W3C Extended Log File Format is defined in a W3C specification draft. The dedicated xm_w3c parser module can be used to process W3C
formatted logs. See also the W3C section in the Microsoft IIS chapter.

Log Sample
#Version: 1.0↵
#Date: 2011-07-01 00:00:00↵
#Fields: date time cs-method cs-uri↵
2011-07-01 00:34:23 GET /foo/bar1.html↵
2011-07-01 12:21:16 GET /foo/bar2.html↵
2011-07-01 12:45:52 GET /foo/bar3.html↵
2011-07-01 12:57:34 GET /foo/bar4.html↵

Example 63. Parsing W3C Format With xm_w3c

This configuration reads the W3C format log file and parses it with the xm_w3c module. The fields in the
event record are converted to JSON and the logs are forwarded via TCP.

nxlog.conf
<Extension _json>
  Module xm_json
</Extension>

<Extension w3c_parser>
  Module xm_w3c
</Extension>

<Input w3c>
  Module im_file
  File '/var/log/httpd-log'
  InputType w3c_parser
</Input>

<Output tcp>
  Module om_tcp
  Host 192.168.12.1
  Port 1514
  Exec to_json();
</Output>

The W3C format can also be parsed with the xm_csv module if using NXLog Community Edition.

Example 64. Parsing W3C Format With xm_csv

The following configuration reads a W3C file and tokenizes it with the CSV parser. Header lines starting with
a leading hash mark (#) are ignored. The $EventTime field is set from the parsed date and time fields.

NOTE The fields in the xm_csv module instance below must be updated to correspond with the fields in the W3C file to be parsed.

nxlog.conf
<Extension w3c_parser>
  Module xm_csv
  Fields $date, $time, $HTTPMethod, $HTTPURL
  FieldTypes string, string, string, string
  Delimiter ' '
  EscapeChar '"'
  QuoteChar '"'
  EscapeControl FALSE
  UndefValue -
</Extension>

<Extension _json>
  Module xm_json
</Extension>

<Input w3c>
  Module im_file
  File '/var/log/httpd-log'
  <Exec>
    if $raw_event =~ /^#/ drop();
    else
    {
      w3c_parser->parse_csv();
      $EventTime = parsedate($date + " " + $time);
    }
  </Exec>
</Input>

26.1.6. XML
The xm_xml module can be used for generating and parsing structured data in XML format.

Example 65. Using the xm_xml Module for Parsing XML

This configuration uses the im_file module to read from file. Then the parse_xml() procedure parses the
XML into fields in the event record.

nxlog.conf
<Extension _xml>
  Module xm_xml
</Extension>

<Input in>
  Module im_file
  File "/var/log/app.xml"
  Exec parse_xml();
</Input>

Example 66. Using the xm_xml Module for Generating XML

Here, the fields in the event record are used by the to_xml() procedure to generate XML, which is then
written to file by the om_file module.

nxlog.conf
<Extension _xml>
  Module xm_xml
</Extension>

<Output out>
  Module om_file
  File "/var/log/logs.xml"
  Exec to_xml();
</Output>

26.2. Alerting
NXLog can be configured to generate alerts when specific conditions are met. Here are some ways alerting could
be implemented.

26.2.1. Sending Messages to an External Program


The om_exec module can pipe messages to an external program or script, which is started when the om_exec module starts. The external program must run continuously until the om_exec module is
stopped and the pipe is closed. This functionality can be used for alerting.

Example 67. Using om_exec with an External Alerter

In this example Output, all messages not matching the regular expression are dropped, and remaining
messages are piped to a custom alerter script.

nxlog.conf
<Output out>
  Module om_exec
  Command /usr/local/sbin/alerter
  Arg -
  Exec if not ($raw_event =~ /alertcondition/) drop();
</Output>

Without the Exec directive above, all messages received by the module would be passed to the alerter
script as defined by the Command directive. The optional Arg directive passes its value to the Command
script.

See also Sending to Executables.

26.2.2. Invoking a Program for Each Message


The xm_exec module provides two procedures, exec() and exec_async(), for spawning an external program or
script. The script is executed once for each call, and is expected to terminate when it has finished processing the
message.

Example 68. Using xm_exec with an External Alerter

In this example Input, each message matching the regular expression is piped to a new instance of
alerter, which is executed asynchronously (does not block additional processing by the calling module).

nxlog.conf
<Extension _exec>
  Module xm_exec
</Extension>

<Input in>
  Module im_tcp
  Host 0.0.0.0
  Port 1514
  <Exec>
    if $raw_event =~ /alertcondition/
      exec_async("/usr/local/sbin/alerter");
  </Exec>
</Input>

Example 69. Using xm_exec to Send an Email

In this example, an email is sent using exec_async() when the regular expression condition is met.

nxlog.conf
<Extension _exec>
  Module xm_exec
</Extension>

<Input in>
  Module im_tcp
  Host 0.0.0.0
  Port 1514
  <Exec>
    if $raw_event =~ /alertcondition/
    {
      exec_async("/bin/sh", "-c", 'echo "' + $Hostname + '\n\nRawEvent:\n' +
                 $raw_event + '"|/usr/bin/mail ' +
                 '-a "Content-Type: text/plain; charset=UTF-8" ' +
                 '-s "ALERT" user@domain.com');
    }
  </Exec>
</Input>

26.2.3. Generate an Internal NXLog Log Message


NXLog can be configured to generate an internal log event when a specific condition is met. Internal log events
can be generated with various severity levels using the log_error(), log_warning(), log_info(), and log_debug()
procedures. Internal log messages will be written to the file specified by the global LogFile directive (according to
the configured LogLevel) and will be generated by the im_internal module.

NOTE DEBUG level events are not generated by the im_internal module.

Example 70. Using log_warning() for Alerting

If a message matches the regular expression, an internal log event is generated with level WARNING.

nxlog.conf
<Input in>
  Module im_file
  File "/var/log/app.log"
  Exec if $raw_event =~ /alertcondition/ log_warning("ALERT");
</Input>

26.3. Using Buffers


The following sections describe the various types of buffering features provided by NXLog and give examples for
configuring buffering in specific scenarios.

26.3.1. Read and Write Buffers


Input and output module instances have read and write buffers, respectively. These buffers can be configured
for a particular module instance with the BufferSize directive.

Example 71. Read/Write Buffers in a Simple Route

This example shows the default read and write buffers used by NXLog for a simple route. Each buffer is
limited to 65,000 bytes.

nxlog.conf
<Input file>
  Module im_file
  File '/tmp/in.log'

  # Set read buffer size, in bytes (default)
  BufferSize 65000
</Input>

<Output tcp>
  Module om_tcp
  Host 192.168.1.1

  # Set write buffer size, in bytes (default)
  BufferSize 65000
</Output>

<Route r>
  Path file => tcp
</Route>

26.3.2. Log Queues
Every processor and output module instance has an input log queue for events that have not yet been processed
by that module instance. When the preceding module has processed an event, it is placed in this queue. Because
log queues are enabled by default for all processor and output module instances, they are the preferred way to
adjust buffering behavior.

The size of a module instance’s log queue can be configured with the LogqueueSize directive.

Example 72. A Log Queue in a Basic Route

This example shows the default log queue used by NXLog in a simple route. Up to 100 events will be placed
in the queue to be processed by the om_batchcompress instance.

nxlog.conf
<Input eventlog>
  Module im_msvistalog
</Input>

<Output batch>
  Module om_batchcompress
  Host 192.168.2.1

  # Set log queue size, in events (default)
  LogqueueSize 100
</Output>

<Route r>
  Path eventlog => batch
</Route>

By default, log queues are stored in memory. NXLog can be configured to persist log queues to disk with the
PersistLogqueue directive. NXLog will further sync all writes to a disk-based queue with SyncLogqueue. These
directives can be used to prevent data loss in case of interrupted processing—at the expense of reduced
performance—and can be used both globally or for a particular module. For more information, see Reliable
Message Delivery.

NOTE Any events remaining in the log queue will be written to disk when NXLog is stopped, regardless of the value of PersistLogqueue.

Example 73. A Persistent Log Queue

In this example, the om_elasticsearch instance is configured with a persistent and synced log queue. Each
time an event is added to the log queue, the event will be written to disk and synced before processing
continues.

nxlog.conf
<Input acct>
  Module im_acct
</Input>

<Output elasticsearch>
  Module om_elasticsearch
  URL http://192.168.2.2:9200/_bulk

  # Set log queue size, in events (default)
  LogqueueSize 100

  # Use persistent and synced log queue
  PersistLogqueue TRUE
  SyncLogqueue TRUE
</Output>

<Route r>
  Path acct => elasticsearch
</Route>

26.3.3. Flow Control


To effectively leverage buffering, it is important to understand NXLog’s flow control feature. Flow control has no
effect unless the following sequence of events occurs in a route:

1. a processor or output module instance is not able to process log data at the incoming rate,
2. that module instance’s log queue becomes full, and
3. the input or processor module instance responsible for feeding the log queue has flow control enabled.

In this case, flow control will cause the input or processor module instance to suspend processing until the
succeeding module instance is ready to accept more log data.

Example 74. Flow Control Enabled

This example shows NXLog’s default flow control behavior in a basic route. Events are collected from
the Windows Event Log with im_msvistalog and forwarded with om_tcp. The om_tcp instance will be blocked
if the destination is unreachable or the network cannot handle the events quickly enough.

nxlog.conf
<Input eventlog>
  Module im_msvistalog

  # Flow control enabled (default)
  FlowControl TRUE
</Input>

<Output tcp>
  Module om_tcp
  Host 192.168.1.1
</Output>

<Route r>
  Path eventlog => tcp
</Route>

The om_tcp instance is unable to connect to the destination host and its log queue is full. Because the
im_msvistalog instance has flow control enabled and the next module in the route is blocked, it has been
paused. No events will be read from the Event Log until the tcp instance becomes unblocked.

Flow control is enabled by default, and can be set globally or for a particular module instance with the
FlowControl directive. Generally, flow control provides automatic, zero-configuration handling of cases where
buffering would otherwise be required. However, there are some situations where flow control should be
disabled and buffering should be explicitly configured as required.

Example 75. Flow Control Disabled

In this example, Linux Audit messages are collected with im_linuxaudit and forwarded with om_http. Flow
control is disabled for im_linuxaudit to prevent processes from being blocked due to an Audit backlog. To
avoid loss of log data in this case, the LogqueueSize directive could be used as shown in Increasing the Log
Queue Size to Protect Against UDP Message Loss.

nxlog.conf
<Input audit>
  Module im_linuxaudit
  <Rules>
    -D
    -w /etc/passwd -p wa -k passwd
  </Rules>

  # Disable flow control to prevent Audit backlog
  FlowControl FALSE
</Input>

<Output http>
  Module om_http
  URL http://192.168.2.1:8080/
</Output>

<Route r>
  Path audit => http
</Route>

The om_http instance is unable to forward log data, and its log queue is full. Because it has flow control
disabled, the im_linuxaudit instance remains active and continues to process log data. However, all events
will be discarded until the om_http log queue is no longer full.

26.3.4. The pm_buffer Module


Log queues are enabled by default for processor and output module instances, and are the preferred way to
configure buffering behavior in NXLog. However, for cases where additional features are required, the pm_buffer
module can be used to add a buffer instance to a route in addition to the buffers described above.

Additional features provided by pm_buffer include:

• both memory- and disk-based buffering types,


• a buffer size limit measured in kilobytes,
• a WarnLimit threshold that generates a warning message when crossed, and
• functions for querying the status of a pm_buffer buffer instance.

NOTE In a disk-based pm_buffer instance, events are not written to disk unless the log queue of the succeeding module instance is full. For this reason, a disk-based pm_buffer instance does not reduce performance in the way that a persistent log queue does. Additionally, pm_buffer (and other processor modules) should not be used if crash-safe processing is required; see Reliable Message Delivery.

Example 76. Using the pm_buffer Module

This example shows a route with a large disk-based buffer provided by the pm_buffer module. A warning
message will be generated when the buffer size crosses the threshold specified.

nxlog.conf
<Input udp>
  Module im_udp
</Input>

<Processor buffer>
  Module pm_buffer
  Type Disk

  # 40 MiB buffer
  MaxSize 40960

  # Generate warning message at 20 MiB
  WarnLimit 20480
</Processor>

<Output ssl>
  Module om_ssl
  Host 10.8.0.2
  CAFile %CERTDIR%/ca.pem
  CertFile %CERTDIR%/client-cert.pem
  CertKeyFile %CERTDIR%/client-key.pem
</Output>

<Route r>
  Path udp => buffer => ssl
</Route>

The SSL/TLS destination is unreachable, and the disk-based buffer is filling.

26.3.5. Other Buffering Functionality


Buffering in NXLog is not limited to the functionality covered above. Other modules implement or provide
additional buffering-related features, such as the ones listed below. (This is not intended to be an exhaustive list.)

• The UDP modules (im_udp, om_udp, and om_udpspoof) can be configured to set the socket buffer size
(SO_RCVBUF or SO_SNDBUF) with the respective SockBufSize directive.

• The external program and scripting support found in some modules (like im_exec, im_perl, im_python,
im_ruby, om_exec, om_perl, om_python, and om_ruby) can be used to implement custom buffering
solutions.
• Some modules (such as om_batchcompress, om_elasticsearch, and om_webhdfs) buffer events internally in
order to forward events in batches.
• The pm_blocker module can be used to programmatically block or unblock the log flow in a route, and in this
way control buffering. Or it can be used to test buffering.
• The om_blocker module can be used to test buffering behavior by simulating a blocked output.

Example 77. All Buffers in a Basic Route

The following diagram shows all buffers used in a simple im_udp => om_tcp route. The socket buffers are
only applicable to networking modules.

26.3.6. Receiving Logs via UDP


Because UDP is connectionless, log data sent via plain UDP must be accepted immediately. Otherwise the log
data is lost. For this reason, it is important to add a buffer if there is any possibility of the route becoming
blocked. This can be done by increasing the log queue size of the following module instance or adding a
pm_buffer instance to the route.

Example 78. Increasing the Log Queue Size to Protect Against UDP Message Loss

In this configuration, log messages are accepted with im_udp and forwarded with om_tcp. The log queue
size of the output module instance is increased to 5000 events to buffer messages in case the output
becomes blocked. To further reduce the risk of data loss, the socket buffer size is increased with the
SockBufSize directive and the route priority is increased with Priority.

nxlog.conf
<Input udp>
  Module im_udp

  # Raise socket buffer size
  SockBufSize 150000000
</Input>

<Output tcp>
  Module om_tcp
  Host 192.168.1.1

  # Keep up to 5000 events in the log queue
  LogqueueSize 5000
</Output>

<Route udp_to_tcp>
  Path udp => tcp

  # Process events in this route first
  Priority 1
</Route>

The output is blocked because the network is not able to handle the log data quickly enough.

Example 79. Using a pm_buffer Instance to Protect Against UDP Message Loss

Instead of raising the size of the log queue, this example uses a memory-based pm_buffer instance to
buffer events when the output becomes blocked. A warning message will be generated if the buffer size
exceeds the specified WarnLimit threshold.

nxlog.conf
<Input udp>
  Module im_udp

  # Raise socket buffer size
  SockBufSize 150000000
</Input>

<Processor buffer>
  Module pm_buffer
  Type Mem

  # 5 MiB buffer
  MaxSize 5120

  # Warn at 2 MiB
  WarnLimit 2048
</Processor>

<Output http>
  Module om_http
  URL http://10.8.1.1:8080/
</Output>

<Route udp_to_http>
  Path udp => buffer => http

  # Process events in this route first
  Priority 1
</Route>

The HTTP destination is unreachable, the http instance log queue is full, and the buffer instance is filling.

26.3.7. Reading Logs From /dev/log


Syslog messages can be read from the /dev/log socket with the im_uds module. However, if the route
becomes blocked and the im_uds instance is suspended, the syslog() system call will block in programs
attempting to log a message. To prevent this, flow control should be disabled.

With flow control disabled, events will be discarded if the route becomes blocked and the route’s log queues
become full. To reduce the risk of lost log data, the log queue size of a succeeding module instance in the route
can be increased. Alternatively, a pm_buffer instance can be used as in the second UDP example above.

Example 80. Buffering Syslog Messages From /dev/log

This configuration uses the im_uds module to collect Syslog messages from the /dev/log socket, and the
xm_syslog parse_syslog() procedure to parse them.

To prevent the syslog() system call from blocking as a result of the im_uds instance being suspended, the
FlowControl directive is set to FALSE. The LogqueueSize directive raises the log queue limit of the output
instance to 5000 events. The Priority directive indicates that this route’s events should be processed first.

nxlog.conf
<Extension _syslog>
  Module xm_syslog
</Extension>

<Input dev_log>
  Module im_uds
  UDS /dev/log
  Exec parse_syslog();

  # This module instance must never be suspended
  FlowControl FALSE
</Input>

<Output elasticsearch>
  Module om_elasticsearch
  URL http://192.168.2.1:9022/_bulk

  # Keep up to 5000 events in the log queue
  LogqueueSize 5000
</Output>

<Route syslog_to_elasticsearch>
  Path dev_log => elasticsearch

  # Process events in this route first
  Priority 1
</Route>

The Elasticsearch server is unreachable and the log queue is filling. If the log queue becomes full, events
will be discarded.

26.3.8. Forwarding Logs From File


Because flow control will pause an im_file instance automatically, it is normally not necessary to use any
additional buffering when reading from files. If the route is blocked, the file will not be read until the route
becomes unblocked. If the im_file SavePos directive is set to TRUE (the default) and NXLog is stopped, the file
position of the im_file instance will be saved and used to resume reading when NXLog is started.

Example 81. Forwarding From File With Default Buffering

This configuration reads log messages from a file with im_file and forwards them with om_tcp. No extra
buffering is necessary because flow control is enabled.

nxlog.conf
<Input file>
  Module im_file
  File '/tmp/in.log'

  # Enable flow control (default)
  FlowControl TRUE

  # Save file position on exit (default)
  SavePos TRUE
</Input>

<Output tcp>
  Module om_tcp
  Host 10.8.0.2
</Output>

<Route r>
  Path file => tcp
</Route>

The TCP destination is unreachable, and the im_file instance is paused. No messages will be read from the
source file until the om_tcp instance becomes unblocked.

Sometimes, however, there is a risk of the input log file becoming inaccessible while the im_file instance is
suspended (due to log rotation, for example). In this case, the tcp log queue size can be increased (or a
pm_buffer instance added) to buffer more events.

Example 82. Forwarding From File With Additional Buffering

In this example, log messages are read from a file with im_file and forwarded with om_tcp. The om_tcp log
queue size has been increased in order to buffer more events because the source file may be rotated away.

nxlog.conf
<Input file>
  Module im_file
  File '/tmp/in.log'
</Input>

<Output tcp>
  Module om_tcp
  Host 192.168.1.1

  # Keep up to 2000 events in the log queue
  LogqueueSize 2000
</Output>

<Route r>
  Path file => tcp
</Route>

The TCP destination is unreachable and the om_tcp instance is blocked. The im_file instance will continue to
read from the file (and events will accumulate) until the tcp log queue is full; then it will be paused.

26.3.9. Discarding Events


NXLog’s flow control mechanism ensures that input module instances will pause until all output module
instances can write. This can be problematic in some situations when discarding messages is preferable to
blocking. For this case, flow control can be disabled or the drop() procedure can be used in conjunction with the
pm_buffer module. These two options differ somewhat in behavior, as described in the examples below.

Example 83. Disabling Flow Control to Selectively Discard Events

This example sends UDP input to two outputs, a file and an HTTP destination. If the HTTP transmission is
slower than the rate of incoming UDP packets or the destination is unreachable, flow control would
normally pause the im_udp instance. This would result in dropped UDP packets. In this situation it is better
to selectively drop log messages in the HTTP route than to lose them entirely. This can be accomplished by
simply disabling flow control for the input module instance.

NOTE This configuration will also continue to send events to the HTTP destination in the unlikely event that the om_file output blocks. In fact, the input will remain active even if both outputs block (though in this particular case, because UDP is lossy, messages will be lost regardless of whether the im_udp instance is suspended).

nxlog.conf
<Input udp>
  Module im_udp

  # Never pause this instance
  FlowControl FALSE
</Input>

<Output http>
  Module om_http
  URL http://10.0.0.3:8080/

  # Increase the log queue size
  LogqueueSize 2000
</Output>

<Output file>
  Module om_file
  File '/tmp/out.log'
</Output>

<Route udp_to_outputs>
  Path udp => http, file
</Route>

The HTTP destination cannot accept events quickly enough. The om_http instance is blocked and its log
queue is full. New events are not being added to the HTTP output queue but are still being written to the
output file.

Example 84. Selectively Discarding Events With pm_buffer and drop()

In this example, process accounting logs collected by im_acct are both forwarded via TCP and written to file.
A separate route is used for each output. A pm_buffer instance is used in the TCP route, and it is configured
to discard events with drop() if its size goes beyond a certain threshold. Thus, the pm_buffer instance will
never become full and will never cause the im_acct instance to pause—events will always be written to the
output file.

NOTE Because the im_acct instance has flow control enabled, it will be paused if the om_file output becomes blocked.

nxlog.conf
<Input acct>
  Module im_acct

  # Flow control enabled (default)
  FlowControl TRUE
</Input>

<Processor buffer>
  Module pm_buffer
  Type Mem
  MaxSize 1000
  WarnLimit 800
  Exec if buffer_size() >= 80k drop();
</Processor>

<Output tcp>
  Module om_tcp
  Host 192.168.1.1
</Output>

<Output file>
  Module om_file
  File '/tmp/out.log'
</Output>

<Route acct_to_tcp>
  Path acct => buffer => tcp
</Route>

<Route acct_to_file>
  Path acct => file
</Route>

The TCP destination is unreachable and the om_tcp log queue is full. Input accounting events will be added
to the buffer until it gets full, then they will be discarded. Input events will also be written to the output file,
regardless of whether the buffer is full.

26.3.10. Scheduled Buffering
While buffering is typically used when a log source becomes unavailable, NXLog can also be configured to buffer
logs programmatically. For this purpose, the pm_blocker module can be added to a route.

Example 85. Buffering Logs and Forwarding by Schedule

This example collects log messages via UDP and forwards them to a remote NXLog agent. However, events
are buffered with pm_buffer during the week and only forwarded on weekends.

• During the week, the pm_blocker instance is blocked and events accumulate in the large on-disk buffer.
• During the weekend, the pm_blocker instance is unblocked and all events, including those that have
accumulated in the buffer, are forwarded.

nxlog.conf (truncated)
<Input udp>
  Module im_udp
  Host 0.0.0.0
</Input>

<Processor buffer>
  Module pm_buffer

  # 500 MiB disk buffer
  Type Disk
  MaxSize 512000
  WarnLimit 409600
</Processor>

<Processor schedule>
  Module pm_blocker
  <Schedule>
    # Start blocking Monday morning
    When 0 0 * * 1
    Exec schedule->block(TRUE);
  </Schedule>
  <Schedule>
    # Stop blocking Saturday morning
    When 0 0 * * 6
    Exec schedule->block(FALSE);
  </Schedule>
</Processor>
[...]

It is currently a weekday and the schedule pm_blocker instance is blocked.

If it is possible to use flow control with the log sources, then it is not necessary to use extra buffering. Instead,
the inputs will be paused and read later when the route is unblocked.

Example 86. Collecting Log Data on a Schedule

This configuration reads events from the Windows Event Log and forwards them to a remote NXLog agent
in compressed batches with om_batchcompress. However, events are only forwarded during the night.
Because the im_msvistalog instance can be paused and events will still be available for collection later, it is
not necessary to configure any extra buffering.

• During the day, the pm_blocker instance is blocked, the output log queue becomes full, and the
eventlog instance is paused.

• During the night, the pm_blocker instance is unblocked. The events in the schedule log queue are
processed, the eventlog instance is resumed, and all pending events are read from the Event Log and
forwarded.

nxlog.conf
<Input eventlog>
  Module im_msvistalog
</Input>

<Processor schedule>
  Module pm_blocker
  <Schedule>
    # Start blocking at 7:00
    When 0 7 * * *
    Exec schedule->block(TRUE);
  </Schedule>
  <Schedule>
    # Stop blocking at 19:00
    When 0 19 * * *
    Exec schedule->block(FALSE);
  </Schedule>
</Processor>

<Output batch>
  Module om_batchcompress
  Host 10.3.0.211
</Output>

<Route scheduled_batches>
  Path eventlog => schedule => batch
</Route>

The current time is within the specified "day" interval and pm_blocker is blocked.

26.4. Character Set Conversion
It is recommended to normalize logs to UTF-8. The xm_charconv module provides character set conversion: the
convert_fields() procedure for converting an entire message (all event fields) and a convert() function for
converting a string.

Example 87. Character Set Auto-Detection of Various Input Encodings

This configuration shows an example of character set auto-detection. The input file may contain differently
encoded lines, but by invoking the convert_fields() procedure, each message will have the character set
encoding of its fields detected and then converted to UTF-8 as needed.

nxlog.conf
<Extension _charconv>
    Module              xm_charconv
    AutodetectCharsets  utf-8, euc-jp, utf-16, utf-32, iso8859-2
</Extension>

<Input filein>
    Module  im_file
    File    "tmp/input"
    Exec    convert_fields("auto", "utf-8");
</Input>

<Output fileout>
    Module  om_file
    File    "tmp/output"
</Output>

<Route r>
    Path    filein => fileout
</Route>
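In addition to convert_fields(), the convert() function can be used when the source encoding of a single string value is already known. The following sketch assumes the input file is encoded in iso8859-2; the file path and encoding are illustrative, not taken from the example above.

```
<Extension _charconv>
    Module  xm_charconv
</Extension>

<Input filein>
    Module  im_file
    File    "tmp/input"
    # Convert each line from a known encoding (iso8859-2 assumed) to UTF-8
    Exec    $raw_event = convert($raw_event, "iso8859-2", "utf-8");
</Input>
```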

26.5. Detecting a Dead Agent or Log Source


It is a common requirement to detect conditions when there are no log messages coming from a source. This
usually indicates a problem such as network connectivity issues, a server which is down, or an unresponsive
application or system service. Usually this problem should be detected by monitoring tools (such as Nagios or
OpenView), but the absence of logs can also be a good reason to investigate.

NOTE: The im_mark module is designed as a means of monitoring the health of the NXLog agent by generating "mark" messages every 30 minutes. The message text and interval are configurable.

The solution to this problem is the combined use of statistical counters and scheduled checks. The input module can update a statistical counter configured to calculate events per hour. In the same input module, a Schedule block checks the value of the statistical counter periodically. When the event rate is zero or drops below a certain limit, an appropriate action can be executed, such as sending an alert email or generating an internal warning message. Note that there are other ways to address this issue, and this method may not be optimal for all situations.

Example 88. Alerting on Absence of Log Messages

The following configuration example creates a statistical counter in the context of the im_tcp module to
calculate the number of events received per hour. The Schedule block within the context of the same
module checks the value of the msgrate statistical counter and generates an internal error message when
no logs have been received within the last hour.

nxlog.conf
<Input in>
    Module  im_tcp
    Port    2345
    <Exec>
        create_stat("msgrate", "RATE", 3600);
        add_stat("msgrate", 1);
    </Exec>
    <Schedule>
        Every  3600 sec
        <Exec>
            create_stat("msgrate", "RATE", 10);
            add_stat("msgrate", 0);
            if defined get_stat("msgrate") and get_stat("msgrate") <= 1
                log_error("No messages received from the source!");
        </Exec>
    </Schedule>
</Input>

26.6. Event Correlation


It is possible to write correlation rules in the NXLog language using the built-in features such as variables and
statistical counters. While these features are quite powerful, some cases cannot be detected with them,
especially when conditions require a sliding window.

A dedicated NXLog module, pm_evcorr, is available for advanced correlation requirements. It provides features
similar to those of SEC and greatly enhances the correlation capabilities of NXLog.

Example 89. Correlation Rules

The following configuration provides samples for each type of rule: Absence, Pair, Simple, Suppressed, and
Thresholded.

nxlog.conf (truncated)
<Processor evcorr>
    Module     pm_evcorr
    TimeField  EventTime

    <Simple>
        Exec  if $Message =~ /^simple/ $raw_event = "got simple";
    </Simple>

    <Suppressed>
        # Match input event and execute an action list, but ignore the following
        # matching events for the next t seconds.
        Condition  $Message =~ /^suppressed/
        Interval   30
        Exec       $raw_event = "suppressing..";
    </Suppressed>

    <Pair>
        # If TriggerCondition is true, wait Interval seconds for RequiredCondition
        # to be true and then do the Exec. If Interval is 0, there is no window on
        # matching.
        TriggerCondition   $Message =~ /^pair-first/
        RequiredCondition  $Message =~ /^pair-second/
        Interval           30
        Exec               $raw_event = "got pair";
    </Pair>

    <Absence>
        # If TriggerCondition is true, wait Interval seconds for RequiredCondition
[...]

26.7. Extracting Data


When NXLog receives an event, it creates an event record with a $raw_event field, other core fields like
$EventReceivedTime, and any fields provided by the particular module (see Fields for more information). This
section explores the various ways that NXLog can be configured to extract values from the raw event.

Some log sources (like Windows EventLog collected via im_msvistalog) already contain structured data. In this
case, there is often no additional extraction required; see Message Classification.

26.7.1. Regular Expressions via the Exec Directive


NXLog supports the use of regular expressions for parsing fields. For detailed information about regular
expressions in NXLog, see the Reference Manual Regular Expressions section.

Example 90. Parsing With Regular Expressions

Syslog Message
<38>Nov 22 10:30:12 myhost sshd[8459]: Failed password for invalid user linda from 192.168.1.60 port 38176 ssh2↵

With this configuration, the Syslog message shown above is first parsed with parse_syslog(). This results in a
$Message field created in the event record. Then, a regular expression is used to further parse the
$Message field and create additional fields if it matches.

nxlog.conf
<Extension _syslog>
    Module  xm_syslog
</Extension>

<Input udp>
    Module  im_udp
    Host    0.0.0.0
    Port    514
    <Exec>
        parse_syslog();
        if $Message =~ /(?x)^Failed\ (\S+)\ for(?:\ invalid\ user)?\ (\S+)\ from
                        \ (\S+)\ port\ \d+\ ssh2$/
        {
            $AuthMethod = $1;
            $AccountName = $2;
            $SourceIPAddress = $3;
        }
    </Exec>
</Input>

Named capturing is also supported. Each captured group is automatically added to the event record as a field with the same name.

nxlog.conf
<Input in>
    Module  im_udp
    Host    0.0.0.0
    Port    514
    <Exec>
        parse_syslog();
        $Message =~ /(?x)^Failed\ (?<AuthMethod>\S+)\ for(?:\ invalid\ user)?
                     \ (?<AccountName>\S+)\ from\ (?<SourceIPAddress>\S+)\ port
                     \ \d+\ ssh2$/;
    </Exec>
</Input>

Table 51. Additional Fields Parsed by Regular Expression

Field               Value
$AuthMethod         password
$AccountName        linda
$SourceIPAddress    192.168.1.60

26.7.2. Pattern Matching With Grok
The xm_grok module provides parsing for unstructured log messages with Grok patterns.

The examples below demonstrate how to parse Apache messages using Grok patterns.

Example 91. Creating the Pattern to Parse the Access Message

The message below is a sample of an Apache access message.

Apache Access Message


192.168.3.20 - - [28/Jun/2019] "GET /cgi-bin/try/ HTTP/1.0" 200 3395↵

The above Apache message can be parsed using the Grok pattern below.

Pattern for the Access Message

ACCESS_LOG %{IP:ip_address} - - \[%{TIMESTAMP_ACCESS:timestamp}\] "%{METHOD:http_method} %{UNIXPATH:uri} HTTP/%{HTTP_VERSION:http_version}" %{INT:http_status_code} %{INT:response_size}

Example 92. Creating the Pattern to Parse the Error Message

The message below is a sample of an Apache error message.

Apache Error Message

[Fri Dec 16 01:46:23 2019] [error] [client 1.2.3.4] Directory index forbidden by rule: /home/test/↵

The above Apache message can be parsed using the Grok pattern below.

Pattern for the Error Message

ERROR_LOG \[%{TIMESTAMP_ERROR:timestamp}\] \[%{LOGLEVEL:severity}\] \[client %{IP:client_address}\] %{GREEDYDATA:message}

Lists of Grok patterns are available in various repositories. As an example, see the logstash-plugins repositories on GitHub.

Example 93. Configuring NXLog to Parse Apache Messages

The following configuration reads messages from the apache_entries.log file using the im_file module; each message is stored in the $raw_event field.

The match_grok() function reads patterns from the patterns.txt file and attempts a series of matches on
the $raw_event field. If none of the patterns match, an internal message is logged.

nxlog.conf
<Extension grok>
    Module   xm_grok
    Pattern  patterns.txt
</Extension>

<Input messages>
    Module  im_file
    File    "apache_entries.log"
    <Exec>
        if not ( match_grok($raw_event, "%{ACCESS_LOG}") or
                 match_grok($raw_event, "%{ERROR_LOG}") )
        {
            log_info('Event did not match any pattern');
        }
    </Exec>
</Input>

This example uses the patterns.txt file, which contains all necessary Grok patterns.

patterns.txt
INT (?:[+-]?(?:[0-9]+))
YEAR (?>\d\d){1,2}
MONTH \b(?:[Jj]an(?:uary|uar)?|[Ff]eb(?:ruary|ruar)?|[Mm](?:a|ä)?r(?:ch|z)?|[Aa]pr(?:il)?|[Mm]a(?:y|i)?|[Jj]un(?:e|i)?|[Jj]ul(?:y)?|[Aa]ug(?:ust)?|[Ss]ep(?:tember)?|[Oo](?:c|k)?t(?:ober)?|[Nn]ov(?:ember)?|[Dd]e(?:c|z)(?:ember)?)\b
DAY (?:Mon(?:day)?|Tue(?:sday)?|Wed(?:nesday)?|Thu(?:rsday)?|Fri(?:day)?|Sat(?:urday)?|Sun(?:day)?)
HOUR (?:2[0123]|[01]?[0-9])
MINUTE (?:[0-5][0-9])
SECOND (?:(?:[0-5]?[0-9]|60)(?:[:.,][0-9]+)?)
UNIXPATH (/([\w_%!$@:.,+~-]+|\\.)*)+
GREEDYDATA .*
IP (?<![0-9])(?:(?:[0-1]?[0-9]{1,2}|2[0-4][0-9]|25[0-5])[.](?:[0-1]?[0-9]{1,2}|2[0-4][0-9]|25[0-5])[.](?:[0-1]?[0-9]{1,2}|2[0-4][0-9]|25[0-5])[.](?:[0-1]?[0-9]{1,2}|2[0-4][0-9]|25[0-5]))(?![0-9])
LOGLEVEL ([Aa]lert|ALERT|[Tt]race|TRACE|[Dd]ebug|DEBUG|[Nn]otice|NOTICE|[Ii]nfo|INFO|[Ww]arn?(?:ing)?|WARN?(?:ING)?|[Ee]rr?(?:or)?|ERR?(?:OR)?|[Cc]rit?(?:ical)?|CRIT?(?:ICAL)?|[Ff]atal|FATAL|[Ss]evere|SEVERE|EMERG(?:ENCY)?|[Ee]merg(?:ency)?)
TIMESTAMP_ACCESS %{INT}\/%{MONTH}\/%{YEAR}(:%{HOUR}:%{MINUTE}:%{SECOND} %{GREEDYDATA})?
TIMESTAMP_ERROR %{DAY} %{MONTH} %{INT} %{HOUR}:%{MINUTE}:%{SECOND} %{YEAR}
METHOD (GET|POST|PUT|DELETE|HEAD|TRACE|OPTIONS|CONNECT|PATCH){1}
HTTP_VERSION 1.(0|1)

ACCESS_LOG %{IP:ip_address} - - \[%{TIMESTAMP_ACCESS:timestamp}\] "%{METHOD:http_method} %{UNIXPATH:uri} HTTP/%{HTTP_VERSION:http_version}" %{INT:http_status_code} %{INT:response_size}
ERROR_LOG \[%{TIMESTAMP_ERROR:timestamp}\] \[%{LOGLEVEL:severity}\] \[client %{IP:client_address}\] %{GREEDYDATA:message}

26.7.3. Pattern Matching With pm_pattern
Regular expressions are widely used in pattern matching. Unfortunately, a large number of regular-expression-based patterns does not scale well, because the patterns must be evaluated linearly. The pm_pattern module implements more efficient pattern matching than regular expressions used in Exec directives.

Example 94. Using Regular Expressions With pm_pattern

Syslog Message
<38>Nov 22 10:30:12 myhost sshd[8459]: Failed password for invalid user linda from 192.168.1.60 port 38176 ssh2↵

With this configuration, the above Syslog message is first parsed with parse_syslog(). This results in a
$Message field created in the event record. Then, the pm_pattern module is used with a pattern XML file to
further parse the record.

nxlog.conf
 1 <Extension _syslog>
 2 Module xm_syslog
 3 </Extension>
 4
 5 <Input in>
 6 Module im_udp
 7 Host 0.0.0.0
 8 Port 514
 9 Exec parse_syslog();
10 </Input>
11
12 <Processor pattern>
13 Module pm_pattern
14 PatternFile /var/lib/nxlog/patterndb.xml
15 </Processor>
16
17 <Output out>
18 Module om_null
19 </Output>
20
21 <Route r>
22 Path in => pattern => out
23 </Route>

The patterns for the pm_pattern module instance above are declared in the following patterndb.xml file.

Pattern Database (patterndb.xml)
<?xml version='1.0' encoding='UTF-8'?>
<patterndb>
  <created>2010-01-01 01:02:03</created>
  <version>42</version>
  <!-- First and only pattern group in this file -->
  <group>
    <name>ssh</name>
    <id>42</id>
    <!-- Only try to match this group if $SourceName == "sshd" -->
    <matchfield>
      <name>SourceName</name>
      <type>exact</type>
      <value>sshd</value>
    </matchfield>
    <!-- First and only pattern in this pattern group -->
    <pattern>
      <id>1</id>
      <name>ssh auth failure</name>
      <!-- Do regular expression match on $Message field -->
      <matchfield>
        <name>Message</name>
        <type>regexp</type>
        <value>^Failed (\S+) for(?: invalid user)? (\S+) from (\S+) port \d+ ssh2</value>
        <!-- Set 3 event record fields from captured strings -->
        <capturedfield>
          <name>AuthMethod</name>
          <type>string</type>
        </capturedfield>
        <capturedfield>
          <name>AccountName</name>
          <type>string</type>
        </capturedfield>
        <capturedfield>
          <name>SourceIPAddress</name>
          <type>string</type>
        </capturedfield>
      </matchfield>
      <!-- Set additional fields if pattern matches -->
      <set>
        <field>
          <name>TaxonomyAction</name>
          <value>Authenticate</value>
          <type>string</type>
        </field>
        <field>
          <name>TaxonomyStatus</name>
          <value>Failure</value>
          <type>string</type>
        </field>
      </set>
    </pattern>
  </group>
</patterndb>

Table 52. Fields Added by pm_pattern

Field               Value
$AuthMethod         password
$AccountName        linda
$SourceIPAddress    192.168.1.60
$TaxonomyAction     Authenticate
$TaxonomyStatus     Failure

NXLog Manager provides an interface for writing pattern files, and will also test sample events to aid in
establishing the correct match patterns. The pattern functions can be accessed from the PATTERNS menu in the
page header.

Example 95. Creating Patterns With NXLog Manager

The following instructions explain the steps required for creating the above pattern database with NXLog
Manager.

1. Open PATTERNS › CREATE GROUP. Enter a Name for the new pattern group, and optionally a
Description, in the Properties section. The name is used to refer to the pattern group later. The name
of the above pattern group is ssh.
2. Add a match field by clicking [ Add Field ] in the Match section. Only messages that match will be
further processed by this pattern group. In the above example, there is no reason to attempt any
matches if the $SourceName field does not equal sshd. The above pattern group uses Field name=SourceName, Match=EXACT, and Value=sshd.

3. Save the new pattern group.


4. Open PATTERNS › CREATE FIELD to create a new field to be used when creating new patterns. For the
above example, the $AuthMethod field must be added because it is not in the default set provided by
NXLog Manager. Set Name=AuthMethod and Field Type=STRING, then click [ Save ].
5. Open PATTERNS › CREATE PATTERN. In the Pattern Info section, enter a Pattern Name and
optionally a Pattern Description. Select the correct Pattern Group from the list. In the above
example, the ssh pattern group is used.
6. In the Match section, set match values for the fields to be matched. If a regular expression match with
captured subgroups is detected, the interface will provide a Captured fields list where target fields can
be selected. The above example uses Field name=Message, Match=REGEXP, and Value=^Failed
(\S+) for(?: invalid user)? (\S+) from (\S+) port \d+ ssh2$. The three captured fields are
AuthMethod, AccountName, and SourceIPAddress.

7. The Set section allows fields to be set if the match is successful. Click [ Add Field ] for each field. The
above example sets $TaxonomyStatus to Failure and $TaxonomyAction to Authenticate.

8. The Action section accepts NXLog language statements like those that would be specified in an Exec directive.
Click [ Add action ], type in the statement, and click [ Verify ] to make sure the statement is valid. The
above example does not include any NXLog language statements.
9. The final tabbed section allows test messages to be entered to verify that the match works as expected.
Click the [ + ] to add a test case. To test the above example, add a Value for the Message field: Failed
password for invalid user linda from 192.168.1.60 port 38176 ssh2. Click [ Update Test
Cases ] in the Match section to automatically fill the captured fields. Verify that the fields are set as
expected. Additional test cases can be added to test other events.

10. Save the new pattern. Then click [ Export ] to download the pattern.xml file or use the pattern to
configure a managed agent.

See the NXLog Manager User Guide for more information.

26.7.4. Using the Extracted Fields
The previous sections explore ways that the log message can be parsed and new fields added to the event
record. Once the required data has been extracted and corresponding fields created, there are various ways to
use this new data.

• A field or set of fields can be matched by string or regular expression to trigger alerts, perform filtering, or
further classify the event.
• Fields in the event record can be renamed, modified, or deleted.
• Event correlation can be used to execute statements or suppress messages based on matching events inside
a specified window.
• Some output formats can be used to preserve the full set of fields in the event record (such as JSON and the
NXLog Binary format).
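As a sketch of the first two points, extracted fields can be tested, renamed, and deleted in an Exec block. This builds on the SSH parsing example earlier in this chapter; the alert text and the TargetUser field name are illustrative, not from the original examples.

```
<Extension _syslog>
    Module  xm_syslog
</Extension>

<Input in>
    Module  im_udp
    Host    0.0.0.0
    Port    514
    <Exec>
        parse_syslog();
        if $Message =~ /^Failed (\S+) for(?: invalid user)? (\S+) from (\S+) port \d+ ssh2$/
        {
            $AuthMethod = $1;
            $AccountName = $2;
            $SourceIPAddress = $3;
            # Trigger an alert based on an extracted field
            if $AccountName == "root" log_warning("SSH authentication failure for root");
            # Rename one field and delete another that is no longer needed
            rename_field("AccountName", "TargetUser");
            delete($AuthMethod);
        }
    </Exec>
</Input>
```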

26.8. Filtering Messages


Message filtering is a process where only some of the received messages are kept. Filtering can be done with regular expressions or other operators applied to any of the fields. See NXLog language for complete details on expressions.

26.8.1. Using the drop() Procedure


Use the drop() procedure in an Exec directive to conditionally discard messages.

Example 96. Using drop() to Discard Unmatched Messages

In this example, any line that matches neither of the two regular expressions will be discarded with the
drop() procedure. Only lines that match at least one of the regular expressions will be kept.

nxlog.conf
<Input file>
    Module  im_file
    File    "/var/log/myapp/*.log"
    Exec    if not ($raw_event =~ /failed/ or $raw_event =~ /error/) drop();
</Input>

Example 97. Using drop() with $SourceName and $Message to Isolate Authentication Errors

In this example, events collected from multiple hosts and multiple sources by a centralized log server are
contained in an input file. By defining a list of targeted $SourceName values along with the presence of
certain keywords in the $Message field as criteria for authentication failures, the drop() procedure will
discard all non-matching events.

nxlog.conf
define AUTHSOURCES "su", "sudo", "sshd", "unix_chkpwd"

<Input combined>
    Module  im_file
    File    "tmp/central-logging"
    <Exec>
        if not (
            defined($SourceName)
            and $SourceName IN (%AUTHSOURCES%)
            and (
                $Message =~ /fail/
                or $Message =~ /error/
                or $Message =~ /illegal/
                or $Message =~ /invalid/
            )
        ) drop();
    </Exec>
</Input>

Example 98. Using drop() with $SourceName and $EventID to Collect all DNS Events

In this example events are to be collected from all DNS sources. Three of the four sources contain only
DNS-specific events which can be matched by their $SourceName value alone against the defined list, but
the Sysmon source can contain other non-DNS events as well. However, all Sysmon events with an Event ID
of 22 are DNS events. The conditional statement drops all events that do not have a $SourceName in the
defined list as well as those that match the Sysmon $SourceName but do not have a value of 22 for their
$EventID.

nxlog.conf
 1 define DNSSOURCES "Microsoft-Windows-DNSServer", \
 2 "Microsoft-Windows-DNS-Client", \
 3 "systemd-resolved"
 4
 5 <Input combined>
 6 Module im_file
 7 File "tmp/central-logging"
 8 <Exec>
 9 if not (defined($SourceName)
10 and ($SourceName IN (%DNSSOURCES%)
11 or ($SourceName == "Microsoft-Windows-Sysmon"
12 and $EventID == 22)))
13 drop();
14 </Exec>
15 </Input>

Example 99. Filtering During the Output Phase to Create Multiple Event Logs from a Single Input

This example uses the same centralized log server events from the previous examples above as an input to
three outputs. Separate categories based on a single $SourceName are created and written to three
separate files. Each output instance defines a range of values for $EventId, the criteria for the
categorization into two groups: DNS Server Audit or DNS Server Analytical. The conditional statement in the
second instance uses $SeverityValue to keep only those audit events having a value greater than 2
(warnings or errors).

nxlog.conf (truncated)
<Input combined>
    Module  im_file
    File    "tmp/central-logging"
</Input>

<Output DNS_Audit>
    Module  om_file
    File    "tmp/DNS-Server-Audit"
    <Exec>
        if not (
            defined($SourceName)
            and $SourceName == "Microsoft-Windows-DNSServer"
            and $EventId >= 513
            and $EventId <= 582
        ) drop();
    </Exec>
</Output>

<Output DNS_Audit_Action_Required>
    Module  om_file
    File    "tmp/DNS-Server-Audit-Action-Required"
    <Exec>
        if not (
            defined($SourceName)
            and $SourceName == "Microsoft-Windows-DNSServer"
            and $EventId >= 513
            and $EventId <= 582
            and $SeverityValue > 2  # Severity higher than INFO
[...]

26.8.2. Other Options for Filtering


The NXLog language also supports embedded XML queries in two input modules: Windows 2008/Vista and Later
(im_msvistalog) and Windows Event Collector (im_wseventing). For more detailed information about filtering
events from Windows Event Log see the Filtering Events section.
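A minimal sketch of such an embedded query with im_msvistalog is shown below. The channel and severity levels are illustrative assumptions: this query asks the Event Log API for Security events with level 1-3 (critical, error, and warning) so that filtering happens at the source.

```
<Input eventlog>
    Module  im_msvistalog
    <QueryXML>
        <QueryList>
            <Query Id="0">
                <!-- Levels 1-3 are critical, error, and warning events -->
                <Select Path="Security">*[System[(Level&gt;=1) and (Level&lt;=3)]]</Select>
            </Query>
        </QueryList>
    </QueryXML>
</Input>
```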

26.9. Format Conversion


The requirements and possibilities for format conversion are endless. NXLog provides a broad range of
functionality for conversion, including the NXLog language and dedicated modules. For special cases, a processor
or extension module can be crafted.

For converting between CSV formats, see Complex CSV Format Conversion.

Example 100. Converting from BSD to IETF Syslog

This configuration receives log messages in the BSD Syslog format over UDP and forwards the logs in the
IETF Syslog format over TCP.

nxlog.conf
 1 <Extension _syslog>
 2 Module xm_syslog
 3 </Extension>
 4
 5 <Input bsd>
 6 Module im_udp
 7 Port 514
 8 Host 0.0.0.0
 9 Exec parse_syslog_bsd(); to_syslog_ietf();
10 </Input>
11
12 <Output ietf>
13 Module om_tcp
14 Host 1.2.3.4
15 Port 1514
16 </Output>
17
18 <Route bsd_to_ietf>
19 Path bsd => ietf
20 </Route>

26.10. Log Rotation and Retention


NXLog can implement many kinds of log rotation and retention policies in order to prevent overuse of disk space
and to organize older logs. These policies can be applied based on file size, time intervals, or even event
attributes (such as severity). Log files can be rotated out to custom filenames and then compressed and/or
deleted after a specified time period. The configuration is very flexible and custom policies can be easily
implemented.

NXLog supports three main approaches to file rotation. In each case, policies should usually be implemented
using a Schedule block.

• Most policies are implemented within the scope of an om_file module instance, where output files are being
written.
• The im_file module can be configured to rotate log files after they have been fully read.
• Any log file on the system can be rotated under the scope of an xm_fileop module or any other module. This
includes the internal log file (specified by the LogFile directive).

Example 101. Rotating om_file Log Files

Log files written by an om_file module often need to be rotated regularly. This example uses the om_file
file_name() function and xm_fileop file_cycle() procedure to rotate the output file daily, keeping a total of 7
old log files.

nxlog.conf
 1 <Extension _fileop>
 2 Module xm_fileop
 3 </Extension>
 4
 5 <Output out>
 6 Module om_file
 7 File '/var/log/out.log'
 8 <Schedule>
 9 When @daily
10 <Exec>
11 file_cycle(file_name(), 7);
12 reopen();
13 </Exec>
14 </Schedule>
15 </Output>

Example 102. Rotating the Internal Log File

NXLog will write its own logs to a file specified by the LogFile directive. It is good practice to set up rotation
of this file. This configuration uses the xm_fileop file_size() function. The file_cycle() procedure rotates the file
if it is larger than 5 MB. The file is also rotated weekly. No more than 8 past log files are retained.

nxlog.conf
 1 define LOGFILE /opt/nxlog/var/log/nxlog/nxlog.log
 2 LogFile %LOGFILE%
 3
 4 <Extension _fileop>
 5 Module xm_fileop
 6
 7 # Check the log file size every hour and rotate if larger than 5 MB
 8 <Schedule>
 9 Every 1 hour
10 <Exec>
11 if (file_exists('%LOGFILE%') and file_size('%LOGFILE%') >= 5M)
12 file_cycle('%LOGFILE%', 8);
13 </Exec>
14 </Schedule>
15
16 # Rotate log file every week on Sunday at midnight
17 <Schedule>
18 When @weekly
19 Exec if file_exists('%LOGFILE%') file_cycle('%LOGFILE%', 8);
20 </Schedule>
21 </Extension>

There are many other ways that rotation and retention can be implemented. See the following sections for more
details and examples.

26.10.1. Rotation Policies and Intervals
• The om_file reopen() procedure will cause NXLog to reopen the output file specified by the File directive.
• The rotate_to() procedure can be used to choose a name to rotate the current file to. This procedure reopens the output file automatically, so there is no need to use the reopen() procedure.
• The file_cycle() procedure will move the selected file to "file.1". If "file.1" already exists, it will be moved to "file.2", and so on. If an integer is used as a second argument, it specifies the maximum number of previous files to keep.

WARNING: If file_cycle() is used on a file that NXLog currently has open under the scope of an om_file module instance, the reopen() procedure must be used to continue logging to the file specified by the File directive. Otherwise, events will continue to be logged to the rotated file ("file.1", for example). (This is not necessary if the rotated file is the LogFile.)

26.10.1.1. Rotating by File Size


A log file can be rotated according to a pre-defined file size. This policy can be configured with the om_file
file_size() function or the xm_fileop file_size() function.

Example 103. Using the file_size() Function

This example uses the file_size() function to detect if a file has grown beyond a specified size. If it has, the
file_cycle() procedure is used to rotate it. The file size is checked hourly with the When directive.

nxlog.conf
 1 <Extension _fileop>
 2 Module xm_fileop
 3 </Extension>
 4
 5 <Output out>
 6 Module om_file
 7 File '/var/log/out.log'
 8 <Schedule>
 9 When @hourly
10 <Exec>
11 if file_size(file_name()) >= 1M
12 {
13 file_cycle(file_name());
14 reopen();
15 }
16 </Exec>
17 </Schedule>
18 </Output>

26.10.1.2. Using Time-Based Intervals


For time-interval-based rotation policies, NXLog provides two directives for use in Schedule blocks.

• The Every directive rotates log files according to a specific interval specified in seconds, minutes, days, or
weeks.
• The When directive provides crontab-style scheduling, including extensions like @hourly, @daily, and
@weekly.

Example 104. Using Every and When for Time-Based Rotation

This example shows the use of the Every and When directives. The output file is rotated daily using the rotate_to() procedure. The name is generated in the YYYY-MM-DD format according to the current server time.

nxlog.conf
 1 <Output out>
 2 Module om_file
 3 File '/var/log/out.log'
 4 <Schedule>
 5 # This can likewise be used for `@weekly` or `@monthly` time periods.
 6 When @daily
 7
 8 # The following crontab-style is the same as `@daily` above.
 9 # When "0 0 * * *"
10
11 # The `Every` directive could also be used in this case.
12 # Every 24 hour
13
14 Exec rotate_to(file_name() + strftime(now(), '_%Y-%m-%d'));
15 </Schedule>
16 </Output>

Example 105. Rotating Into a Nested Directory Structure

In this example, logs for each year and month are stored in separated sub-directories as shown below. The
log file is rotated daily.

.../logs/YEAR/MONTH/YYYY-MM-DD.log

This is accomplished with the xm_fileop dir_make() procedure, the core strftime() function, and the om_file
rotate_to() procedure.

nxlog.conf
 1 <Extension _fileop>
 2 Module xm_fileop
 3 </Extension>
 4
 5 <Output out>
 6 define OUT_DIR /srv/logs
 7
 8 Module om_file
 9 File '%OUT_DIR%/out.log'
10 <Schedule>
11 When @daily
12 <Exec>
13 # Create year/month directories if necessary
14 dir_make('%OUT_DIR%/' + strftime(now(), '%Y/%m'));
15
16 # Rotate current file into the correct directory
17 rotate_to('%OUT_DIR%/' + strftime(now(), '%Y/%m/%Y-%m-%d.log'));
18 </Exec>
19 </Schedule>
20 </Output>

26.10.1.3. Using Dynamic Filenames
As an alternative to traditional file rotation, output filenames can be set dynamically, based on each log event
individually. This is possible because the om_file File directive supports expressions.

NOTE: Because dynamic filenames result in events being written to multiple files with semi-arbitrary names, they are not suitable for scenarios where a server or application expects events to be written to a particular foo.log. In this case, normal rotation should be used instead.

Often one of now(), $EventReceivedTime, and $EventTime are used for dynamic filenames. Consider the
following points.

• The now() function uses the current server time, not when the event was created or when it was received by
NXLog. If logs are delayed, they will be stored according to the time at which the NXLog output module
instance processes them. This will not work with nxlog-processor(8) (see Offline Log Processing).
• The $EventReceivedTime field timestamp is set by the input module instance when an event is received by
NXLog. This will usually be practically the same as using now(), except in cases where there are processing
delays in the NXLog route (such as when using buffering). This can be used with nxlog-processor(8) if the
$EventReceivedTime field was previously set in the logs.

• The $EventTime field is set from a timestamp in the event, so it will contain the correct value even if the event was delayed before reaching NXLog. Note that some parsing may be required before this field is available (for example, the parse_syslog() procedure sets the xm_syslog $EventTime field). Note also that an incorrect timestamp in an event record can cause the field to be unset or filled incorrectly, resulting in data written into the wrong file.

Example 106. Timestamp-Based Dynamic Filenames With om_file

This example accepts Syslog formatted messages via UDP. Each message is parsed by the parse_syslog()
procedure. The EventTime field is set from the timestamp in the syslog header. This field is then used by
the expression in the File directive to generate an output filename for the event.

Even if messages received from clients over the network are out of order or delayed, they will still be placed
in the appropriate output files according to the timestamps.

nxlog.conf
 1 <Extension _syslog>
 2 Module xm_syslog
 3 </Extension>
 4
 5 <Input in>
 6 Module im_udp
 7 Port 514
 8 Host 0.0.0.0
 9 Exec parse_syslog();
10 </Input>
11
12 <Output out>
13 Module om_file
14 File '/var/log/nxlog/out_' + strftime($EventTime, '%Y-%m-%d')
15 Exec to_syslog_ietf();
16 </Output>

Dynamic filenames can be based on other fields also.

Example 107. Attribute-Based Dynamic Filenames With om_file

In this example, events are grouped by their source hostname.

nxlog.conf
 1 <Extension _syslog>
 2 Module xm_syslog
 3 </Extension>
 4
 5 <Input in>
 6 Module im_udp
 7 Host 0.0.0.0
 8 Port 514
 9 Exec parse_syslog();
10 </Input>
11
12 <Output out>
13 Module om_file
14 File '/tmp/logs_by_host/' + $Hostname
15 </Output>

26.10.1.4. Rotating Input Files


An im_file module instance can be configured to manipulate files after they are fully processed. The im_file OnEOF
block can be used for this purpose.

WARNING: When using OnEOF for rotation, the rotated files must be named (or placed in a directory) such that they will not be detected as new files and re-read by the module instance.

NOTE: If a logging service keeps a log file open for writing, the xm_exec exec() procedure should be used to restart the service or otherwise instruct it to re-open the log file.

Example 108. Using im_file OnEOF for Input Files

In this example, files matching /var/log/app/*.log are read with an im_file module instance. When each
file has been fully read, it is rotated. The GraceTimeout directive will prevent NXLog from rotating the file
until after there have been no events for 10 seconds.

The input files are rotated by adding a timestamp suffix to the filename. For example, an input file named
/var/log/app/errors.log would be rotated to /var/log/app/errors.log_20180101T130100. The new
name does not match the wildcard specified by the File directive, so the file is not re-read.

nxlog.conf
<Extension _fileop>
    Module  xm_fileop
</Extension>

<Input app_logs_rotated>
    Module  im_file
    File    '/var/log/app/*.log'
    <OnEOF>
        <Exec>
            file_rename(file_name(),
                        file_name() + strftime(now(), '_%Y%m%dT%H%M%S'));
        </Exec>
        GraceTimeout  10
    </OnEOF>
</Input>

26.10.2. Retention Policies


NXLog can be configured to keep old log files according to a particular retention policy. Functions and
procedures for retention are provided by the xm_fileop module. Additional actions, such as compressing old log
files, can be implemented with the xm_exec extension module.

26.10.2.1. Using Simple File Cycling


The file_cycle() procedure provides simple numbered rotation and, optionally, retention.

Example 109. Cycling One Year of Logs With file_cycle()

This example demonstrates the use of the xm_fileop file_cycle() procedure for keeping a total of 12 log files,
one for each month. Log files older than 1 year will be automatically deleted.

This policy creates the following log file structure: /var/log/foo.log for the current month,
/var/log/foo.log.1 for the previous month, and so on, up to the maximum of 12 files.

nxlog.conf
<Extension _fileop>
    Module  xm_fileop
</Extension>

<Output out>
    Module  om_file
    File    '/var/log/foo.log'
    <Schedule>
        When  @monthly
        <Exec>
            file_cycle(file_name(), 12);
            reopen();
        </Exec>
    </Schedule>
</Output>

Different policies for different events can be implemented in combination with dynamic filenames.

Example 110. Retaining Files According to Severity

This example uses the $Severity field (as set by parse_syslog(), for example) to filter events into
separate files. Then different retention policies are applied according to severity. Here, one week of
debug logs, 2 weeks of informational logs, and 4 weeks of higher severity logs are retained.

nxlog.conf
<Extension _syslog>
    Module  xm_syslog
</Extension>

<Extension _fileop>
    Module  xm_fileop
</Extension>

<Input logs_in>
    Module  im_file
    File    "/var/log/messages"
    Exec    parse_syslog();
</Input>

<Output logs_out>
    define OUT_DIR /opt/nxlog/var/log

    Module  om_file
    File    '%OUT_DIR%/' + $Severity + '.log'
    <Schedule>
        When  @daily
        <Exec>
            file_cycle('%OUT_DIR%/DEBUG.log', 7);
            file_cycle('%OUT_DIR%/INFO.log', 14);
            file_cycle('%OUT_DIR%/WARNING.log', 28);
            file_cycle('%OUT_DIR%/ERROR.log', 28);
            file_cycle('%OUT_DIR%/CRITICAL.log', 28);
            reopen();
        </Exec>
    </Schedule>
</Output>

26.10.2.2. Compressing Old Log Files


The xm_exec module can be used to compress old log files to reduce disk usage.

Example 111. Using bzip2 With exec_async()

In this example, the file size of the output file is checked hourly with the om_file file_size() function. If the
size is over the limit, then:

1. a newfile module variable is set to the name the current file will be rotated to,

2. the om_file rotate_to() procedure renames the current output file to the name set in newfile,

3. the module re-opens the original file specified by the File directive and continues logging, and

4. the xm_exec exec_async() procedure calls bzip2 on the rotated-out file (without waiting for the command
to complete).

nxlog.conf (truncated)
<Input in>
    Module  im_null
</Input>

<Extension _exec>
    Module  xm_exec
</Extension>

<Extension _fileop>
    Module  xm_fileop
</Extension>

<Output out>
    Module  om_file
    File    '/opt/nxlog/var/log/app.log'
    <Schedule>
        When  @hourly
        <Exec>
            if out->file_size() > 15M
            {
                set_var('newfile', file_name() + strftime(now(), '_%Y%m%d%H%M%S'));
                rotate_to(get_var('newfile'));
                exec_async('/bin/bzip2', get_var('newfile'));
            }
        </Exec>
    </Schedule>
[...]

26.10.2.3. Deleting Old Log Files


For retention policies where file deletion is not handled automatically by the xm_fileop file_cycle() procedure, the
xm_fileop file_remove() procedure can be used to delete old files. This procedure can also delete files based on
their creation time.

Example 112. Using file_remove() to Delete Old Files

This example uses file_remove() to remove any files older than 30 days.

nxlog.conf
<Input in>
    Module  im_null
</Input>

<Output logs_out>
    Module  om_file
    File    '/var/log/' + strftime(now(), '%Y%m%d') + '.log'
    <Schedule>
        When  @daily

        # Delete logs older than 30 days (24x60x60x30)
        Exec  file_remove('/var/log/*.log', now() - 2592000);
    </Schedule>
</Output>

26.11. Message Classification


Pattern matching is commonly used for message classification. When certain strings are detected in a log
message, the message gets tagged with classifiers. Thus it is possible to query or take action based on the
classifiers only. There are several ways to classify messages based on patterns.

See also Extracting Data, a closely related topic, for more examples of classification.

26.11.1. Simple Matching on Fields


Message classification can often be performed during parsing. If the required fields have already been
parsed, or the input module provides structured data, it is only necessary to match the relevant fields and
set the classifiers.

Example 113. Classifying a Windows Security EventLog Message

This example classifies Windows Security login failure events with Event ID 4625 (controlled by the "Audit
logon events" audit policy setting). If a received event has that ID, it is classified as a failed authentication
attempt and the $AccountName field is set to the value of $TargetUserName.

Table 53. Sample Event via im_msvistalog (Excerpt)

Field Value
$EventType AUDIT_FAILURE

$EventID 4625

$SourceName Microsoft-Windows-Security-Auditing

$Channel Security

$Category Logon

$TargetUserSid S-1-0-0

$TargetUserName linda

$TargetDomainName WINHOST

$Status 0xc000006d

$FailureReason %%2313

$SubStatus 0xc000006a

$LogonType 2

nxlog.conf
<Input in>
    Module  im_msvistalog
    <Exec>
        if ($EventID == 4625) and
           ($SourceName == 'Microsoft-Windows-Security-Auditing')
        {
            $TaxonomyAction = 'Authenticate';
            $TaxonomyStatus = 'Failure';
            $AccountName = $TargetUserName;
        }
    </Exec>
</Input>

Table 54. Fields Added to the Event Record

Field Value
$TaxonomyAction Authenticate

$TaxonomyStatus Failure

$AccountName linda

26.11.2. Regular Expressions via the Exec Directive


The =~ operator can be used for regular expression matching in an Exec directive.

Example 114. Regular Expression Message Classification

When the contents of the $Message field match the regular expression, the $AccountName and
$AccountID fields are filled with the appropriate values from the captured sub-strings.
Additionally, the value LoginEvent is stored in the $Action field.

if $Message =~ /(?x)^pam_unix\(sshd:session\):\ session\ opened\ for\ user\ (\S+)
               \ by\ \(uid=(\d+)\)/
{
    $AccountName = $1;
    $AccountID = integer($2);
    $Action = 'LoginEvent';
}

26.11.3. Using pm_pattern


When there are a lot of patterns, writing them all in the configuration file is inefficient. Instead, the pm_pattern
module can be used.

Example 115. Classifying With pm_pattern

The above pattern matching rule can be defined in the pm_pattern module's XML format in the following
way, which accomplishes the same result.

Pattern Database (patterndb.xml)


<pattern>
  <id>42</id>
  <name>ssh_pam_session_opened</name>
  <description>ssh pam session opened</description>
  <matchfield>
    <name>Message</name>
    <type>REGEXP</type>
    <value>
      ^pam_unix\(sshd:session\): session opened for user (\S+) by \(uid=(\d+)\)
    </value>
    <capturedfield>
      <name>AccountName</name>
      <type>STRING</type>
    </capturedfield>
    <capturedfield>
      <name>AccountID</name>
      <type>INTEGER</type>
    </capturedfield>
  </matchfield>
  <set>
    <field>
      <name>Action</name>
      <type>STRING</type>
      <value>LoginEvent</value>
    </field>
  </set>
</pattern>

nxlog.conf
<Extension _syslog>
    Module  xm_syslog
</Extension>

<Input uds>
    Module  im_uds
    UDS     /dev/log
    Exec    parse_syslog_bsd();
</Input>

<Processor pattern>
    Module       pm_pattern
    PatternFile  /var/lib/nxlog/patterndb.xml
</Processor>

<Output file>
    Module  om_file
    File    "/var/log/messages"
    Exec    to_syslog_bsd();
</Output>

<Route uds_to_file>
    Path  uds => pattern => file
</Route>

26.12. Parsing Multi-Line Messages
Multi-line messages such as exception logs and stack traces are quite common in logs. Unfortunately these log
messages are often stored in files or forwarded over the network without any encapsulation. In this case, the
newline characters in the messages cannot be correctly parsed by simple line-based parsers, which treat every
line as a separate event.

Multi-line events may have one or more of:

• a header in the first line (with timestamp and severity field, for example),
• a closing character sequence marking the end, and
• a fixed line count.

Based on this information, NXLog can be configured to reconstruct the original messages, creating a single event
for each multi-line message.

26.12.1. xm_multiline
NXLog provides xm_multiline for multi-line parsing; this dedicated extension module is the recommended way to
parse multi-line messages. It supports header lines, footer lines, and fixed line counts. Once configured, the
xm_multiline module instance can be used as a parser via the input module’s InputType directive.

Example 116. Using the xm_multiline Module

This configuration creates a single event record with the matching HeaderLine and all successive lines until
an EndLine is received.

nxlog.conf
<Extension multiline_parser>
    Module      xm_multiline
    HeaderLine  "---------------"
    EndLine     "END------------"
</Extension>

<Input in>
    Module     im_file
    File       "/var/log/app-multiline.log"
    InputType  multiline_parser
</Input>

It is also possible to use regular expressions with the HeaderLine and EndLine directives.

Example 117. Using Regular Expressions With xm_multiline

Here, a new event record is created beginning with each line that matches the regular expression.

nxlog.conf
<Extension tomcat_parser>
    Module      xm_multiline
    HeaderLine  /^\d{4}\-\d{2}\-\d{2} \d{2}\:\d{2}\:\d{2},\d{3} \S+ \[\S+\] \- .*/
</Extension>

<Input log4j>
    Module     im_file
    File       "/var/log/tomcat6/catalina.out"
    InputType  tomcat_parser
</Input>

NOTE: Because the EndLine directive is not specified in this configuration, the xm_multiline parser
cannot know that a log message is finished until it receives the HeaderLine of the next
message. The log message is kept in the buffers, waiting to be forwarded, until either a
new log message is read or the im_file module instance's PollInterval has expired. See the
xm_multiline AutoFlush directive.
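For sources where every event spans a known number of lines, the xm_multiline FixedLineCount directive can be used instead of header or footer matching. The following is a minimal sketch; the four-line record length and the file path are assumptions for illustration.

nxlog.conf
<Extension fixedcount_parser>
    Module          xm_multiline
    FixedLineCount  4
</Extension>

<Input in>
    Module     im_file
    File       "/var/log/app-fixed.log"
    InputType  fixedcount_parser
</Input>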

26.12.2. Module Variables


It is also possible to parse multi-line messages by using module variables, as shown below. However, it is
generally recommended to use the xm_multiline module instead, because it offers some significant advantages:

• more efficient message processing,
• a more readable configuration,
• correctly incremented module event counters (one increment per multi-line message versus one per line),
and
• operation on the message source level rather than the module instance level (each file for a wildcarded
im_file module instance, or each TCP connection for an im_tcp/im_ssl instance).

Example 118. Parsing Multi-Line Messages with Module Variables

This example saves the matching line and successive lines in the saved variable. When another matching
line is read, an internal log message is generated with the contents of the saved variable.

nxlog.conf
<Input log4j>
    Module  im_file
    File    "/var/log/tomcat6/catalina.out"
    <Exec>
        if $raw_event =~ /(?x)^\d{4}\-\d{2}\-\d{2}\ \d{2}\:\d{2}\:\d{2},\d{3}\ \S+
                         \ \[\S+\]\ \-\ .*/
        {
            if defined(get_var('saved'))
            {
                $tmp = $raw_event;
                $raw_event = get_var('saved');
                set_var('saved', $tmp);
                delete($tmp);
                log_info($raw_event);
            }
            else
            {
                set_var('saved', $raw_event);
                drop();
            }
        }
        else
        {
            set_var('saved', get_var('saved') + "\n" + $raw_event);
            drop();
        }
    </Exec>
</Input>

NOTE: As with the previous example, a log message is kept in the saved variable, and not
forwarded, until a new log message is read.

26.13. Rate Limiting and Traffic Shaping


Applying rate limiting and traffic shaping improves the utilization of services running alongside
NXLog.

26.13.1. Rate Limiting


Rate limiting restricts the number of messages that can be read by NXLog within a given time unit.

The poor man’s tool for rate limiting is the sleep() procedure.

Example 119. Rate Limiting With the sleep() Procedure

In the following example, sleep() is invoked with 500 microseconds. This means that the input module will
be able to read at most 2000 messages per second.

nxlog.conf
<Input in>
    Module  im_tcp
    Host    0.0.0.0
    Port    1514
    Exec    sleep(500);
</Input>

This is not very precise because the module can do additional processing which can add to the total
execution time, but it gets fairly close.

WARNING: It is not recommended to use rate limiting on a route that reads logs over UDP.

26.13.2. Traffic Shaping


Shaping the outgoing traffic of NXLog can guarantee that a required bandwidth remains for other running
services.

The traffic shaping script can be downloaded from the nxlog-public/contrib repository.

The script does not require configuring NXLog, but it needs to be configured to run at startup with tools like
crontab or rc.local.

To configure running the script with crontab, the @reboot task should be added to the /etc/crontab file.

/etc/crontab
@reboot root /usr/local/sbin/traffic-shaper.sh

To configure running the script with rc.local, the script path should be added to the /etc/rc.local file.

/etc/rc.local
/usr/local/sbin/traffic-shaper.sh

The traffic shaper keys on the destination port at the network level and can shape traffic according to
priorities. For example, high priority can be configured for a database server and low priority for a backup
system.

For more information about Linux traffic control, see the Traffic Control HOWTO on The Linux
Documentation Project website.

26.14. Rewriting and Modifying Messages


There are many ways to modify log messages.

26.14.1. Simple Rewrite


A simple rewrite can be done by modifying the $raw_event field without parsing the message (with Syslog, for
example). Regular expression capturing can be used for this.

Example 120. Simple Rewrite Statement

This statement, when used in an Exec directive, will apply the replacement directly to the $raw_event field.
In this case, a parsing procedure like parse_syslog() would not be used.

if $raw_event =~ /^(aaaa)(replaceME)(.+)/
    $raw_event = $1 + 'replaceMENT' + $3;

Example 121. Converting a Timestamp Format

This example will convert a timestamp field to a different format. Like the previous example, the goal is to
modify the $raw_event field directly, rather than use other fields and then a procedure like to_json() to
update $raw_event.

The input log format is line-based, with whitespace-separated fields. The first field is a timestamp
expressed as seconds since the epoch.

Input Sample
1301471167.225121 AChBVvgs1dfHjwhG8 141.143.210.102 5353 224.0.0.251 5353 udp dns - - - S0 - -
0 D 1 73 0 0 (empty)↵

In the output module instance Exec directive, the regular expression will match and capture the first field
from the line, and remove it. This captured portion is parsed with the parsedate() function and used to set
the $EventTime field. This field is then prepended to the $raw_event field to replace the previously
removed field.

nxlog.conf
<Input in>
    Module  im_file
    File    "conn.log"
</Input>

<Output out>
    Module  om_tcp
    Host    192.168.0.1
    Port    1514
    <Exec>
        if $raw_event =~ s/^(\S+)//
        {
            $EventTime = parsedate($1);
            $raw_event = strftime($EventTime, 'YYYY-MM-DDThh:mm:ss.sTZ') +
                         $raw_event;
        }
    </Exec>
</Output>

Output Sample
2011-03-30T00:46:07.225121-07:00 AChBVvgs1dfHjwhG8 141.143.210.102 5353 224.0.0.251 5353 udp
dns - - - S0 - - 0 D 1 73 0 0 (empty)↵

26.14.2. Modifying Fields


A more complex method is to parse the message into fields, modify some fields, and finally reconstruct the
message from the fields. This method is much more versatile: it allows rewriting to be done regardless of input
and output formats.

Example 122. Rewrite Using Fields

In this example, each Syslog message is received via UDP and parsed with parse_syslog_bsd(). Then, if the
$Message field matches the regular expression, the $SeverityValue field is modified. Finally, the
to_syslog_bsd() procedure generates $raw_event from the fields.

nxlog.conf
<Extension _syslog>
    Module  xm_syslog
</Extension>

<Input udp>
    Module  im_udp
    Port    514
    Host    0.0.0.0
    Exec    parse_syslog_bsd();
</Input>

<Output file>
    Module  om_file
    File    "/var/log/logmsg.txt"
    <Exec>
        if $Message =~ /error/ $SeverityValue = syslog_severity_value("error");
        to_syslog_bsd();
    </Exec>
</Output>

<Route syslog_to_file>
    Path  udp => file
</Route>

26.14.3. Renaming and Deleting Fields


In some cases it may be necessary to rename or delete fields.

The simplest way is to use the NXLog language and the Exec directive.

Example 123. Simple Field Rename

This statement uses the rename_field() procedure to rename the $user field to $AccountName.

rename_field($user, $AccountName);

Example 124. Simple Field Deletion

This statement uses the delete() procedure to delete the $Serial field.

delete($Serial);

Alternatively, the xm_rewrite extension module (available in NXLog Enterprise Edition) can be used to rename or
delete fields.

Example 125. Using xm_rewrite to Whitelist and Rename Fields

This example uses the parse_syslog() procedure to create a set of Syslog fields in the event record. It then
uses the Keep directive to whitelist a set of fields, deleting any field that is not in the list. Finally the Rename
directive is used to rename the $EventTime field to $Timestamp. The resulting event record is converted to
JSON and sent out via TCP.

nxlog.conf
<Extension json>
    Module  xm_json
</Extension>

<Extension rewrite>
    Module  xm_rewrite
    Keep    EventTime, Severity, Hostname, SourceName, Message
    Rename  EventTime, Timestamp
</Extension>

<Input in>
    Module  im_file
    File    '/var/log/messages'
    Exec    parse_syslog(); rewrite->process();
</Input>

<Output out>
    Module  om_tcp
    Host    10.0.0.1
    Port    1514
    Exec    to_json();
</Output>

<Route r>
    Path  in => out
</Route>

Example 126. Using xm_rewrite to Remove Fields

Here is an example Extension block that uses the Delete directive to delete all the severity fields. This could
be used to prevent severity-based matching (during later processing) on an event source that does not set
severity values correctly.

nxlog.conf
<Extension rewrite>
    Module  xm_rewrite
    Delete  SyslogSeverityValue, SyslogSeverity, SeverityValue, Severity
</Extension>

26.15. Timestamps
The NXLog core provides functions for parsing timestamps that return datetime values, along with functions for
generating formatted timestamps from datetime values.

26.15.1. Parsing Timestamps


Most timestamps can be parsed with the parsedate() function, which will automatically parse any of the
supported formats.

Example 127. Parsing a Timestamp With parsedate()

Consider the following line-based input sample. Each record begins with a timestamp followed by a tab.

Input Sample
2016-10-11T22:14:15.003Z ⇥ machine.example.com ⇥ An account failed to log on.↵

This example configuration uses a regular expression to capture the string up to the first tab. Then the
parsedate() function is used to parse the resulting string and set the $EventTime field to the corresponding
datetime value. This value can be converted to a timestamp string as required in later processing, either
explicitly or as defined by the global DateFormat directive (see Formatting Timestamps).

nxlog.conf
<Input in>
    Module  im_file
    File    'in.log'
    Exec    if $raw_event =~ /^([^\t]+)\t/ $EventTime = parsedate($1);
</Input>

TIP: The parsedate() function is especially useful if the timestamp format varies within the events
being processed. A timestamp of any supported format will be parsed. In this example, the
timestamp must be at the beginning of the event and followed by a tab character to be
matched by the regular expression.

Sometimes a log source will contain a few events with invalid or unexpected formatting. If parsedate() fails to
parse the input string, it will return an undefined datetime value. This allows the user to configure a fallback
timestamp.

Example 128. Using a Fallback Timestamp With parsedate()

This example statement uses a vague regular expression that may in some cases match an invalid string. If
parsedate() fails to parse the timestamp, it will return an undefined datetime value. In this case, the final
line below will set $EventTime to the current time.

if $raw_event =~ /^(\S+)\s+(\S+)/
    $EventTime = parsedate($1 + " " + $2);

# Make sure $EventTime is set
if not defined($EventTime) $EventTime = now();

TIP: $EventTime = $EventReceivedTime could be used instead to set a timestamp according to
when the event was received by NXLog.

For parsing more exotic formats, the strptime() function can be used.

Example 129. Using strptime() to Parse Timestamps

In this input sample, the date and time are two distinct fields delimited by a tab. The timestamp also
uses a non-standard single-digit format instead of fixed-width, zero-padded double digits.

Input Sample
2011-5-29 ⇥ 0:3:2 GMT ⇥ WINDOWSDC ⇥ An account failed to log on.↵

To parse this, a regular expression can be used to capture the timestamp string. This string is then parsed
with the strptime() function.

if $raw_event =~ /^(\d+-\d+-\d+\t\d+:\d+:\d+ \w+)/
    $EventTime = strptime($1, '%Y-%m-%d%t%H:%M:%S %Z');

26.15.2. Adjusting Timestamps


Sometimes a log source sends events with incorrect or incomplete timestamps. For example, some network
devices may not have the correct time (especially immediately after rebooting); also, the BSD Syslog header
provides neither the year nor the timezone. NXLog can be configured to apply timestamp corrections in various
ways.

WARNING: Reliably applying timezone offsets is difficult due to complications like daylight saving time
(DST) and networking and processing delays. For this reason, it is best to use clock
synchronization (such as NTP) and timezone-aware timestamps at the log source when
possible.

The simplest solution for incorrect timestamps is to replace them with the time when the event was received by
NXLog. This is a good option for devices with untrusted clocks on the local network that send logs to NXLog in
real-time. The $EventReceivedTime field is automatically added to each event record by NXLog; this field can be
stored alongside the event’s own timestamp (normally $EventTime) if all fields are preserved when the event is
stored/forwarded. Alternatively, this field can be used as the event timestamp as shown below. This would have
the effect of influencing the timestamp used on most outputs, such as with the to_syslog_ietf() procedure.

Example 130. Using $EventReceivedTime as the Event Timestamp

This configuration accepts Syslog messages via UDP with the im_udp module. Events are parsed with the
parse_syslog() procedure, which adds an EventTime field from the Syslog header timestamp. The
$EventTime value, however, is replaced by the timestamp set by NXLog in the $EventReceivedTime field.
Any later processing that uses the $EventTime field will operate on the updated timestamp. For example, if
the to_syslog_ietf() procedure is used, the resulting IETF Syslog header will contain the
$EventReceivedTime timestamp.

nxlog.conf
<Extension _syslog>
    Module  xm_syslog
</Extension>

<Input syslog>
    Module  im_udp
    <Exec>
        parse_syslog();
        $EventTime = $EventReceivedTime;
    </Exec>
</Input>

In some edge cases, a UTC timestamp that does not have the timezone specified is parsed as local time. This can
happen if BSD Syslog timestamps are in UTC, or when reading a non-timezone-aware timestamp with
im_odbc. In this case, it is necessary to either manually re-parse the timestamp (see Parsing Timestamps) or
apply a corresponding reverse offset.

Example 131. Reversing an Incorrect Local-to-UTC Timezone Offset

This statement uses the parsedate() and strftime() functions to apply a reverse offset after an incorrect
local-to-UTC timezone conversion. To reduce the likelihood of an incorrect offset during the daylight saving
time (DST) transition, this should be done in the Input module instance which is collecting the events (see
the warning above).

$EventTime = parsedate(strftime($EventTime, '%Y-%m-%d %H:%M:%SZ'));

For the general case of adjusting timestamps, the plus (+) and minus (-) operators can be used to adjust a
timestamp by a specified number of seconds.

Example 132. Adjusting a Datetime Value by Seconds

This statement adds two hours to the $EventTime field.

WARNING: This simple method may not be suitable for correction of a timezone that uses
daylight saving time (DST). In that case the required offset may change based on
whether DST is in effect.

$EventTime = $EventTime + (2 * 3600);

26.15.3. Formatting Timestamps


After a timestamp has been parsed to a datetime value, it will usually need to be converted back to a string at
some point before being sent to the output. This can be done automatically by the output configuration.

Example 133. Using the Default Timestamp Formatting

Consider an event record with an $EventTime field (as a datetime value) and a $Message field. Note that
the table below shows the $EventTime value as it is stored internally: as microseconds since the epoch.

Table 55. Sample Event Record

Field Value
$EventTime 1493425133541851

$Message EXT4-fs (dm-0): mounted filesystem with ordered data mode.

The following output module instance uses the to_json() procedure without specifying the timestamp
format.

nxlog.conf
<Output out>
    Module  om_file
    File    'out.log'
    Exec    to_json();
</Output>

The output of the $EventTime field in this case will depend on the DateFormat directive. The default
DateFormat is YYYY-MM-DD hh:mm:ss (local time).

Output Sample
{
  "EventTime": "2017-04-29 02:18:53",
  "Message": "EXT4-fs (dm-0): mounted filesystem with ordered data mode."
}

NOTE: A different timestamp may be used in some cases, depending on the procedure used to
convert the field and the output module. The to_syslog_bsd() procedure, for example, will
use the $EventTime value to generate an RFC 3164 format timestamp regardless of how the
DateFormat directive is set.

Alternatively, the strftime() function can be used to explicitly convert a datetime value to a string with the
required format.

Example 134. Using strftime() to Format Timestamps

Again, consider an event record with an $EventTime field (as a datetime value) and a $Message field. In this
example, the strftime() function is used with a format string (see the strftime(3) manual) to convert
$EventTime to a string in the local time zone. Then the to_json() procedure is used to set the $raw_event
field.

nxlog.conf
<Output out>
    Module  om_file
    File    'out.log'
    <Exec>
        $EventTime = strftime($EventTime, '%Y-%m-%dT%H:%M:%S%z');
        to_json();
    </Exec>
</Output>

Output Sample
{
  "EventTime": "2017-04-29T02:18:53+0200",
  "Message": "EXT4-fs (dm-0): mounted filesystem with ordered data mode."
}

NXLog Enterprise Edition supports a few additional format strings for formats that the stock C strftime() does not
offer, including formats with fractional seconds and in UTC time. See the Reference Manual strftime()
documentation for the list.

Example 135. Using strftime() Special Formats in NXLog Enterprise Edition

The following statement will convert $EventTime to a timestamp format with fractional seconds and in UTC
(regardless of the current time zone).

$EventTime = strftime($EventTime, 'YYYY-MM-DDThh:mm:ss.sUTC');

The resulting timestamp string in this case would be 2017-04-29T00:18:53.541851Z.

Chapter 27. Forwarding and Storing Logs
This chapter discusses the configuration of NXLog outputs, including:

• converting log messages to various formats,


• forwarding logs over the network,
• writing logs to files and sockets,
• storing logs in databases,
• sending logs to an executable, and
• forwarding raw data over TCP, UDP, and TLS/SSL protocols.

27.1. Generating Various Formats


The data format used in an outgoing log message must be considered in addition to the transport protocol. If the
message cannot be parsed by the receiver, it may be discarded or improperly processed. See also Parsing
Various Formats.

Syslog
There are two Syslog formats, the older BSD Syslog (RFC 3164) and the newer IETF Syslog (RFC 5424). The
transport protocol in Syslog can be UDP, TCP, or SSL. The xm_syslog module provides procedures for
generating Syslog messages. For more information, see Generating Syslog.

Example 136. Generating Syslog and Sending via TCP

This configuration uses the to_syslog_ietf() procedure to convert the corresponding fields in the event
record to a Syslog message in IETF format. The result is forwarded via TCP by the om_tcp module.

nxlog.conf
<Extension _syslog>
    Module  xm_syslog
</Extension>

<Output out>
    Module  om_tcp
    Host    192.168.1.1
    Port    1514
    Exec    to_syslog_ietf();
</Output>

Syslog Snare
The Snare agent format is a special format on top of BSD Syslog which is used and understood by several
tools and log analyzer frontends. This format is most useful when forwarding Windows EventLog data in
conjunction with im_mseventlog and/or im_msvistalog. The to_syslog_snare() procedure can construct Syslog
Snare formatted messages. For more information, see Generating Snare.

Example 137. Generating Syslog Snare and Sending via UDP

In this example, the to_syslog_snare() procedure converts the corresponding fields in the event record to
Snare format. The messages are then forwarded via UDP by the om_udp module.

nxlog.conf
<Extension _syslog>
    Module  xm_syslog
</Extension>

<Output out>
    Module  om_udp
    Host    192.168.1.1
    Port    514
    Exec    to_syslog_snare();
</Output>

NXLog Binary format


The Binary format is only understood by NXLog. All the fields are preserved when the data is sent in this
format, so there is no need to parse it again. The output module instance must contain OutputType Binary.
The receiver NXLog module instance can be set to InputType Binary.
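
As a minimal sketch, a sender and receiver pair using the Binary format could be configured as follows. The host and port values here are placeholders, not part of the original example set.

nxlog.conf (sender)
<Output out>
    Module      om_tcp
    Host        192.168.1.1
    Port        1514
    OutputType  Binary
</Output>

nxlog.conf (receiver)
<Input in>
    Module     im_tcp
    Host       0.0.0.0
    Port       1514
    InputType  Binary
</Input>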

Graylog Extended Log Format (GELF)


The xm_gelf module can be used to generate GELF output.

Example 138. Generating GELF Output

With this configuration, NXLog will send the fields in the event record via UDP in GELF format.

nxlog.conf
<Extension _gelf>
    Module  xm_gelf
</Extension>

<Output out>
    Module      om_udp
    Host        127.0.0.1
    Port        12201
    OutputType  GELF_UDP
</Output>

JSON
This is one of the most popular formats for interchanging data between various systems. The xm_json
module provides procedures for generating JSON messages by using data from the event record.

Example 139. Generating JSON and sending via TCP

With this configuration, NXLog will send the fields of the event record via TCP in JSON format.

nxlog.conf
 1 <Extension json>
 2 Module xm_json
 3 </Extension>
 4
 5 <Output out>
 6 Module om_tcp
 7 Host 192.168.1.1
 8 Port 1514
 9 Exec to_json();
10 </Output>

27.2. Forwarding Over the Network


After the data are converted to the required format as described in the Generating Various Formats section, they
can be forwarded using various protocols in different formats, including raw data. For each protocol, there is a
trade-off between speed, reliability, compatibility, and security.

UDP
To send logs as UDP datagrams, use the om_udp module.

WARNING: UDP packets can be dropped by the operating system because the protocol does not guarantee reliable message delivery. It is recommended to use TCP or TLS/SSL instead if message loss is a concern.

Example 140. Using the om_udp Module

This example provides configurations to forward data to the specified host via UDP.

The configuration below converts and forwards log messages in Graylog Extended Log Format (GELF).

nxlog.conf
 1 <Extension gelf>
 2 Module xm_gelf
 3 </Extension>
 4
 5 <Input in>
 6 Module im_file
 7 File "/tmp/input"
 8 </Input>
 9
10 <Output out>
11 Module om_udp
12 Host 192.168.1.1
13 Port 514
14 OutputType GELF_UDP
15 </Output>

The configuration below forwards data via UDP in JSON format.

nxlog.conf
 1 <Extension json>
 2 Module xm_json
 3 </Extension>
 4
 5 <Input in>
 6 Module im_file
 7 File "/tmp/input"
 8 </Input>
 9
10 <Output out>
11 Module om_udp
12 Host 192.168.1.1
13 Port 514
14 Exec to_json();
15 </Output>

TCP
To send logs over TCP, use the om_tcp module.

Example 141. Using the om_tcp Module

In this example, log messages are forwarded to the specified host via TCP.

The configuration below forwards data as Syslog messages in IETF format.

nxlog.conf
 1 <Extension _syslog>
 2 Module xm_syslog
 3 </Extension>
 4
 5 <Input in>
 6 Module im_file
 7 File "/tmp/input"
 8 </Input>
 9
10 <Output out>
11 Module om_tcp
12 Host 192.168.1.1
13 Port 1514
14 Exec to_syslog_ietf();
15 </Output>

The configuration below forwards messages without transforming them.

nxlog.conf
 1 <Input in>
 2 Module im_file
 3 File "/tmp/input"
 4 </Input>
 5
 6 <Output out>
 7 Module om_tcp
 8 Host 192.168.0.127
 9 Port 10500
10 </Output>

SSL/TLS
To send logs over a trusted, secure SSL connection, use the om_ssl module.

Example 142. Using the om_ssl Module

This example provides nearly identical behavior to the TCP example above, but in this case SSL is used
to securely transmit the data.

The configuration below enables forwarding raw data over SSL/TLS using a self-signed certificate.

nxlog.conf
 1 <Input in>
 2 Module im_file
 3 File '/tmp/input'
 4 </Input>
 5
 6 <Output out>
 7 Module om_ssl
 8 Host 192.168.0.127
 9 Port 10500
10 OutputType Binary
11 # Allows using self-signed certificates
12 AllowUntrusted TRUE
13 # Certificate from the peer host
14 CAFile /tmp/peer_cert.pem
15 # Certificate file
16 CertFile /tmp/cert.pem
17 # Keypair file
18 CertKeyFile /tmp/key.pem
19 </Output>

The configuration below forwards data over SSL/TLS in JSON format using a trusted CA
certificate.

nxlog.conf
 1 <Input in>
 2 Module im_file
 3 File '/tmp/input'
 4 </Input>
 5
 6 <Extension json>
 7 Module xm_json
 8 </Extension>
 9
10 <Output out>
11 Module om_ssl
12 Host 192.168.0.127
13 Port 10500
 14 # Do not allow self-signed certificates
15 AllowUntrusted FALSE
16 # Certificate from the peer host
17 CAFile /tmp/peer_cert.pem
18 # Certificate file
19 CertFile /tmp/cert.pem
20 # Keypair file
21 CertKeyFile /tmp/key.pem
22 Exec to_json();
23 </Output>

HTTP(S)
To send logs over an HTTP or HTTPS connection, use the om_http module.

Example 143. Using the om_http Module

This example provides configurations for forwarding data via HTTP to the specified HTTP address.

With the configuration below, NXLog sends raw data in text form using a POST request for
each log message.

nxlog.conf
1 <Input in>
2 Module im_file
3 File '/tmp/input'
4 </Input>
5
6 <Output out>
7 Module om_http
8 URL http://server:8080/
9 </Output>

The configuration below will forward data in Graylog Extended Log Format (GELF) over HTTPS using a
trusted certificate.

nxlog.conf
 1 <Extension gelf>
 2 Module xm_gelf
 3 </Extension>
 4
 5 <Input in>
 6 Module im_file
 7 File "/tmp/input"
 8 </Input>
 9
10 <Output out>
11 Module om_http
 12 URL https://server:8080/
 13 # Do not allow self-signed certificates
14 HTTPSAllowUntrusted FALSE
15 # Certificate from the peer host
16 HTTPSCAFile /tmp/peer_cert.pem
17 # Certificate file
18 HTTPSCertFile /tmp/cert.pem
19 # Keypair file
20 HTTPSCertKeyFile /tmp/key.pem
21 OutputType GELF_UDP
22 </Output>

27.3. Sending to Files and Sockets


Files
To store logs in local files, use the om_file module. See also Writing Syslog to File.

Example 144. Using the om_file Module

This configuration writes log messages to the specified file. No additional processing is performed by
the output module instance.

nxlog.conf
1 <Output out>
2 Module om_file
3 File "/var/log/out.log"
4 </Output>

Unix Domain Socket


To send logs to a Unix domain socket, use the om_uds module. See also Sending Syslog to the Local Syslog
Daemon via /dev/log.

Example 145. Using the om_uds Module

With this configuration, log messages are written to the specified socket without any additional
processing.

nxlog.conf
1 <Output out>
2 Module om_uds
3 UDS /dev/log
4 </Output>

27.4. Storing in Databases


The om_dbi and om_odbc modules can be used to store logs in databases. The om_dbi module can be used on
POSIX systems where libdbi is available. The om_odbc module, available in NXLog Enterprise Edition, can be used
with ODBC compatible databases on Windows, Linux, and Unix.

Example 146. Using the om_dbi Module

This configuration uses libdbi and the pgsql driver to insert events into the specified database. The SQL
statement references fields in the event record to be added to the database.

nxlog.conf
 1 <Output out>
 2 Module om_dbi
 3 SQL INSERT INTO log (facility, severity, hostname, timestamp, application, \
 4 message) \
 5 VALUES ($SyslogFacility, $SyslogSeverity, $Hostname, '$EventTime', \
 6 $SourceName, $Message)
 7 Driver pgsql
 8 Option host 127.0.0.1
 9 Option username dbuser
10 Option password secret
11 Option dbname logdb
12 </Output>

Example 147. Using the om_odbc Module

This example inserts events into the database specified by the ODBC connection string. In this case, the
sql_exec() and sql_fetch() functions are used to interact with the database.

nxlog.conf
 1 <Output out>
 2 Module om_odbc
 3 ConnectionString DSN=mysql_ds;username=mysql;password=mysql;database=logdb;
 4 <Exec>
 5 if ( sql_exec("INSERT INTO log (facility, severity, hostname, timestamp,
 6 application, message) VALUES (?, ?, ?, ?, ?, ?)",
 7 1, 2, "host", now(), "app", $raw_event) == TRUE )
 8 {
 9 if ( sql_fetch("SELECT max(id) as id from log") == TRUE )
10 {
11 log_info("ID: " + $id);
12 if ( sql_fetch("SELECT message from log WHERE id=?", $id) == TRUE )
13 log_info($message);
14 }
15 }
16 </Exec>
17 </Output>
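
The insert-then-query pattern used by sql_exec() and sql_fetch() above can be sketched in plain Python with the standard sqlite3 module. This is only an illustration of the parameterized-statement pattern, not NXLog code, and the column values are sample data.

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute(
    "CREATE TABLE log (id INTEGER PRIMARY KEY, facility INT, severity INT, "
    "hostname TEXT, timestamp TEXT, application TEXT, message TEXT)"
)

# Parameterized INSERT, analogous to the sql_exec() call above;
# "?" placeholders keep values safely separated from the SQL text.
con.execute(
    "INSERT INTO log (facility, severity, hostname, timestamp, application, message) "
    "VALUES (?, ?, ?, ?, ?, ?)",
    (1, 2, "host", "2020-06-15 11:36:36", "app", "test message"),
)

# Fetch the id of the newest row, then read that row back by id,
# analogous to the two sql_fetch() calls.
(max_id,) = con.execute("SELECT max(id) AS id FROM log").fetchone()
(message,) = con.execute("SELECT message FROM log WHERE id=?", (max_id,)).fetchone()
print(max_id, message)  # → 1 test message
```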

27.5. Sending to Executables


Using the om_exec module, all messages can be piped to an external program or script which will run until the
module (or NXLog) is stopped.

Example 148. Using the om_exec Module

This configuration executes the specified command and writes log messages to its standard input.

nxlog.conf
1 <Output out>
2 Module om_exec
3 Command /usr/bin/someprog
4 Arg -
5 </Output>
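
For reference, a minimal stand-in for /usr/bin/someprog could look like the following Python sketch, which receives one event per line on its standard input. This is a hypothetical consumer, since any program that reads stdin works with om_exec.

```python
import io

def consume(stream):
    """Process events written by om_exec, one per line."""
    count = 0
    for line in stream:
        # Each line is one event; real processing would go here.
        count += 1
    return count

# In the real script this would be consume(sys.stdin);
# demonstrated here with a canned stream:
print(consume(io.StringIO("event one\nevent two\n")))  # → 2
```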

Chapter 28. Centralized Log Collection
Centralized log collection, log aggregation, or log centralization is the process of sending event log data to a
dedicated server or service for storage, and optionally for search and analytics. Storing logs on a centralized
system offers several benefits over storing the data locally.

• Event data can be accessed even if the originating server is offline, compromised, or decommissioned.
• Data can be analyzed and correlated across more than one system.
• It is more difficult for malicious actors to remove evidence from logs that have already been forwarded.
• Incident investigation and auditing is easier because all event data is collected in a single location.
• Scalable, high-availability, and redundancy solutions are easier to implement and maintain since they only
need to be applied at the collection server.
• Compliance with internal and external standards for log data retention only needs to be managed at a
single point.

28.1. Architecture
The following diagram depicts an example of centralized log collection architecture. The single, central server
collects logs from other servers, applications, and network devices. After collection, the logs can be forwarded as
required for further analysis or storage.

This chapter is concerned with the left half of the diagram: collecting logs from clients.

In practice, network topology and other requirements may dictate that additional servers such as relays be
added for log handling. In those cases, functionality beyond what is covered here (such as buffering) may be
necessary.

28.2. Collection Modes
In the context of clients generating logs, NXLog supports both agent-based and agent-less log collection, and it is
possible to configure a system to use both in mixed mode. In brief, these modes differ as follows (see the Log
Processing Modes section for more details).

Agent-based log collection requires that an NXLog agent be installed on the client. With a local agent, collection is
much more flexible, providing features such as filtering on the source system to send only the required data,
format conversion, compression, encryption, and delivery reliability, among others. It is generally recommended
that NXLog be deployed as an agent wherever possible.

Example 149. Transporting Batch-Compressed Logs in Agent-Based Mode

With agent-based log collection, NXLog agents are installed on both the client and the central server. Here,
the im_batchcompress and om_batchcompress modules are used to transport logs both compressed and
encrypted. These modules preserve all the fields in the event record.

nxlog.conf (Client)
1 <Output batch>
2 Module om_batchcompress
3 Host 192.168.56.101
4 Port 2514
5 UseSSL TRUE
6 CAFile /opt/openssl_rootca/rootCA.pem
7 CertFile /opt/openssl_server/server.crt
8 CertKeyFile /opt/openssl_server/server.key
9 </Output>

nxlog.conf (Log Server)


1 <Input batch>
2 Module im_batchcompress
3 ListenAddr 0.0.0.0
4 Port 2514
5 CAFile /opt/openssl_rootca/rootCA.pem
6 CertFile /opt/openssl_server/central.crt
7 CertKeyFile /opt/openssl_server/central.key
8 </Input>

In agent-less mode, there is no NXLog agent installed on the client. Instead, the client forwards events to the
central server in a native format. On the central server, NXLog accepts and parses the logs received. Often there
is limited control over the log format used, and it may not be possible to implement encryption, compression,
delivery reliability, or other features.

Example 150. Collecting UDP Syslog Logs in Agent-Less Mode

With agent-less collection, NXLog is installed on the central server but not on the client. Clients can be
configured to send UDP Syslog messages to the central server using their native logging functionality. The
im_udp module below could be replaced with im_tcp or im_ssl according to what protocol is supported by
the clients.

WARNING: UDP transport does not provide any guarantee of delivery. Network congestion or other issues may result in lost log data.

nxlog.conf (Log Server)


 1 <Extension _syslog>
 2 Module xm_syslog
 3 </Extension>
 4
 5 <Input input_udp>
 6 Module im_udp
 7 Host 0.0.0.0
 8 Port 514
 9 Exec parse_syslog();
10 </Input>

It is common for logs to be collected using both modes among the various clients, network devices, relays, and
log servers in a network. For example, an NXLog relay may be configured to collect logs from both agents and
agent-less sources and perform filtering and processing before forwarding the data to a central server.

28.3. Requirements
Various logging requirements may dictate particular details about the chosen logging architecture. The following
are important things to consider when deciding how to set up centralized log collection. In some cases, these
requirements can only be met by using agent-based collection.

Reliability
UDP does not guarantee message delivery and should be avoided if log data loss is unacceptable. TCP (and
therefore TLS, which runs over TCP) provides reliable, acknowledged delivery at the transport level. Furthermore, with agent-based collection
NXLog can provide application-level, guaranteed delivery. See Reliable Network Delivery for more
information.

Structured data
Correlating data across multiple log sources requires parsing event data into a common set of fields. Event
fields are a core part of NXLog processing, and an NXLog agent can be configured to parse events at any point
along their path to the central server. Often, parsing is done as early as possible (at the source, for
agent-based collection) to simplify later categorization and to reduce processing load on log servers as logs are
received. See Parsing Various Formats and Message Classification.

Encryption
To maintain confidentiality of log data, TLS can be used during transport.

Compression
If bandwidth is a concern, log data compression may be desirable. Most event data is highly compressible,
allowing bandwidth requirements to be reduced significantly. The im_batchcompress and om_batchcompress
modules provide batched, compressed transport of log data between NXLog agents.

Storage format
Normally, data should be converted to, and stored in, a common format when dealing with heterogeneous
log sources.

28.4. Data Formats
When using agent-based collection, it is often desirable to convert the data prior to transfer. In this case,
structured data is often sent using one of these formats.

Batch compression modules


The im_batchcompress and om_batchcompress modules can be used to send logs in compressed, and
optionally encrypted, batches. All fields in the event record are preserved.

NXLog binary format


NXLog has its own binary format (see Binary InputType and Binary OutputType) that retains all the fields of an
event and can be used to send logs via TCP, UDP, or TLS (or with other stream-oriented modules).

JSON
JSON is easy to generate and parse and has become a de facto standard for logging as well. It has some
limitations, such as the lack of a native datetime type. See the JSON section.
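
The datetime limitation can be illustrated with a short Python sketch: a timestamp must be serialized to a string (commonly ISO 8601), and the receiver has to know to parse it back. The field names here are only examples.

```python
import json
from datetime import datetime, timezone

ts = datetime(2020, 6, 15, 11, 36, 36, tzinfo=timezone.utc)

# JSON has no native datetime type, so dumping one directly fails.
try:
    json.dumps({"EventTime": ts})
except TypeError as err:
    print("direct dump failed:", type(err).__name__)  # → direct dump failed: TypeError

# The usual workaround is an ISO 8601 string, which arrives as plain text
# and must be re-parsed by the consumer.
event = json.dumps({"EventTime": ts.isoformat(), "Message": "user login"})
received = json.loads(event)
print(type(received["EventTime"]).__name__)  # → str
```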

Agent-less collection is restricted to formats supported by the clients. The following are a few common formats,
but many more are supported. See also the OS Support chapters.

Syslog
Using Syslog has become a common practice and many SIEM vendors and products support (or even require)
Syslog. See the Syslog chapter for more details. Syslog contains free form message data that typically needs
to be parsed to extract more information for further analysis. Syslog often uses UDP, TCP, or TLS for
transport.

Snare
The Snare format is commonly used to transport Windows EventLog, with or without Syslog headers.

Windows Event Forwarding (WEF)


Windows EventLog can be forwarded over HTTPS with Windows Event Forwarding. See the Windows Event
Log chapter.

Chapter 29. Encrypted Transfer
In order to protect log data in transit from being modified or viewed by an attacker, NXLog provides SSL/TLS data
encryption support in many input and output modules. Benefits of using SSL/TLS encrypted log transfer include:

• strong authentication,
• message integrity (assures that the logs are not changed), and
• message confidentiality (assures that the logs cannot be viewed by an unauthorized party).

WARNING: It is important that certificates be renewed before expiration. The NXLog Manager dashboard can be configured with a "Certificate summary" which lists soon-to-expire certificates in a separate group.

29.1. SSL/TLS Encryption in NXLog


The SSL/TLS protocol encrypts log data on the client side and then decrypts it on the server side. It’s
recommended to use 2048-bit keys or larger.

There are several modules in NXLog Enterprise Edition that support SSL/TLS encryption:

• im_ssl and om_ssl support secure TCP connections,


• im_http and om_http support secure HTTP connections, and
• im_batchcompress and om_batchcompress support encryption of compressed log batches transferred
between NXLog instances.

When using SSL/TLS, there are two ways to handle authentication.

• With mutual authentication, both client and log server agents are authenticated, and certificates/keys must
be deployed for both agents. This is the most secure option and prevents log collection if the client’s
certificate is untrusted or has expired.
• With server-side authentication only, authentication takes place only via a certificate/key deployed on the
server. On the log server, the im_ssl AllowUntrusted directive (or corresponding directive for im_http or
im_batchcompress) must be set to TRUE. The client is prevented from sending logs to an untrusted server
but the server accepts logs from untrusted clients.

Example 151. Client/Server Encrypted Transfer

With the following configurations, a client reads logs from all log files under the /var/log directory, parses
the events with parse_syslog(), converts to JSON with to_json(), and forwards them over a secure connection
to the central server.

These configurations use mutual authentication: both agents are authenticated and certificates must be
created for both agents.

nxlog.conf (Client)
 1 <Extension _syslog>
 2 Module xm_syslog
 3 </Extension>
 4
 5 <Extension _json>
 6 Module xm_json
 7 </Extension>
 8
 9 <Input messages>
10 Module im_file
11 File "/var/log/*"
12 Exec parse_syslog();
13 </Input>
14
15 <Output central_ssl>
16 Module om_ssl
17 Host 192.168.56.103
18 Port 516
19 CAFile /opt/ssl/rootCA.pem
20 CertFile /opt/ssl/client.crt
21 CertKeyFile /opt/ssl/client.key
22 KeyPass password
23 Exec to_json();
24 </Output>

The server receives the logs on port 516 and writes them to /var/log/logmsg.log.

nxlog.conf (Central Server)


 1 <Extension _syslog>
 2 Module xm_syslog
 3 </Extension>
 4
 5 <Input input_ssl>
 6 Module im_ssl
 7 Host 0.0.0.0
 8 Port 516
 9 CAFile /opt/ssl/rootCA.pem
10 CertFile /opt/ssl/central.crt
11 CertKeyFile /opt/ssl/central.key
12 KeyPass password
13 </Input>
14
15 <Output fileout>
16 Module om_file
17 File "/var/log/logmsg.log"
18 </Output>

29.2. OpenSSL Certificate Creation
NXLog Manager provides various features for creating, deploying, and managing SSL/TLS certificates, and is
especially helpful when managing many NXLog agents across an organization. This section, however, provides
steps for creating self-signed certificates with OpenSSL, an open source SSL/TLS cryptography toolkit.

1. Generate the private root key for your Certification Authority (CA).

$ openssl genrsa -out rootCA.key 2048

2. Self-sign the key and create a CA certificate.

$ openssl req -x509 -new -nodes -key rootCA.key -sha256 -days 730 -out rootCA.pem

3. Create a certificate for each server.


a. Generate a private key for the server.

$ openssl genrsa -out server.key 2048

b. Generate the certificate signing request for the CA. When prompted for the Common Name, enter the
server’s name or IP address.

$ openssl req -new -key server.key -out server.csr

c. Sign the request.

$ openssl x509 -req -in server.csr -CA rootCA.pem -CAkey rootCA.key \


  -CAcreateserial -out server.crt -days 500 -sha256

Chapter 30. Reducing Bandwidth and Data Size
There are several ways that NXLog can be configured to reduce the size of log data. This can help lower
bandwidth requirements during transport, storage requirements for retained log data, and licensing costs for
commercial SIEM systems that charge based on data volume.

The three main strategies for achieving this goal are covered in the following sections:

• Filtering Events by removing unnecessary or duplicate events at the source so that less data needs to be
transported and stored—reducing the data size during all subsequent stages of processing.
• Trimming Events by removing extra content or fields from event records which can reduce the total volume
of log data.
• Compressing During Transport can drastically reduce bandwidth requirements for events being forwarded.

To achieve the best results, it is important to understand how fields work in NXLog and which fields are being
transferred or stored. For example, removing or modifying fields without modifying $raw_event will not reduce
data requirements at all for an output module instance that uses only $raw_event. See Event Records and Fields
for details, as well as the explanation in Compressing During Transport below.

30.1. Filtering Events


Depending on the logging requirements and the log source, it may be possible to simply discard certain events.
NXLog can be configured to filter events based on nearly any set of criteria. See also Filtering Messages.

Example 152. Dropping Unnecessary Events

In this example, an NXLog agent is configured to collect Syslog messages from devices on the local network.
Events are parsed with the xm_syslog parse_syslog() procedure, which sets the SeverityValue field. Any event
with a normalized severity lower than 3 (warning) is discarded.

nxlog.conf
 1 <Extension _syslog>
 2 Module xm_syslog
 3 </Extension>
 4
 5 <Input syslog>
 6 Module im_udp
 7 Host 0.0.0.0
 8 Port 514
 9 Exec parse_syslog(); if $SeverityValue < 3 drop();
10 </Input>

Similarly, the pm_norepeat module can be used to detect, count, and discard duplicate events. In their place,
pm_norepeat generates a single event with a "last message repeated n times" message.
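
The deduplication idea is similar in spirit to the following Python sketch. This is only an illustration of the logic, since pm_norepeat itself is configured rather than scripted, and the sample events are hypothetical.

```python
def collapse_repeats(events):
    """Replace runs of consecutive duplicate events with one summary
    message, keyed on the same kind of fields pm_norepeat checks."""
    out, prev, repeats = [], None, 0
    for event in events:
        key = (event["Hostname"], event["SourceName"], event["Message"])
        if prev == key:
            repeats += 1
            continue
        if repeats:
            out.append({"Message": f"last message repeated {repeats} times"})
            repeats = 0
        out.append(event)
        prev = key
    if repeats:
        out.append({"Message": f"last message repeated {repeats} times"})
    return out

# Three identical events followed by a different one
events = [{"Hostname": "h1", "SourceName": "cron", "Message": "job started"}] * 3
events += [{"Hostname": "h1", "SourceName": "cron", "Message": "job finished"}]
print(len(collapse_repeats(events)))  # → 3
```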

Example 153. Dropping Duplicate Events

With this configuration, NXLog collects Syslog messages from hosts on the local network with im_udp and
parses them with the xm_syslog parse_syslog() procedure. Events are then routed through a pm_norepeat
module instance, where the $Hostname, $Message, and $SourceName fields are checked to detect duplicate
messages. Last, events are sent to a remote host with om_batchcompress.

nxlog.conf
 1 <Extension _syslog>
 2 Module xm_syslog
 3 </Extension>
 4
 5 <Input syslog_udp>
 6 Module im_udp
 7 Host 0.0.0.0
 8 Port 514
 9 Exec parse_syslog();
10 </Input>
11
12 <Processor norepeat>
13 Module pm_norepeat
14 CheckFields Hostname, Message, SourceName
15 </Processor>
16
17 <Output out>
18 Module om_batchcompress
19 Host 10.2.0.2
20 Port 2514
21 </Output>
22
23 <Route r>
24 Path syslog_udp => norepeat => out
25 </Route>

30.2. Trimming Events


NXLog can be configured to parse events into various fields in the event record. In this case, a whitelist can be
used to retain a set of important fields. See Rewriting and Modifying Messages for more information about
modifying events.
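
The whitelist idea can be sketched in a few lines of Python. The field names here are hypothetical; in NXLog, this is the job of the xm_rewrite Keep directive shown in the example that follows.

```python
# A hypothetical parsed event record with more fields than are needed
event = {
    "raw_event": "2020-06-15 11:36:36 dc1 INFO An account was logged on",
    "EventID": 4624,
    "Hostname": "dc1",
    "Severity": "INFO",
    "Keywords": "Audit Success",     # not in the whitelist
    "OpcodeValue": 0,                # not in the whitelist
}

# Keep only the whitelisted fields, dropping everything else
KEEP = {"raw_event", "EventID", "Hostname", "Severity"}
trimmed = {name: value for name, value in event.items() if name in KEEP}

print(sorted(trimmed))  # → ['EventID', 'Hostname', 'Severity', 'raw_event']
```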

Example 154. Discarding Extra Fields via Whitelist

This configuration reads from the Windows EventLog with im_msvistalog and uses an xm_rewrite module
instance to discard any fields in the event record that are not included in the whitelist. The xm_rewrite
instance below could be used with multiple sources; for example, the whitelist would also be suitable for
the xm_syslog fields.

NOTE: The xm_rewrite module does not remove the $raw_event field.

nxlog.conf
 1 <Extension whitelist>
 2 Module xm_rewrite
 3 Keep AccountName, Channel, EventID, EventReceivedTime, EventTime, Hostname, \
 4 Severity, SeverityValue, SourceName
 5 </Extension>
 6
 7 <Input eventlog>
 8 Module im_msvistalog
 9 <QueryXML>
10 <QueryList>
11 <Query Id='0'>
12 <Select Path='Security'>*[System/Level&lt;=4]</Select>
13 </Query>
14 </QueryList>
15 </QueryXML>
16 Exec whitelist->process();
17 </Input>

In some cases, event messages contain a lot of extra data that is duplicated across multiple events of the same
type. One example of this is the "descriptive event data" which has been introduced by Microsoft for the
Windows EventLog. By removing this verbose text from common events, event sizes can be reduced significantly
while still preserving all the forensic details of the event.

Example 155. Removing Descriptive Data From Event Messages

The following configuration collects events from the Application, Security, and System channels. Rules are
included for truncating the messages of Security events with IDs 4688 and 4769.

NOTE: In this example, the $Message field is truncated, but the $raw_event field is not. For most input modules, $raw_event will include the contents of $Message and other fields (see the im_msvistalog $raw_event field). To update the $raw_event field, include a statement for this (see the comment in the configuration example). See also Compressing During Transport below for more details.

Input Sample (Event ID 4769)
A Kerberos service ticket was requested.

Account Information:
  Account Name: WINAD$@TEST.COM
  Account Domain: TEST.COM
  Logon GUID: {55a7f67c-a32c-150a-29f1-7e173ff130a7}

Service Information:
  Service Name: WINAD$
  Service ID: TEST\WINAD$

Network Information:
  Client Address: ::1
  Client Port: 0

Additional Information:
  Ticket Options: 0x40810000
  Ticket Encryption Type: 0x12
  Failure Code: 0x0
  Transited Services: -

This event is generated every time access is requested to a resource such as a computer or a
Windows service. The service name indicates the resource to which access was requested.

This event can be correlated with Windows logon events by comparing the Logon GUID fields in
each event. The logon event occurs on the machine that was accessed, which is often a
different machine than the domain controller which issued the service ticket.

Ticket options, encryption types, and failure codes are defined in RFC 4120.

nxlog.conf
 1 <Input eventlog>
 2 Module im_msvistalog
 3 <QueryXML>
 4 <QueryList>
 5 <Query Id="0">
 6 <Select Path="Application">
 7 *[System[(Level&lt;=4)]]</Select>
 8 <Select Path="Security">
 9 *[System[(Level&lt;=4)]]</Select>
10 <Select Path="System">
11 *[System[(Level&lt;=4)]]</Select>
12 </Query>
13 </QueryList>
14 </QueryXML>
15 <Exec>
16 if ($Channel == 'Security') and ($EventID == 4688)
17 $Message =~ s/\s*Token Elevation Type indicates the type of .*$//s;
 18 else if ($Channel == 'Security') and ($EventID == 4769)
19 $Message =~ s/\s*This event is generated every time access is .*$//s;
20 # Additional rules can be added here
21 # ...
22 # Optionally, update the $raw_event field
23 #$raw_event = $EventTime + ' ' + $Message;
24 </Exec>
25 </Input>

Output Sample
A Kerberos service ticket was requested.

Account Information:
  Account Name: WINAD$@TEST.COM
  Account Domain: TEST.COM
  Logon GUID: {55a7f67c-a32c-150a-29f1-7e173ff130a7}

Service Information:
  Service Name: WINAD$
  Service ID: TEST\WINAD$

Network Information:
  Client Address: ::1
  Client Port: 0

Additional Information:
  Ticket Options: 0x40810000
  Ticket Encryption Type: 0x12
  Failure Code: 0x0
  Transited Services: -

30.3. Compressing During Transport


There are several ways that event data can be transported between NXLog agents, including the *m_tcp and
*m_ssl modules. However, those modules do not provide data compression. The im_batchcompress and
om_batchcompress modules, available in NXLog Enterprise Edition, can be used to transfer events in
compressed (and optionally, encrypted) batches.

The following chart compares the data requirements for the *m_tcp, *m_ssl (with TLSv1.2), and *m_batchcompress
module pairs. It is based on a sample of BSD Syslog records parsed with parse_syslog(). The values shown reflect
the total bi-directional bytes transferred at the packet level. Of course, ratios will vary from this in practice based
on network conditions and the compressibility of the event data.

Note that the om_tcp and om_ssl modules (among others) transfer only the $raw_event field by default, but can
be configured to transfer all fields with OutputType Binary. The om_batchcompress module transfers all fields
in the event record, but it is possible to send only the $raw_event field by first removing the other fields (see
Generating $raw_event and Removing Other Fields below).

Simply configuring the *m_batchcompress modules for the transfer of event data between NXLog agents can
significantly reduce the bandwidth requirements for that part of the log path.

The table below displays the comparison of sending the same data set using different methods and modules:

Table 56. Data Transfer Comparison

Compression method   Modules used                         Event size   Diff vs baseline   Sender CPU usage   Receiver CPU usage   EPS sender   EPS receiver
None                 om_tcp, im_tcp                       112          0.00%              141                215.07               83091.8      84169.9
None                 om_ssl, im_ssl                       301.7        +169.38%           141.34             191.9                33161.4      47482.9
SSLCompression       om_ssl, im_ssl                       293.2        +161.79%           138.98             190.69               34497.7      47128.5
Batch compression    om_batchcompress, im_batchcompress   18.4         -83.57%            119.69             181.1                36252.1      77491.8

Compression ratios show that enabling SSLCompression yields only a minimal improvement in message size.

Batch compression fares much better because it compresses data in batches, leading to better compression
ratios.
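
The effect can be reproduced with a quick Python sketch using zlib. This is illustrative only, since om_batchcompress uses its own batching and framing, and the sample log lines are made up.

```python
import zlib

# 500 similar syslog-style lines; real log data is typically this repetitive.
lines = [
    f"<38>Jun 15 11:36:{i % 60:02d} host sshd[1042]: Accepted password "
    f"for user from 192.168.1.{i % 254} port 5{i % 1000:03d}".encode()
    for i in range(500)
]

# Compressing each message separately: per-message headers and a cold
# compression dictionary mean little is saved.
per_message = sum(len(zlib.compress(line)) for line in lines)

# Compressing the same messages as one batch: the shared structure
# compresses away, giving a much better ratio.
batched = len(zlib.compress(b"\n".join(lines)))

print(batched < per_message)  # → True
```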

Example 156. Batched Log Transfer

With the following configuration, an NXLog agent uses om_batchcompress to send events in compressed
batches to a remote NXLog agent.

TIP: The *m_batchcompress modules also support SSL/TLS encryption; see the im_batchcompress and om_batchcompress configuration details.

nxlog.conf (Sending Agent)


1 <Output out>
2 Module om_batchcompress
3 Host 10.2.0.2
4 Port 2514
5 </Output>

The remote NXLog agent receives and decompresses the received batches with im_batchcompress. All
fields in an event are available to the receiving agent.

nxlog.conf (Receiving Agent)


1 <Input in>
2 Module im_batchcompress
3 ListenAddr 10.2.0.2
4 Port 2514
5 </Input>

To further reduce the size of the batches transferred by the *m_batchcompress modules, and if only the
$raw_event field will be needed later in the log path, the extra fields can be removed from the event record prior
to transfer. This can be done with an xm_rewrite instance for multiple fields or with the delete() procedure (see
Renaming and Deleting Fields).

Example 157. Generating $raw_event and Removing Other Fields

In this configuration, events are collected from the Windows EventLog with im_msvistalog, which sets the
$raw_event and many other fields. To reduce the size of the events, only the $raw_event field is retained;
all the other fields in the event record are removed by the xm_rewrite module instance (called by
clean->process()).

NOTE: Rather than using the default im_msvistalog $raw_event field, it would also be possible to customize it with something like $raw_event = $EventTime + ' ' + $Message or to_json().

nxlog.conf
 1 <Extension clean>
 2 Module xm_rewrite
 3 Keep raw_event
 4 </Extension>
 5
 6 <Input eventlog>
 7 Module im_msvistalog
 8 <QueryXML>
 9 <QueryList>
10 <Query Id='0'>
11 <Select Path='Security'>*[System/Level&lt;=4]</Select>
12 </Query>
13 </QueryList>
14 </QueryXML>
15 </Input>
16
17 <Output out>
18 Module om_batchcompress
19 Host 10.2.0.2
20 Exec clean->process();
21 </Output>

Alternatively, if the various fields in the event record will be handled later in the log path, the $raw_event field
can be set to an empty string (but see the warning below).

Example 158. Emptying $raw_event and Sending Other Fields

This configuration collects events from the Windows EventLog with im_msvistalog, which writes multiple
fields to the event record. In this case, the $raw_event field contains the same data as other fields. Because
the om_batchcompress module instance will send all the fields in the event record, the $raw_event field
can be emptied.

WARNING: Many output modules operate on the $raw_event field only. It should not be set to an empty string unless the output module sends all the event fields (om_batchcompress or a module using the Binary OutputType), and the same holds for all subsequent agents and modules. Otherwise, a module instance will encounter an empty $raw_event. For this reason, the following example is in general not recommended.

nxlog.conf
<Input eventlog>
    Module  im_msvistalog
    <QueryXML>
        <QueryList>
            <Query Id='1'>
                <Select Path='Security'>*[System/Level&lt;=4]</Select>
            </Query>
        </QueryList>
    </QueryXML>
</Input>

<Output out>
    Module  om_batchcompress
    Host    10.2.0.2
    Exec    $raw_event = '';
</Output>

Chapter 31. Reliable Message Delivery
Sometimes regulatory compliance or other requirements mandate that the logging infrastructure function in an
ultra-reliable manner. NXLog Enterprise Edition can be configured to guarantee that:

• log data is safe even in case of a crash,


• no messages are lost due to intermittent network issues, and
• there is no message duplication.

See also Using Buffers.

31.1. Crash-Safe Operation


A host or NXLog crash can happen for various reasons, including power failures without a UPS, kernel panics,
and software bugs. To protect against data loss in these situations, the following techniques are implemented in
NXLog Enterprise Edition.

• Log messages are buffered in various places in NXLog, and buffered messages can be lost in the case of a
crash. Persistent module message queues can be enabled so that these messages are stored on disk instead
of in memory. Each log message is removed from the queue only after successful delivery. See the
PersistLogqueue and SyncLogqueue global configuration directives, and the PersistLogqueue and
SyncLogqueue module directives.

WARNING Log messages are removed from queues in processor modules before delivery, which can result in data loss. Do not use processor modules when high-reliability operation is required.

• Input positions (for im_file and other modules) are saved in the cache file, and by default this file is only
saved to disk on shutdown. In case of a crash some events may be duplicated or lost depending on the value
of the ReadFromLast directive. This data can be periodically flushed and synced to disk using the
CacheFlushInterval and CacheSync directives.

Example 159. Configuration for Crash-Safe Operation

In this example, the log queues are synced to disk after each successful delivery. The cache file containing
the current event ID is also flushed and synced to disk after each event is read from the database. Note that
these reliability features, when enabled, significantly reduce the processing speed.

nxlog.conf
PersistLogqueue     TRUE
SyncLogqueue        TRUE
CacheFlushInterval  always
CacheSync           TRUE

<Input in>
    Module  im_file
    File    'input.log'
</Input>

<Output out>
    Module  om_tcp
    Host    10.0.0.1
    Port    1514
</Output>

31.2. Reliable Network Delivery
The TCP protocol provides guaranteed packet delivery via packet level acknowledgment. Unfortunately, if the
receiver closes the TCP connection prematurely while messages are being transmitted, unsent data stored in the
socket buffers will be lost since this is handled by the operating system instead of the application (NXLog). This
can result in message loss and affects im_tcp, om_tcp, im_ssl, and om_ssl. See the diagram in All Buffers in a
Basic Route.

The solution to this unreliability in the TCP protocol is application-level acknowledgment. NXLog provides two
pairs of modules for this purpose.

• NXLog can use the HTTP/HTTPS protocol to provide guaranteed message delivery over the network,
optionally with TLS/SSL. The client (om_http) sends the event in an HTTP POST request. The server (im_http,
only available in NXLog Enterprise Edition) responds with a status code indicating successful message
reception.

Example 160. HTTPS Log Transfer

In the following configuration example, a client reads logs from a file and transmits the logs over an
SSL-secured HTTP connection.

nxlog.conf (Client/Sending)
<Input in>
    Module  im_file
    File    'input.log'
</Input>

<Output out>
    Module            om_http
    URL               https://10.0.0.1:8080/
    HTTPSCertFile     %CERTDIR%/client-cert.pem
    HTTPSCertKeyFile  %CERTDIR%/client-key.pem
    HTTPSCAFile       %CERTDIR%/ca.pem
</Output>

The remote NXLog agent accepts the HTTPS connections and stores the received messages in a file. The
contents of input.log will be replicated in output.log.

nxlog.conf (Server/Receiving)
<Input in>
    Module            im_http
    ListenAddr        0.0.0.0
    Port              8080
    HTTPSCertFile     %CERTDIR%/server-cert.pem
    HTTPSCertKeyFile  %CERTDIR%/server-key.pem
    HTTPSCAFile       %CERTDIR%/ca.pem
</Input>

<Output out>
    Module  om_file
    File    'output.log'
</Output>

• The om_batchcompress and im_batchcompress modules, available in NXLog Enterprise Edition, also provide
acknowledgment as part of the batchcompress protocol.

Example 161. Batched Log Transfer

With the following configuration, a client reads logs from a file and transmits the logs in compressed
batches to a remote NXLog agent.

nxlog.conf (Client/Sending)
<Input in>
    Module  im_file
    File    'input.log'
</Input>

<Output out>
    Module       om_batchcompress
    Host         10.0.0.1
    UseSSL       true
    CertFile     %CERTDIR%/client-cert.pem
    CertKeyFile  %CERTDIR%/client-key.pem
    CAFile       %CERTDIR%/ca.pem
</Output>

The remote NXLog agent receives and decompresses the received message batches and stores the
individual messages in a file. The contents of input.log will be replicated in output.log.

nxlog.conf (Server/Receiving)
<Input in>
    Module       im_batchcompress
    ListenAddr   0.0.0.0
    CertFile     %CERTDIR%/server-cert.pem
    CertKeyFile  %CERTDIR%/server-key.pem
    CAFile       %CERTDIR%/ca.pem
</Input>

<Output out>
    Module  om_file
    File    'output.log'
</Output>

31.3. Protection Against Duplication


If the contents of the cache file containing the event position are lost, the module can either read everything
from the beginning or risk losing some messages. In the former case, messages may be duplicated. When using
persistent queues, messages are not removed from the queue until they have been successfully delivered. If the
crash occurs just before removal, the message will be sent again after the agent restarts resulting in a duplicate.

In some cases it may be very important that a log message is not duplicated. For example, a duplicated message
may trigger the same alarm a second time or cause an extra entry in a financial transaction log. NXLog Enterprise
Edition can be configured to prevent duplicate messages from occurring.

The best way to prevent duplicated messages is by using serial numbers, as it is only possible to detect
duplicates at the receiver. The receiver can keep track of what has been received by storing the serial number of
the last message. If a message is received with the same or a lower serial number from the same source, the
message is simply discarded.

In NXLog Enterprise Edition, duplication prevention works as follows.

• Each module that receives a message directly from an input source or from another module in the route
assigns a field named $__SERIAL__$ with a monotonically increasing serial number. The serial number is

taken from a global generator and is increased after each fetch so that two messages received at two
modules simultaneously will not have the same serial number. The serial number is initialized to the seconds
elapsed since the UNIX epoch when NXLog is started. This way it can provide 1,000,000 serial numbers per
second without problems in case it is stopped and restarted. Otherwise the value would need to be saved
and synced to disk after each serial number fetch, which would adversely affect performance. When a
module receives a message, it checks the value of the field named $__SERIAL__$ against the last saved
value.
• The im_http module keeps the value of the last $__SERIAL__$ for each client. It is only possible to know and
identify the client (om_http sender) in HTTPS mode. The Common Name (CN) in the certificate subject is
used and is assumed to uniquely identify the client.

NOTE The remote IP and port number cannot be used to identify the remote sender because the remote port is assigned dynamically and changes for every connection. Thus, if a client sends a message, disconnects, reconnects, and then sends the same message again, it is impossible to know whether this is the same client or another. For this reason, it is not possible to protect against message duplication with plain TCP or HTTP when multiple clients connect from the same IP. The im_ssl and im_batchcompress modules do not implement certificate subject extraction at this time.

• All other non-network modules use the value of $SourceModuleName which is automatically set to the name
of the module instance generating the log message. This value is assumed to uniquely identify the source.
The value of $SourceModuleName is not overwritten if it already exists. Note that this may present problems
in some complex setups.
• The algorithm is implemented in one procedure call named duplicate_guard(), which can be used in modules
to prevent message duplication. The dropped() function can be then used to test whether the current log
message has been dropped.

Example 162. Disallowing Duplicated Messages

The following client and server configuration examples extend the earlier HTTPS example to provide an
ultra-reliable operation where messages cannot be lost locally due to a crash, lost over the network, or
duplicated.

nxlog.conf (Client/Sending)
PersistLogqueue     TRUE
SyncLogqueue        TRUE
CacheFlushInterval  always
CacheSync           TRUE

<Input in>
    Module  im_file
    File    'input.log'
</Input>

<Output out>
    Module            om_http
    URL               https://10.0.0.1:8080/
    HTTPSCertFile     %CERTDIR%/client-cert.pem
    HTTPSCertKeyFile  %CERTDIR%/client-key.pem
    HTTPSCAFile       %CERTDIR%/ca.pem
    Exec              duplicate_guard();
</Output>

The server accepts the HTTPS connections and stores the received messages in a file. The contents of
input.log will be replicated in output.log.

nxlog.conf (Server/Receiving)
PersistLogqueue     TRUE
SyncLogqueue        TRUE
CacheFlushInterval  always
CacheSync           TRUE

<Input in>
    Module            im_http
    ListenAddr        0.0.0.0
    Port              8080
    HTTPSCertFile     %CERTDIR%/server-cert.pem
    HTTPSCertKeyFile  %CERTDIR%/server-key.pem
    HTTPSCAFile       %CERTDIR%/ca.pem
    Exec              duplicate_guard();
</Input>

<Output out>
    Module  om_file
    File    'output.log'
    Exec    duplicate_guard();
</Output>

OS Support
Each of the following chapters lists some of the common log sources that can be collected on the corresponding
platform. See also Supported Platforms.

Chapter 32. IBM AIX
NXLog can collect various types of system logs on the AIX platform. For deployment details, see the supported
AIX platforms, AIX installation, and monitoring.

AIX Audit
The im_aixaudit module natively collects logs generated by the AIX Audit system, without depending on
auditstream or any other process.

Example 163. Collecting AIX Audit Logs

This example reads AIX audit logs from the /dev/audit device file.

nxlog.conf
<Input in>
    Module      im_aixaudit
    DeviceFile  /dev/audit
</Input>

Custom Programs
The im_exec module allows log data to be collected from custom external programs.

Example 164. Using an External Command

This example uses the tail command to read from a file.

NOTE The im_file module should be used to read log messages from files. This example only demonstrates the use of the im_exec module.

nxlog.conf
<Input exec>
    Module   im_exec
    Command  /usr/bin/tail
    Arg      -f
    Arg      /var/adm/ras/errlog
</Input>

DNS Monitoring
Logs can be collected from BIND 9.

File Integrity Monitoring


File and directory changes can be detected and logged for auditing with the im_fim module. See File Integrity
Monitoring.

Example 165. Monitoring File Integrity

This example monitors files in the /etc and /srv directories, generating events when files are modified
or deleted. Files ending in .bak are excluded from the watch list.

nxlog.conf
<Input fim>
    Module        im_fim
    File          "/etc/*"
    File          "/srv/*"
    Exclude       "*.bak"
    Digest        sha1
    ScanInterval  3600
    Recursive     TRUE
</Input>

Local Syslog
Messages written to /dev/log can be collected with the im_uds module. Events written to file in Syslog
format can be collected with im_file. In both cases, the xm_syslog module can be used to parse the events.
See Collecting and Parsing Syslog for more information.

Example 166. Reading Syslog Messages From File

This example reads Syslog messages from /var/log/messages and parses them with the parse_syslog()
procedure.

nxlog.conf
<Extension _syslog>
    Module  xm_syslog
</Extension>

<Input in>
    Module  im_file
    File    "/var/log/messages"
    Exec    parse_syslog();
</Input>

Log Files
The im_file module can be used to collect events from log files.

Example 167. Reading From Log Files

This configuration reads messages from the /opt/test/input.log file. No parsing is performed; each
line is available in the $raw_event field.

nxlog.conf
<Input in>
    Module  im_file
    File    "/opt/test/input.log"
</Input>

Process Accounting
The im_acct module can be used to gather details about which owner (user and group) runs what processes.

Example 168. Reading Process Accounting Logs

This configuration turns on process accounting (using /tmp/nxlog.acct as the log file) and watches for
messages.

nxlog.conf
<Input acct>
    Module  im_acct
    AcctOn  TRUE
    File    "/tmp/nxlog.acct"
</Input>

Chapter 33. FreeBSD
NXLog can collect various types of system logs on FreeBSD platforms. For deployment details, see the supported
FreeBSD platforms, FreeBSD installation, and monitoring.

Basic Security Mode (BSM) Auditing


The im_bsm module collects logs generated by the BSM auditing system.

Example 169. Collecting BSM Audit Logs

This example reads BSM audit logs from the /dev/auditpipe device file.

nxlog.conf
<Input bsm>
    Module      im_bsm
    DeviceFile  /dev/auditpipe
</Input>

Custom Programs
The im_exec module allows log data to be collected from custom external programs.

Example 170. Using an External Command

This example uses the tail command to read from a file.

NOTE The im_file module should be used to read log messages from files. This example only demonstrates the use of the im_exec module.

nxlog.conf
<Input exec>
    Module   im_exec
    Command  /usr/bin/tail
    Arg      -f
    Arg      /var/log/messages
</Input>

DNS Monitoring
Logs can be collected from BIND 9.

File Integrity Monitoring


File and directory changes can be detected and logged for auditing with the im_fim module. See File Integrity
Monitoring.

Example 171. Monitoring File Integrity

This example monitors files in the /etc and /srv directories, generating events when files are modified
or deleted. Files ending in .bak are excluded from the watch list.

nxlog.conf
<Input fim>
    Module        im_fim
    File          "/etc/*"
    File          "/srv/*"
    Exclude       "*.bak"
    Digest        sha1
    ScanInterval  3600
    Recursive     TRUE
</Input>

Kernel
Logs from the kernel can be collected directly with the im_kernel module.

NOTE The system logger may need to be disabled or reconfigured to collect logs with im_kernel. To completely disable syslogd on FreeBSD, run service syslogd onestop and sysrc syslogd_enable=NO.

Example 172. Collecting Kernel Logs

This configuration reads events from the kernel.

nxlog.conf
<Input kernel>
    Module  im_kernel
</Input>

Local Syslog
Messages written to /dev/log can be collected with the im_uds module. Events written to file in Syslog
format can be collected with im_file. In both cases, the xm_syslog module can be used to parse the events.
See the Linux System Logs and Collecting and Parsing Syslog sections for more information.

Example 173. Reading Syslog Messages From File

This example reads Syslog messages from /var/log/messages and parses them with the parse_syslog()
procedure.

nxlog.conf
<Extension _syslog>
    Module  xm_syslog
</Extension>

<Input in>
    Module  im_file
    File    "/var/log/messages"
    Exec    parse_syslog();
</Input>

Log Files
The im_file module can be used to collect events from log files.

Example 174. Reading From Log Files

This configuration reads messages from the /opt/test/input.log file. No parsing is performed; each
line is available in the $raw_event field.

nxlog.conf
<Input in>
    Module  im_file
    File    "/opt/test/input.log"
</Input>

Process Accounting
The im_acct module can be used to gather details about which owner (user and group) runs what processes.

Example 175. Reading Process Accounting Logs

This configuration turns on process accounting (using /var/account/acct as the log file) and watches
for messages.

nxlog.conf
<Input acct>
    Module  im_acct
    AcctOn  TRUE
    File    "/var/account/acct"
</Input>

Chapter 34. OpenBSD
NXLog can collect various types of system logs on OpenBSD platforms. For deployment details, see the
supported OpenBSD platforms, OpenBSD installation, and monitoring.

Basic Security Mode (BSM) Auditing


The im_bsm module collects logs generated by the BSM auditing system.

NOTE OpenBSD does not support BSM Auditing.

Example 176. Collecting BSM Audit Logs

This example reads BSM audit logs from the /dev/auditpipe device file.

nxlog.conf
<Input bsm>
    Module      im_bsm
    DeviceFile  /dev/auditpipe
</Input>

Custom Programs
The im_exec module allows log data to be collected from custom external programs.

Example 177. Using an External Command

This example uses the tail command to read from a file.

NOTE The im_file module should be used to read log messages from files. This example only demonstrates the use of the im_exec module.

nxlog.conf
<Input exec>
    Module   im_exec
    Command  /usr/bin/tail
    Arg      -f
    Arg      /var/log/messages
</Input>

DNS Monitoring
Logs can be collected from BIND 9.

File Integrity Monitoring


File and directory changes can be detected and logged for auditing with the im_fim module. See File Integrity
Monitoring.

Example 178. Monitoring File Integrity

This example monitors files in the /etc and /srv directories, generating events when files are modified
or deleted. Files ending in .bak are excluded from the watch list.

nxlog.conf
<Input fim>
    Module        im_fim
    File          "/etc/*"
    File          "/srv/*"
    Exclude       "*.bak"
    Digest        sha1
    ScanInterval  3600
    Recursive     TRUE
</Input>

Kernel
Logs from the kernel can be collected directly with the im_kernel module. See Linux System Logs.

NOTE The system logger may need to be disabled or reconfigured to collect logs with im_kernel. To completely disable syslogd on OpenBSD, run rcctl stop syslogd and rcctl disable syslogd.

Example 179. Collecting Kernel Logs

This configuration reads events from the kernel.

nxlog.conf
<Input kernel>
    Module  im_kernel
</Input>

Local Syslog
Messages written to /dev/log can be collected with the im_uds module. Events written to file in Syslog
format can be collected with im_file. In both cases, the xm_syslog module can be used to parse the events.
See the Linux System Logs and Collecting and Parsing Syslog sections for more information.

Example 180. Reading Syslog Messages From File

This example reads Syslog messages from /var/log/messages and parses them with the parse_syslog()
procedure.

nxlog.conf
<Extension _syslog>
    Module  xm_syslog
</Extension>

<Input in>
    Module  im_file
    File    "/var/log/messages"
    Exec    parse_syslog();
</Input>

Log Files
The im_file module can be used to collect events from log files.

Example 181. Reading From Log Files

This configuration reads messages from the /opt/test/input.log file. No parsing is performed; each
line is available in the $raw_event field.

nxlog.conf
<Input in>
    Module  im_file
    File    "/opt/test/input.log"
</Input>

Process Accounting
The im_acct module can be used to gather details about which owner (user and group) runs what processes.

Example 182. Reading Process Accounting Logs

This configuration turns on process accounting (using /var/account/acct as the log file) and watches
for messages.

nxlog.conf
<Input acct>
    Module  im_acct
    AcctOn  TRUE
    File    "/var/account/acct"
</Input>

Chapter 35. GNU/Linux
NXLog can collect various types of system logs on GNU/Linux platforms. For deployment details, see the
supported Linux platforms and the corresponding installation page for RHEL/CentOS, Debian/Ubuntu, or SLES.
Notes are also available about hardening and monitoring NXLog on Linux.

Custom Programs and Scripts


The im_exec module allows log data to be collected from custom external programs. The im_perl, im_python
and im_ruby modules can also be used to implement integration with custom data sources or sources that
are not supported out-of-the-box.

The perlfcount add-on can be used to collect system information and statistics on Linux platforms.
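As on the other platforms, an external command can be wrapped with im_exec. This sketch tails a log file purely for illustration; the /var/log/syslog path is an assumption, and the im_file module should be preferred for reading files.

```
<Input exec>
    Module   im_exec
    Command  /usr/bin/tail
    Arg      -f
    Arg      /var/log/syslog
</Input>
```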

DNS Monitoring
Logs can be collected from BIND 9 on Linux.

File Integrity Monitoring


File and directory changes can be detected and logged for auditing with the im_fim module. See Monitoring
on Linux.

Kernel
The im_kernel module reads logs directly from the kernel log buffer. These logs can be parsed with
xm_syslog. See the Linux System Logs section.
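A minimal configuration sketch reading directly from the kernel log buffer:

```
<Input kernel>
    Module  im_kernel
</Input>
```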

Linux Audit System


The im_linuxaudit module can be used to collect Audit System logs directly from the kernel without using
auditd or temporary log files. Audit logs can also be collected from file with im_file, or over the network by
using im_tcp in conjunction with audisp-remote (a plugin for the audit event dispatcher daemon, audispd,
that performs remote logging). See Linux Audit System for more details.
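As a sketch, im_linuxaudit can be configured with auditctl-style rules in a <Rules> block; the watch on /etc/passwd below is an illustrative assumption, not a recommended ruleset.

```
<Input audit>
    Module  im_linuxaudit
    <Rules>
        # Watch /etc/passwd for writes and attribute changes (illustrative)
        -w /etc/passwd -p wa -k passwd
    </Rules>
</Input>
```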

Local Syslog
Messages written to /dev/log can be collected with the im_uds module. Events written to file in Syslog
format can be collected with im_file. In each case, the xm_syslog module can be used to parse the events. See
the Linux System Logs and Collecting and Parsing Syslog sections for more information.
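For example, as in the other platform chapters, Syslog messages written to file can be read and parsed like this:

```
<Extension _syslog>
    Module  xm_syslog
</Extension>

<Input in>
    Module  im_file
    File    "/var/log/messages"
    Exec    parse_syslog();
</Input>
```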

Log Databases
Events can be read from databases with the im_dbi, im_oci, and im_odbc modules.
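A hedged im_odbc sketch follows; the DSN, table, and column names are assumptions for illustration. The SQL statement is expected to return an id column used for position tracking, with the ? placeholder substituted with the last saved id so that only new rows are fetched.

```
<Input db>
    Module            im_odbc
    ConnectionString  DSN=mydsn;UID=nxlog;PWD=secret;
    SQL               SELECT RecordID AS id, EventTime, Message FROM logtable WHERE RecordID > ?
</Input>
```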

Log Files
The im_file module can be used to collect events from log files.
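For example (the file path is hypothetical):

```
<Input in>
    Module  im_file
    File    "/opt/test/input.log"
</Input>
```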

Process Accounting
The im_acct module can be used to gather details about which owner (user and group) runs what processes.
This overlaps with Audit System logging.
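A sketch mirroring the process accounting examples in the other platform chapters:

```
<Input acct>
    Module  im_acct
    AcctOn  TRUE
    File    "/tmp/nxlog.acct"
</Input>
```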

Chapter 36. Apple macOS
NXLog can collect various types of system logs on the macOS platform. For deployment details, see the
supported macOS platforms and macOS installation.

Apple System Logs Files


The im_file and xm_asl modules can be used to collect and parse Apple System Log (*.asl) files.

Example 183. Reading and Parsing Apple System Logs

This example reads events from input.asl and parses them with the xm_asl parser.

nxlog.conf
<Extension asl_parser>
    Module  xm_asl
</Extension>

<Input in>
    Module     im_file
    # Example: "/var/log/asl/*"
    File       "foo/input.asl"
    InputType  asl_parser
    Exec       delete($EventReceivedTime);
</Input>

Basic Security Mode (BSM) Auditing


The im_bsm module collects logs directly from the BSM auditing system.

Example 184. Collecting BSM Audit Logs From the Kernel

This configuration reads BSM audit logs directly from the kernel with the im_bsm module.

nxlog.conf
Group wheel

<Input bsm>
    Module      im_bsm
    DeviceFile  /dev/auditpipe
</Input>

Alternatively, BSM logs can be read from the log files.

Example 185. Reading BSM Audit Logs From File

This configuration reads from the BSM audit log files with im_file and parses the events with xm_bsm.

nxlog.conf
Group wheel

<Extension bsm_parser>
    Module  xm_bsm
</Extension>

<Input bsm>
    Module     im_file
    File       '/var/audit/*'
    InputType  bsm_parser
</Input>

Custom Programs
The im_exec module allows log data to be collected from custom external programs.

Example 186. Using an External Command

This example uses the tail command to read from a file.

NOTE The im_file module should be used to read log messages from files. This example only demonstrates the use of the im_exec module.

nxlog.conf
<Input systemlog>
    Module   im_exec
    Command  /usr/bin/tail
    Arg      -f
    Arg      /var/log/system.log
</Input>

File Integrity Monitoring


File and directory changes can be detected and logged for auditing with the im_fim module. See File Integrity
Monitoring.

Example 187. Monitoring File Integrity

This configuration watches for changes to files and directories under /bin and /usr/bin/.

nxlog.conf
<Input fim>
    Module        im_fim
    File          "/bin/*"
    File          "/usr/bin/*"
    ScanInterval  3600
    Recursive     TRUE
</Input>

Kernel
Logs from the kernel can be collected directly with the im_kernel module or via the local log file with im_file.

For log collection details, see macOS Kernel.

Local Syslog
Events written to file in Syslog format can be collected with im_file. The xm_syslog module can be used to
parse the events. See the Syslog section for more information.

Example 188. Reading Syslog Messages From File

This configuration file collects system logs from /var/log/system.log. This method does not read
from /dev/klog directly, so it is not necessary to disable syslogd.

nxlog.conf
<Extension _syslog>
    Module  xm_syslog
</Extension>

<Input in>
    Module  im_file
    File    "/var/log/system.log"
    Exec    parse_syslog();
</Input>

Log Files
The im_file module can be used to collect events from log files.

Example 189. Reading From Log Files

This configuration uses the im_file module to read events from the specified log file.

nxlog.conf
<Input in>
    Module  im_file
    File    "/foo/in.log"
</Input>

Process Accounting
The im_acct module can be used to gather details about which owner (user and group) runs what processes.

Example 190. Reading Process Accounting Logs

With this configuration file, NXLog enables process accounting to the specified file and reads events
from it.

nxlog.conf
Group wheel

<Input acct>
    Module  im_acct
    File    '/var/log/acct'
    AcctOn  TRUE
</Input>

Chapter 37. Oracle Solaris
NXLog can collect various types of system logs on the Solaris platform. For deployment details, see the
supported Solaris platforms, Solaris installation, and monitoring.

Basic Security Mode (BSM) Auditing


The xm_bsm module can be used to parse logs collected with im_file.

Example 191. Collect BSM Audit Logs From the Kernel

This example configuration reads from files in /var/audit with im_file. The InputType provided by
xm_bsm is used to parse the binary format.

nxlog.conf
<Extension bsm_parser>
    Module  xm_bsm
</Extension>

<Input in>
    Module     im_file
    File       '/var/audit/*'
    InputType  bsm_parser
</Input>

Custom Programs
The im_exec module allows log data to be collected from custom external programs.

Example 192. Using an External Command

This example uses the tail command to read from a file.

NOTE The im_file module should be used to read log messages from files. This example only demonstrates the use of the im_exec module.

nxlog.conf
<Input systemlog>
    Module   im_exec
    Command  /usr/bin/tail
    Arg      -f
    Arg      /var/log/syslog
</Input>

DNS Monitoring
Logs can be collected from BIND 9.

File Integrity Monitoring


File and directory changes can be detected and logged for auditing with the im_fim module. See File Integrity
Monitoring.

Example 193. Monitoring File Integrity

This configuration watches for changes to files and directories under /usr/bin/.

nxlog.conf
<Input fim>
    Module        im_fim
    File          "/usr/bin/*"
    Digest        SHA1
    ScanInterval  3600
    Recursive     TRUE
</Input>

Local Syslog
Events written to file in Syslog format can be collected with the im_file module and parsed with the xm_syslog
module. See Collecting and Parsing Syslog for more information.

Example 194. Reading Syslog Messages From File

This example uses the im_file module to read messages from /var/log/messages and the xm_syslog
parse_syslog() procedure to parse them.

nxlog.conf
<Extension _syslog>
    Module  xm_syslog
</Extension>

<Input in>
    Module  im_file
    File    "/var/log/messages"
    Exec    parse_syslog();
</Input>

Log Files
The im_file module can be used to collect events from log files.

Example 195. Reading From Log Files

This configuration uses the im_file module to read events from the specified log file.

nxlog.conf
<Input in>
    Module  im_file
    File    "/foo/input.log"
</Input>

Process Accounting
The im_acct module can be used to gather details about which owner (user and group) runs what processes.

Example 196. Reading Process Accounting Logs

With this configuration file, NXLog will enable process accounting to the specified file and read events
from it.

nxlog.conf
<Input acct>
    Module  im_acct
    AcctOn  TRUE
    File    '/tmp/nxlog.acct'
</Input>

Chapter 38. Microsoft Windows
NXLog can collect various types of system logs on the Windows platform. For deployment details, see the
supported Windows platforms and Windows installation. Notes are also available about hardening and
monitoring NXLog on Windows.

Custom Programs
The im_exec module allows log data to be collected from custom external programs.

DHCP Monitoring
DHCP logging can be set up for Windows DHCP Server using the im_file module by reading DHCP audit logs
directly from CSV files. Alternatively, the im_msvistalog module can be used to collect DHCP Server or Client
event logs from the built-in channels in Windows Event Log.
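For the file-based approach, a sketch reading the DHCP Server audit logs with im_file; the directory shown is the common default location and may differ on a given system.

```
<Input dhcp>
    Module  im_file
    File    'C:\Windows\System32\dhcp\DhcpSrvLog-*.log'
</Input>
```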

DNS Monitoring
DNS logging can be set up for Windows DNS Server using either ETW tracing or debug logging.

File Integrity Monitoring


File and directory changes can be detected and logged for auditing with the im_fim module. See File Integrity
Monitoring on Windows.

Log Databases
Events can be read from databases with the im_odbc module. Some products write logs to SQL Server
databases; see the Microsoft System Center Operations Manager section for an example.

Log Files
The im_file module can be used to collect events from log files.

Microsoft Active Directory Domain Controller


Troubleshoot Active Directory domain controllers by integrating DCs as log sources.

Microsoft Exchange
Logs generated by Microsoft Exchange can be used as a source for log collection with many log types
supported.

Microsoft IIS
IIS can be configured to write logs in W3C format, which can be read with im_file and parsed with xm_w3c or
xm_csv. Other formats can be parsed with other methods. See Microsoft IIS.

Microsoft .NET applications


Capture logs directly from Microsoft .NET applications using third-party utilities.

Microsoft SharePoint
Collect the various types of logs generated by Microsoft SharePoint, parse the ULS logs into another format, and forward them.

Microsoft SQL Server


Log messages can be collected from the Microsoft SQL Server error log files with the im_file module. See
Microsoft SQL Server.

Microsoft System Center Operations Manager (SCOM)


Logs recorded in Microsoft System Center Operations Manager databases can be collected with the im_odbc
module.

Registry Monitoring
The Windows Registry can be monitored for changes; see the im_regmon module. For an example ruleset, see the regmon-rules add-on.

Snare
Windows Event Log data can be converted to Snare format as needed for some third-party integrations.

Sysmon
Many additional audit events can be generated with the Sysmon utility, including process creation, system
driver loading, network connections, and modification of file creation timestamps. These events are written to
the Event Log. See the Sysmon section for more information.

Windows AppLocker
Collecting event logs from Windows AppLocker is supported by using the im_msvistalog or the other Windows
Event Log modules.

Windows Event Tracing (ETW)


Events logged through ETW can be collected with the im_etw module. This includes events logged to the
Analytical and Debug logs.
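A minimal im_etw sketch; the provider name below is only an example, to be replaced with the ETW provider of interest.

```
<Input etw>
    Module    im_etw
    Provider  Microsoft-Windows-DNS-Client
</Input>
```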

Windows Event Log


See the Windows Event Log section, which covers both local and remote event collection with the
im_msvistalog, im_wseventing, and im_mseventlog modules.
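For example, local Security channel events can be collected with im_msvistalog using the same query pattern shown earlier in this guide:

```
<Input eventlog>
    Module  im_msvistalog
    <QueryXML>
        <QueryList>
            <Query Id='0'>
                <Select Path='Security'>*[System/Level&lt;=4]</Select>
            </Query>
        </QueryList>
    </QueryXML>
</Input>
```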

Windows Firewall
Windows Firewall logs can be collected with the im_file module from the Advanced Security log. Alternatively,
the im_msvistalog module can be used to collect Windows Firewall events from Windows Event Log.

Windows Management Instrumentation (WMI)


WMI event logs can be read directly from the Windows Event Log with the im_msvistalog module. WMI events
can also be collected via ETW with the im_etw module. Reading WMI log files with the im_file module is also
supported.

Windows Performance Counters


The im_winperfcount module can be used for collecting data such as CPU and memory usage.
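As a sketch, CPU and memory counters might be polled once a minute like this (the counter paths shown are standard Windows names, but should be verified on the target system):

nxlog.conf
```
<Input perfcount>
    Module        im_winperfcount
    Counter       \Processor(_Total)\% Processor Time
    Counter       \Memory\Available MBytes
    PollInterval  60
</Input>
```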

Windows PowerShell
PowerShell scripts can be integrated for log processing tasks and configuration generation (for example,
Azure SQL Database); see Using PowerShell Scripts. It is also possible to collect PowerShell activity logs.

Integration

Chapter 39. Amazon Web Services (AWS)
AWS is a subsidiary of Amazon that provides various cloud computing services.

39.1. Amazon CloudWatch


Amazon CloudWatch is a set of cloud monitoring services. The CloudWatch Logs service can be used to collect
log data from Elastic Compute Cloud (EC2), CloudTrail, Route 53, and other sources. See the CloudWatch
documentation for more information about configuring and using CloudWatch Logs.

NXLog can be set up to retrieve CloudWatch log streams in either of two ways:

• NXLog can connect to the CloudWatch API using the Boto 3 client and poll for logs at regular intervals. This is
suitable when a short delay in log collection is acceptable.
• AWS Lambda can be set up to push log data to NXLog via HTTP. This method offers low-latency log
collection.

39.1.1. Pulling Logs via the CloudWatch API


1. A service account must be created for accessing the log data. In the AWS web interface, go to Services › IAM.
2. Click the Users option in the left-side panel and click the Add user button.
3. Provide a User name, for example nxlog. Tick the checkbox to allow Programmatic access to this account.

4. Choose to Attach existing policies directly and select the CloudWatchLogsReadOnly policy. Click Next:
Review and then Create user.

5. Save access keys for this user and Close.
6. Install and configure Boto 3, the AWS SDK for Python. See the Boto 3 Quickstart and Credentials
documentation for more details.
7. Edit the region_name and group_name variables in the cloudwatch.py script, as necessary.

8. Configure NXLog to execute the script with the im_python module.

Example 197. Using the Amazon CloudWatch Add-On

This example NXLog configuration uses im_python to execute the CloudWatch add-on script. The xm_json
parse_json() procedure is then used to parse the JSON log data into fields.

nxlog.conf
1 <Extension _json>
2 Module xm_json
3 </Extension>
4
5 <Input py>
6 Module im_python
7 PythonCode cloudwatch.py
8 Exec parse_json();
9 </Input>

cloudwatch.py (truncated)
import nxlog, boto3, json, time

class LogReader:
  def __init__(self, time_interval):
  client = boto3.client('logs', region_name='eu-central-1')

  self.lines = ""
  all_streams = []
  group_name = '<ENTER GROUP NAME HERE>'

  #query CloudWatch for all log streams in the group


  stream_batch = client.describe_log_streams(logGroupName=group_name)
  all_streams += stream_batch['logStreams']
  start_time = int(time.time()-time_interval)*1000
  end_time = int(time.time())*1000

  while 'nextToken' in stream_batch:


  stream_batch = client.describe_log_streams(
[...]

39.1.2. Accepting Log Data From Lambda via HTTP


Using a push model follows an event-driven computing approach and allows for low latency. In this scenario, an
AWS Lambda function sends log data in JSON format with the HTTP POST method. NXLog listens for connections
and accepts log data.

1. In the AWS web interface, go to Services › Lambda and click the Create function button.

2. Click the Author from scratch button.


3. Provide the name for the function and select Create a new role from template(s) from the Role dropdown.
Enter a role name to be associated with this Lambda function. Then click the Create function button.

4. Under Function code select Upload a .ZIP file for Code entry type, select Python under Runtime, and
change the Handler name to lambda_function.lambda_handler.
5. Set the correct host and port in lambda_function.py, then upload a ZIP archive with that file (and
certificates, if needed). Click Save.

6. From the Configuration tab, change to the Triggers tab. Click + Add trigger.
7. Choose CloudWatch Logs as a trigger for the Lambda function. Select the log group that should be
forwarded and provide a Filter Name, then click Submit.

Example 198. Lambda Collection via HTTPS Input

In this example, the im_http module listens for connections from the Lambda script via HTTPS. The xm_json
parse_json() procedure is then used to parse the JSON log data into fields.

nxlog.conf
 1 <Extension _json>
 2 Module xm_json
 3 </Extension>
 4
 5 <Input http>
 6 Module im_http
 7 ListenAddr 127.0.0.1
 8 Port 8080
 9 HTTPSCertFile %CERTDIR%/server-cert.pem
10 HTTPSCertKeyFile %CERTDIR%/server-key.pem
11 HTTPSCAFile %CERTDIR%/ca.pem
12 HTTPSRequireCert TRUE
13 HTTPSAllowUntrusted FALSE
14 Exec parse_json();
15 </Input>

lambda_function.py
import json, base64, zlib, ssl, http.client

print('Loading function')

def lambda_handler(event, context):


  compressed_logdata = base64.b64decode(event['awslogs']['data'])
  logdata = zlib.decompress(compressed_logdata, 16 + zlib.MAX_WBITS)
  context = ssl.SSLContext(ssl.PROTOCOL_TLSv1_2)
  context.load_verify_locations("ca.pem")

  # For more details regarding the SSLContext.load_cert_chain()


  # function, please refer to Python's ssl module documentation at
  # <https://docs.python.org/3/library/ssl.html#ssl.SSLContext>
  context.load_cert_chain("client.pem")

  conn = http.client.HTTPSConnection("<HOST>:<PORT>", context=context)


  conn.set_debuglevel(3)
  headers = {"Content-type": "application/json"}
  conn.request('POST', "/", logdata, headers)
  conn.close()

39.2. Amazon EC2


Amazon EC2 provides cloud-based virtual computing.

When running NXLog in EC2 instances, it may be helpful to include the current instance ID in the collected logs.
For more information about retrieving EC2 instance metadata and adding it to event data, see the Amazon Web
Services section of the Cloud Instance Metadata chapter.
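For illustration only, the instance ID can be retrieved from the link-local metadata endpoint available inside every EC2 instance; this Python helper is a hypothetical sketch and not part of NXLog:

```python
import urllib.request

# Link-local endpoint that serves instance metadata from within an EC2 instance
METADATA_BASE = "http://169.254.169.254/latest/meta-data/"

def metadata_url(key):
    """Build the metadata URL for a key such as 'instance-id'."""
    return METADATA_BASE + key

def fetch_instance_id(timeout=2):
    """Fetch the current instance ID; works only when run inside an EC2 instance."""
    with urllib.request.urlopen(metadata_url("instance-id"), timeout=timeout) as resp:
        return resp.read().decode("utf-8")
```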

39.3. Amazon Simple Storage Service (S3)


Amazon S3 is a high availability, low-latency storage service offered by Amazon. For more information, see the
AWS Amazon S3 Overview.

NXLog can be set up to send log data to S3 storage or read log data from S3 storage. For more information, see
the Amazon S3 add-on documentation.

Chapter 40. Apache HTTP Server
The Apache HTTP Server provides very comprehensive and flexible logging capabilities. A brief overview is
provided in the following sections. See the Log Files section of the Apache HTTP Server Documentation for more
detailed information about configuring logging.

40.1. Error Log


Apache error logging is controlled by the ErrorLog, ErrorLogFormat, and LogLevel directives. The error log can
be parsed by NXLog with a regular expression.

Example 199. Using the Apache Error Log

The following directives enable error logging of all messages at or above the "informational" severity level,
in the specified format, to the specified file. The ErrorLogFormat defined below is equivalent to the
default, which includes the timestamp, the module producing the message, the event severity, the process
ID, the thread ID, the client address, and the detailed error message.

apache2.conf
LogLevel info
ErrorLogFormat "[%{u}t] [%-m:%l] [pid %P:tid %T] [client %a] %M"
ErrorLog /var/log/apache2/error.log

The following is a typical log message generated by the Apache HTTP Server, an NXLog configuration for
parsing it, and the resulting JSON.

Log Sample
[Tue Aug 01 07:17:44.496832 2017] [core:info] [pid 15019:tid 140080326108928] [client
192.168.56.1:60154] AH00128: File does not exist: /var/www/html/notafile.html↵

nxlog.conf
 1 <Input apache_error>
 2 Module im_file
 3 File '/var/log/apache2/error.log'
 4 <Exec>
 5 if $raw_event =~ /(?x)^\[\S+\ ([^\]]+)\]\ \[(\S+):(\S+)\]\ \[pid\ (\d+):
 6 tid\ (\d+)\]\ (\[client\ (\S+)\]\ )?(.+)$/
 7 {
 8 $EventTime = parsedate($1);
 9 $ApacheModule = $2;
10 $ApacheLogLevel = $3;
11 $ApachePID = $4;
12 $ApacheTID = $5;
13 if $7 != '' $ClientAddress = $7;
14 $Message = $8;
15 }
16 </Exec>
17 </Input>

Output Sample
{
  "EventReceivedTime": "2017-08-01T07:17:45.641190+02:00",
  "SourceModuleName": "apache_error",
  "SourceModuleType": "im_file",
  "EventTime": "2017-08-01T07:17:44.496832+02:00",
  "ApacheModule": "core",
  "ApacheLogLevel": "info",
  "ApachePID": "15019",
  "ApacheTID": "140080317716224",
  "ClientAddress": "192.168.56.1:60026",
  "Message": "AH00128: File does not exist: /var/www/html/notafile.html"
}

40.2. Access Log


The access log file and format are configured with the LogFormat and CustomLog directives. The LogFormat
directive is used to define a format, while the CustomLog directive configures logging to a specified file in one of
the defined formats. Multiple CustomLog directives can be used to enable logging to multiple files.

There are several options for handling logging when using virtual hosts. The examples below, when specified in
the main server context (not in a <VirtualHost> section) will log all requests exactly as with a single-host server.
The %v format string can be added, if desired, to log the name of the virtual server responding to the request.
Alternatively, the CustomLog directive can be specified inside a <VirtualHost> section, in which case only the
requests served by that virtual server will be logged to the file.
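For instance, the %v format string can be prepended to the combined format to record the canonical name of the serving virtual host (the format name vcombined below is arbitrary):

apache2.conf
```
LogFormat "%v %h %l %u %t \"%r\" %>s %b \"%{Referer}i\" \"%{User-agent}i\"" vcombined
CustomLog /var/log/apache2/access_log vcombined
```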

NOTE: Pre-defined format strings for the Common Log and Combined Log Formats may be included by default. These pre-defined formats may use %O (the total sent, including headers) instead of the standard %b (the size of the requested file) in order to allow detection of partial requests.

Example 200. Using the Common Log Format for the Access Log

The LogFormat directive below creates a format named common that corresponds to the Common Log
Format. The second directive configures the Apache HTTP Server to write entries to the access_log file in
the common format.

apache2.conf
LogFormat "%h %l %u %t \"%r\" %>s %b" common
CustomLog /var/log/apache2/access_log common

Example 201. Using the Combined Log Format for the Access Log

The following directives will configure the Apache HTTP Server to write entries to the access_log file in the
Combined Log Format.

apache2.conf
LogFormat "%h %l %u %t \"%r\" %>s %b \"%{Referer}i\" \"%{User-agent}i\"" combined
CustomLog /var/log/apache2/access_log combined

NXLog configuration examples for parsing these access log formats can be found in the Common & Combined
Log Formats section.

Chapter 41. Apache Tomcat
Apache Tomcat provides flexible logging that can be configured for different transports and formats.

Example 202. Collecting Apache Tomcat Logs

Here is a log sample consisting of three events. The log message of the second event spans multiple lines.

Log Sample
2001-01-25 17:31:42,136 INFO [org.nxlog.somepackage.Class] - single line↵
2001-01-25 17:41:16,268 ERROR [org.nxlog.somepackage.Class] - Error retrieving names: ; nested
exception is:↵
  java.net.ConnectException: Connection refused↵
AxisFault↵
 faultCode: {http://schemas.xmlsoap.org/soap/envelope/}Server.userException↵
 faultSubcode:↵
 faultString: java.net.ConnectException: Connection refused↵
 faultActor:↵
 faultNode:↵
 faultDetail:↵
  {http://xml.apache.org/axis/}stackTrace:java.net.ConnectException: Connection refused↵
2001-01-25 17:57:38,469 INFO [org.nxlog.somepackage.Class] - third log message↵

In order to parse and process multiple line log messages, the xm_multiline module can be used. In this
example, a regular expression match determines the beginning of a log message.

nxlog.conf
 1 define REGEX /(?x)^(?<EventTime>\d{4}\-\d{2}\-\d{2}\ \d{2}\:\d{2}\:\d{2}),\d{3}\ \
 2 (?<Severity>\S+)\ \[(?<Class>\S+)\]\ \-\ (?<Message>[\s\S]+)/
 3
 4 <Extension multiline>
 5 Module xm_multiline
 6 HeaderLine %REGEX%
 7 </Extension>
 8
 9 <Input log4j>
10 Module im_file
11 File "/var/log/tomcat6/catalina.out"
12 InputType multiline
13 Exec if $raw_event =~ %REGEX% $EventTime = parsedate($EventTime);
14 </Input>

Chapter 42. APC Automatic Transfer Switch
The APC Automatic Transfer Switch (ATS) is capable of sending its logs to a remote Syslog destination via UDP.

Log Sample
Date Time Event↵
------------------------------------------------------------------------↵
03/26/2017 16:20:55 Automatic Transfer Switch: Communication↵
  established.↵
03/26/2017 16:20:45 System: Warmstart.↵
03/26/2017 16:19:13 System: Detected an unauthorized user attempting↵
  to access the SNMP interface from 192.168.15.11.↵

The ATS is an independent device, so if there is more than one installed in a particular environment, the
configuration below must be applied to each device individually. For more details about configuring APC ATS
logging, go to the APC Support Site and select the product name or part number.

NOTE: The steps below have been tested on AP7700 series devices and should also work for other ATS models.

1. Configure NXLog for receiving log entries via UDP (see the example below). Then restart NXLog.
2. Make sure the NXLog agent is accessible from the device.
3. Configure Syslog logging on the ATS using either the web interface or the command line. See the following
sections.

Example 203. Receiving Logs from APC ATS

The following example shows ATS logs as received and processed by NXLog.

nxlog.conf
 1 <Extension _syslog>
 2 Module xm_syslog
 3 </Extension>
 4
 5 <Extension _json>
 6 Module xm_json
 7 </Extension>
 8
 9 <Input in_syslog_udp>
10 Module im_udp
11 Host 0.0.0.0
12 Port 514
13 Exec parse_syslog();
14 </Input>
15
16 <Output file>
17 Module om_file
18 File "/var/log/apc.log"
19 Exec to_json();
20 </Output>

Logs like the example at the beginning of the chapter will produce output as follows.

Output Sample
{
  "MessageSourceAddress": "192.168.15.22",
  "EventReceivedTime": "2017-03-26 17:03:27",
  "SourceModuleName": "in_syslog_udp",
  "SourceModuleType": "im_udp",
  "SyslogFacilityValue": 23,
  "SyslogFacility": "LOCAL7",
  "SyslogSeverityValue": 7,
  "SyslogSeverity": "DEBUG",
  "SeverityValue": 1,
  "Severity": "DEBUG",
  "Hostname": "192.168.15.22",
  "EventTime": "2017-03-26 16:04:18",
  "SourceName": "System",
  "Message": "Detected an unauthorized user attempting to access the SNMP interface from
192.168.15.11. 0x0004"
}
{
  "MessageSourceAddress": "192.168.15.22",
  "EventReceivedTime": "2017-03-26 17:20:04",
  "SourceModuleName": "in_syslog_udp",
  "SourceModuleType": "im_udp",
  "SyslogFacilityValue": 23,
  "SyslogFacility": "LOCAL7",
  "SyslogSeverityValue": 7,
  "SyslogSeverity": "DEBUG",
  "SeverityValue": 1,
  "Severity": "DEBUG",
  "Hostname": "192.168.15.22",
  "EventTime": "2017-03-26 16:20:54",
  "SourceName": "System",
  "Message": "Warmstart. 0x0002"
}
{
  "MessageSourceAddress": "192.168.15.22",
  "EventReceivedTime": "2017-03-26 17:20:04",
  "SourceModuleName": "in_syslog_udp",
  "SourceModuleType": "im_udp",
  "SyslogFacilityValue": 23,
  "SyslogFacility": "LOCAL7",
  "SyslogSeverityValue": 7,
  "SyslogSeverity": "DEBUG",
  "SeverityValue": 1,
  "Severity": "DEBUG",
  "Hostname": "192.168.15.22",
  "EventTime": "2017-03-26 16:20:55",
  "Message": "Automatic Transfer Switch: Communication established. 0x0C05"
}

42.1. Configuring via the Web Interface


1. Log in to the web panel.
2. Go to Network › Syslog.

3. Enable Syslog.
4. Select the Facility.

5. Add up to four Syslog servers and a port for each.
6. Map the Local Severity to the Syslog Severity as required.

7. Click [ Apply ].

42.2. Configuring via the Command Line


1. Log in to the ATS via Telnet.
2. Type 2 and then 9 to go to the Syslog settings.
3. Type 1 to configure the Syslog settings.
4. Type 1 to enable Syslog.
5. Type 2 to configure the Syslog facility.
6. Type 3 to save the changes.
7. Press ESC to go one level up.
8. Select one of the four Syslog server slots.
9. Type 1 to set the Syslog server IP address.
10. Type 2 to set the UDP port number.
11. Type 3 to apply the changes.
12. Press ESC to go one level up.
13. Type 6 to map the local severity to the Syslog severity.
14. Use options from 1 to 4 to choose the mapping.
15. Type 5 to accept the changes.

Example 204. ATS Syslog Settings

The following shows the Syslog settings screen, which is shown after completing step 2 above.

------- Syslog ---------------------------------------------------------

  Syslog Settings Severity Mapping


  --------------------------------------------------------------------
  Syslog : Enabled Severe : DEBUG Info: DEBUG
  Facility: LOCAL7 Warning: DEBUG None: DEBUG

  # Syslog Server Port IP


  --------------------------------------------------------------------
  1 514 192.168.15.251
  2 514 0.0.0.0
  3 514 0.0.0.0
  4 514 0.0.0.0

  1- Settings
  2- Server 1
  3- Server 2
  4- Server 3
  5- Server 4
  6- Severity Mapping

  <ESC>- Back, <ENTER>- Refresh, <CTRL-L>- Event Log


> 1

Chapter 43. Apple macOS Kernel
NXLog supports different ways of collecting Apple macOS kernel logs:

• Collect directly with the im_kernel module, which requires disabling syslogd.
• Collect via the local log file with im_file; see Local Syslog below.

Example 205. Collecting Kernel Logs Directly

This configuration uses the im_kernel module to read events directly from the kernel (via /dev/klog).
This requires that syslogd be disabled as follows:

1. Unload the daemon.

$ sudo launchctl unload /System/Library/LaunchDaemons/com.apple.syslogd.plist

2. Rename plist to keep syslogd from starting again at the next reboot.

$ sudo mv /System/Library/LaunchDaemons/com.apple.syslogd.plist \
  /System/Library/LaunchDaemons/com.apple.syslogd.plist.disabled

nxlog.conf
1 <Extension _syslog>
2 Module xm_syslog
3 </Extension>
4
5 <Input kernel>
6 Module im_kernel
7 Exec parse_syslog_bsd();
8 </Input>

Newer versions of Apple macOS use ULS (Unified Logging System) with SIP (System Integrity Protection), and
syslogd cannot easily be disabled while SIP remains enabled. In this case, the im_exec module can be used to
collect events from /usr/bin/log stream --style=json --type=log.

Example 206. Collecting ULS Kernel Logs from /usr/bin/log

This configuration uses the im_exec module to read events from the kernel (via /usr/bin/log) and
parses the data with the xm_json module.

nxlog.conf
 1 <Extension json>
 2 Module xm_json
 3 </Extension>
 4
 5 <Extension multiline>
 6 Module xm_multiline
 7 HeaderLine /^\[{|^},{/
 8 </Extension>
 9
10 <Input in>
11 Module im_exec
12 Command /usr/bin/log
13 Arg stream
14 Arg --style=json
15 Arg --type=log
16 InputType multiline
17 <Exec>
18 $raw_event =~ s/^\[{|^},{/{/;
19 $raw_event =~ s/\}]$//;
20 $raw_event = $raw_event + "\n}";
21 parse_json();
22 </Exec>
23 </Input>

Chapter 44. ArcSight Common Event Format (CEF)
NXLog can be configured to collect or forward logs in Common Event Format (CEF). NXLog Enterprise Edition
provides the xm_cef module for parsing and generating CEF.

CEF is a text-based log format developed by ArcSight™ and used by HP ArcSight™ products. It uses Syslog as
transport. The full format includes a Syslog header or "prefix", a CEF "header", and a CEF "extension". The
extension contains a list of key-value pairs. Standard key names are provided, and user-defined extensions can
be used for additional key names. In some cases, CEF is used with the Syslog header omitted.

CEF Syntax
Jan 11 10:25:39 host CEF:Version|Device Vendor|Device Product|Device Version|Device Event Class
ID|Name|Severity|[Extension]↵

Log Sample
Oct 12 04:16:11 localhost CEF:0|nxlog.org|nxlog|2.7.1243|Executable Code was Detected|Advanced
exploit detected|100|src=192.168.255.110 spt=46117 dst=172.25.212.204 dpt=80↵

44.1. Collecting and Parsing CEF


NXLog Enterprise Edition can be configured to collect and parse CEF logs with the xm_cef module.

The ArcSight™ Logger can be configured to send CEF logs via TCP with the following steps.

1. Log in to the Logger control panel.


2. Browse to Configuration › Data › Forwarders.

3. Click Add to create a new Forwarder:


◦ Name: nxlog

◦ Type: TCP Forwarder

◦ Type of Filter: Unified Query

4. Click Next to proceed to editing the new Forwarder:


◦ Query: (define as required)
◦ IP/Host: (enter the IP address or hostname of the system running NXLog)
◦ Port: 1514

5. Click Save.

Example 207. Receiving CEF Logs

With this configuration, NXLog will collect CEF logs via TCP, convert to plain JSON format, and save to file.

nxlog.conf
 1 <Extension _cef>
 2 Module xm_cef
 3 </Extension>
 4
 5 <Extension _json>
 6 Module xm_json
 7 </Extension>
 8
 9 <Extension _syslog>
10 Module xm_syslog
11 </Extension>
12
13 <Input logger_tcp>
14 Module im_tcp
15 Host 0.0.0.0
16 Port 1514
17 Exec parse_syslog(); parse_cef($Message);
18 </Input>
19
20 <Output json_file>
21 Module om_file
22 File '/var/log/json'
23 Exec to_json();
24 </Output>
25
26 <Route r>
27 Path logger_tcp => json_file
28 </Route>

44.2. Generating and Forwarding CEF


NXLog Enterprise Edition can be configured to generate and forward CEF logs with the xm_cef module.

The ArcSight™ Logger can be configured to receive CEF logs via TCP with the following steps.

1. Log in to the Logger control panel.


2. Browse to Configuration › Data › Receivers in the navigation menu.

3. Click Add to create a new Receiver:


◦ Name: nxlog

◦ Type: CEF TCP Receiver

4. Click Next to proceed to editing the new Receiver:


◦ Port: 574

◦ Encoding: UTF-8

◦ Source Type: CEF

5. Click Save.

Example 208. Sending CEF Logs

With this configuration, NXLog will read Syslog logs from file, convert them to CEF, and forward them to the
ArcSight Logger via TCP. Default values will be used for the CEF header unless corresponding fields are
defined in the event record (see the to_cef() procedure in the Reference Manual for a list of fields).

nxlog.conf
 1 <Extension _cef>
 2 Module xm_cef
 3 </Extension>
 4
 5 <Extension _syslog>
 6 Module xm_syslog
 7 </Extension>
 8
 9 <Input messages_file>
10 Module im_file
11 File '/var/log/messages'
12 Exec parse_syslog();
13 </Input>
14
15 <Output logger_tcp>
16 Module om_tcp
17 Host 192.168.1.1
18 Port 574
19 Exec $Message = to_cef(); to_syslog_bsd();
20 </Output>
21
22 <Route r>
23 Path messages_file => logger_tcp
24 </Route>

44.3. Using xm_csv and xm_kvp


Because NXLog Community Edition does not include the xm_cef module, the xm_csv and xm_kvp modules may
be used instead to handle CEF logs.

WARNING The xm_csv and xm_kvp modules may not always correctly parse or generate CEF logs.

Example 209. Using CEF with NXLog Community Edition

Here, the xm_csv module is used to parse the pipe-delimited CEF header, while the xm_kvp module is used
to parse the space-delimited key-value pairs in the CEF extension. The required extension configurations
are shown below.

nxlog.conf Extensions
 1 <Extension cef_header>
 2 Module xm_csv
 3 Fields $Version, $Device_Vendor, $Device_Product, $Device_Version, \
 4 $Signature_ID, $Name, $Severity, $_Extension
 5 Delimiter |
 6 QuoteMethod None
 7 </Extension>
 8
 9 <Extension cef_extension>
10 Module xm_kvp
11 KVDelimiter '='
12 KVPDelimiter ' '
13 QuoteMethod None
14 </Extension>
15
16 <Extension syslog>
17 Module xm_syslog
18 </Extension>

For CEF input, use an input instance like this one.

nxlog.conf Input
 1 <Input in>
 2 Module im_tcp
 3 Host 0.0.0.0
 4 Port 1514
 5 <Exec>
 6 parse_syslog();
 7 cef_header->parse_csv($Message);
 8 cef_extension->parse_kvp($_Extension);
 9 </Exec>
10 </Input>

For CEF output, use an output instance like this one.

nxlog.conf Output
 1 <Output out>
 2 Module om_tcp
 3 Host 192.168.1.1
 4 Port 574
 5 <Exec>
 6 $_Extension = cef_extension->to_kvp();
 7 $Version = 'CEF:0';
 8 $Device_Vendor = 'NXLog';
 9 $Device_Product = 'NXLog';
10 $Device_Version = '';
11 $Signature_ID = '0';
12 $Name = '-';
13 $Severity = '';
14 $Message = cef_header->to_csv();
15 to_syslog_bsd();
16 </Exec>
17 </Output>

Chapter 45. Box
Box provides content management and file sharing services.

NXLog can be set up to pull events from Box using their REST API. For more information, see the Box add-on.

Chapter 46. Brocade Switches
Brocade switches can be configured to send Syslog messages to a remote destination on UDP port 514.

Log Sample
2017/03/22-23:05:12, [SEC-1203], 113962, FID 128, INFO, fcsw1, Login information: Login successful
via TELNET/SSH/RSH. IP Addr: admin2↵

The best way to configure a Brocade switch is with the command line interface. In the case of multiple switches
running in redundancy mode, each device must be configured separately.

More details on configuring Brocade switches can be found in the Brocade Document Library: search for a
particular switch model and select Installation & Configuration Guides from the Filter list.

NOTE: The steps below have been tested with Brocade 4100 series switches and OS v6. Newer software versions may have additional capabilities, such as sending logs over TLS.

1. Configure NXLog for receiving Syslog entries via UDP (see the example below), then restart NXLog.
2. Make sure the NXLog agent is accessible from the switch.
3. Log in to the switch via SSH.
4. Run the following commands. Replace LEVEL with an integer corresponding to the desired Syslog local facility
(see the example). Replace IP_ADDRESS with the address of the NXLog agent.

# syslogdfacility -l LEVEL
# syslogdIpAdd IP_ADDRESS

Example 210. Sending Logs With local5 Facility

The following commands query the current Syslog facility and then set up Syslog logging to
192.168.6.143 with Syslog facility local5.

fcsw1:admin> syslogdfacility
Syslog facility: LOG_LOCAL7
fcsw1:admin> syslogdfacility -l 5
Syslog facility changed to LOG_LOCAL5
fcsw1:admin> syslogdIpAdd 192.168.6.143
Syslog IP address 192.168.6.143 added

Example 211. Receiving Brocade Logs

This example shows Brocade switch logs as received and processed by NXLog.

nxlog.conf
 1 <Extension _syslog>
 2 Module xm_syslog
 3 </Extension>
 4
 5 <Extension _json>
 6 Module xm_json
 7 </Extension>
 8
 9 <Input in_syslog_udp>
10 Module im_udp
11 Host 0.0.0.0
12 Port 514
13 Exec parse_syslog();
14 </Input>
15
16 <Output file>
17 Module om_file
18 File "/var/log/brocade.log"
19 Exec to_json();
20 </Output>

Output Sample
{
  "MessageSourceAddress": "192.168.5.15",
  "EventReceivedTime": "2017-03-22 20:23:58",
  "SourceModuleName": "in_syslog_udp",
  "SourceModuleType": "im_udp",
  "SyslogFacilityValue": 21,
  "SyslogFacility": "LOCAL5",
  "SyslogSeverityValue": 6,
  "SyslogSeverity": "INFO",
  "SeverityValue": 2,
  "Severity": "INFO",
  "EventTime": "2017-03-22 20:23:58",
  "Hostname": "192.168.5.15",
  "SourceName": "raslogd",
  "Message": "2017/03/22-23:05:12, [SEC-1203], 113962, WWN 10:00:00:05:1e:02:8e:fc | FID 128,
INFO, fcsw1, Login information: Login successful via TELNET/SSH/RSH. IP Addr: admin2"
}

Chapter 47. Check Point
The im_checkpoint module, provided by NXLog Enterprise Edition, can collect logs from Check Point devices over
the OPSEC LEA protocol.

Example 212. Collecting Check Point LEA Logs

With the following configuration, NXLog will collect logs from Check Point devices over the LEA protocol and
write them to file in JSON format.

nxlog.conf
 1 <Extension _json>
 2 Module xm_json
 3 </Extension>
 4
 5 <Input checkpoint>
 6 Module im_checkpoint
 7 Command /opt/nxlog/bin/nx-im-checkpoint
 8 LEAConfigFile /opt/nxlog/etc/lea.conf
 9 </Input>
10
11 <Output file>
12 Module om_file
13 File 'tmp/output'
14 Exec $raw_event = to_json();
15 </Output>
16
17 <Route checkpoint_to_file>
18 Path checkpoint => file
19 </Route>

Chapter 48. Cisco ACS
An example Syslog record from a Cisco Secure Access Control System (ACS) device looks like the following. For
more information, refer to the Syslog Logging Configuration Scenario chapter in the Cisco Configuration Guide.

Log Sample
<38>Oct 16 21:01:29 10.0.1.1 CisACS_02_FailedAuth 1k1fg93nk 1 0 Message-Type=Authen failed,User-
Name=John,NAS-IP-Address=10.0.1.2,AAA Server=acs01↵

Example 213. Collecting From Cisco Secure ACS

The following configuration file instructs NXLog to accept Syslog messages on UDP port 1514. The payload
is parsed as Syslog and then the ACS specific fields are extracted. The output is written to file in JSON
format.

nxlog.conf (truncated)
 1 <Extension _json>
 2 Module xm_json
 3 </Extension>
 4
 5 <Extension _syslog>
 6 Module xm_syslog
 7 </Extension>
 8
 9 <Input in>
10 Module im_udp
11 Host 0.0.0.0
12 Port 1514
13 <Exec>
14 parse_syslog_bsd();
15 if ( $Message =~ /^CisACS_(\d\d)_(\S+) (\S+) (\d+) (\d+) (.*)$/ )
16 {
17 $ACSCategoryNumber = $1;
18 $ACSCategoryName = $2;
19 $ACSMessageId = $3;
20 $ACSTotalSegments = $4;
21 $ACSSegmentNumber = $5;
22 $ACSMessage = $6;
23 if ( $ACSMessage =~ /Message-Type=([^\,]+)/ ) $ACSMessageType = $1;
24 if ( $ACSMessage =~ /User-Name=([^\,]+)/ ) $AccountName = $1;
25 if ( $ACSMessage =~ /NAS-IP-Address=([^\,]+)/ ) $ACSNASIPAddress = $1;
26 if ( $ACSMessage =~ /AAA Server=([^\,]+)/ ) $ACSAAAServer = $1;
27 }
28 else log_warning("Does not match: " + $raw_event);
29 [...]

Chapter 49. Cisco ASA
Cisco Adaptive Security Appliance (ASA) devices are capable of sending their logs to a remote Syslog destination
via TCP or UDP. When sending logs over the network, TCP is the preferred protocol since packet loss is possible
with UDP, especially when network traffic is high.

Log Sample
Apr 15 2017 00:21:14 192.168.12.1 : %ASA-5-111010: User 'john', running 'CLI' from IP 0.0.0.0,
executed 'dir disk0:/dap.xml'↵
Apr 15 2017 00:22:27 192.168.12.1 : %ASA-4-313005: No matching connection for ICMP error message:
icmp src outside:81.24.28.226 dst inside:72.142.17.10 (type 3, code 0) on outside interface.
Original IP payload: udp src 72.142.17.10/40998 dst 194.153.237.66/53.↵
Apr 15 2017 00:22:42 192.168.12.1 : %ASA-3-710003: TCP access denied by ACL from
179.236.133.160/8949 to outside:72.142.18.38/23↵

For more details about configuring Syslog on Cisco ASA, see the Cisco configuration guide for the ASA or
Adaptive Security Device Manager (ASDM) version in use.

NOTE: The steps below have been tested with ASA 9.x and ASDM 7.x, but should also work with other versions.

49.1. Forwarding Cisco ASA Logs Over TCP


1. Configure NXLog for receiving Syslog via TCP (see the examples below), then restart NXLog.
2. Make sure the NXLog agent is accessible from each of the ASA devices being configured.
3. Set up Syslog logging using either the command line or ASDM. See the following sections.
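As a sketch, the ASA-side CLI configuration for TCP Syslog forwarding typically resembles the following (the interface name, NXLog agent address, and port are placeholders for your environment):

```
logging enable
logging trap informational
logging host inside 192.168.1.10 tcp/1514
```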

Example 214. Receiving Cisco ASA Logs

This example shows Cisco ASA logs as received and processed by NXLog.

nxlog.conf
 1 <Extension _syslog>
 2 Module xm_syslog
 3 </Extension>
 4
 5 <Extension _json>
 6 Module xm_json
 7 </Extension>
 8
 9 <Input in_syslog_tcp>
10 Module im_tcp
11 Host 0.0.0.0
12 Port 1514
13 Exec parse_syslog();
14 </Input>
15
16 <Output file>
17 Module om_file
18 File "/var/log/asa.log"
19 Exec to_json();
20 </Output>

The following output was produced when this configuration processed the log sample shown at the beginning of
the chapter.

Output Sample
{
  "MessageSourceAddress": "192.168.12.1",
  "EventReceivedTime": "2017-04-15 00:19:53",
  "SourceModuleName": "in_syslog_tcp",
  "SourceModuleType": "im_tcp",
  "SyslogFacilityValue": 20,
  "SyslogFacility": "LOCAL4",
  "SyslogSeverityValue": 5,
  "SyslogSeverity": "NOTICE",
  "SeverityValue": 2,
  "Severity": "INFO",
  "Hostname": "192.168.12.1",
  "EventTime": "2017-04-15 00:21:14",
  "Message": "%ASA-5-111010: User 'john', running 'CLI' from IP 0.0.0.0, executed 'dir
disk0:/dap.xml'"
}
{
  "MessageSourceAddress": "192.168.12.1",
  "EventReceivedTime": "2017-04-15 00:21:06",
  "SourceModuleName": "in_syslog_tcp",
  "SourceModuleType": "im_tcp",
  "SyslogFacilityValue": 20,
  "SyslogFacility": "LOCAL4",
  "SyslogSeverityValue": 4,
  "SyslogSeverity": "WARNING",
  "SeverityValue": 3,
  "Severity": "WARNING",
  "Hostname": "192.168.12.1",
  "EventTime": "2017-04-15 00:22:27",
  "Message": "%ASA-4-313005: No matching connection for ICMP error message: icmp src
outside:81.24.28.226 dst inside:72.142.17.10 (type 3, code 0) on outside interface. Original IP
payload: udp src 72.142.17.10/40998 dst 194.153.237.66/53."
}
{
  "MessageSourceAddress": "192.168.12.1",
  "EventReceivedTime": "2017-04-15 00:21:21",
  "SourceModuleName": "in_syslog_tcp",
  "SourceModuleType": "im_tcp",
  "SyslogFacilityValue": 20,
  "SyslogFacility": "LOCAL4",
  "SyslogSeverityValue": 3,
  "SyslogSeverity": "ERR",
  "SeverityValue": 4,
  "Severity": "ERROR",
  "Hostname": "192.168.12.1",
  "EventTime": "2017-04-15 00:22:42",
  "Message": "%ASA-3-710003: TCP access denied by ACL from 179.236.133.160/8949 to
outside:72.142.18.38/23"
}

The contents of the message can be parsed to extract additional fields.

Example 215. Extracting Additional Fields

The following configuration uses a regular expression to parse additional fields from substrings embedded
in the string value of the $Message field. The severity number and message ID are captured as new fields,
and the remaining message text, with the parsed substrings removed, is assigned to a new $ASAMessage
field.

nxlog.conf
<Input in_syslog_tcp>
    Module  im_tcp
    Host    0.0.0.0
    Port    1514
    <Exec>
        parse_syslog();
        if $Message =~ /^%(ASA)-(\d)-(\d{6}): (.*)$/
        {
            $ASASeverityNumber = $2;
            $ASAMessageID = $3;
            $ASAMessage = $4;
        }
    </Exec>
</Input>

Output Sample
{
  "MessageSourceAddress": "192.168.12.1",
  "EventReceivedTime": "2017-04-15 14:27:04",
  "SourceModuleName": "in_syslog_tcp",
  "SourceModuleType": "im_tcp",
  "SyslogFacilityValue": 20,
  "SyslogFacility": "LOCAL4",
  "SyslogSeverityValue": 3,
  "SyslogSeverity": "ERR",
  "SeverityValue": 4,
  "Severity": "ERROR",
  "Hostname": "192.168.12.1",
  "EventTime": "2017-04-15 14:28:26",
  "Message": "%ASA-3-710003: TCP access denied by ACL from 117.247.81.21/52569 to
outside:72.142.18.38/23",
  "ASASeverityNumber": "3",
  "ASAMessageID": "710003",
  "ASAMessage": "TCP access denied by ACL from 117.247.81.21/52569 to outside:72.142.18.38/23"
}
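Regular expressions like the one above can be prototyped outside NXLog before deployment. The following sketch is illustrative only (not NXLog functionality): it checks the same pattern in Python against a message taken from the output sample above. NXLog's PCRE syntax and Python's `re` syntax are close but not identical, so a passing test here is a sanity check rather than a guarantee.

```python
import re

# Same pattern as in the Exec block above, written in Python's re syntax
pattern = re.compile(r'^%(ASA)-(\d)-(\d{6}): (.*)$')

msg = ('%ASA-3-710003: TCP access denied by ACL '
       'from 117.247.81.21/52569 to outside:72.142.18.38/23')

match = pattern.match(msg)
severity_number = match.group(2)   # corresponds to $ASASeverityNumber
message_id = match.group(3)        # corresponds to $ASAMessageID
asa_message = match.group(4)       # corresponds to $ASAMessage
```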

Further field extraction can be done based on message ID. Detailed information on existing IDs and their formats
can be found in the Cisco ASA Series Syslog Messages book.

Example 216. Extracting Fields According to Message ID

The following NXLog configuration parses a very common firewall message: "TCP access denied by ACL".
The regular expression has been extended to also capture the IP address and port for both the source and
the destination.

nxlog.conf
<Input in_syslog_tcp>
    Module  im_tcp
    Host    0.0.0.0
    Port    1514
    <Exec>
        parse_syslog();
        if $Message =~ /(?x)^%ASA-3-710003:\ TCP\ access\ denied\ by\ ACL\ from
                        \ ([0-9.]*)\/([0-9]*)\ to\ outside:([0-9.]*)\/([0-9]*)/
        {
            $ASASeverityNumber = "3";
            $ASAMessageID = "710003";
            $ASAMessage = "TCP access denied by ACL";
            $ASASrcIP = $1;
            $ASASrcPort = $2;
            $ASADstIP = $3;
            $ASADstPort = $4;
        }
    </Exec>
</Input>

Output Sample
{
  "MessageSourceAddress": "192.168.12.1",
  "EventReceivedTime": "2017-04-15 15:10:20",
  "SourceModuleName": "in_syslog_tcp",
  "SourceModuleType": "im_tcp",
  "SyslogFacilityValue": 20,
  "SyslogFacility": "LOCAL4",
  "SyslogSeverityValue": 3,
  "SyslogSeverity": "ERR",
  "SeverityValue": 4,
  "Severity": "ERROR",
  "Hostname": "192.168.12.1",
  "EventTime": "2017-04-15 15:11:43",
  "Message": "%ASA-3-710003: TCP access denied by ACL from 119.80.179.109/2083 to
outside:72.142.18.38/23",
  "ASASeverityNumber": "3",
  "ASAMessageID": "710003",
  "ASAMessage": "TCP access denied by ACL",
  "ASASrcIP": "119.80.179.109",
  "ASASrcPort": "2083",
  "ASADstIP": "72.142.18.38",
  "ASADstPort": "23"
}

49.1.1. Configuring via Command Line


To configure Cisco ASA Syslog logging from the command line, follow these steps.

1. Log in to the ASA device via SSH.

2. Enable logging.

# logging enable

3. In case of a High Availability (HA) pair, enable logging on the standby unit.

# logging standby

4. Specify the Syslog facility. Replace FACILITY with a number from 16 to 23, corresponding to local0 through
local7 (the default is 20, or local4).

# logging facility FACILITY

5. Specify the severity level. Replace LEVEL with a number from 0 to 7. Use the maximum level for which
messages should be generated (severity level 3 will produce messages for levels 3, 2, 1, and 0). The levels
correspond to the Syslog severities.

# logging trap LEVEL

6. Allow ASA to pass traffic when the Syslog server is not available.

# logging permit-hostdown

NOTE If logs are being sent via TCP and this setting is not configured, ASA will stop passing traffic when the Syslog server is unavailable.

7. Configure the Syslog host. Replace IP_ADDRESS and PORT with the remote IP address and port that NXLog is
listening on.

# logging host inside IP_ADDRESS tcp/PORT

NOTE To enable SSL/TLS for connections to the NXLog agent, add secure at the end of the above command. The im_ssl module will need to be used when configuring NXLog.

Example 217. Redirecting Logs to 192.168.6.143

This command configures 192.168.6.143 as the Syslog host, with TCP port 1514.

# logging host inside 192.168.6.143 tcp/1514

8. Apply the configuration.

# write memory
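The facility and severity values configured in steps 4 and 5 are combined by Syslog into a single PRI number (facility × 8 + severity), which appears in the wire format and is decoded into the SyslogFacilityValue and SyslogSeverityValue fields seen in the output samples. A minimal Python sketch of the arithmetic, for illustration only:

```python
def make_pri(facility, severity):
    """Combine a Syslog facility and severity into a PRI value."""
    return facility * 8 + severity

def split_pri(pri):
    """Decode a PRI value back into (facility, severity)."""
    return pri // 8, pri % 8

# ASA default facility 20 (local4) with severity 3 (error)
pri = make_pri(20, 3)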

49.1.2. Configuring via ASDM


To configure remote logging via ASDM, follow these steps.

1. Connect and log in to the GUI.


2. Go to Configuration › Device Management › Logging › Logging Setup and make sure Enable Logging is
selected. In case of a High Availability (HA) pair, Enable logging on the failover standby unit should also be
selected. Click [ Apply ].

3. Go to Syslog Setup and specify the Facility Code (the default is 20). Click [ Apply ].

4. Go to Logging Filters, select Syslog Servers, click [ Edit ] and specify the severity level. Click [ OK ] and
then [ Apply ].

5. Go to Syslog Servers and select Allow user traffic to pass when TCP syslog server is down. Click [ Apply ].

NOTE This setting is important to avoid downtime during TCP logging in case the Syslog server is unavailable.

6. Under Syslog Servers, click [ Add ] and specify the interface, remote IP address, protocol, and port. Click
[ OK ] and then [ Apply ].

NOTE To enable SSL/TLS for connections to the NXLog agent, select the Enable secure syslog using SSL/TLS option. The im_ssl module will need to be used when configuring NXLog.

7. Click [ Save ] to save the configuration.

49.2. NetFlow From Cisco ASA
NetFlow is a protocol developed by Cisco that allows devices to send details about network traffic to a remote
destination. NXLog is capable of receiving NetFlow logs. The steps below outline the configuration required to
send information about traffic passing through a Cisco ASA to NXLog via UDP.

1. Configure NXLog for receiving NetFlow via UDP/2162 (see the example below). Then restart NXLog.
2. Make sure the NXLog agent is accessible from each of the ASA devices being configured.
3. Set up NetFlow logging on Cisco ASA, using either the command line or ASDM. See the following sections.

NOTE The steps below have been tested with ASA 9.x and ASDM 7.x, but should also work with other versions.

Example 218. Receiving NetFlow Logs

This example shows NetFlow logs as received and processed by NXLog.

nxlog.conf
<Extension netflow>
    Module  xm_netflow
</Extension>

<Extension _json>
    Module  xm_json
</Extension>

<Input in_netflow_udp>
    Module     im_udp
    Host       0.0.0.0
    Port       2162
    InputType  netflow
</Input>

<Output file>
    Module  om_file
    File    "/var/log/netflow.log"
    Exec    to_json();
</Output>

Output Sample
{
  "Version": 9,
  "SysUpTimeMilisec": 2374222958,
  "ExportTime": "2017-05-17 18:39:05",
  "TimeMsecStart": "2017-05-17 18:38:04",
  "Protocol": 6,
  "SourcePort": 64394,
  "DestPort": 443,
  "SourceIpV4Address": "192.168.13.37",
  "DestIpV4Address": "172.217.3.135",
  "inputSNMPIface": 4,
  "outputSNMPIface": 3,
  "ASAeventTime": "2017-05-17 18:39:05",
  "ASAconnID": 41834207,
  "FNF_ICMPCode": 0,
  "FNF_ICMPType": 0,
  "ASAevent": 1,
  "ASAextEvent": 0,
  "ASA_XlateSourcePort": 64394,
  "ASA_XlateDestPort": 443,
  "ASA_V4XlateSourceAddr": "72.142.18.38",
  "ASA_V4XlateDestAddr": "172.217.3.135",
  "ASA_IngressACL": "433a1af1a925365e00000000",
  "ASA_EgressACL": "000000000000000000000000",
  "ASA_UserName20": "",
  "MessageSourceAddress": "192.168.12.1",
  "EventReceivedTime": "2017-05-17 18:36:32",
  "SourceModuleName": "in_netflow_udp",
  "SourceModuleType": "im_udp"
}
{
  "Version": 9,
  "SysUpTimeMilisec": 2374222958,
  "ExportTime": "2017-05-17 18:39:05",
  "TimeMsecStart": "2017-05-17 18:38:04",
  "Protocol": 17,
  "SourcePort": 65080,
  "DestPort": 443,
  "SourceIpV4Address": "192.168.13.37",
  "DestIpV4Address": "216.58.216.206",
  "inputSNMPIface": 4,
  "outputSNMPIface": 3,
  "ASAeventTime": "2017-05-17 18:39:05",
  "ASAconnID": 41834203,
  "FNF_ICMPCode": 0,
  "FNF_ICMPType": 0,
  "ASAevent": 1,
  "ASAextEvent": 0,
  "ASA_XlateSourcePort": 65080,
  "ASA_XlateDestPort": 443,
  "ASA_V4XlateSourceAddr": "72.142.18.38",
  "ASA_V4XlateDestAddr": "216.58.216.206",
  "ASA_IngressACL": "433a1af1a925365e00000000",
  "ASA_EgressACL": "000000000000000000000000",
  "ASA_UserName20": "",
  "MessageSourceAddress": "192.168.12.1",
  "EventReceivedTime": "2017-05-17 18:36:32",
  "SourceModuleName": "in_netflow_udp",
  "SourceModuleType": "im_udp"
}

49.2.1. Configuring NetFlow via Command Line


To configure Cisco ASA NetFlow logging, follow these steps.

1. Log in to ASA via SSH.


2. Create a NetFlow destination. Replace IP_ADDRESS and PORT with the address and port that the NXLog agent
is listening on.

# flow-export destination inside IP_ADDRESS PORT

3. Create an access list matching the traffic that needs to be logged. Replace ACL_NAME with a name for the
access list. Replace PROTOCOL, SOURCE_IP, and DESTINATION_IP with appropriate values corresponding to
the traffic to be matched.

# access-list ACL_NAME extended permit PROTOCOL SOURCE_IP DESTINATION_IP

4. Create a class map with the access list. Replace ACL_NAME with the access list name used in the previous
step.

# class-map global-class
# match access-list ACL_NAME

5. Add the NetFlow destination to the global policy. Replace IP_ADDRESS with the address that the NXLog
agent is listening on.

# policy-map global_policy
# class global-class
# flow-export event-type all destination IP_ADDRESS

Example 219. Logging All Traffic to 192.168.6.143

These commands enable NetFlow logging of all traffic to 192.168.6.143 via UDP port 2162.

# flow-export destination inside 192.168.6.143 2162


# access-list global_mpc extended permit ip any any
# class-map global-class
# match access-list global_mpc
# policy-map global_policy
# class global-class
# flow-export event-type all destination 192.168.6.143

49.2.2. Configuring NetFlow via ASDM


To configure Cisco ASA NetFlow logging via ASDM, follow these steps.

1. Connect and log in to the GUI.


2. Go to Configuration › Device Management › Logging › NetFlow.

3. Click [ Add ] and specify the interface, remote IP address, and port that the NXLog agent is listening on.

4. Go to Configuration › Firewall › Service Policy Rules.

5. Click [ Add ], switch to Global, and click [ Next ].

6. Select Source and Destination IP Address (uses ACL) and click [ Next ].
7. Specify the source and destination criteria. The example below matches all traffic.

8. Go to the NetFlow tab and add the NetFlow destination created during the first step. Make sure the Send
option is selected.

9. Click [ OK ] and [ Finish ].

Chapter 50. Cisco FireSIGHT
Cisco FireSIGHT is a suite of network security and traffic management products.

NXLog can be set up to collect Cisco FireSIGHT events using the Cisco Event Streamer (eStreamer) API. This
functionality is implemented as an add-on; for more information, see the Cisco FireSIGHT eStreamer add-on
documentation.

Chapter 51. Cisco IPS
Cisco IPS devices monitor and prevent intrusions by analyzing, detecting, and blocking threats.

NXLog can be set up to collect Cisco IPS alerts with the Security Device Event Exchange (SDEE) API. This
functionality is implemented as an add-on; for more information, see the Cisco Intrusion Prevention Systems
(CIDEE) add-on documentation.

Chapter 52. Cloud Instance Metadata
Cloud providers often allow retrieval of metadata about a virtual machine directly from the instance. NXLog can
be configured to enrich the log data with this information, which may include details such as instance ID and
type, hostname, and currently used public IP address.

The examples below use the xm_python module and Python scripts for this purpose. Each of the scripts depends
on the requests module which can be installed by running pip install requests or with the system’s
package manager (for example, apt install python-requests on Debian-based systems).

Example 220. Adding Metadata to Events

In this example, NXLog reads from a generic file with im_file. In the Output block, the xm_python
python_call() procedure is used to execute the get_attribute() Python function, which adds one or more
metadata fields to the event record. The output is then converted to JSON format and written to a file.

This configuration is applicable to each of the cloud providers listed in the following sections; only the
corresponding Python code differs according to the provider.

nxlog.conf
<Extension python>
    Module      xm_python
    PythonCode  metadata.py
</Extension>

<Extension json>
    Module  xm_json
</Extension>

<Input in>
    Module  im_file
    File    '/var/log/input'
</Input>

<Output out>
    Module  om_file
    File    '/tmp/output'
    <Exec>
        # Call Python function; this will add one or more fields to the event
        python_call('get_attribute');

        # Save contents of $raw_event field in $Message prior to JSON conversion
        $Message = $raw_event;

        # Save all fields in event record to $raw_event field in JSON format
        $raw_event = to_json();
    </Exec>
</Output>

52.1. Amazon Web Services


The EC2 metadata service can be accessed with a GET request to 169.254.169.254. For example:

$ curl http://169.254.169.254/

See the Instance Metadata and User Data documentation for more information about retrieving metadata from
the AWS EC2 service.

Example 221. Using a Python Script to Retrieve EC2 Metadata

The following Python script, which can be used with the xm_python module, collects the instance ID from
the EC2 metadata service and adds a field to the event record.

metadata.py (truncated)
import nxlog, requests

def request_metadata(item):
    """Gets value of metadata attribute 'item', returns text string"""
    # Set metadata URL
    metaurl = 'http://169.254.169.254/latest/meta-data/{0}'.format(item)

    # Send HTTP GET request
    r = requests.get(metaurl)

    # If present, get text payload from the response
    if r.status_code != 404:
        value = r.text
    else:
        value = None

    # Return text value
    return value
[...]

52.2. Azure Cloud


The Azure Instance Metadata Service provides a REST endpoint available at a non-routable IP address
(169.254.169.254), which can be accessed only from within the virtual machine. It is necessary to provide the
header Metadata: true in order to get the response. For example, the request below retrieves the vmId:

$ curl -H "Metadata:true" \
  "http://169.254.169.254/metadata/instance/compute/vmId?api-version=2017-08-01&format=text"

See the Azure Instance Metadata service for more information about retrieving the metadata of an Azure
instance.

Example 222. Using a Python Script to Retrieve Azure VM Metadata

The following Python script, which can be used with the xm_python module, collects the metadata
attributes from the Azure Instance Metadata Service API and adds a field to the event record for each.

metadata.py (truncated)
import json, nxlog, requests

def request_metadata():
    """Gets all metadata values for compute instance, returns dict"""
    # Set metadata URL
    metaurl = 'http://169.254.169.254/metadata/instance/compute?api-version=2017-08-01'
    # Set header required to retrieve metadata
    metaheader = {'Metadata':'true'}

    # Send HTTP GET request
    r = requests.get(metaurl, headers=metaheader)

    # If present, get text payload from the response
    if r.status_code != 404:
        value = r.text
    else:
        value = None
[...]

52.3. Google Compute Engine


The Google Cloud metadata server is available at metadata.google.internal. It is necessary to provide the
header Metadata-Flavor: Google in order to get the response. For example, the request below retrieves the
instance ID:

$ curl -H "Metadata-Flavor: Google" \
  "http://metadata.google.internal/computeMetadata/v1/instance/id"

See Storing and Retrieving Instance Metadata for more information about retrieving metadata from the Google
Compute Engine.

Example 223. Using a Python Script to Retrieve GCE Instance Metadata

The following Python script, which can be used with the xm_python module, collects the instance ID from
the GCE metadata server and adds a field to the event record.

metadata.py (truncated)
import nxlog, requests

def request_metadata(item):
    """Gets value of metadata attribute 'item', returns text string"""
    # Set metadata URL
    metaurl = 'http://metadata.google.internal/computeMetadata/v1/instance/{0}'.format(item)
    # Set header required to retrieve metadata
    metaheader = {'Metadata-Flavor':'Google'}

    # Send HTTP GET request
    r = requests.get(metaurl, headers=metaheader)

    # If present, get text payload from the response
    if r.status_code != 404:
        value = r.text
    else:
        value = None
[...]

Chapter 53. Common Event Expression (CEE)
NXLog can be configured to collect or forward logs in the Common Event Expression (CEE) format. CEE was
developed by MITRE as an extension for Syslog, based on JSON. MITRE’s work on CEE was discontinued in 2013.

Log Sample
Dec 20 12:42:20 syslog-relay serveapp[1335]: @cee:
{"pri":10,"id":121,"appname":"serveapp","pid":1335,"host":"syslog-relay","time":"2011-12-
20T12:38:05.123456-05:00","action":"login","domain":"app","object":"account","status":"success"}↵

53.1. Collecting and Parsing CEE


NXLog can parse CEE with the parse_json() procedure provided by the xm_json extension module.
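The framing is a standard Syslog line whose message part begins with an `@cee:` cookie followed by a JSON payload. As an illustration only (not NXLog functionality), the payload can be isolated and decoded in a few lines of Python:

```python
import json
import re

line = ('Dec 20 12:42:20 syslog-relay serveapp[1335]: @cee: '
        '{"action":"login","domain":"app","status":"success"}')

# Locate the @cee: cookie and parse the JSON payload that follows it
match = re.search(r'@cee:\s*(\{.*\})\s*$', line)
event = json.loads(match.group(1)) if match else None
```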

Example 224. Collecting CEE Logs

With the following configuration, NXLog accepts CEE logs via TCP, parses the CEE-formatted $Message field,
and writes the logs to file in JSON format.

nxlog.conf
<Extension json>
    Module  xm_json
</Extension>

<Extension _syslog>
    Module  xm_syslog
</Extension>

<Input in>
    Module  im_tcp
    Host    0.0.0.0
    Port    1514
    <Exec>
        parse_syslog();
        if $Message =~ /^@cee: ({.+})$/
        {
            $raw_event = $1;
            parse_json();
        }
    </Exec>
</Input>

<Output out>
    Module  om_file
    File    '/var/log/json'
    Exec    to_json();
</Output>

Input Sample
Oct 13 14:23:11 myserver @cee: { "purpose": "test" }↵

Output Sample
{
  "EventReceivedTime": "2016-10-13 14:23:12",
  "SourceModuleName": "in",
  "SourceModuleType": "im_tcp",
  "SyslogFacilityValue": 1,
  "SyslogFacility": "USER",
  "SyslogSeverityValue": 5,
  "SyslogSeverity": "NOTICE",
  "SeverityValue": 2,
  "Severity": "INFO",
  "Hostname": "myserver",
  "EventTime": "2016-10-13 14:23:11",
  "Message": "@cee: { \"purpose\": \"test\" }",
  "purpose": "test"
}

53.2. Generating and Forwarding CEE


NXLog can also generate CEE, using the to_json() procedure provided by the xm_json extension module.

Example 225. Generating CEE Logs

With this configuration, NXLog parses IETF Syslog input from file. The logs are then converted to CEE format
and forwarded via TCP. The Syslog header data and IETF Syslog Structured-Data key/value list from the
input are also included.

nxlog.conf
<Extension json>
    Module  xm_json
</Extension>

<Extension _syslog>
    Module  xm_syslog
</Extension>

<Input in>
    Module  im_file
    File    '/var/log/ietf'
    Exec    parse_syslog();
</Input>

<Output out>
    Module  om_tcp
    Host    192.168.1.1
    Port    1514
    Exec    $Message = '@cee: ' + to_json(); to_syslog_bsd();
</Output>

Input Sample
<13>1 2016-10-13T14:23:11.000000-06:00 myserver - - - [NXLOG@14506 Purpose="test"] This is a
test message.↵

Output Sample
<13>Oct 13 14:23:11 myserver @cee: {"EventReceivedTime":"2016-10-13
14:23:12","SourceModuleName":"in","SourceModuleType":"im_file","SyslogFacilityValue":1,"SyslogF
acility":"USER","SyslogSeverityValue":5,"SyslogSeverity":"NOTICE","SeverityValue":2,"Severity":
"INFO","EventTime":"2016-10-13 14:23:11","Hostname":"myserver","Purpose":"test","Message":"This
is a test message."}↵
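For comparison, the same framing can be produced outside NXLog. The following hedged Python sketch builds a BSD-Syslog line carrying an `@cee:` JSON payload; the header values are hard-coded for illustration and mirror the output sample above:

```python
import json

fields = {
    "Hostname": "myserver",
    "Purpose": "test",
    "Message": "This is a test message.",
}

# <13> is the PRI value for facility 1 (user) and severity 5 (notice)
header = '<13>Oct 13 14:23:11 myserver '
cee_line = header + '@cee: ' + json.dumps(fields)
```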

Chapter 54. Dell EqualLogic
Dell EqualLogic SAN systems are capable of sending logs to a remote Syslog destination via UDP.

In most environments, two or more EqualLogic units are configured as a single group. This allows storage
capacity to be utilized across all devices and RAID levels to be configured across multiple drives and hardware
platforms. In this case, Syslog configuration is performed from Group Manager and applies to all members.

Log Sample From a Group


AUDIT grpadmin 18-Mar-2017 20:13:01.508144 lab-array1 :CLI: Login to account grpadmin
succeeded, using local authentication. User privilege is group-admin.↵
AUDIT grpadmin 18-Mar-2017 20:35:51.833836 lab-array1 :User action:volume select volume1
schedule create test type once start-time 06:30PM read-write max-keep 10 start-date 03/18/17 enable↵
11501:9173:lab-array1:MgmtExec:18-Mar-2017
20:39:12.115208:snapshotDelete.cc:446:INFO:8.2.5:Successfully deleted snapshot volume1-2017-03-18-
20:38:00.3.1.↵

For more details about configuring logging on Dell EqualLogic PS series SANs, check the "Dell PS Series
Configuration Guide" which can be downloaded from the Dell EqualLogic Support Site (a valid account is
required).

NOTE The steps below have been tested with a Dell EqualLogic PS6000 series SAN.

1. Configure NXLog for receiving Syslog messages via UDP (see the examples below). Then restart NXLog.
2. Make sure the NXLog agent is accessible from all devices in the group.
3. Proceed with the logging configuration on EqualLogic, using either the Group Manager or the command line.
See the following sections.

Example 226. Receiving Logs From EqualLogic

The following example shows EqualLogic logs as received and processed by NXLog with the im_udp and
xm_syslog modules.

nxlog.conf
<Extension _syslog>
    Module  xm_syslog
</Extension>

<Extension _json>
    Module  xm_json
</Extension>

<Input in_syslog_udp>
    Module  im_udp
    Host    0.0.0.0
    Port    514
    Exec    parse_syslog();
</Input>

<Output file>
    Module  om_file
    File    "/var/log/equallogic.log"
    Exec    to_json();
</Output>

Output Sample
{
  "MessageSourceAddress": "192.168.10.43",
  "EventReceivedTime": "2017-03-18 21:12:58",
  "SourceModuleName": "in_syslog_udp",
  "SourceModuleType": "im_udp",
  "SyslogFacilityValue": 16,
  "SyslogFacility": "LOCAL0",
  "SyslogSeverityValue": 6,
  "SyslogSeverity": "INFO",
  "SeverityValue": 2,
  "Severity": "INFO",
  "EventTime": "2017-03-18 21:12:58",
  "Hostname": "192.168.10.43",
  "SourceName": "11517",
  "Message": "380:netmgtd:18-Mar-2017
21:13:19.415464:rcc_util.c:1032:AUDIT:grpadmin:25.7.0:CLI: Login to account grpadmin succeeded,
using local authentication. User privilege is group-admin."
}
{
  "MessageSourceAddress": "192.168.10.43",
  "EventReceivedTime": "2017-03-18 20:35:31",
  "SourceModuleName": "in_syslog_udp",
  "SourceModuleType": "im_udp",
  "SyslogFacilityValue": 16,
  "SyslogFacility": "LOCAL0",
  "SyslogSeverityValue": 6,
  "SyslogSeverity": "INFO",
  "SeverityValue": 2,
  "Severity": "INFO",
  "EventTime": "2017-03-18 20:35:31",
  "Hostname": "192.168.10.43",
  "SourceName": "11470",
  "Message": "88:agent:18-Mar-2017 20:35:51.833836:echoCli.c:10611:AUDIT:grpadmin:22.7.0:User
action:volume select volume1 schedule create test type once start-time 06:30PM read-write max-
keep 10 start-date 03/18/17 enable"
}
{
  "MessageSourceAddress": "192.168.10.43",
  "EventReceivedTime": "2017-03-18 20:38:51",
  "SourceModuleName": "in_syslog_udp",
  "SourceModuleType": "im_udp",
  "SyslogFacilityValue": 16,
  "SyslogFacility": "LOCAL0",
  "SyslogSeverityValue": 6,
  "SyslogSeverity": "INFO",
  "SeverityValue": 2,
  "Severity": "INFO",
  "EventTime": "2017-03-18 20:38:51",
  "Hostname": "192.168.10.43",
  "SourceName": "11502",
  "Message": "103:agent:18-Mar-2017 20:39:12.124329:echoCli.c:10611:AUDIT:grpadmin:22.7.0:User
action:volume select volume1 snapshot delete volume1-2017-03-18-20:38:00.3.1 "
}

Example 227. Extracting Fields From the EqualLogic Logs

This configuration uses a regular expression to extract additional fields from each message.

nxlog.conf
<Input in_syslog_udp>
    Module  im_udp
    Host    0.0.0.0
    Port    514
    <Exec>
        parse_syslog();
        if $Message =~ /(?x)^([0-9]*):([a-z]*):\d{2}-[a-zA-Z]{3}-\d{4}
                        \ \d{2}:\d{2}:\d{2}.\d{6}:([a-zA-Z.]*):[0-9]*:([a-zA-Z]*):
                        ([a-z]*):([0-9.]*):([a-zA-Z. ]*):(.*)$/
        {
            $EQLMsgSeq = $1;
            $EQLMsgSrc = $2;
            $EQLFile = $3;
            $EQLMsgType = $4;
            $EQLAccount = $5;
            $EQLMsgID = $6;
            $EQLEvent = $7;
            $EQLMessage = $8;
        }
    </Exec>
</Input>

Output Sample
{
  "MessageSourceAddress": "192.168.10.43",
  "EventReceivedTime": "2017-04-15 16:55:48",
  "SourceModuleName": "in_syslog_udp",
  "SourceModuleType": "im_udp",
  "SyslogFacilityValue": 16,
  "SyslogFacility": "LOCAL0",
  "SyslogSeverityValue": 6,
  "SyslogSeverity": "INFO",
  "SeverityValue": 2,
  "Severity": "INFO",
  "EventTime": "2017-04-15 16:55:48",
  "Hostname": "192.168.10.43",
  "SourceName": "12048",
  "Message": "113:agent:15-Apr-2017 16:57:09.744470:echoCli.c:10611:AUDIT:grpadmin:22.7.0:User
action:alerts select syslog priority fatal,error,warning,audit",
  "EQLMsgSeq": "113",
  "EQLMsgSrc": "agent",
  "EQLFile": "echoCli.c",
  "EQLMsgType": "AUDIT",
  "EQLAccount": "grpadmin",
  "EQLMsgID": "22.7.0",
  "EQLEvent": "User action",
  "EQLMessage": "alerts select syslog priority fatal,error,warning,audit"
}

54.1. Configuring via the Group Manager


1. Log in to the Group Manager.

2. Go to Group › Group Configuration › Notifications.

3. Under the Event Logs section, make sure the Send events to syslog servers option is checked.
4. Select the required Event priorities.

5. Click [ Add ], enter the IP address of the NXLog agent, and click [ OK ].
6. Click the [ Save all changes ] button in the top left corner.

54.2. Configuring via the Command Line


1. Log in to the Group Manager via SSH.
2. Run the following commands. Replace LEVELS with a comma-separated list of event priorities. Available
options include: fatal, error, warning, info, audit, and none. Replace IP_ADDRESS and PORT with the IP
address and port that the NXLog agent is listening on.

# alerts select syslog priority LEVELS


# grpparams syslog-notify enable
# grpparams syslog-server-list IP_ADDRESS:PORT

Example 228. Sending Logs to the Specified Host

These commands will send all logs, except for Informational level, to 192.168.6.143 via the default UDP
port 514.

# alerts select syslog priority fatal,error,warning,audit


# grpparams syslog-notify enable
# grpparams syslog-server-list 192.168.6.143

Chapter 55. Dell iDRAC
Integrated Dell Remote Access Controller (iDRAC) is an interface that provides web-based or command-line
access to a server’s hardware for management and monitoring purposes. The interface may be implemented as
a separate expansion card (DRAC) or integrated into the motherboard (iDRAC). In either case, it uses resources
separate from the main server and is independent of the server’s operating system.

Different server generations come with different versions of iDRAC. For example, PowerEdge R520, R620, or R720
servers have iDRAC7, while older models such as PowerEdge 1850 or 1950 come with iDRAC5. Remote Syslog via
UDP is an option starting from iDRAC6.

NOTE An iDRAC Enterprise license is required to redirect logs to a remote Syslog destination.

Audit Log Sample


SeqNumber = 1523↵
Message ID = USR0030↵
Category = Audit↵
AgentID = RACLOG↵
Severity = Information↵
Timestamp = 2017-03-26 13:53:36↵
Message = Successfully logged in using john, from 192.168.0.106 and GUI.↵
Message Arg 1 = john↵
Message Arg 2 = 192.168.0.106↵
Message Arg 3 = GUI↵
FQDD = iDRAC.Embedded.1↵
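The audit log entries above use a simple `Key = Value` layout. As an illustration outside NXLog, records in this layout can be turned into a dictionary with a few lines of Python (the helper name `parse_kv` is hypothetical):

```python
def parse_kv(record_text):
    """Parse 'Key = Value' lines, as in the audit log sample above,
    into a dict; keys containing spaces are kept verbatim."""
    record = {}
    for line in record_text.splitlines():
        if ' = ' in line:
            key, value = line.split(' = ', 1)
            record[key.strip()] = value.strip()
    return record

sample = ('SeqNumber = 1523\n'
          'Message ID = USR0030\n'
          'Category = Audit\n'
          'Severity = Information\n')
parsed = parse_kv(sample)
```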

For more details regarding iDRAC configuration, go to Dell Support and search for the server model or iDRAC
version.

NOTE The steps below were tested with iDRAC7 but should work for newer versions as well.

1. Configure NXLog for receiving Syslog entries via UDP (see the examples below), then restart NXLog.
2. Make sure the NXLog agent is accessible from the management interface.
3. Configure iDRAC remote Syslog logging, using the web interface or the command line. See the following
sections.

Example 229. Receiving iDRAC Logs

This example shows iDRAC logs as received and processed by NXLog, with the im_udp and xm_syslog
modules.

nxlog.conf
<Extension _syslog>
    Module  xm_syslog
</Extension>

<Extension _json>
    Module  xm_json
</Extension>

<Input in_syslog_udp>
    Module  im_udp
    Host    0.0.0.0
    Port    514
    Exec    parse_syslog();
</Input>

<Output file>
    Module  om_file
    File    "/var/log/idrac.log"
    Exec    to_json();
</Output>

Output Sample
{
  "MessageSourceAddress": "192.168.5.50",
  "EventReceivedTime": "2017-03-26 13:52:48",
  "SourceModuleName": "in_syslog_udp",
  "SourceModuleType": "im_udp",
  "SyslogFacilityValue": 21,
  "SyslogFacility": "LOCAL5",
  "SyslogSeverityValue": 6,
  "SyslogSeverity": "INFO",
  "SeverityValue": 2,
  "Severity": "INFO",
  "EventTime": "2017-03-26 13:52:48",
  "Hostname": "192.168.5.50",
  "SourceName": "Severity",
  "Message": "Informational, Category: Audit, MessageID: USR0030, Message: Successfully logged
in using john, from 192.168.0.106 and GUI."
}

Example 230. Extracting Additional Fields From iDRAC Logs

The following configuration uses a regular expression to extract additional fields from each message.

nxlog.conf
<Input in_syslog_udp>
    Module  im_udp
    Host    0.0.0.0
    Port    514
    <Exec>
        parse_syslog();
        if $Message =~ /(?x)^([a-zA-Z]*),\ Category:\ ([a-zA-Z]*),
                        \ MessageID:\ ([a-zA-Z0-9]*),\ Message:\ (.*)$/
        {
            $DracMsgLevel = $1;
            $DracMscCategory = $2;
            $DracMscID = $3;
            $DracMessage = $4;
        }
    </Exec>
</Input>

Output Sample
{
  "MessageSourceAddress": "192.168.5.50",
  "EventReceivedTime": "2017-04-15 17:32:47",
  "SourceModuleName": "in_syslog_udp",
  "SourceModuleType": "im_udp",
  "SyslogFacilityValue": 21,
  "SyslogFacility": "LOCAL5",
  "SyslogSeverityValue": 6,
  "SyslogSeverity": "INFO",
  "SeverityValue": 2,
  "Severity": "INFO",
  "EventTime": "2017-04-15 17:32:47",
  "Hostname": "192.168.5.50",
  "SourceName": "Severity",
  "Message": "Informational, Category: Audit, MessageID: USR0030, Message: Successfully logged
in using john, from 192.168.0.106 and GUI.",
  "DracMsgLevel": "Informational",
  "DracMscCategory": "Audit",
  "DracMscID": "USR0030",
  "DracMessage": "Successfully logged in using john, from 192.168.0.106 and GUI."
}

55.1. Configuring via the Web Interface


1. Log in to iDRAC.
2. Go to Overview › Server › Alerts.

3. Select the Remote System Log option for all required alert types.

4. Click [ Apply ].
5. Go to Overview › Server › Logs › Settings.

6. Select the Remote Syslog Enabled checkbox.


7. Specify up to three Syslog server IP addresses, change the UDP port if required, and then click [ Apply ].

55.2. Configuring via the Command Line


1. Log in to iDRAC via SSH.
2. Run the following commands. Replace ALERT, ACTION, NOTIFICATION, NUMBER, and IP_ADDRESS with the required values (see below).

# racadm eventfilters set -c ALERT -a ACTION -n NOTIFICATION


# racadm set iDRAC.Syslog.SyslogEnable 1
# racadm set iDRAC.Syslog.Server[NUMBER] IP_ADDRESS

◦ ALERT: the alert descriptor, in the format of idrac.alert.category.[subcategory].[severity]. Available categories are all, system, storage, updates, audit, config, and worknotes. Valid severity values are critical, warning, and info.
◦ ACTION: an action for this alert. Possible values are none, powercycle, poweroff, and systemreset.

◦ NOTIFICATION: required notifications for the alert. Valid values are all or none, or a comma-separated
list including one or more of snmp, ipmi, lcd, email, and remotesyslog.
◦ NUMBER: the Syslog server number—1, 2 or 3.

◦ IP_ADDRESS: the address of the NXLog agent.

Example 231. Configuring Syslog Logging to 192.168.6.143

The following commands disable all alert actions, enable Syslog notifications for all alerts (disabling
other notifications), and enable Syslog logging to 192.168.6.143 (UDP port 514).

WARNING This example disables any previously configured alert actions or notifications. Different eventfilters arguments must be used to enable or retain other action or notification types.

# racadm eventfilters set -c idrac.alert.all -a none -n remotesyslog


# racadm set iDRAC.Syslog.SyslogEnable 1
# racadm set iDRAC.Syslog.Server[1] 192.168.6.143

Chapter 56. Dell PowerVault MD Series
PowerVault MD logs can be sent to a remote Syslog destination via UDP by using the "Event Monitor" Windows
service, which is a part of the Modular Disk Storage Manager application used to manage PowerVault. The MD
Storage Manager is a separate application which is usually installed on a management server. It connects to the
MD unit and provides a convenient graphical interface for managing the PowerVault storage.

Log Sample
Date/Time: 4/5/17 2:43:00 PM↵
Sequence number: 418209↵
Event type: 4011↵
Description: Virtual disk not on preferred path due to failover↵
Event specific codes: 0/0/0↵
Event category: Error↵
Component type: RAID Controller Module↵
Component location: Enclosure 0, Slot 0↵
Logged by: RAID Controller Module in slot 0↵

Date/Time: 4/5/17 4:06:21 PM↵
Sequence number: 418233↵
Event type: 104↵
Description: Needs attention condition resolved↵
Event specific codes: 0/0/0↵
Event category: Internal↵
Component type: RAID Controller Module↵
Component location: Enclosure 0, Slot 0↵
Logged by: RAID Controller Module in slot 0↵

For more details about configuring PowerVault alerts and using MD Storage Manager, see Dell Support.

NOTE The steps below have been tested with the PowerVault MD3200 Series SAN and should work with any MD unit managed by MD Storage Manager Enterprise.

1. Configure NXLog for receiving log entries via UDP (see the examples below), then restart NXLog.
2. Confirm that the NXLog agent is accessible from the server where MD Storage Manager is installed.
3. Locate the PMServer.properties file. By default, the file can be found in C:\Program Files
(x86)\Dell\MD Storage Software\MD Storage Manager\client\data.

4. Edit the file. Set enable_local_logger to true, specify the Syslog server address, and set the facility.

Example 232. Sending Logs to 192.168.15.223

With the following directives, the MD Storage Manager will send events to 192.168.15.223 via UDP port
514.

PMServer.properties
Time_format(12/24)=12
syslog_facilty=3
DBM_files_maximum_key=20
DBM_files_minimum_key=5
syslog_receivers=192.168.15.223
DBM_recovery_interval_key=120
DBM_recovery_debounce_key=5
DBM_files_maintain_timeperiod_key=14
eventlog_source_name=StorageArray
enable_local_logger=true
syslog_tag=StorageArray

5. Restart the Event Monitor service to apply the changes.

Example 233. Receiving Syslog Messages From the MD Storage Manager

This example shows PowerVault logs as received and processed by NXLog with the im_udp and xm_syslog
modules.

nxlog.conf
 1 <Extension _syslog>
 2 Module xm_syslog
 3 </Extension>
 4
 5 <Extension _json>
 6 Module xm_json
 7 </Extension>
 8
 9 <Input in_syslog_udp>
10 Module im_udp
11 Host 0.0.0.0
12 Port 514
13 Exec parse_syslog();
14 </Input>
15
16 <Output file>
17 Module om_file
18 File "/var/log/mdsan.log"
19 Exec to_json();
20 </Output>

Output Sample
{
  "MessageSourceAddress": "192.168.15.231",
  "EventReceivedTime": "2017-04-05 14:43:45",
  "SourceModuleName": "in_syslog_udp",
  "SourceModuleType": "im_udp",
  "SyslogFacilityValue": 3,
  "SyslogFacility": "DAEMON",
  "SyslogSeverityValue": 4,
  "SyslogSeverity": "WARNING",
  "SeverityValue": 3,
  "Severity": "WARNING",
  "Hostname": "192.168.5.18",
  "EventTime": "2017-04-05 14:43:00",
  "SourceName": "StorageArray",
  "Message": "MD3620f1;4011;Warning;Virtual disk not on preferred path due to failover"
}
{
  "MessageSourceAddress": "192.168.15.231",
  "EventReceivedTime": "2017-04-05 16:07:01",
  "SourceModuleName": "in_syslog_udp",
  "SourceModuleType": "im_udp",
  "SyslogFacilityValue": 3,
  "SyslogFacility": "DAEMON",
  "SyslogSeverityValue": 6,
  "SyslogSeverity": "INFO",
  "SeverityValue": 2,
  "Severity": "INFO",
  "Hostname": "192.168.5.18",
  "EventTime": "2017-04-05 16:06:21",
  "SourceName": "StorageArray",
  "Message": "MD3620f1;104;Informational;Needs attention condition resolved"
}

Example 234. Extracting Additional Fields

The following configuration uses a regular expression to extract additional fields from each message.

nxlog.conf
 1 <Input in_syslog_udp>
 2 Module im_udp
 3 Host 0.0.0.0
 4 Port 514
 5 <Exec>
 6 parse_syslog();
 7 if $Message =~ /^([a-zA-Z0-9]*);([0-9]*);([a-zA-Z]*);(.*)$/
 8 {
 9 $MDArray = $1;
10 $MDMsgID = $2;
11 $MDMsgLevel = $3;
12 $MDMessage = $4;
13 }
14 </Exec>
15 </Input>

Output Sample
{
  "MessageSourceAddress": "192.168.15.231",
  "EventReceivedTime": "2017-04-05 14:43:45",
  "SourceModuleName": "in_syslog_udp",
  "SourceModuleType": "im_udp",
  "SyslogFacilityValue": 3,
  "SyslogFacility": "DAEMON",
  "SyslogSeverityValue": 4,
  "SyslogSeverity": "WARNING",
  "SeverityValue": 3,
  "Severity": "WARNING",
  "Hostname": "192.168.5.18",
  "EventTime": "2017-04-05 14:43:00",
  "SourceName": "StorageArray",
  "Message": "MD3620f1;4011;Warning;Virtual disk not on preferred path due to failover",
  "MDArray": "MD3620f1",
  "MDMsgID": "4011",
  "MDMsgLevel": "Warning",
  "MDMessage": "Virtual disk not on preferred path due to failover"
}
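
The same pattern can be exercised outside NXLog against the sample message. Below is a short Python sketch; the parse_md helper name is ours, not part of NXLog.

```python
import re

# The same pattern as in the Exec block above: array;ID;level;text
MD_RE = re.compile(r'^([a-zA-Z0-9]*);([0-9]*);([a-zA-Z]*);(.*)$')

def parse_md(message):
    """Return (array, message ID, level, text), or None if no match."""
    m = MD_RE.match(message)
    return m.groups() if m else None
```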

Chapter 57. DHCP Logs
DHCP servers and clients both generate log activity that may need to be collected, processed, and stored. This
chapter provides information about enabling logging for some common DHCP servers and clients, as well as for
configuring NXLog to collect the DHCP logs.

57.1. ISC DHCP Server (DHCPd)


The ISC DHCP Server, or DHCPd, is commonly used on Linux systems. DHCPd uses Syslog to log its activity. See
Collecting and Parsing Syslog for general information about collecting Syslog logs.

By default, DHCPd logs to the daemon Syslog facility. If desired, the DHCPd log-facility configuration
statement can be used in /etc/dhcp/dhcpd.conf to write logs to a different facility. The system logger could
then be configured to handle that facility’s logs as required. Otherwise, something like the following example
should work with the default settings.

Example 235. Collecting DHCPd Messages

This configuration uses the im_file module to read DHCPd messages from one of the Syslog log files, and
the xm_syslog parse_syslog() procedure to parse them. Only events from the dhcpd source are kept; others
are discarded with drop().

WARNING This method will most likely not preserve severity information. See Reading Syslog Log Files for more information and the other sections in Collecting and Parsing Syslog for alternative ways to collect Syslog messages.

nxlog.conf
 1 <Extension _syslog>
 2 Module xm_syslog
 3 </Extension>
 4
 5 <Input dhcp_server>
 6 Module im_file
 7 # Debian writes `daemon` facility logs to `/var/log/daemon.log` by default
 8 File '/var/log/daemon.log'
 9 # RHEL writes `daemon` facility logs to `/var/log/messages` by default
10 #File '/var/log/messages'
11 <Exec>
12 parse_syslog();
13 if $SourceName != 'dhcpd' drop();
14 </Exec>
15 </Input>

57.2. ISC DHCP Client (dhclient)


The ISC DHCP Client, or dhclient, is commonly used on Linux systems for requesting DHCP leases. Like DHCPd,
dhclient logs its activity to the local Syslog logger (daemon facility). See Collecting and Parsing Syslog for general
information about collecting Syslog logs.

Example 236. Collecting dhclient Messages

This configuration uses the im_file module to read dhclient messages from one of the Syslog log files, and
the xm_syslog parse_syslog() procedure to parse them. Only events from the dhclient source are kept;
others are discarded with drop().

WARNING This method will most likely not preserve severity information. See Reading Syslog Log Files for more information and the other sections in Collecting and Parsing Syslog for alternative ways to collect Syslog messages.

nxlog.conf
 1 <Extension _syslog>
 2 Module xm_syslog
 3 </Extension>
 4
 5 <Input dhcp_client>
 6 Module im_file
 7 # Debian writes `daemon` facility logs to `/var/log/daemon.log` by default
 8 File '/var/log/daemon.log'
 9 # RHEL writes `daemon` facility logs to `/var/log/messages` by default
10 #File '/var/log/messages'
11 <Exec>
12 parse_syslog();
13 if $SourceName != 'dhclient' drop();
14 </Exec>
15 </Input>

57.3. Windows DHCP Server


DHCP Server events are written to DHCP audit log files (if configured) and to the Windows EventLog. This section
provides details about configuring logging and collecting logs with NXLog.

NOTE The following sections have been tested on Windows Server 2016.

57.3.1. DHCP Server Audit Logging


The Windows DHCP Server provides an audit logging feature that writes server activity to log files. NXLog can be
configured to read and parse these logs.

The log files are named DhcpSrvLog-<DAY>.log for IPv4 and DhcpV6SrvLog-<DAY>.log for IPv6. For example,
Thursday’s log files are DhcpSrvLog-Thu.log and DhcpV6SrvLog-Thu.log.
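
The per-day naming scheme can be sketched in a few lines of Python, for example to compute which files to collect for a given date. The dhcp_audit_log_names helper is illustrative only.

```python
from datetime import date

def dhcp_audit_log_names(day: date):
    """Return the IPv4 and IPv6 audit log file names for a given date.

    Windows names these files with English three-letter day
    abbreviations (Mon ... Sun), matching strftime("%a") in the
    default C locale.
    """
    abbrev = day.strftime("%a")
    return f"DhcpSrvLog-{abbrev}.log", f"DhcpV6SrvLog-{abbrev}.log"
```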

IPv4 Log Sample (many header lines omitted)


ID,Date,Time,Description,IP Address,Host Name,MAC Address,User Name, TransactionID,
QResult,Probationtime,
CorrelationID,Dhcid,VendorClass(Hex),VendorClass(ASCII),UserClass(Hex),UserClass(ASCII),RelayAgentIn
formation,DnsRegError.↵
00,05/11/18,03:14:55,Started,,,,,0,6,,,,,,,,,0↵
55,05/11/18,03:14:55,Authorized(servicing),,test.com,,,0,6,,,,,,,,,0↵

IPv6 Log Sample (many header lines omitted)


ID,Date,Time,Description,IPv6 Address,Host Name,Error Code, Duid Length, Duid Bytes(Hex),User
Name,Dhcid,Subnet Prefix.↵
11010,05/11/18,03:14:55,DHCPV6 Started,,,,,,,,,,↵
1103,05/11/18,03:14:55,Authorized(servicing),,test.com,,,,,,,,↵

The DHCP audit log can be configured with PowerShell or the DHCP Management MMC snap-in.

NOTE The default audit log path, C:\Windows\System32\dhcp, is architecture-specific. To collect DHCP audit logs using a 32-bit NXLog agent on a 64-bit Windows system, it is recommended to change the log path to another directory that is not redirected to SysWOW64. For this reason, the following instructions use C:\dhcp. If the NXLog agent is running on the system’s native architecture, it is not necessary to change the log file location from the default.

57.3.1.1. Configuring via PowerShell


1. To view the current DHCP audit log configuration, run the following command (see Get-DhcpServerAuditLog
on Microsoft Docs).

> Get-DhcpServerAuditLog

Path : C:\Windows\system32\dhcp
Enable : True
MaxMBFileSize : 70
DiskCheckInterval : 50
MinMBDiskSpace : 20

2. To set the audit log configuration, run this command (see Set-DhcpServerAuditLog on Microsoft Docs).

> Set-DhcpServerAuditLog -Enable $True -Path C:\dhcp

3. The DHCP server must be restarted for the configuration changes to take effect.

> Restart-Service DHCPServer

57.3.1.2. Configuring With the DHCP Management Console


Follow these steps to configure the DHCP audit log. Any changes to the audit log settings apply to both IPv4 and
IPv6, once the DHCP server has been restarted.

1. Run the DHCP MMC snap-in (dhcpmgmt.msc), expand the server for which to configure logging, and click on
IPv4.

2. Right-click on IPv4 and click Properties. Note that the context menu is not fully populated until after the
IPv4 menu has been expanded at least once.

3. Make sure Enable DHCP audit logging is checked.
4. Open the Advanced tab, change the Audit log file path, and click [ OK ].

5. Restart the DHCP server by right-clicking the server and clicking All Tasks › Restart.

57.3.1.3. Collecting DHCP Server Audit Logs


The DHCP audit logs are stored in CSV format with a large free-form header containing a list of event ID
descriptions and other details.

Example 237. Collecting and Parsing DHCP Audit Events

This configuration uses a short batch/PowerShell polyglot script with the include_stdout directive to fetch the DHCP audit log location. The im_file module reads from the files and the xm_csv module parses the lines into fields. Any line that does not match the /^\d+,/ regular expression is discarded with the drop() procedure (all the header lines are dropped). The event ID and QResult codes are resolved automatically, with corresponding $Message and $QMessage fields added where applicable.

NOTE If DHCP audit logging is disabled, the script will print an error and NXLog will abort during the configuration check.

nxlog.conf (truncated)
 1 <Extension dhcp_csv_parser>
 2 Module xm_csv
 3 Fields ID, Date, Time, Description, IPAddress, Hostname, MACAddress, \
 4 UserName, TransactionID, QResult, ProbationTime, CorrelationID, \
 5 DHCID, VendorClassHex, VendorClassASCII, UserClassHex, \
 6 UserClassASCII, RelayAgentInformation, DnsRegError
 7 </Extension>
 8
 9 <Extension dhcpv6_csv_parser>
10 Module xm_csv
11 Fields ID, Date, Time, Description, IPv6Address, Hostname, ErrorCode, \
12 DuidLength, DuidBytesHex, UserName, Dhcid, SubnetPrefix
13 </Extension>
14
15 <Input dhcp_server_audit>
16 Module im_file
17 include_stdout %CONFDIR%\dhcp_server_audit_include.cmd
18 <Exec>
19 # Only process lines that begin with an event ID
20 if $raw_event =~ /^\d+,/
21 {
22 $FileName = file_name();
23 if $FileName =~ /DhcpSrvLog-/
24 {
25 dhcp_csv_parser->parse_csv();
26 $QResult = integer($QResult);
27 if $QResult == 0 $QMessage = "NoQuarantine";
28 else if $QResult == 1 $QMessage = "Quarantine";
29 [...]

dhcp_server_audit_include.cmd
@( Set "_= (
REM " ) <#
)
@Echo Off
SetLocal EnableExtensions DisableDelayedExpansion
powershell.exe -ExecutionPolicy Bypass -NoProfile ^
-Command "iex ((gc '%~f0') -join [char]10)"
EndLocal & Exit /B %ErrorLevel%
#>
$AuditLog = Get-DhcpServerAuditLog
if ($AuditLog.Enable) {
  Write-Output "File '$($AuditLog.Path)\Dhcp*SrvLog-*.log'"
}
else {
  [Console]::Error.WriteLine(@"
DHCP audit logging is disabled. To enable, run in PowerShell:
> Set-DhcpServerAuditLog -Enable $True
"@)
  exit 1
}
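
The CSV parsing performed by xm_csv above can be reproduced for a quick offline check. The sketch below uses Python's csv module with the same field list as the dhcp_csv_parser extension; the parse_audit_line helper is illustrative only.

```python
import csv

# Same field order as the xm_csv Fields directive above.
FIELDS = ["ID", "Date", "Time", "Description", "IPAddress", "Hostname",
          "MACAddress", "UserName", "TransactionID", "QResult",
          "ProbationTime", "CorrelationID", "DHCID", "VendorClassHex",
          "VendorClassASCII", "UserClassHex", "UserClassASCII",
          "RelayAgentInformation", "DnsRegError"]

def parse_audit_line(line):
    """Parse one IPv4 audit log line into a dict keyed by field name."""
    values = next(csv.reader([line]))
    return dict(zip(FIELDS, values))
```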

57.3.2. DHCP Server EventLog


Events are also written to three logs in the EventLog. To make sure the required logs are enabled, open Event Viewer (eventvwr) and check the logs under Applications and Services Logs › Microsoft › Windows › DHCP-Server. To enable a log, right-click on it and click Enable Log.

Alternatively, the following PowerShell script will check all three logs, enabling if necessary.

$LogNames = @("DhcpAdminEvents",
  "Microsoft-Windows-Dhcp-Server/FilterNotifications",
  "Microsoft-Windows-Dhcp-Server/Operational")
ForEach ($LogName in $LogNames) {
  $EventLog = Get-WinEvent -ListLog $LogName
  if ($EventLog.IsEnabled) {
  Write-Host "Already enabled: $LogName"
  }
  else {
  Write-Host "Enabling: $LogName"
  $EventLog.IsEnabled = $true
  $EventLog.SaveChanges()
  }
}

Example 238. Collecting DHCP Server Events From the EventLog

This configuration uses the im_msvistalog module to collect DHCP Server events from the EventLog
DhcpAdminEvents, FilterNotifications, and Operational logs.

nxlog.conf
 1 <Input dhcp_server_eventlog>
 2 Module im_msvistalog
 3 <QueryXML>
 4 <QueryList>
 5 <Query Id="0">
 6 <Select Path="DhcpAdminEvents">*</Select>
 7 <Select Path="Microsoft-Windows-Dhcp-Server/FilterNotifications">
 8 *</Select>
 9 <Select Path="Microsoft-Windows-Dhcp-Server/Operational">*</Select>
10 </Query>
11 </QueryList>
12 </QueryXML>
13 </Input>

57.4. Windows DHCP Client


Windows DHCP client logs are written to the EventLog. There are two logs for IPv4 and two for IPv6. To enable the required logs, open Event Viewer (eventvwr) and check the logs under Applications and Services Logs › Microsoft › Windows › Dhcp-Client and Applications and Services Logs › Microsoft › Windows › DHCPv6-Client. To enable a log, right-click on it and click Enable Log.

Alternatively, the following PowerShell script will check all four logs, enabling if necessary.

$LogNames = @("Microsoft-Windows-Dhcp-Client/Admin",
  "Microsoft-Windows-Dhcp-Client/Operational",
  "Microsoft-Windows-Dhcpv6-Client/Admin",
  "Microsoft-Windows-Dhcpv6-Client/Operational")
ForEach ($LogName in $LogNames) {
  $EventLog = Get-WinEvent -ListLog $LogName
  if ($EventLog.IsEnabled) {
  Write-Host "Already enabled: $LogName"
  }
  else {
  Write-Host "Enabling: $LogName"
  $EventLog.IsEnabled = $true
  $EventLog.SaveChanges()
  }
}

Example 239. Collecting Windows DHCP Client Logs

This configuration collects events from the IPv4 and IPv6 Admin and Operational DHCP client logs using
the im_msvistalog module.

nxlog.conf
 1 <Input dhcp_client_eventlog>
 2 Module im_msvistalog
 3 <QueryXML>
 4 <QueryList>
 5 <Query Id="0">
 6 <Select Path="Microsoft-Windows-Dhcp-Client/Admin">*</Select>
 7 <Select Path="Microsoft-Windows-Dhcp-Client/Operational">*</Select>
 8 <Select Path="Microsoft-Windows-Dhcpv6-Client/Admin">*</Select>
 9 <Select Path="Microsoft-Windows-Dhcpv6-Client/Operational">*</Select>
10 </Query>
11 </QueryList>
12 </QueryXML>
13 </Input>

Chapter 58. DNS Monitoring
Monitoring and proactively analyzing Domain Name Server (DNS) queries and responses has become a standard
security practice for networks of all sizes. Many types of malware rely on DNS traffic to communicate with
command-and-control servers, inject ads, redirect traffic, or transport data.

58.1. DNS Logging and Monitoring


DNS traffic analysis is commonly used to:

• discover unknown devices that appear on the network;


• monitor critical devices that have not issued a query within a predefined time window;
• detect malware from young/esoteric domain lookups or consistent lookup failures; and
• analyze host, subnet, or user behavioral patterns.

DNS traffic can quickly become overwhelming. To save resources, consider discarding any fields
TIP
that will not be required for analysis.

According to RFC 7626, there are no specific privacy laws for DNS data collection in any country. However, it is not clear whether Data Protection Directive 95/46/EC of the European Union covers DNS traffic collection.

DNS events are available from a number of sources. DNS queries and responses are commonly sent and received in the form of packets over UDP. These packets, and the ability to passively capture them, are essentially the same across all operating systems.

Another common source is the DNS server itself as it receives queries from clients, processes them and returns
the results. Although the DNS protocol is a common standard, the logging facilities implemented in each DNS
server can vary greatly across different operating systems. Bind 9 generates flat log files while Windows DNS
Server employs Event Tracing for Windows (ETW) for managing its DNS events.

DNS Audit Logging vs DNS Analytical Logging

Although Windows DNS Server has two event tracing channels named Audit and Analytical, the advantage gained
from classifying DNS events into these two categories, and treating them separately, is by no means proprietary
and can be applied to other DNS server environments.

A DNS server is basically a highly specialized database server, yet it still retains the same low-level CRUD (Create,
Read, Update, Delete) functionality of any other database. Analytical logging is focused primarily on client
queries, the read operations, while DNS Audit Logging is focused on the remaining CRUD operations: creating,
updating, and deleting DNS zone information. These are the most important operations to monitor from a
security perspective since unauthorized access to them can lead to interruption of network services, data loss,
and outages of other infrastructure services.

The goal of DNS Audit logging is to maintain an audit trail of any changes to the DNS Server’s configuration,
mainly for security purposes, while providing timely notification and easy access to any high severity events. By
logging changes to any of the more than 40 DNS resource record (RR) types in zone files, security analysts will
have the forensic information they need, should DNS records be maliciously or accidentally modified.

The realm of DNS Analytical Logging is completely different. The volume of data collected can be huge and the
events being analyzed are typically not time-sensitive. The bulk of these DNS queries can be useful for producing
metrics on user and application network traffic to various internal and external sites and services.

In the following two sections, the methods used to collect audit and analytical log data may differ greatly, but the
goal of managing them separately remains the same.

58.2. BIND 9
The BIND 9 DNS server is commonly used on Unix-like operating systems. It can act as both an authoritative
name server and a recursive resolver.

In addition to collecting BIND 9 logs, consider implementing File Integrity Monitoring or DNS Audit Logging for
the BIND 9 configuration files.

58.2.1. Configuring BIND 9 Logging


BIND 9 can be configured to log events to file or via Syslog. Log messages are organized into categories and log
destinations are configured as channels. The special default category can be used to specify the default for any
categories that have not been explicitly configured. For full details about BIND 9 configuration, see the
corresponding BIND Administrator Reference Manual.

Example 240. Logging All Categories via Syslog

This configuration logs all messages, of info severity or greater, to the local Syslog daemon. The queries
category is specified explicitly, because query logging is otherwise disabled by default. The print-* options
enable the inclusion of various metadata in the log messages—this metadata can later be parsed by NXLog.

named.conf
logging {

  # Add a Syslog channel, with info severity


  channel my_syslog {
  syslog daemon;
  severity info;

  # Enable all metadata


  print-time yes;
  print-category yes;
  print-severity yes;
  };

  # Set the default destination for all categories


  category default { my_syslog; };

  # Enable query logging by setting this category explicitly


  category queries { my_syslog; };
};

Log Format
<syslog-header> <date> <time> <category>: <severity>: <message>

Log Sample
<30>Apr 29 22:30:15 debian named[16373]: 29-Apr-2019 22:30:15.371 general: info: managed-keys-
zone: Key 20326 for zone . acceptance timer complete: key now trusted↵
<30>Apr 29 22:30:15 debian named[16373]: 29-Apr-2019 22:30:15.372 resolver: info: resolver
priming query complete↵
<30>Apr 29 22:30:20 debian named[16373]: 29-Apr-2019 22:30:20.770 queries: info: client
@0x7f9b6810ed50 10.80.0.1#44663 (google.com): query: google.com IN A +E(0) (10.80.1.88)↵
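
The <30> prefix in these samples is the Syslog PRI value, which packs facility and severity into a single number as facility * 8 + severity. A quick sketch of the decoding (the decode_pri helper is illustrative, not part of NXLog):

```python
def decode_pri(pri):
    """Split a Syslog PRI value into (facility, severity).

    PRI = facility * 8 + severity, so divmod recovers both parts.
    For <30>: facility 3 (daemon), severity 6 (info).
    """
    return divmod(pri, 8)
```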

Example 241. Logging to File

BIND can be configured to write log messages to a file. This configuration also shows how a particular
category can be disabled.

named.conf
logging {

  # Add a file channel with info severity


  channel my_file {
  file "/var/log/bind.log" versions 3 size 100m;
  severity info;
  print-time yes;
  print-category yes;
  print-severity yes;
  };

  category default { my_file; };


  category queries { my_file; };

  # Disable a category by setting its destination to null


  category lame-servers { null; };
};

The resulting log format is the same as in the previous example, but without the Syslog header.

Log Sample
01-May-2019 00:26:56.579 general: info: managed-keys-zone: Key 20326 for zone . acceptance
timer complete: key now trusted↵
01-May-2019 00:26:56.617 resolver: info: resolver priming query complete↵
01-May-2019 00:27:48.084 queries: info: client @0x7f82bc11d4e0 10.80.0.1#53995 (google.com):
query: google.com IN A +E(0) (10.80.1.88)↵

58.2.2. Parsing BIND 9 Logs


BIND 9 uses a single basic logging format across the logging categories. This allows log data to be parsed reliably,
and further parsing can be configured as required for each individual category. Therefore, parsing of BIND 9 logs
can be implemented in these three steps:

1. parsing the Syslog headers with xm_syslog (if logging via Syslog),

2. parsing the BIND metadata (from the print-* options) with a regular expression, and

3. parsing any category-specific syntax (such as for the queries category below); additional parsing can be
implemented, if required, for any other category that uses a consistent format.

NOTE The following examples have been tested with BIND 9.10 and 9.11.

Example 242. Collecting BIND 9 Logs via Syslog

This configuration uses the im_uds module to accept local Syslog messages. BIND 9 should be configured
to log messages via Syslog as shown in Logging All Categories via Syslog above.

nxlog.conf (truncated)
 1 <Extension _syslog>
 2 Module xm_syslog
 3 </Extension>
 4
 5 <Input syslog>
 6 Module im_uds
 7 UDS /dev/log
 8 <Exec>
 9 # 1. Parse Syslog header
10 parse_syslog_bsd();
11
12 # 2. Parse BIND 9 metadata
13 if $Message =~ /(?x)^(?<EventTime>\S+\s\S+)\s(?<Category>\S+):\s
14 (?<BINDSeverity>[^:]+):\s(?<Message>.+)$/i
15 {
16 $EventTime = parsedate($EventTime);
17
18 # 3. Parse messages from the queries category
19 if $Category == "queries"
20 {
21 $Message =~ /(?x)^client\s((?<ClientID>\S+)\s)?(?<Client>\S+)\s
22 \((?<OriginalQuery>\S+)\):\squery:\s
23 (?<QueryName>\S+)\s(?<QueryClass>\S+)\s
24 (?<QueryType>\S+)\s(?<QueryFlags>\S+)\s
25 \((?<LocalAddress>\S+)\)$/;
26 }
27
28 # Parse messages from another category
29 [...]

Event Sample
{
  "EventReceivedTime": "2019-04-29T22:30:20.856069+01:00",
  "SourceModuleName": "syslog",
  "SourceModuleType": "im_uds",
  "SyslogFacilityValue": 3,
  "SyslogFacility": "DAEMON",
  "SyslogSeverityValue": 6,
  "SyslogSeverity": "INFO",
  "SeverityValue": 2,
  "Severity": "INFO",
  "Hostname": "debian",
  "EventTime": "2019-04-29T22:30:20.770000+01:00",
  "SourceName": "named",
  "ProcessID": "16373",
  "Message": "client @0x7f9b6810ed50 10.80.0.1#44663 (google.com): query: google.com IN A +E(0)
(10.80.1.88)",
  "BINDSeverity": "info",
  "Category": "queries",
  "Client": "10.80.0.1#44663",
  "ClientID": "@0x7f9b6810ed50",
  "LocalAddress": "10.80.1.88",
  "OriginalQuery": "google.com",
  "QueryClass": "IN",
  "QueryFlags": "+E(0)",
  "QueryName": "google.com",
  "QueryType": "A"
}
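
The queries-category pattern used in the Exec block can be verified against the sample message in Python (which uses (?P<...>) for named groups instead of NXLog's (?<...>)). This is a sketch for checking the pattern offline, not part of the NXLog configuration.

```python
import re

# Python version of the queries-category pattern from the
# configuration above.
QUERY_RE = re.compile(
    r'^client\s((?P<ClientID>\S+)\s)?(?P<Client>\S+)\s'
    r'\((?P<OriginalQuery>\S+)\):\squery:\s'
    r'(?P<QueryName>\S+)\s(?P<QueryClass>\S+)\s'
    r'(?P<QueryType>\S+)\s(?P<QueryFlags>\S+)\s'
    r'\((?P<LocalAddress>\S+)\)$'
)
```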

Example 243. Collecting BIND 9 Logs From File

This configuration uses the im_file module to read messages from the BIND 9 log file. BIND 9 should be
configured as shown in Logging to File above. The parsing here is very similar to the previous example, but
without Syslog header parsing.

nxlog.conf
 1 <Input file>
 2 Module im_file
 3 File '/var/log/bind.log'
 4 <Exec>
 5 if $raw_event =~ /(?x)^(?<EventTime>\S+\s\S+)\s(?<Category>\S+):\s
 6 (?<Severity>[^:]+):\s(?<Message>.+)$/i
 7 {
 8 $EventTime = parsedate($EventTime);
 9 if $Category == "queries"
10 {
11 $Message =~ /(?x)^client\s((?<ClientID>\S+)\s)?(?<Client>\S+)\s
12 \((?<OriginalQuery>\S+)\):\squery:\s
13 (?<QueryName>\S+)\s(?<QueryClass>\S+)\s
14 (?<QueryType>\S+)\s(?<QueryFlags>\S+)\s
15 \((?<LocalAddress>\S+)\)$/;
16 }
17 }
18 </Exec>
19 </Input>
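
The outer metadata pattern from this configuration can likewise be checked against a sample line from the file-based log format. The Python sketch below uses (?P<...>) named groups; it is for offline verification only.

```python
import re

# Python version of the BIND metadata pattern from the
# configuration above: "<date> <time> <category>: <severity>: <message>"
META_RE = re.compile(
    r'^(?P<EventTime>\S+\s\S+)\s(?P<Category>\S+):\s'
    r'(?P<Severity>[^:]+):\s(?P<Message>.+)$'
)
```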

58.2.3. DNS Audit Logging


Unlike File Integrity Monitoring, which detects changes by comparing cryptographic checksums, DNS audit logging provides additional detail about each change. Apply the following rules to watch the BIND 9 configuration files. The im_linuxaudit module can also be used to audit other assets on Linux; see Linux Audit System.

Example 244. Configuring DNS Audit Logging on Configuration Files

This configuration uses the im_linuxaudit module to watch the BIND 9 configuration file /etc/bind/named.conf for modifications and tags the events with conf-change-bind. Read more about Audit Rules.

nxlog.conf
 1 <Input audit>
 2 Module im_linuxaudit
 3 FlowControl FALSE
 4 <Rules>
 5 # Delete all rules (This rule has no effect; it is performed
 6 # automatically by im_linuxaudit)
 7 -D
 8
 9 # Watch /etc/bind/named.conf for modifications and tag 'conf-change-bind'
10 -w /etc/bind/named.conf -p wa -k conf-change-bind
11
12 # Generate a log entry when the system time is changed
13 -a always,exit -F arch=b64 -S adjtimex -S settimeofday -k system_time
14
15 # Lock Audit rules until reboot
16 -e 2
17 </Rules>
18 </Input>

Event Sample of BIND 9 Configuration Change


{
  "type": "CONFIG_CHANGE",
  "time": "2020-02-21T01:30:36.027000+01:00",
  "seq": 31,
  "auid": 4294967295,
  "ses": "4294967295",
  "subj": "=unconfined",
  "op": 0,
  "key": "conf-change-bind",
  "list": "4",
  "res": "1",
  "EventReceivedTime": "2020-02-21T01:30:36.034819+01:00",
  "SourceModuleName": "audit",
  "SourceModuleType": "im_linuxaudit"
}

Event Sample of Audit Trail
{
  "type": "SYSCALL",
  "time": "2020-02-21T02:20:32.365000+01:00",
  "seq": 165,
  "arch": "c000003e",
  "syscall": "257",
  "success": "yes",
  "exit": 3,
  "a0": "ffffff9c",
  "a1": "563b82d382a0",
  "a2": "441",
  "a3": "1b6",
  "items": 2,
  "ppid": 1739,
  "pid": 1740,
  "auid": 1000,
  "uid": 0,
  "gid": 0,
  "euid": 0,
  "suid": 0,
  "fsuid": 0,
  "egid": 0,
  "sgid": 0,
  "fsgid": 0,
  "tty": "pts2",
  "ses": "2",
  "comm": "nano",
  "exe": "/bin/nano",
  "subj": "=unconfined",
  "key": "conf-change-bind",
  "EventReceivedTime": "2020-02-21T02:20:32.373192+01:00",
  "SourceModuleName": "audit",
  "SourceModuleType": "im_linuxaudit"
}

58.3. Windows DNS Server


DNS logging is an essential part of security monitoring. Windows DNS Server acts as the Global Catalog server
for the forest and domain within Active Directory and is installed by default. Windows DNS Server can also be
installed manually.

58.3.1. Windows DNS Monitoring Overview


NXLog offers four general event logging facilities for monitoring DNS events generated by Windows DNS Server
and its clients. They are discussed in their corresponding sections, listed below.

• DNS Logging via ETW Providers


• File-based DNS Debug Logging
• Collecting DNS Query Logs via Sysmon
• Monitoring DNS Event Sources Using Windows Event Log

The following table maps some of the key features and attributes unique to each NXLog logging facility available
for Windows DNS monitoring.

Table 57. Windows DNS Monitoring Overview

Audit and Analytical (Tracing)
• Provider or Channel: Microsoft-Windows-DNSServer
• Module(s): im_etw
• Feature(s): Preferred method. Native DNS Server auditing. Best choice for Analytical logs.
• Requirements: Windows Server versions 2012 R2 and later

Debug (Logging, Details option disabled)
• Module(s): im_file, xm_msdns
• Feature(s): Fast. The only way to log DNS transaction information.
• Requirements: Windows Server versions 2008 R2, 2012 R2, and 2016

Debug (Logging, Details option enabled)
• Module(s): im_file, xm_multiline
• Feature(s): Fast. The only way to log DNS transaction information.
• Requirements: Windows Server versions 2008 R2, 2012 R2, and 2016

Active Directory auditing (Logging)
• Provider or Channel: Microsoft-Windows-Security-Auditing
• Module(s): im_msvistalog
• Feature(s): For systems without native DNS auditing.
• Requirements: Windows 8.1 or later

Native DNS auditing (Logging)
• Provider or Channel: Microsoft-Windows-DNSServer/Audit
• Module(s): im_msvistalog
• Feature(s): Preferred method for collecting DNS audit logs.
• Requirements: Windows Server 2016, or 2012 R2 with hotfix 2956577

Sysmon (Logging or Tracing)
• Provider or Channel: Microsoft-Windows-Sysmon/Operational (Sysmon Event ID 22)
• Module(s): im_msvistalog
• Feature(s): Only DNS client query logging, but it is the only way to obtain the name and path of the client application executing the query.
• Requirements: Windows 8.1 or later; Sysmon v10.0 or later

DNS Client (Logging or Tracing)
• Provider or Channel: Microsoft-Windows-DNS-Client/Operational
• Module(s): im_msvistalog
• Feature(s): Another source of DNS client query logging.
• Requirements: Windows 8.1 or later
58.3.2. DNS Logging via ETW Providers


Enhanced Windows DNS logging is available from ETW providers. There are two categories of monitored
events:

1. Windows DNS Server Audit Events are enabled by default. An audit event is logged whenever the DNS
server settings, zones, or resource records are changed. Such DNS events are of utmost importance for
security audits. Each of the 53 types of audit events is identified by a unique EventID, which is documented
in the Audit events table of Microsoft’s documentation. The Type column in this table contains a short
description of the event; however, it is not included in the actual logged event. For example, if a new
resource record is created, it will not be possible to search for an event containing Record create; instead,
only EventID: 515 is available for identifying this type of event.
2. Windows DNS Server Analytical Events must be specifically enabled. They represent the bulk of DNS
events—primarily lookups and other queries—and can be quite large in volume. The Analytic events table of
Microsoft’s documentation lists each of the 23 types of events that are monitored. Just like Audit Events,
Windows logs the EventID but not the more descriptive Type field. According to the Audit and analytic event
logging section of Microsoft’s documentation, when processing 100,000 queries per second (QPS) on modern
hardware, the expected reduction in performance is around 5% if Analytical Event logging is enabled.

Event tracing offers significant advantages over DNS Debug Logging in terms of architecture, flexibility,
configurability, and performance. ETW events can be read directly without requiring events to be first written to
disk. However, ETW is not available on older Windows systems. To maintain its performance, it is by design a
"best effort" framework and consequently does not guarantee that all events will be captured.

For more information, see the Installing and enabling DNS diagnostic logging section on Microsoft Docs.

With Analytical Logging enabled, NXLog can use the im_etw module to collect DNS logs from the Microsoft-
Windows-DNSServer ETW provider. This is the preferred method for collecting logs from Windows Server versions
2012 R2 and later.

NOTE On Windows Server 2012 R2, this feature is provided by hotfix 2956577.

58.3.2.1. Examples
Example 245. Using im_etw

The following configuration collects DNS logs via ETW from the Microsoft-Windows-DNSServer provider,
using the im_etw module. The collected logs are converted to JSON and saved to a file.

nxlog.conf
<Input etw>
    Module    im_etw
    Provider  Microsoft-Windows-DNSServer
</Input>
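The input block above covers only collection; the JSON conversion and file output mentioned in the
description are not shown. A possible completion is sketched below using the standard xm_json and om_file
modules; the instance names and the output path are illustrative assumptions, not part of the original
example.

```
<Extension _json>
    Module  xm_json
</Extension>

# Hypothetical output side: serialize each event record to JSON and
# append it to a local file (the path is an assumption for illustration).
<Output out>
    Module  om_file
    File    'C:\logs\dns_etw.json'
    Exec    to_json();
</Output>

<Route etw_to_file>
    Path    etw => out
</Route>
```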

In this example, an Audit event has been logged. EventID: 515 identifies this as a Record create for this zone.
{
  "SourceName": "Microsoft-Windows-DNSServer",
  "ProviderGuid": "{EB79061A-A566-4698-9119-3ED2807060E7}",
  "EventId": 515,
  "Version": 0,
  "ChannelID": 17,
  "OpcodeValue": 0,
  "TaskValue": 5,
  "Keywords": "4611686018428436480",
  "EventTime": "2020-03-10T09:42:39.788511-07:00",
  "ExecutionProcessID": 4752,
  "ExecutionThreadID": 1732,
  "EventType": "INFO",
  "SeverityValue": 2,
  "Severity": "INFO",
  "Domain": "WIN-R4QHULN6KLH",
  "AccountName": "Administrator",
  "UserID": "S-1-5-21-915329490-2962477901-227355065-500",
  "AccountType": "User",
  "Flags": "EXTENDED_INFO|IS_64_BIT_HEADER|PROCESSOR_INDEX (577)",
  "Type": "1",
  "NAME": "www.example.com",
  "TTL": "3600",
  "BufferSize": "4",
  "RDATA": "0x0A00020F",
  "Zone": "example.com",
  "ZoneScope": "Default",
  "VirtualizationID": ".",
  "EventReceivedTime": "2020-03-10T09:42:40.801598-07:00",
  "SourceModuleName": "etw",
  "SourceModuleType": "im_etw"
}

58.3.3. File-based DNS Debug Logging
Windows DNS Debug Logging is the only means of monitoring DNS events on Windows Server versions prior to
2012 R2. However, DNS Servers capable of ETW might be configured for file-based logging in cases where all
events must be captured without exception.

58.3.3.1. Enabling DNS Debug Logging


DNS logging can be enabled in debug logging mode. Queries are logged one per line.

To enable DNS Debug Logging, perform the following actions.

1. Open the DNS Management console (dnsmgmt.msc).

2. Right-click on the DNS server and choose Properties from the context menu.
3. Under the Debug Logging tab, enable Log packets for debugging.

4. Mark the check boxes corresponding to the data that should be logged.

NOTE  The Details option will produce multi-line logs. To parse this detailed format, refer to
Parsing Detailed DNS Logs With Regular Expressions below.

5. Set the File path and name to the desired log file location.

WARNING  The Windows DNS service may not recreate the debug log file after a rollover. If you
encounter this issue, be sure to use the C: drive for the debug log path. See the post,
The disappearing Windows DNS debug log, on the NXLog website.
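The GUI steps above can also be scripted. The following PowerShell sketch uses the DnsServer module's
Set-DnsServerDiagnostics cmdlet; treat the parameter selection and the log path as assumptions to be checked
against Microsoft's documentation rather than a definitive recipe.

```powershell
# Possible scripted equivalent of the Debug Logging GUI steps (assumption:
# parameter names as documented for the DnsServer PowerShell module).
# Log queries and answers, sent and received, over UDP and TCP, to a file.
Set-DnsServerDiagnostics -Queries $true -Answers $true `
    -Send $true -Receive $true -UdpPackets $true -TcpPackets $true `
    -EnableLoggingToFile $true -LogFilePath "C:\dns.log"

# Review the resulting diagnostics settings.
Get-DnsServerDiagnostics
```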

Log Sample (Standard Debug Mode)
4/21/2017 7:52:03 AM 06B0 PACKET 00000000028657F0 UDP Snd 10.2.0.1 6590 R Q [8081 DR
NOERROR] A (7)example(3)com(0)↵

See the following sections for information about parsing the logs.

58.3.3.2. Parsing Non-Detailed Logs With xm_msdns


The xm_msdns module, available in NXLog Enterprise Edition, can be used for parsing Windows DNS Server logs.

WARNING  This module does not support parsing of logs from DNS Debug Logging generated with the
Details option enabled.

NOTE This module has been tested on Windows Server versions 2008 R2, 2012 R2, and 2016.

Example 246. Using xm_msdns

This configuration uses the im_file and xm_msdns modules to read and parse the log file. Output is written
to file in JSON format for this example.

nxlog.conf
<Extension dns_parser>
    Module      xm_msdns
    EventLine   TRUE
    PacketLine  TRUE
    NoteLine    TRUE
</Extension>

<Input in>
    Module     im_file
    File       'C:\Server\dns.log'
    InputType  dns_parser
</Input>

Event Sample
{
  "EventTime": "2017-04-21 07:52:03",
  "ThreadId": "06B0",
  "Context": "PACKET",
  "InternalPacketIdentifier": "00000000028657F0",
  "Protocol": "UDP",
  "SendReceiveIndicator": "Snd",
  "RemoteIP": "10.2.0.1",
  "Xid": "6590",
  "QueryResponseIndicator": "Response",
  "Opcode": "Standard Query",
  "FlagsHex": "8081",
  "RecursionDesired": true,
  "RecursionAvailable": true,
  "ResponseCode": "NOERROR",
  "QuestionType": "A",
  "QuestionName": "example.com",
  "EventReceivedTime": "2017-04-21 7:52:03",
  "SourceModuleName": "in",
  "SourceModuleType": "im_file"
}

58.3.3.3. Parsing Non-Detailed Logs With Regular Expressions
While the xm_msdns module is the preferred method for parsing DNS logs (it is about three times faster than
regular expressions), regular expressions can also be used.

WARNING  This example does not parse logs from DNS Debug Logging generated with the Details
option enabled.

NOTE This has been tested on Windows Server versions 2008 R2, 2012 R2, and 2016.

Example 247. Parsing DNS Logs With Regular Expressions

This example parses the log files generated by DNS Debug Logging and then writes the output to file in
JSON format.

nxlog.conf (truncated)
define EVENT_REGEX /(?x)(?<Date>\d+(?:\/\d+){2})\s \
                   (?<Time>\d+(?:\:\d+){2}\s\w+)\s \
                   (?<ThreadId>\w+)\s+ \
                   (?<Context>\w+)\s+ \
                   (?<InternalPacketIdentifier>[[:xdigit:]]+)\s+ \
                   (?<Protocol>\w+)\s+ \
                   (?<SendReceiveIndicator>\w+)\s \
                   (?<RemoteIP>[[:xdigit:].:]+)\s+ \
                   (?<Xid>[[:xdigit:]]+)\s \
                   (?<QueryType>\s|R)\s \
                   (?<Opcode>[A-Z]|\?)\s \
                   (?<QFlags>\[(.*?)\])\s+ \
                   (?<QuestionType>\w)\s+ \
                   (?<QuestionName>.*)/
define EMPTY_EVENT_REGEX /(^$|^\s+$)/
define DOMAIN_REGEX /\(\d+\)([\w-]+)\(\d+\)([\w-]+)/
define SUBDOMAIN_REGEX /\(\d+\)([\w-]+)\(\d+\)([\w-]+)\(\d+\)(\w+)/
define NOT_STARTING_WITH_DATE_REGEX /^(?!\d+\/\d+\/\d+).+/
define QFLAGS_REGEX /(?x)(?<FlagsHex>\d+)\s+ \
                    (?<FlagsCharCodes>\s+|([A-Z]{2}|[A-Z]))\s+ \
                    (?<ResponseCode>\w+)/

<Extension _json>
    Module  xm_json
</Extension>

<Input in>
    Module  im_file
[...]

Output Sample
{
  "EventReceivedTime": "2017-04-21 07:52:16",
  "SourceModuleName": "in",
  "SourceModuleType": "im_file",
  "Context": "PACKET",
  "InternalPacketIdentifier": "00000000028657F0",
  "Opcode": "Q",
  "Protocol": "UDP",
  "QueryType": "response",
  "QuestionName": "notabilus.com",
  "QuestionType": "A",
  "RemoteIP": "10.2.0.1",
  "SendReceiveIndicator": "Snd",
  "ThreadId": "06B0",
  "Xid": "6590",
  "Regular": true,
  "EventTime": "2017-04-21 07:52:03",
  "Raw": "4/21/2017 7:52:03 AM 06B0 PACKET 00000000028657F0 UDP Snd 10.2.0.1 6590 R Q
[8081 DR NOERROR] A (9)notabilus(3)com(0)",
  "FlagsCharCodes": "DR",
  "FlagsHex": "8081",
  "ResponseCode": "NOERROR"
}

58.3.3.4. Parsing Detailed DNS Logs With Regular Expressions


Detailed DNS logging uses a multi-line format that can be parsed with xm_multiline and regular expressions.

Example 248. Parsing Multiple Line Detailed Debug DNS Logs

In this example, the xm_multiline module joins lines that belong to the same event by using a regular
expression to match the header line. Then a regular expression is used to parse the content into fields.

Input Sample
6/1/2017 8:33:36 PM 09B8 PACKET 0000022041EED460 UDP Rcv 192.168.56.1 edaa Q [2001 D
NOERROR] A (6)google(3)com(0)↵
UDP question info at 0000022041EED460↵
  Socket = 680↵
  Remote addr 192.168.56.1, port 48210↵
  Time Query=6941, Queued=0, Expire=0↵
  Buf length = 0x0fa0 (4000)↵
  Msg length = 0x0027 (39)↵
  Message:↵
  XID 0xedaa↵
  Flags 0x0120↵
  QR 0 (QUESTION)↵
  OPCODE 0 (QUERY)↵
  AA 0↵
  TC 0↵
  RD 1↵
  RA 0↵
  Z 0↵
  CD 0↵
  AD 1↵
  RCODE 0 (NOERROR)↵

nxlog.conf (truncated)
define EVENT_REGEX /(?x)(?<Date>\d+(?:\/\d+){2})\s \
                   (?<Time>\d+(?:\:\d+){2}\s\w+)\s \
                   (?<ThreadId>\w+)\s+ \
                   (?<Context>\w+)\s+ \
                   (?<InternalPacketIdentifier>[[:xdigit:]]+)\s+ \
                   (?<Protocol>\w+)\s+ \
                   (?<SendReceiveIndicator>\w+)\s \
                   (?<RemoteIP>[[:xdigit:].:]+)\s+ \
                   (?<Xid>[[:xdigit:]]+)\s \
                   (?<QueryType>\s|R)\s \
                   (?<Opcode>[A-Z]|\?)\s \
                   (?<QFlags>\[(.*?)\])\s+ \
                   (?<QuestionType>\w+)\s+ \
                   (?<QuestionName>.*)\s+ \
                   (?<LogInfo>.+)\s+.+=\s \
                   (?<Socket>\d+)\s+ Remote\s+ addr\s \
                   (?<RemoteAddr>.+),\sport\s \
                   (?<PortNum>\d+)\s+Time\sQuery= \
                   (?<TimeQuery>\d+),\sQueued= \
                   (?<Queued>\d+),\sExpire= \
                   (?<Expire>\d+)\s+.+\( \
                   (?<BufLen>\d+)\)\s+.+\( \
                   (?<MsgLen>\d+)\)\s+Message:\s+ \
                   (?<Message>(?s).*)/

define HEADER_REGEX /(?x)(?<Date>\d+(?:\/\d+){2})\s \
                    (?<Time>\d+(?:\:\d+){2}\s\w+)\s \
                    (?<ThreadId>\w+)\s+ \
[...]

Output Sample
{
  "EventReceivedTime": "2018-11-30T04:33:38.660127+01:00",
  "SourceModuleName": "filein",
  "SourceModuleType": "im_file",
  "BufLen": "512",
  "Context": "PACKET",
  "Expire": "0",
  "InternalPacketIdentifier": "000000D58F45A560",
  "LogInfo": "UDP response info at 000000D58F45A560",
  "Message": "XID 0x000d\r\n Flags 0x8180\r\n QR 1 (RESPONSE)\r\n
OPCODE 0 (QUERY)\r\n AA 0\r\n TC 0\r\n RD 1\r\n RA
1\r\n Z 0\r\n CD 0\r\n AD 0\r\n RCODE 0
(NOERROR)\r\n QCOUNT 1\r\n ACOUNT 1\r\n NSCOUNT 0\r\n ARCOUNT 0\r\n
QUESTION SECTION:\r\n Offset = 0x000c, RR count = 0\r\n Name \"
(6)google(3)com(0)\"\r\n QTYPE AAAA (28)\r\n QCLASS 1\r\n ANSWER SECTION:\r\n
Offset = 0x001c, RR count = 0\r\n Name \"[C00C](6)google(3)com(0)\"\r\n TYPE
AAAA (28)\r\n CLASS 1\r\n TTL 26\r\n DLEN 16\r\n DATA
2a00:1450:400d:805::200e\r\n AUTHORITY SECTION:\r\n empty\r\n ADDITIONAL
SECTION:\r\n empty\r\n",
  "MsgLen": "56",
  "Opcode": "Q",
  "PortNum": "60010",
  "Protocol": "UDP",
  "QFlags": "[8081 DR NOERROR]",
  "QueryType": "R",
  "QuestionName": "(6)google(3)com(0)",
  "QuestionType": "AAAA",
  "Queued": "0",
  "RemoteAddr": "::1",
  "RemoteIP": "::1",
  "SendReceiveIndicator": "Snd",
  "Socket": "512",
  "ThreadId": "044C",
  "TimeQuery": "12131",
  "Xid": "000d",
  "EventTime": "2018-11-30T04:32:43.000000+01:00"
}

58.3.4. Collecting DNS Query Logs via Sysmon


Another potential source of DNS event logs is Sysmon. It is a system service and device driver which monitors
system activity and logs to the Windows Event Log (see Setting up Sysmon for further details).

The DNS event log collection supported by Sysmon is not comparable to other types of DNS monitoring like DNS
Server Audit and Analytical logging or DNS Server Debug Logging. In fact, Sysmon DNS Query logging provides
only DNS client query logging, but the information it provides complements the information from DNS Server
Analytical logs by adding the name and path of the application which is querying the DNS Server. It can monitor
the DNS queries executed by practically any Windows client software that is network-enabled, for instance web
browsers, FileZilla, WinSCP, ping, tracert, etc. It should be noted that direct DNS lookups using nslookup are
not logged by Sysmon’s DNS Query logging.

58.3.4.1. Configure DNS Query Logging


Once Sysmon is installed on a system, it does not log DNS client queries by default. Configuring it to do so is
relatively easy, however. Create or copy a Sysmon configuration file to the same directory where Sysmon.exe
is installed:

config-dnsquery.xml
<Sysmon schemaversion="4.22">
  <EventFiltering>
  <DnsQuery onmatch="exclude"/>
  </EventFiltering>
</Sysmon>

With the XML configuration file config-dnsquery.xml located in the same directory as Sysmon.exe, running the
following command will apply the new configuration:

Apply the XML configuration for Sysmon to log DNS queries


C:\Windows> Sysmon.exe -c config-dnsquery.xml

Once the configuration file has been applied, it can be confirmed by issuing the same command with the -c
option, but without any file specified:

Confirming Successful Rule configuration of Sysmon for DNS Query logging


C:\Windows> Sysmon.exe -c

NOTE  A good resource for configuring Sysmon to perform DNS monitoring can be found in this
document on GitHub: sysmonconfig-export.xml. Despite being in XML, the DNS section starting
at line 835 is quite readable. Lines 871-1063 provide a complete RuleGroup example of how to
filter 180 domains to reduce noise from ads and other common sources of DNS traffic that can
generate a large number of events but are benign.

The last few lines of output returned from Sysmon should contain the following confirmation that DNS Query
logging is active.

Successful Rule configuration of Sysmon for DNS Query logging


Rule configuration (version 4.22):
 - DnsQuery onmatch: exclude combine rules using 'And'

Once Sysmon is active and running as a service, it will be logging various events in addition to DNS queries.
These events are visible in the Windows Event Viewer under Applications and Services Log > Microsoft >
Windows > Sysmon > Operational. Each event has an EventID. Sysmon Event ID 22, DNSEvent (DNS query), is
generated when a process executes a DNS query, whether it succeeds or fails, and whether or not the result is cached. The
telemetry for this event was added for Windows 8.1 so it is not available on Windows 7 and earlier. See the
Sysmon section for more information.

WARNING  To collect DNS events, Sysmon creates an ETW trace session and writes the data into the
Windows Event Log, which can then be collected with the im_msvistalog module. To avoid
this performance overhead, it is recommended to use the im_etw module to collect event
data directly from the DNS ETW providers for greater efficiency.
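As a sketch of the alternative recommended in the warning above, im_etw can subscribe directly to the
Microsoft-Windows-DNS-Client ETW provider (the provider name matches the DNS Client examples in this
chapter; the instance name is an arbitrary assumption). Note that events collected this way lack Sysmon's
enrichment, such as the Image field holding the client executable's path.

```
# Hedged alternative to Sysmon-based collection: read DNS client events
# directly from ETW instead of going through the Windows Event Log.
<Input dns_client_etw>
    Module    im_etw
    Provider  Microsoft-Windows-DNS-Client
</Input>
```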

Example 249. Collecting DnsQuery Logs with Sysmon

Environments that already utilize Sysmon monitoring (v10.0 or later) only need to use the im_msvistalog
module and add the relevant Sysmon filtering rules for DNS Query monitoring. In this example, the
im_msvistalog module will collect DnsQuery logs.

nxlog.conf
<Input sysmon>
    Module  im_msvistalog
    <QueryXML>
        <QueryList>
            <Query Id="0">
                <Select Path="Microsoft-Windows-Sysmon/Operational">
                    *[System[(EventID='22')]]
                </Select>
            </Query>
        </QueryList>
    </QueryXML>
</Input>

Example of Sysmon logging a ping event in JSON


{
  "EventTime": "2019-10-29T15:47:43.685222+00:00",
  "Hostname": "HOST1",
  "Keywords": "9223372036854775808",
  "EventType": "INFO",
  "SeverityValue": 2,
  "Severity": "INFO",
  "EventID": 22,
  "SourceName": "Microsoft-Windows-Sysmon",
  "ProviderGuid": "{5770385F-C22A-43E0-BF4C-06F5698FFBD9}",
  "Version": 5,
  "TaskValue": 22,
  "OpcodeValue": 0,
  "RecordNumber": 9152,
  "ExecutionProcessID": 3880,
  "ExecutionThreadID": 868,
  "Channel": "Microsoft-Windows-Sysmon/Operational",
  "Domain": "NT AUTHORITY",
  "AccountName": "SYSTEM",
  "UserID": "S-1-5-18",
  "AccountType": "User",
  "Message": "Dns query:\r\nRuleName: \r\nUtcTime: 2019-10-29 15:47:43.274\r\nProcessGuid:
{b3c285a4-5f1e-5db8-0000-0010c24d1d00}\r\nProcessId: 5696\r\nQueryName: example.com
\r\nQueryStatus: 0\r\nQueryResults: ::ffff:93.184.216.34;\r\nImage: C:\\Windows\\System32
\\PING.EXE",
  "Category": "Dns query (rule: DnsQuery)",
  "Opcode": "Info",
  "UtcTime": "2019-10-29 15:47:43.274",
  "ProcessGuid": "{b3c285a4-5f1e-5db8-0000-0010c24d1d00}",
  "ProcessId": "5696",
  "QueryName": "example.com",
  "QueryStatus": "0",
  "QueryResults": "::ffff:93.184.216.34;",
  "Image": "C:\\Windows\\System32\\PING.EXE",
  "EventReceivedTime": "2019-10-29T15:47:44.949924+00:00",
  "SourceModuleName": "sysmon",
  "SourceModuleType": "im_msvistalog"
}

58.3.4.2. Summary of DNS Query Fields
The fields of particular interest are the QueryName and Image fields, which together provide a wealth of
information about the network activity of the client machine. Each event discloses which site—internal or
external—was queried and which Windows application was preparing to access that remote site.

The Message field usually contains a long string of information, most of which is parsed out into the following
fields:

• UtcTime (what EventTime is based on)

• ProcessGuid

• ProcessId

• QueryName (the FQDN being looked up)

• QueryStatus

• QueryResults

• Image (the full path and file name of the client application’s executable which performed the DNS query)

58.3.5. Monitoring DNS Event Sources Using Windows Event Log


The im_msvistalog module is the most versatile input module for Windows since it can capture almost any type
of event. It can be used to collect DNS Server Audit events, DNS client events from Sysmon, and native DNS Client
events, all of which are accessible from Windows Event Log.

58.3.5.1. Monitoring Native DNS Client Events


Another source of DNS client events available for monitoring can be found in the provider/channel Microsoft-
Windows-DNS-Client/Operational.

Example 250. Configure DNS Client Logging

Using the im_msvistalog module for collecting DNS client events from this source is similar to the
configuration for getting events from Sysmon. A QueryXML block is used to select the source, some fields
are used to filter out unwanted events, while other fields are used to select only the events of interest. In
this configuration example, only four Event IDs are of interest, queries for "wpad" are not needed, and any
QueryType other than "1" will be dropped.

nxlog.conf
<Input DNS_Client>
    Module  im_msvistalog
    <QueryXML>
        <QueryList>
            <Query Id="0">
                <Select Path="Microsoft-Windows-DNS-Client/Operational">
                    *[System[(EventID=3006 or EventID=3008 or
                              EventID=3010 or EventID=3018)]]
                </Select>
            </Query>
        </QueryList>
    </QueryXML>
    Exec  if ($QueryName == 'wpad') OR \
            ($QueryType != '1') drop();
</Input>

Output Sample
{
  "EventTime": "2020-03-12T14:40:08.809107-07:00",
  "Hostname": "WIN-R4QHULN6KLH",
  "Keywords": "9223372036854775808",
  "EventType": "INFO",
  "SeverityValue": 2,
  "Severity": "INFO",
  "EventID": 3006,
  "SourceName": "Microsoft-Windows-DNS-Client",
  "ProviderGuid": "{1C95126E-7EEA-49A9-A3FE-A378B03DDB4D}",
  "Version": 0,
  "TaskValue": 0,
  "OpcodeValue": 0,
  "RecordNumber": 42095,
  "ExecutionProcessID": 2224,
  "ExecutionThreadID": 4672,
  "Channel": "Microsoft-Windows-DNS-Client/Operational",
  "Domain": "WIN-R4QHULN6KLH",
  "AccountName": "Administrator",
  "UserID": "S-1-5-21-915329490-2962477901-227355065-500",
  "AccountType": "User",
  "Message": "DNS query is called for the name ntp.msn.com, type 1, query options
140738562228224, Server List , isNetwork query 0, network index 0, interface index 0, is
asynchronous query 0",
  "Opcode": "Info",
  "QueryName": "ntp.msn.com",
  "QueryType": "1",
  "QueryOptions": "140738562228224",
  "IsNetworkQuery": "0",
  "NetworkQueryIndex": "0",
  "InterfaceIndex": "0",
  "IsAsyncQuery": "0",
  "EventReceivedTime": "2020-03-12T14:40:10.674875-07:00",
  "SourceModuleName": "DNS_Client",
  "SourceModuleType": "im_msvistalog"
}

58.3.5.2. Monitoring DNS Server Audit Events


Although the im_msvistalog module can be used for capturing DNS Server Audit events, if performance is a
concern, using im_etw is a better choice and remains the recommended method.

Example 251. Configure DNS Server Audit logging

No filtering is used in this configuration since most audit events are important and audit logs tend to be
much lower in volume than analytical or debug logs.

nxlog.conf
<Input DNS_Audit>
    Module  im_msvistalog
    <QueryXML>
        <QueryList>
            <Query Id="0">
                <Select Path="Microsoft-Windows-DNSServer/Audit">*</Select>
            </Query>
        </QueryList>
    </QueryXML>
</Input>

Output Sample
{
  "EventTime": "2020-03-12T14:56:07.622472-07:00",
  "Hostname": "WIN-R4QHULN6KLH",
  "Keywords": "4611686018428436480",
  "EventType": "INFO",
  "SeverityValue": 2,
  "Severity": "INFO",
  "EventID": 516,
  "SourceName": "Microsoft-Windows-DNSServer",
  "ProviderGuid": "{EB79061A-A566-4698-9119-3ED2807060E7}",
  "Version": 0,
  "TaskValue": 5,
  "OpcodeValue": 0,
  "RecordNumber": 98,
  "ExecutionProcessID": 2000,
  "ExecutionThreadID": 4652,
  "Channel": "Microsoft-Windows-DNSServer/Audit",
  "Domain": "WIN-R4QHULN6KLH",
  "AccountName": "Administrator",
  "UserID": "S-1-5-21-915329490-2962477901-227355065-500",
  "AccountType": "User",
  "Message": "A resource record of type 1, name ns2.example.com and RDATA 0x0A000210 was
deleted from scope Default of zone example.com.",
  "Category": "ZONE_OP",
  "Opcode": "Info",
  "Type": "1",
  "NAME": "ns2.example.com",
  "TTL": "0",
  "BufferSize": "4",
  "RDATA": "0A000210",
  "Zone": "example.com",
  "ZoneScope": "Default",
  "VirtualizationID": ".",
  "EventReceivedTime": "2020-03-12T14:56:09.343045-07:00",
  "SourceModuleName": "DNS_Audit",
  "SourceModuleType": "im_msvistalog"
}

58.3.5.3. Monitoring DNS Server Analytical Events
One limitation of the im_msvistalog module is that it cannot read event traces of analytical sources. For this
reason, the im_etw module remains the preferred choice for collecting events from the DNS Server Analytical log.
It is possible, though, to leverage the File directive in im_msvistalog to read the DNS Server Analytical log file
directly, which is located here:

%SystemRoot%\System32\Winevt\Logs\Microsoft-Windows-DNSServer%4Analytical.etl

Example 252. Configure DNS Server Analytical Logging

Analytical log sources, like debug log sources, tend to generate a high volume of events that are not always
useful. In this configuration example, an analysis of the log file determined that frequent lookups on 10
specific hosts were responsible for a sizable portion of the log file. Since none of these hosts are of interest
for security monitoring, they are being filtered out to reduce noise. The polling interval for reading the log
file is set to 60 seconds to reduce disk I/O in a low traffic environment.

nxlog.conf
<Input DNS_Analytical>
    Module        im_msvistalog
    File          C:\Windows\System32\winevt\Logs\Microsoft-Windows-DNSServer%4Analytical.etl
    PollInterval  60
    Exec  if ($QNAME == 'americas1.notify.windows.com.akadns.net.') OR \
            ($QNAME == 'cy2.vortex.data.microsoft.com.akadns.net.') OR \
            ($QNAME == 'dm3p.wns.notify.windows.com.akadns.net.') OR \
            ($QNAME == 'geo.vortex.data.microsoft.com.akadns.net.') OR \
            ($QNAME == 'v10-win.vortex.data.microsoft.com.akadns.net.') OR \
            ($QNAME == 'v10-win.vortex.data.microsoft.com.akadns.NET.') OR \
            ($QNAME == 'v10.vortex-win.data.microsoft.com.') OR \
            ($QNAME == 'wns.notify.windows.com.akadns.net.') OR \
            ($QNAME == 'wns.notify.windows.com.akadns.NET.') OR \
            ($QNAME == 'client.wns.windows.com.') OR \
            ($QTYPE == '15') \
          drop();
</Input>

Output Sample
{
  "EventTime": "2020-03-12T19:21:47.052133-07:00",
  "Hostname": "WIN-R4QHULN6KLH",
  "Keywords": "9223372071214514176",
  "EventType": "INFO",
  "SeverityValue": 2,
  "Severity": "INFO",
  "EventID": 279,
  "SourceName": "Microsoft-Windows-DNSServer",
  "ProviderGuid": "{EB79061A-A566-4698-9119-3ED2807060E7}",
  "Version": 0,
  "TaskValue": 1,
  "OpcodeValue": 0,
  "RecordNumber": 60,
  "ExecutionProcessID": 2000,
  "ExecutionThreadID": 4188,
  "Domain": "NT AUTHORITY",
  "AccountName": "SYSTEM",
  "UserID": "S-1-5-18",
  "AccountType": "User",
  "Message": "INTERNAL_LOOKUP_CNAME: TCP=0; InterfaceIP=10.0.2.15; Source=10.0.2.15; RD=1;
QNAME=ns1.example.com.; QTYPE=1; Port=54171; Flags=34176; XID=2;
PacketData=0x00028580000100010000000003777777076578616D706C6503636F6D0000010001",
  "Category": "LOOK_UP",
  "Opcode": "Info",
  "TCP": "0",
  "InterfaceIP": "10.0.2.15",
  "Source": "10.0.2.15",
  "RD": "1",
  "QNAME": "ns1.example.com.",
  "QTYPE": "1",
  "Port": "54171",
  "Flags": "34176",
  "XID": "2",
  "BufferSize": "33",
  "PacketData": "00028580000100010000000003777777076578616D706C6503636F6D0000010001",
  "EventReceivedTime": "2020-03-12T19:28:51.560303-07:00",
  "SourceModuleName": "DNS_Analytical",
  "SourceModuleType": "im_msvistalog"
}

58.4. Passive DNS Monitoring


Another source of DNS events can be found at the network level by capturing network packets being sent to a
DNS server. Packet analyzers typically set their network adapters into promiscuous mode which allows them to
capture packets destined for other hosts. This enables network monitoring to occur on another host remote to
the DNS server. However, depending on the network architecture, it may be necessary to reconfigure the
network to explicitly route packets to the passive DNS monitoring host as well.

The packet capture module im_pcap provides capabilities for monitoring all common network protocols,
including network traffic that is specific to DNS clients and servers.

58.4.1. Configuring Packet Capture for Passive DNS Monitoring


Of the 24 network protocols available with the im_pcap module, those of interest with regard to DNS monitoring
are dns, ipv4, ipv6, udp, and tcp (if the DNS Server is configured for queries over TCP). Up to 14 fields can be
specified for dns type packet capture; depending on the DNS query, a DNS packet can have more than 14 fields
via the extended field name pattern, $dns.additional.*, which is needed to store the various additional attributes
of DNS traffic.

58.4.2. Combining Packet Capture Protocols for Obtaining Necessary Fields


Since none of the DNS packet fields track the network source or destination of communication between the DNS
server and its clients, it is advisable to include other protocol types for tracking this essential information. For
this reason ipv4 and ipv6 are protocols of interest; they can provide correlation to the DNS events based on event
times.

Example 253. A Passive DNS Monitoring Example

This configuration uses the im_pcap module to capture DNS, IPv4, IPv6, TCP, and UDP packets which are
then formatted to JSON while writing to a local file. Each protocol and its fields are defined within its own
Protocol block.

nxlog.conf (truncated)
<Extension _json>
    Module  xm_json
</Extension>

<Input pcap>
    Module  im_pcap
    Dev     enp0s3
    <Protocol>
        Type   dns
        Field  dns.opcode
        Field  dns.id
        Field  dns.flags.authoritative
        Field  dns.flags.recursion_available
        Field  dns.flags.recursion_desired
        Field  dns.flags.authentic_data
        Field  dns.flags.checking_disabled
        Field  dns.flags.truncated_response
        Field  dns.response
        Field  dns.response.code
        Field  dns.query
        Field  dns.additional
        Field  dns.answer
        Field  dns.authority
    </Protocol>
    <Protocol>
        Type   ipv4
        Field  ipv4.src
        Field  ipv4.dst
[...]

Samples of DNS Packets with IPv4 and IPv6 Fields


{
  "dns.additional.count": "0",
  "dns.answer.3.class": "IN",
  "dns.answer.3.name": "ns2.example.com",
  "dns.answer.3.ttl": "86400",
  "dns.answer.3.type": "A",
  "dns.answer.class": "IN",
  "dns.answer.count": "2",
  "dns.answer.name": "www.example.com",
  "dns.answer.ttl": "86400",
  "dns.answer.type": "CNAME",
  "dns.authority.class": "IN",
  "dns.authority.count": "1",
  "dns.authority.name": "example.com",
  "dns.authority.type": "NS",
  "dns.flags.authentic_data": "false",
  "dns.flags.authoritative": "true",
  "dns.flags.checking_disabled": "false",
  "dns.flags.recursion_available": "true",
  "dns.flags.recursion_desired": "true",
  "dns.flags.truncated_response": "false",
  "dns.id": "18321",
  "dns.opcode": "Query",
  "dns.query.class": "IN",
  "dns.query.count": "1",
  "dns.query.name": "www.example.com",
  "dns.response.code": "NOERROR",
  "ipv4.dst": "192.168.1.7",
  "ipv4.src": "192.168.1.24",
  "udp.dst_port": "36486",
  "udp.src_port": "53",
  "EventTime": "2020-05-18T12:15:34.033655-05:00",
  "EventReceivedTime": "2020-05-18T12:15:34.301402-05:00",
  "SourceModuleName": "pcap",
  "SourceModuleType": "im_pcap"
}
{
  "dns.additional.count": "0",
  "dns.answer.count": "0",
  "dns.authority.count": "0",
  "dns.flags.authentic_data": "false",
  "dns.flags.authoritative": "false",
  "dns.flags.checking_disabled": "false",
  "dns.flags.recursion_available": "false",
  "dns.flags.recursion_desired": "false",
  "dns.flags.truncated_response": "false",
  "dns.id": "0",
  "dns.opcode": "Query",
  "dns.query.class": "IN",
  "dns.query.count": "1",
  "dns.query.name": "wpad.local",
  "dns.response.code": "NOERROR",
  "ipv6.dst": "ff02::fb",
  "ipv6.src": "fe80::3c3c:c860:df55:fd89",
  "udp.dst_port": "5353",
  "udp.src_port": "5353",
  "EventTime": "2020-05-18T12:22:48.291661-05:00",
  "EventReceivedTime": "2020-05-18T12:22:48.487235-05:00",
  "SourceModuleName": "pcap",
  "SourceModuleType": "im_pcap"
}

Chapter 59. Docker
Docker is a containerization technology that enables the creation and use of Linux containers. Containers allow a
developer to package an application with all of its dependencies and distribute it as a single package. The Docker
container technology is widely used in modern, micro-service architectures.

Docker images are intended to be lightweight: usually only one application is present and running in the
container. Therefore, logs are written to the standard output and standard error streams, and logging must be
performed from outside the image.

59.1. Configuring Logging in Docker


By default, Docker writes logs from each container to a separate JSON file, stored under the container’s directory
on the host machine. The logging of containers can be configured in two ways: by modifying the default logging
configuration of the Docker daemon, or by changing it in the runtime options for a specific container. For more
details about Docker’s logging drivers, see Configure logging drivers on Docker.com.

• The default logging driver can be set in the daemon.json configuration file. This file is located in
/etc/docker/ on Linux hosts or C:\ProgramData\docker\config\ on Windows Server hosts. The default
logging driver is json-file.
• The default logging driver can be overridden at the container level. To accomplish this, the log driver and its
configuration options must be provided as parameters at container startup with the help of the docker run
command. The configuration options are the same as setting up logging options for the Docker daemon. See
the docker run command reference on Docker.com for more information.
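
As an illustration, a daemon.json like the following would switch the daemon's default driver to syslog and point it at a remote Syslog address (the address and port here are placeholders, not values required by Docker):

```json
{
  "log-driver": "syslog",
  "log-opts": {
    "syslog-address": "tcp://192.168.6.143:1514"
  }
}
```

The same options can be applied to a single container instead, for example with docker run --log-driver syslog --log-opt syslog-address=tcp://192.168.6.143:1514 <IMAGE>.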

59.2. Receiving Logs From Docker


Collecting logs from a Docker daemon or container is supported in four ways depending on the log driver in use.

To find the current logging driver for a running container, run the following docker inspect command,
substituting the container name or ID for <CONTAINER>.

$ docker inspect -f '{{.HostConfig.LogConfig.Type}}' <CONTAINER>

59.2.1. JSON
With the json-file log driver, Docker produces a line-based log file in JSON format for each container. See the
JSON File logging driver guide on Docker.com for more information.
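
For reference, each line written by the json-file driver is a JSON object with the message in the log field, the originating stream, and a timestamp, roughly as follows (the message shown is illustrative):

```json
{"log":"starting worker process\n","stream":"stdout","time":"2020-05-18T12:22:48.291661Z"}
```

Parsing such a line with parse_json() therefore yields $log, $stream, and $time fields in the event record.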

NOTE Because im_file recursively watches for log files in the containers directory, this may cause reduced performance in very large installations.

Example 254. Collecting Docker Logs in JSON Format

This example configuration reads from the JSON log files of all containers. The JSON fields are parsed and
added to the event record with the xm_json parse_json() procedure. A $HostID field, with the container ID, is
also added.

nxlog.conf
<Extension _fileop>
    Module  xm_fileop
</Extension>

<Extension _json>
    Module  xm_json
</Extension>

<Input in>
    Module  im_file
    File    '/var/lib/docker/containers/*/*-json.log'
    <Exec>
        parse_json();
        $HostID = file_basename(file_name());
        $HostID =~ s/-json\.log$//;
    </Exec>
</Input>

59.2.2. GELF
The gelf logging driver produces logs in Graylog Extended Log Format (GELF), a format that is understood by a
number of tools, including NXLog. In GELF, every log message is a dictionary with fields such as version, host,
timestamp, short and long versions of the message, and any custom fields that have been configured. See the
Graylog Extended Format logging driver guide on Docker.com for more information.

Example 255. Collecting Docker Logs in GELF Format

In this example, NXLog accepts and parses logs in GELF format on TCP port 12201 with the im_tcp and
xm_gelf modules.

nxlog.conf
<Extension _gelf>
    Module  xm_gelf
</Extension>

<Input in>
    Module     im_tcp
    Host       0.0.0.0
    Port       12201
    InputType  GELF_TCP
</Input>

59.2.3. Syslog
The syslog logging driver routes logs to a Syslog server, such as NXLog, via UDP, TCP, SSL/TLS, or a Unix domain
socket. See the Syslog logging driver guide on Docker.com for more information.

Example 256. Collecting Docker Logs in Syslog Format

Here, NXLog accepts logs on TCP port 1514 with the im_tcp module and parses the logs with the xm_syslog
parse_syslog() procedure.

nxlog.conf
<Extension _syslog>
    Module  xm_syslog
</Extension>

<Input in>
    Module  im_tcp
    Host    0.0.0.0
    Port    1514
    Exec    parse_syslog();
</Input>

59.2.4. ETW
On Windows-based systems, the etwlogs logging driver forwards container logs to the Event Tracing for
Windows (ETW) system. Each ETW event contains a message with both the log and its context information. See
the ETW logging driver guide on Docker.com for more information.

Example 257. Collecting Docker Logs in ETW Format

This example collects logs from the DockerContainerLogs Event Tracing provider using the im_etw
module.

nxlog.conf
<Input in>
    Module    im_etw
    Provider  DockerContainerLogs
</Input>

Chapter 60. Elasticsearch and Kibana
Elasticsearch is a search engine and document database that is commonly used to store logging data. Kibana is a
popular user interface and querying front-end for Elasticsearch. Kibana is often used with the Logstash data
collection engine—together forming the ELK stack (Elasticsearch, Logstash, and Kibana).

However, Logstash is not actually required to load data into Elasticsearch. NXLog can do this as well, and offers
several advantages over Logstash—this is the KEN stack (Kibana, Elasticsearch, and NXLog).

• Because Logstash is written in Ruby and requires Java, it has high system resource requirements. NXLog has
a small resource footprint and is recommended by many ELK users as the log collector of choice for
Windows and Linux.
• Due to the Java dependency, Logstash requires system administrators to deploy the Java runtime onto their
production servers and keep up with Java security updates. NXLog does not require Java.
• The EventLog plugin in Logstash uses the Windows WMI interface to retrieve the EventLog data. This method
incurs a significant performance penalty. NXLog uses the Windows EventLog API natively in order to
efficiently collect EventLog data.

The following sections explain how to configure NXLog to:

• send logs directly to Elasticsearch, replacing Logstash; or


• forward collected logs to Logstash, acting as a log collector for Logstash.

60.1. Sending Logs to Elasticsearch


Consult the Elasticsearch Reference and the Kibana User Guide for more information about installing and
configuring Elasticsearch and Kibana. For NXLog Enterprise Edition 3.x, see Using Elasticsearch With NXLog
Enterprise Edition 3.x in the Reference Manual.

1. Configure NXLog.

Example 258. Using om_elasticsearch

The om_elasticsearch module is only available in NXLog Enterprise Edition. Because it sends data in
batches, it reduces the effect of the latency inherent in HTTP responses, allowing the Elasticsearch
server to process the data much more quickly (10,000 EPS or more on low-end hardware).

nxlog.conf
<Extension _json>
    Module  xm_json
</Extension>

<Output out>
    Module         om_elasticsearch
    URL            http://localhost:9200/_bulk
    FlushInterval  2
    FlushLimit     100

    # Create an index daily
    Index          strftime($EventTime, "nxlog-%Y%m%d")

    # Use the following if you do not have $EventTime set
    #Index         strftime($EventReceivedTime, "nxlog-%Y%m%d")
</Output>

Example 259. Using om_http

For NXLog Community Edition, the om_http module can be used instead to send logs to Elasticsearch.
Because it sends a request to the Elasticsearch HTTP REST API for each event, the maximum logging
throughput is limited by HTTP request and response latency. Therefore this method is suitable only for
low-volume logging scenarios.

nxlog.conf
<Extension _json>
    Module  xm_json
</Extension>

<Output out>
    Module       om_http
    URL          http://localhost:9200
    ContentType  application/json
    <Exec>
        set_http_request_path(strftime($EventTime, "/nxlog-%Y%m%d/" +
                              $SourceModuleName));
        rename_field("timestamp", "@timestamp");
        to_json();
    </Exec>
</Output>

2. Restart NXLog, and make sure the event sources are sending data. This can be checked with curl -X GET
'localhost:9200/_cat/indices?v&pretty'. There should be an index matching nxlog* and its
docs.count counter should be increasing.

3. Configure the appropriate index pattern for Kibana.


a. Open Management on the left panel and click on Index Patterns.
b. Set the Index pattern to nxlog*. A matching index should be listed. Click [ > Next step ].

c. Set the Time Filter field name selector to EventTime (or EventReceivedTime if the $EventTime field is
not set by the input module). Click [ Create index pattern ].

4. Test that the NXLog and Elasticsearch/Kibana configuration is working by opening Discover on the left panel.

60.2. Forwarding Logs to Logstash
NXLog can be configured to act as a log collector, forwarding logs to Logstash in JSON format.

1. Set up a configuration on the Logstash server to process incoming event data from NXLog.

logstash.conf
input {
  tcp {
    codec => json_lines { charset => CP1252 }
    port => "3515"
    tags => [ "tcpjson" ]
  }
}
filter {
  date {
    locale => "en"
    timezone => "Etc/GMT"
    match => [ "EventTime", "YYYY-MM-dd HH:mm:ss" ]
  }
}
output {
  elasticsearch {
    host => localhost
  }
  stdout { codec => rubydebug }
}

NOTE The json codec in Logstash sometimes fails to properly parse JSON: it will concatenate more than one JSON record into one event. Use the json_lines codec instead. Although the im_msvistalog module converts data to UTF-8, Logstash seems to have trouble parsing that data; the charset => CP1252 setting seems to help.

2. Configure NXLog.

nxlog.conf
<Output out>
    Module  om_tcp
    Host    10.1.1.1
    Port    3515
    Exec    to_json();
</Output>

3. Restart NXLog.

Chapter 61. F5 BIG-IP
F5 BIG-IP appliances are capable of sending their logs to a remote Syslog destination via TCP or UDP. When
sending logs over the network, it is recommended to use TCP as the more reliable protocol. With UDP there is a
potential to lose entries, especially when there is a high volume of messages.

There are multiple sub-systems that write logs to different files. Below is an example of Local Traffic
Management (LTM) logs reporting pool members being up or down.

Local Traffic Management (LTM) Log Sample


Mar 14 16:50:12 l-lb1 notice mcpd[7660]: 01070639:5: Pool /Common/q-qa-pool member /Common/q-qa1:25
session status forced disabled.↵
Mar 14 16:51:33 l-lb1 notice mcpd[7660]: 01070639:5: Pool /Common/q-qa-pool member /Common/q-qa1:25
session status enabled.↵

The following audit logs are written to a different local file.

Audit Log Sample


Mar 14 16:43:41 l-lb1 notice httpd[3064]: 01070417:5: AUDIT - user john - RAW: httpd(mod_auth_pam):
user=john(john) partition=[All] level=Administrator tty=/usr/bin/tmsh host=192.168.9.78 attempts=1
start="Tue Mar 14 16:43:41 2017".↵
Mar 14 17:10:33 l-lb1 notice httpd[1181]: 01070417:5: AUDIT - user john - RAW: httpd(mod_auth_pam):
user=john(john) partition=[All] level=Administrator tty=/usr/bin/tmsh host=192.168.9.78 attempts=1
start="Tue Mar 14 16:43:41 2017" end="Tue Mar 14 17:10:33 2017".↵

For more details on BIG-IP log files and how to view them, please refer to the K16197 knowledge base article.
Additional information on configuring logging options on BIG-IP devices can be found in the F5 Knowledge
Center. Select the appropriate software version and look for the "Log Files" section in the TMOS Operations
Guide.

NOTE The steps below have been tested with BIG-IP v11 but should also work for other versions.

61.1. Collecting BIG-IP Logs via TCP


The BIG-IP web interface does not provide a way to configure an external TCP Syslog destination, so this must be
done via the command line.

1. Configure NXLog to receive log entries via TCP and process them as Syslog (see the examples below). Then
restart NXLog.
2. Make sure the NXLog agent is accessible from all BIG-IP devices being configured. A new route or default
gateway may need to be configured depending on the current setup.
3. Connect via SSH to the BIG-IP device. In case of a High Availability (HA) group, make sure you are logged into
the active unit. You should see (Active) in the command prompt.
4. Review the existing Syslog configuration on BIG-IP and remove it.

# tmsh list sys syslog include
# tmsh modify sys syslog include none

5. Configure a remote Syslog destination on BIG-IP. Replace IP_SYSLOG and PORT with the IP address and port
that the NXLog agent is listening on. Replace LEVEL with the required logging level.

# tmsh modify sys syslog include "destination remote_server \
  {tcp(\"IP_SYSLOG\" port (PORT));};filter f_alllogs \
  {level (LEVEL...emerg);};log {source(local);filter(f_alllogs);\
  destination(remote_server);};"

NOTE This command forwards all appliance logs to the remote destination, so nothing will be logged locally as soon as it is executed.

Example 260. Redirecting Informational Logs via TCP

This command redirects logs at the informational level (from info to emerg) to an NXLog agent at
192.168.6.143, via TCP port 1514.

# tmsh modify /sys syslog include "destination remote_server \
  {tcp(\"192.168.6.143\" port (1514));};filter f_alllogs \
  {level (info...emerg);};log {source(local);filter(f_alllogs);\
  destination(remote_server);};"

6. In case of a High Availability (HA) group, synchronize the configuration changes to the other units.

# tmsh run cm config-sync to-group GROUP_NAME

NOTE Once the configuration has been synchronized to all members of the group, each member will send logs, inserting its own hostname and IP address. In the event of failover, logging will continue from both units regardless of which one is currently active.

Example 261. Receiving BIG-IP Logs via TCP

This configuration uses the im_tcp module to collect the BIG-IP logs. A JSON output sample shows the
resulting logs as received and processed by NXLog.

nxlog.conf
<Extension _syslog>
    Module  xm_syslog
</Extension>

<Extension _json>
    Module  xm_json
</Extension>

<Input in>
    Module  im_tcp
    Host    0.0.0.0
    Port    1514
    Exec    parse_syslog();
</Input>

<Output out>
    Module  om_file
    File    "/var/log/f5.log"
    Exec    to_json();
</Output>

Output Sample
{
  "MessageSourceAddress": "192.168.6.161",
  "EventReceivedTime": "2017-03-14 17:03:16",
  "SourceModuleName": "in",
  "SourceModuleType": "im_tcp",
  "SyslogFacilityValue": 16,
  "SyslogFacility": "LOCAL0",
  "SyslogSeverityValue": 5,
  "SyslogSeverity": "NOTICE",
  "SeverityValue": 2,
  "Severity": "INFO",
  "Hostname": "l-lb2",
  "EventTime": "2017-03-14 17:03:53",
  "SourceName": "mcpd",
  "ProcessID": "7233",
  "Message": "notice httpd[5150]: 01070639:5: Pool /Common/q-qa-pool member /Common/q-qa1:25
session status enabled."
}
{
  "MessageSourceAddress": "192.168.6.91",
  "EventReceivedTime": "2017-03-14 17:10:18",
  "SourceModuleName": "in",
  "SourceModuleType": "im_tcp",
  "SyslogFacilityValue": 16,
  "SyslogFacility": "LOCAL0",
  "SyslogSeverityValue": 5,
  "SyslogSeverity": "NOTICE",
  "SeverityValue": 2,
  "Severity": "INFO",
  "Hostname": "l-lb1",
  "EventTime": "2017-03-14 17:10:33",
  "SourceName": "httpd",
  "ProcessID": "1181",
  "Message": "notice httpd[5150]: 01070417:5: AUDIT - user john - RAW: httpd(mod_auth_pam):
user=john(john) partition=[All] level=Administrator tty=/usr/bin/tmsh host=192.168.9.78
attempts=1 start=\"Tue Mar 14 16:43:41 2017\" end=\"Tue Mar 14 17:10:33 2017\"."
}

NXLog can also be configured to extract additional fields from the messages, including those that contain key-value pairs.

Example 262. Extracting Fields From the BIG-IP Logs

This configuration uses the xm_syslog parse_syslog() procedure to parse Syslog messages and the xm_kvp
module to extract additional fields.

nxlog.conf (truncated)
<Extension _syslog>
    Module  xm_syslog
</Extension>

<Extension _json>
    Module  xm_json
</Extension>

<Extension kvp>
    Module        xm_kvp
    KVPDelimiter  " "
    KVDelimiter   =
    EscapeChar    \\
</Extension>

<Input in>
    Module  im_tcp
    Host    0.0.0.0
    Port    1514
    <Exec>
        parse_syslog();
        if $Message =~ /^([a-z]*) ([a-zA-Z]*)(.*)$/
        {
            $F5MsgLevel = $1;
            $F5Proc = $2;
            $F5Message = $3;
            if $F5Message =~ /^\[[0-9]*\]: ([0-9]*):([0-9]+): (.*)$/
            {
[...]

Output Sample
{
  "MessageSourceAddress": "192.168.6.91",
  "EventReceivedTime": "2017-04-16 00:06:43",
  "SourceModuleName": "in",
  "SourceModuleType": "im_tcp",
  "SyslogFacilityValue": 10,
  "SyslogFacility": "AUTHPRIV",
  "SyslogSeverityValue": 5,
  "SyslogSeverity": "NOTICE",
  "SeverityValue": 2,
  "Severity": "INFO",
  "Hostname": "l-lb1",
  "EventTime": "2017-04-16 00:07:59",
  "Message": "notice httpd[5320]: 01070417:5: AUDIT - user john - RAW: httpd(mod_auth_pam):
user=john(john) partition=[All] level=Administrator tty=/usr/bin/tmsh host=192.168.9.78
attempts=1 start=\"Sun Apr 16 00:07:59 2017\".",
  "F5MsgLevel": "notice",
  "F5Proc": "httpd",
  "F5Message": "[5320]: 01070417:5: AUDIT - user john - RAW: httpd(mod_auth_pam):
user=john(john) partition=[All] level=Administrator tty=/usr/bin/tmsh host=192.168.9.78
attempts=1 start=\"Sun Apr 16 00:07:59 2017\".",
  "F5MsgID": "01070417",
  "F5MsgSev": "5",
  "F5Msg": "AUDIT - user john - RAW: httpd(mod_auth_pam): user=john(john) partition=[All]
level=Administrator tty=/usr/bin/tmsh host=192.168.9.78 attempts=1 start=\"Sun Apr 16 00:07:59
2017\".",
  "F5Process": "httpd",
  "F5Module": "mod_auth_pam",
  "user": "john(john)",
  "partition": "[All]",
  "level": "Administrator",
  "tty": "/usr/bin/tmsh",
  "host": "192.168.9.78",
  "attempts": "1",
  "start": "Sun Apr 16 00:19:55 2017"
}

61.2. Collecting BIG-IP Logs via UDP


When reliable delivery is not a concern, or in case there is a requirement to have local copies of log entries on
each appliance, BIG-IP logs can be sent to a remote Syslog destination via UDP.

1. Configure NXLog to receive log entries via UDP and process them as Syslog (see the example below). Then
restart the agent.
2. Make sure the NXLog agent is accessible from all BIG-IP devices being configured. A new route or default
gateway may need to be configured, depending on the current setup.
3. Proceed with the Syslog configuration on BIG-IP, using either the command line or the web interface. See the
following sections.

Example 263. Receiving BIG-IP Logs via UDP

This configuration uses the im_udp module to collect the BIG-IP logs.

nxlog.conf
<Extension _syslog>
    Module  xm_syslog
</Extension>

<Input in_syslog_udp>
    Module  im_udp
    Host    0.0.0.0
    Port    514
    Exec    parse_syslog();
</Input>

61.2.1. Configuring via the Command Line


1. Connect via SSH to the BIG-IP device. In case of a High Availability (HA) group, make sure you are logged into
the active unit. You should see (Active) in the command prompt.
2. Configure a remote Syslog destination on BIG-IP. Replace IP_SYSLOG and PORT with the IP address and port
that the NXLog agent is listening on.

# tmsh modify sys syslog remote-servers add { nxlog { \
  host IP_SYSLOG remote-port PORT } }

Example 264. Redirecting Informational Logs via UDP

This command redirects informational logs to an NXLog agent at 192.168.6.143, via UDP port 514.

# tmsh modify sys syslog remote-servers add { nxlog { \
  host 192.168.6.143 remote-port 514 } }

3. In case of a High Availability (HA) group, synchronize configuration changes to the other units:

# tmsh run cm config-sync to-group GROUP_NAME

NOTE Once the configuration has been synchronized to all members of the group, each member will send logs, inserting its own hostname and IP address. In the event of failover, logging will continue from both units regardless of which one is currently active.

61.2.2. Configuring via the Web Interface


1. Log in to the BIG-IP web interface. In case of a High Availability (HA) group, make sure you are logged into the
active unit. You should see ONLINE (Active) in the top left corner.
2. Go to System › Logs › Configuration › Remote Logging.

3. Type in the Remote IP and Remote Port, then click [ Add ] and [ Update ].

4. In case of a High Availability (HA) group, synchronize the configuration changes to the other units:
a. Click on the yellow Changes Pending in the top left corner.
b. Select the Active unit, which should be marked as (Self).
c. Make sure the Sync Device to Group option is chosen and click [ Sync ].

NOTE Once the configuration has been synchronized to all members of the group, each member will send logs, inserting its own hostname and IP address. In the event of failover, logging will continue from both units regardless of which one is currently active.

61.3. Using SNMP Traps


BIG-IP devices are also capable of sending SNMP traps. The device contains predefined default SNMP traps which
can be enabled during SNMP configuration. There is also an option to create user-defined traps. More
information about SNMP support on BIG-IP devices can be found in the F5 Knowledge Center under the "Alerts"
section in the TMOS Operations Guide.

BIG-IP systems also come with Management Information Base (MIB) files stored on the device itself. Additional
information on that is available in K13322.

1. Configure NXLog with the xm_snmp module. See the example below.
2. Make sure the NXLog agent is accessible from all BIG-IP devices being configured. A new route or default
gateway may need to be configured, depending on the current setup.
3. Proceed with the SNMP configuration on BIG-IP, using either the command line or the web interface. See the
following sections.

Example 265. Receiving SNMP Traps

This example NXLog configuration uses the im_udp and xm_snmp modules to receive SNMP traps. The
corresponding JSON-formatted output is shown below.

nxlog.conf
<Extension _json>
    Module  xm_json
</Extension>

<Extension snmp>
    Module  xm_snmp
    MIBDir  /usr/share/mibs/bigip
    # The following <User> section is required for SNMPv3
    #<User snmp_user>
    #    AuthProto      sha1
    #    AuthPasswd     q1w2e3r4
    #    EncryptPasswd  q1w2e3r4
    #    EncryptProto   aes
    #</User>
</Extension>

<Input in>
    Module     im_udp
    Host       0.0.0.0
    Port       162
    InputType  snmp
</Input>

<Output out>
    Module  om_file
    File    "/var/log/f5.log"
    Exec    to_json();
</Output>

Output Sample
{
  "SNMP.CommunityString": "nxlog",
  "SNMP.RequestID": 449377444,
  "EventTime": "2017-03-18 16:37:41",
  "SeverityValue": 2,
  "Severity": "INFO",
  "OID.1.3.6.1.2.1.1.3.0": 1277437018,
  "OID.1.3.6.1.6.3.1.1.4.1.0": "1.3.6.1.4.1.3375.2.4.0.3",
  "OID.1.3.6.1.6.3.1.1.4.3.0": "1.3.6.1.4.1.3375.2.4",
  "MessageSourceAddress": "192.168.6.91",
  "EventReceivedTime": "2017-03-18 16:37:41",
  "SourceModuleName": "in",
  "SourceModuleType": "im_udp"
}

61.3.1. Configuring via the Command Line


1. Connect via SSH to the BIG-IP device. In case of a High Availability (HA) group, make sure you are logged into
the active unit. You should see (Active) in the command prompt.
2. Enable the pre-defined default traps as required.

# tmsh modify sys snmp bigip-traps enabled
# tmsh modify sys snmp agent-trap enabled
# tmsh modify sys snmp auth-trap enabled

3. Create an SNMP user (SNMPv3 only).

# tmsh modify sys snmp users add { \
  USERNAME { \
  username USERNAME \
  auth-protocol sha \
  privacy-protocol aes \
  auth-password **** \
  privacy-password **** } }

Example 266. User snmpv3_user Configured for MD5 and AES

# tmsh modify sys snmp users add { \
  snmpv3_user { \
  username snmpv3_user \
  auth-protocol md5 \
  privacy-protocol aes \
  auth-password q1w2e3r4 \
  privacy-password q1w2e3r4 } }

4. Configure the remote SNMP destination on BIG-IP. Replace NAME, COMMUNITY, IP_ADDRESS, and PORT with
appropriate values. Replace NETWORK with other, unless traps are sent out the management interface, in which
case management should be specified instead.

# tmsh modify sys snmp traps add { NAME { community COMMUNITY \
  host IP_ADDRESS port PORT network NETWORK } }

Example 267. Sending Traps via SNMPv2

This command enables sending SNMPv2 traps to 192.168.6.143.

# tmsh modify sys snmp traps add { 192_168_6_143 { community nxlog \
  host 192.168.6.143 port 162 network other } }

In case of SNMPv3, this command needs additional parameters, including security-level, auth-protocol, auth-
password, privacy-protocol, and privacy-password.

Example 268. Sending Traps via SNMPv3

This command enables sending SNMPv3 traps to 192.168.6.143, using SHA and AES.

# tmsh modify sys snmp traps add { nxlog { \
  version 3 \
  host 192.168.6.143 \
  port 162 \
  network other \
  security-level auth-privacy \
  security-name snmp_user \
  auth-protocol sha \
  auth-password q1w2e3r4 \
  privacy-protocol aes \
  privacy-password q1w2e3r4 } }

NOTE If the BIG-IP configuration has been previously migrated or cloned, SNMPv3 may not work because the EngineID is not unique. In this case it must be reset as described in K6821.

5. In case of a High Availability (HA) group, synchronize the configuration changes to the other units.

# tmsh run cm config-sync to-group GROUP_NAME

61.3.2. Configuring via the Web Interface


1. Log in to the BIG-IP web interface. In case of a High Availability (HA) group, make sure you are logged into the
active unit. You should see ONLINE (Active) in the top left corner.
2. Go to System › SNMP › Traps. Select the required SNMP events and click [ Update ].

3. Create an SNMP user (SNMPv3 only). Go to System › SNMP › Agent › Access (v3). Click [ Create ] and
specify the user name, authentication type and password, privacy protocol and password, and access type.
Specify an OID value to limit access to certain OIDs, or use .1 to allow full access.

4. Go to System › SNMP › Traps › Destination and click [ Create ]. Specify the SNMP version, community
name, destination IP address, destination port, and network to send traffic to. Then click [ Finished ].

SNMPv3 requires additional parameters. This example matches the settings shown in the NXLog
configuration above.

5. In case of a High Availability (HA) group, synchronize the configuration changes to the other units.
a. Click on the yellow Changes Pending in the top left corner.
b. Select the Active unit which should be marked as (Self).
c. Make sure the Sync Device to Group option is chosen and click [ Sync ].

NOTE Once the configuration has been synchronized to all members of the group, each member will send logs, inserting its own hostname and IP address. In the event of failover, logging will continue from both units regardless of which one is currently active.

61.4. BIG-IP High Speed Logging


F5 BIG-IP devices support High Speed Logging (HSL). This protocol sends data as fast as the remote destination
is able to accept it. Combined with BIG-IP's load balancing, this makes it possible to distribute logs across
multiple NXLog servers using one of the available load balancing methods.

BIG-IP is able to send its own logs via HSL in addition to logs for traffic passing through the device. Because the
load balancer is usually on the edge of the network and all web traffic passes through it, logging traffic on BIG-IP
itself may be an easier and faster solution than processing web server logs on each server separately.

When configuring HSL on BIG-IP, the administrator has to choose between sending logs via TCP or UDP. TCP
can guarantee reliable delivery. However, when load balancing traffic between multiple nodes, BIG-IP reuses
existing TCP connections to each node in order to reduce the overhead of creating new connections. This
may result in less even load balancing between members.

NOTE The steps below have been tested with BIG-IP v12.

In order to configure HSL on BIG-IP, a node for each NXLog server must be created and then added to a pool.
Follow these steps.

1. Log in to BIG-IP via SSH.


2. Create a node for each NXLog agent.

# tmsh create ltm node NAME { address IP_ADDRESS session user-enabled }

3. Create a pool with all nodes.

# tmsh create ltm pool NAME { members add { NODE1:PORT { address \
  IP_ADDRESS1 } NODE2:PORT { address IP_ADDRESS2 } } monitor PROTOCOL }

Example 269. Creating a Pool

These commands create a pool named nxlog with one NXLog node.

# tmsh create ltm node nxlog1 { address 192.168.6.143 session user-enabled }
# tmsh create ltm pool nxlog { members add { nxlog1:1514 { address \
  192.168.6.143 }} monitor tcp }

61.4.1. Forwarding BIG-IP Logs to an HSL Pool


To send logs generated on BIG-IP itself to the pool created above, follow these steps.

1. Log in to BIG-IP via SSH.


2. Create a remote logging destination. Replace NAME with a name for the destination, POOL with the name used
above when creating the pool, DISTRIBUTION with one of the distribution options shown below, and
PROTOCOL with tcp or udp. Distribution options include:

Adaptive
Sends traffic to one of the pool members until this member is either unable to process logs at the
required rate or the connection is lost.

Balanced
Uses the load balancing method configured on the pool and selects a new member each time it sends
data.

Replicated
Sends each log to all members of the pool.

# tmsh create sys log-config destination remote-high-speed-log NAME \
  pool-name POOL distribution DISTRIBUTION protocol PROTOCOL

3. Create a log publisher. Replace NAME with a name for the publisher and DESTINATION with the destination
name used in the previous step.

# tmsh create sys log-config publisher NAME destinations add {DESTINATION}

4. Create a log filter. Replace NAME with a name for the filter, LEVEL with the required logging level between
Emergency and Debugging, PUBLISHER with the name used in the previous step, and SOURCE with a
particular process running on BIG-IP (or all, which sends all logs).

# tmsh create sys log-config filter NAME level LEVEL \
  publisher PUBLISHER source SOURCE

Example 270. Sending All Logs to the NXLog Pool

The following commands will send all logs to the NXLog pool via the TCP protocol.

# tmsh create sys log-config destination remote-high-speed-log nxlog-hsl \
  pool-name nxlog distribution adaptive protocol tcp
# tmsh create sys log-config publisher bigip-local-logs \
  destinations add {nxlog-hsl}
# tmsh create sys log-config filter bigip-all-local-logs level debug \
  publisher bigip-local-logs source all

61.4.2. Forwarding Traffic Logs to an HSL Pool


Configuring BIG-IP to log traffic that goes through the unit is done per virtual server and requires the following
steps.

1. Configure NXLog (see the examples below), then restart NXLog.


2. Create a request logging profile. In most cases it is enough to log only requests; however, if required, the
same profile can be configured to log responses and logging errors as well. Replace NAME with a name for the
profile, PROTOCOL with mds-tcp or mds-udp, POOL with the pool name, and TEMPLATE with a list of HTTP
request fields to be logged (see the LTM implementation guide).

# tmsh create ltm profile request-log NAME { \
  request-log-protocol PROTOCOL request-log-pool POOL request-logging enabled \
  request-log-template "TEMPLATE" }

3. Assign the logging profile to a virtual server. Replace NAME with the virtual server name and
LOGGING_PROFILE with the profile name used in the previous step. A logging profile can be assigned to
multiple virtual servers.

# tmsh modify ltm virtual NAME {profiles add {LOGGING_PROFILE {}}}

Example 271. Logging Traffic to the NXLog Pool

The following commands configure traffic logging to the NXLog pool via TCP.

# tmsh create ltm profile request-log traffic-to-nxlog { \
  request-log-protocol mds-tcp request-log-pool nxlog request-logging enabled \
  request-log-template "client $CLIENT_IP:$CLIENT_PORT request $HTTP_REQUEST \
  server $SERVER_IP:$SERVER_PORT status $HTTP_STATUS" }
# tmsh modify ltm virtual q-web-farm-HTTPS {profiles add {traffic-to-nxlog {}}}

Example 272. Receiving Traffic Logs From BIG-IP

This example shows BIG-IP traffic logs as received and processed by NXLog using im_tcp and xm_syslog.

nxlog.conf
<Extension _syslog>
    Module  xm_syslog
</Extension>

<Extension _json>
    Module  xm_json
</Extension>

<Input in_syslog_tcp>
    Module  im_tcp
    Host    0.0.0.0
    Port    1514
    Exec    parse_syslog();
</Input>

<Output out>
    Module  om_file
    File    "/var/log/f5.log"
    Exec    to_json();
</Output>

Below is an example of a request being logged in JSON format.

Output Sample
{
  "MessageSourceAddress": "192.168.6.91",
  "EventReceivedTime": "2017-05-10 19:16:43",
  "SourceModuleName": "in_syslog_tcp",
  "SourceModuleType": "im_tcp",
  "SyslogFacilityValue": 1,
  "SyslogFacility": "USER",
  "SyslogSeverityValue": 5,
  "SyslogSeverity": "NOTICE",
  "SeverityValue": 2,
  "Severity": "INFO",
  "EventTime": "2017-05-10 19:16:43",
  "Hostname": "192.168.6.91",
  "Message": "client 192.168.9.78:63717 request GET /cmedia/img/icons/mime/mime-
unknown.png?v170509919 HTTP/1.1 server 192.168.6.101:80 status "
}

Example 273. Extracting Additional Fields

Further field extraction can be done with NXLog according to the sequence of fields specified in the
template. For the template string shown above, the following configuration extracts the four fields with a
regular expression.

nxlog.conf
<Input in_syslog_tcp>
    Module  im_tcp
    Host    0.0.0.0
    Port    1514
    <Exec>
        parse_syslog();
        if $Message =~ /^client (.*) request (.*) server (.*) status (.*)$/
        {
            $HTTP_Client = $1;
            $HTTP_Request = $2;
            $HTTP_Server = $3;
            $HTTP_Status = $4;
        }
    </Exec>
</Input>

Output Sample
{
  "MessageSourceAddress": "192.168.6.91",
  "EventReceivedTime": "2017-05-10 20:06:24",
  "SourceModuleName": "in_syslog_tcp",
  "SourceModuleType": "im_tcp",
  "SyslogFacilityValue": 1,
  "SyslogFacility": "USER",
  "SyslogSeverityValue": 5,
  "SyslogSeverity": "NOTICE",
  "SeverityValue": 2,
  "Severity": "INFO",
  "EventTime": "2017-05-10 20:06:24",
  "Hostname": "192.168.6.91",
  "Message": "client 192.168.9.78:65275 request GET /?disabledcookies=true HTTP/1.1 server
192.168.6.100:80 status ",
  "HTTP_Client": "192.168.9.78:65275",
  "HTTP_Request": "GET /?disabledcookies=true HTTP/1.1",
  "HTTP_Server": "192.168.6.100:80",
  "HTTP_Status": ""
}

61.4.3. Load Balancing Logs From External Sources via BIG-IP


A pool that balances traffic between multiple NXLog servers makes it possible to send logs from other
servers and devices through BIG-IP. To accomplish this, create a virtual server that accepts Syslog traffic.

Example 274. Creating a Virtual Server Forwarding Logs to the NXLog Pool

This example creates a virtual server listening on TCP port 1514 that forwards logs to the nxlog pool.

# tmsh create ltm virtual nxlog-virtual-server { destination 192.168.6.93:1514 \
    mask 255.255.255.255 pool nxlog profiles add { tcp{} } }

Once this has been set up, log producers can be configured to forward Syslog logs to 192.168.6.93.
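As an illustrative sketch, another NXLog agent could forward its logs to this virtual server with om_tcp and xm_syslog. The input file and instance names below are placeholders; the address and port match the example above.

```
<Extension _syslog>
    Module  xm_syslog
</Extension>

<Input in>
    Module  im_file
    File    "/var/log/messages"
</Input>

# Forward BSD Syslog over TCP to the BIG-IP virtual server
<Output to_bigip>
    Module  om_tcp
    Host    192.168.6.93
    Port    1514
    Exec    to_syslog_bsd();
</Output>
```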

Chapter 62. File Integrity Monitoring
File integrity monitoring (FIM) can be used to detect changes to files and directories. A file may be altered due to
an update to a newer version, a security breach, or data corruption. File integrity monitoring helps an
organization respond quickly and effectively to unexpected changes to files and is therefore a standard
requirement for many regulatory compliance objectives.

• PCI-DSS - Payment Card Industry Data Security Standard (Requirement 11.5)


• SOX - Sarbanes-Oxley Act (Section 404)
• NERC CIP - NERC CIP Standard (CIP-010-2)
• FISMA - Federal Information Security Management Act (NIST SP800-53 Rev3)
• HIPAA - Health Insurance Portability and Accountability Act of 1996 (NIST Publication 800-66)
• SANS - SANS Critical Security Controls (Control 3)

NXLog can be configured to provide file (or Windows Registry) integrity monitoring. An event is generated for
each detected modification. These events can then be used to generate alerts or be forwarded for storage and
auditing.

There are various ways that monitoring can be implemented; these fall into two categories.

Checksum Monitoring
The im_fim and im_regmon modules (available with NXLog Enterprise Edition only) provide monitoring based
on cryptographic checksums. On the first run (when a file set or the registry is in a known secure state), a
database of checksums is created. Subsequent scans are performed at regular intervals, and the checksums
are compared. When a change is detected, an event is generated.

• The im_fim module is platform independent, available on all platforms supported by NXLog, and has no
external dependencies. Similarly, the im_regmon module requires no configuration outside of NXLog to
monitor the Windows Registry.
• If there are multiple changes between two scans, only the cumulative effect is logged. For example, if a
file is deleted and a new file is created in its place before the next scan occurs, a single modification event
will be generated.
• It is not possible to detect which user made a change because the filesystem/registry does not provide
that information, and there may be multiple changes by different users between scans.

Real-Time Monitoring
Files (and the Windows Registry) can also be monitored in real-time with the help of kernel-level auditing,
which does not require periodic scanning. This type of monitoring is platform specific.

• Kernel-level monitoring usually provides improved performance, especially for large file sets.
• All events are logged; the granularity of reporting is not limited by the scan interval (because there is no
scanning involved).
• Reported events may be very detailed, and usually include information about which user made the
change.

See the following sections for details about setting up file integrity monitoring on various platforms.

62.1. Monitoring on Linux


Checksum monitoring on Linux can be configured with the im_fim module.

NXLog must have permission to read the files that are to be monitored. Run NXLog as root, make sure the nxlog
user or group has permission to read the files, or change the user/group under which NXLog runs. See the User
and Group directives.

Example 275. Using im_fim on Linux

This configuration uses im_fim to monitor a common set of system directories containing configuration,
executables, and libraries. The RIPEMD-160 hash function is selected and the scan interval is set to 3,600
seconds (1 hour).

nxlog.conf
<Input fim>
    Module        im_fim
    File          "/bin/*"
    File          "/etc/*"
    File          "/lib/*"
    File          "/opt/nxlog/bin/*"
    File          "/opt/nxlog/lib/*"
    File          "/sbin/*"
    File          "/usr/bin/*"
    File          "/usr/sbin/*"
    Exclude       "/etc/hosts.deny"
    Exclude       "/etc/mtab"
    Digest        rmd160
    Recursive     TRUE
    ScanInterval  3600
</Input>

NXLog will report scan activity in its internal log.

Internal Log
2017-06-14 11:44:53 INFO Module 'fim': FIM scan started↵
2017-06-14 11:45:00 INFO Module 'fim': FIM scan finished in 7.24 seconds. Scanned folders: 833
Scanned files: 5081 Read file bytes: 379166339↵

Output Sample
{
  "EventTime": "2017-06-14 11:57:33",
  "Hostname": "ubuntu-xenial",
  "EventType": "CHANGE",
  "Object": "FILE",
  "PrevFileName": "/etc/ld.so.cache",
  "PrevModificationTime": "2017-06-14 11:20:47",
  "FileName": "/etc/ld.so.cache",
  "ModificationTime": "2017-06-14 11:56:55",
  "PrevFileSize": 46298,
  "FileSize": 46971,
  "DigestName": "rmd160",
  "PrevDigest": "1dbe24a108c044153d8499f073274b7ad5507119",
  "Digest": "ec0bc108b7c9e5d9eafde9cb1407b91e618d24c4",
  "EventReceivedTime": "2017-06-14 11:57:33",
  "SourceModuleName": "fim",
  "SourceModuleType": "im_fim"
}

See the Linux Audit System chapter for details about setting up kernel-level auditing. It is even possible to
combine the im_fim and im_linuxaudit modules for redundant monitoring.
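A rough sketch of such a redundant setup might run both modules side by side. The paths and the audit watch rule below are illustrative only; the im_linuxaudit directives follow the NXLog Enterprise Edition module reference.

```
# Periodic checksum scan of /etc
<Input fim>
    Module        im_fim
    File          "/etc/*"
    Recursive     TRUE
    ScanInterval  3600
</Input>

# Real-time kernel-level auditing of the same tree
# (the watch rule and key name are illustrative examples)
<Input audit>
    Module  im_linuxaudit
    <Rules>
        -w /etc -p wa -k etc_change
    </Rules>
</Input>
```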

62.2. Monitoring on Windows
The im_fim module can be used on Windows for monitoring a file set.

Example 276. Using im_fim on Windows

This configuration monitors the program directories for changes. The scan interval is set to 1,800 seconds
(30 minutes). The events generated by NXLog are similar to those shown in Using im_fim on Linux.

nxlog.conf
<Input fim>
    Module        im_fim
    File          'C:\Program Files\*'
    File          'C:\Program Files (x86)\*'
    Exclude       'C:\Program Files\nxlog\data\*'
    Recursive     TRUE
    ScanInterval  1800
</Input>

Example 277. Using im_regmon on Windows

The Windows Registry can be monitored with the im_regmon module. This configuration monitors all
registry keys in the specified path. The keys are scanned every 60 seconds.

nxlog.conf
<Input registry>
    Module        im_regmon
    RegValue      'HKLM\Software\Policies\*'
    Recursive     TRUE
    ScanInterval  60
</Input>

NXLog will report scan activity in its internal log.

Internal Log
2020-02-26 22:08:32 INFO Module 'in': Registry scan finished in 0.08 seconds. Scanned registry
keys: 337 Scanned registry values: 1250 Read value bytes: 106866↵

Output Sample
{
  "EventTime": "2018-01-31 04:01:12",
  "Hostname": "WINAD",
  "EventType": "CHANGE",
  "RegistryValueName": "HKLM\\Software\\Policies\\Microsoft\\TPM\\OSManagedAuthLevel",
  "PrevValueSize": 4,
  "ValueSize": 4,
  "DigestName": "SHA1",
  "PrevDigest": "0aaf76f425c6e0f43a36197de768e67d9e035abb",
  "Digest": "3c585604e87f855973731fea83e21fab9392d2fc",
  "Severity": "WARNING",
  "SeverityValue": 3,
  "EventReceivedTime": "2018-01-31 04:01:12",
  "SourceModuleName": "registry",
  "SourceModuleType": "im_regmon",
  "MessageSourceAddress": "10.8.0.121"
}

Example 278. Extended Hive Key Paths to Monitor

The following example uses the im_regmon module to monitor a list of hive key paths listed in documents
such as the MITRE ATT&CK framework and the JP/CERT Lateral Movements. This list can be modified as
needed.

When running a custom list, double-check the internal log to confirm that the expected number of keys
and values is being scanned.

nxlog.conf
<Input extend_regmon_rules>
    Module        im_regmon
    Recursive     TRUE
    ScanInterval  30

    RegValue "HKLM\SOFTWARE\Microsoft\Windows\CurrentVersion\Run\*"
    RegValue "HKLM\SOFTWARE\Microsoft\Windows NT\CurrentVersion\Image File Execution Options\*"
    RegValue "HKLM\SYSTEM\CurrentControlSet\Control\WMI\Security\*"
    RegValue "HKLM\SYSTEM\CurrentControlSet\Control\Session Manager\*"
    RegValue "HKLM\SYSTEM\CurrentControlSet\Control\Session Manager\BootExecute"
    RegValue "HKLM\SOFTWARE\Microsoft\Windows\CurrentVersion\Explorer\ControlPanel\NameSpace\*"
    RegValue "HKLM\SYSTEM\ControlSet001\Enum\STORAGE\VolumeSnapshot"
    RegValue "HKLM\SYSTEM\ControlSet001\Services\VSS\*"
    RegValue "HKLM\Software\Microsoft\Windows\CurrentVersion\Runonce"
    RegValue "HKLM\Software\Microsoft\Windows\CurrentVersion\policies\Explorer\*"
    RegValue "HKLM\Software\Microsoft\Windows\CurrentVersion\Run\*"
    RegValue "HKCU\Software\Microsoft\Windows\CurrentVersion\Run\*"
    RegValue "HKCU\Software\Microsoft\Windows\CurrentVersion\RunOnce"
    RegValue "HKLM\Software\Policies\*"
</Input>

62.3. Real-Time Monitoring on Windows


Real-time monitoring can be implemented with Windows security auditing (see Security auditing on Microsoft
Docs). Sysmon also implements file and registry monitoring with a system service and device driver; see the
Sysmon chapter. In both cases, the generated events can be collected from the Windows Event Log with the
im_msvistalog module (see the Windows Event Log chapter).
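For instance, if Sysmon is configured for file and registry monitoring, its events could be collected with a query like the following sketch. The event IDs are Sysmon's standard ones (11 for FileCreate, 12 to 14 for registry operations); the instance name is arbitrary.

```
<Input sysmon_fim>
    Module  im_msvistalog
    <QueryXML>
        <QueryList>
            <Query Id="0">
                <Select Path="Microsoft-Windows-Sysmon/Operational">
                    *[System[(EventID=11 or EventID=12 or EventID=13 or EventID=14)]]
                </Select>
            </Query>
        </QueryList>
    </QueryXML>
</Input>
```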

Chapter 63. FreeRADIUS
Remote Authentication Dial-In User Service (RADIUS) is a networking protocol that provides centralized
authentication, authorization, and accounting management for users who connect and use a network service.
RADIUS accounting logs can be provided by many networking devices or by the open source Unix service called
FreeRADIUS.

NXLog can be configured to process FreeRADIUS authentication and accounting logs. For processing RADIUS
NPS logs, see RADIUS NPS (xm_nps).

Example 279. Processing FreeRADIUS Authentication Logs With Regular Expressions

The configuration below uses the im_file module to read FreeRADIUS authentication log entries and
separate fields with regular expressions. The result is converted to JSON after the EventReceivedTime,
SourceModuleName, and SourceModuleType fields are deleted from the event record.

nxlog.conf
<Extension _json>
    Module  xm_json
</Extension>

<Input freeradius>
    Module  im_file
    File    '/tmp/input'
    <Exec>
        if $raw_event =~ /^(?<DateTime>\w{3} \w{3} \d{2} \d{2}:\d{2}:\d{2} \d{4}) : (?<EventType>\w+): (?<Message>.+)/
        {
            $raw_event = $DateTime + ' ' + $EventType + ' ' + $Message;
        }
        else drop();
    </Exec>
</Input>

<Output out>
    Module  om_file
    File    '/tmp/output'
    <Exec>
        delete($EventReceivedTime);
        delete($SourceModuleName);
        delete($SourceModuleType);
        to_json();
    </Exec>
</Output>

Below are the log samples before and after processing.

Event Sample
Thu Dec 20 07:50:44 2018 : Info: Loaded virtual server inner-tunnel↵
Thu Dec 20 07:50:44 2018 : Info: Ready to process requests↵
Thu Dec 20 07:50:46 2018 : Auth: (0) Login OK: [testing/testing123] (from client localhost port
0)↵
Thu Dec 20 07:50:46 2018 : Auth: (1) Login OK: [testing/testing123] (from client localhost port
0)↵
Thu Dec 20 07:50:47 2018 : Auth: (2) Login OK: [testing/testing123] (from client localhost port
0)↵
Thu Dec 20 07:50:49 2018 : Auth: (3) Login incorrect (pap: Cleartext password does not match
"known good" password): [testing/testing] (from client localhost port 0)↵

Output Sample
{
  "DateTime": "Thu Dec 20 07:50:44 2018",
  "EventType": "Info",
  "Message": "Loaded virtual server inner-tunnel"
}
{
  "DateTime": "Thu Dec 20 07:50:44 2018",
  "EventType": "Info",
  "Message": "Ready to process requests"
}
{
  "DateTime": "Thu Dec 20 07:50:46 2018",
  "EventType": "Auth",
  "Message": "(0) Login OK: [testing/testing123] (from client localhost port 0)"
}
{
  "DateTime": "Thu Dec 20 07:50:46 2018",
  "EventType": "Auth",
  "Message": "(1) Login OK: [testing/testing123] (from client localhost port 0)"
}
{
  "DateTime": "Thu Dec 20 07:50:47 2018",
  "EventType": "Auth",
  "Message": "(2) Login OK: [testing/testing123] (from client localhost port 0)"
}
{
  "DateTime": "Thu Dec 20 07:50:49 2018",
  "EventType": "Auth",
  "Message": "(3) Login incorrect (pap: Cleartext password does not match \"known good\"
password): [testing/testing] (from client localhost port 0)"
}

Example 280. Processing FreeRADIUS Accounting Logs

The configuration below uses the im_file module to read FreeRADIUS accounting logs and the
xm_multiline module to match the start and end of each log entry. Each entry is parsed into key-value
pairs with the xm_kvp module and converted to JSON with the xm_json module. The EventReceivedTime,
SourceModuleName, and SourceModuleType fields are deleted from the event record.

nxlog.conf
<Extension radius>
    Module      xm_multiline
    HeaderLine  /^\s\S\S\S\s+\S\S\S\s+\d{1,2}\s+\d{1,2}\:\d{1,2}\:\d{1,2}\s+\d{4}/
    EndLine     /^\s+Timestamp = \d*/
</Extension>

<Extension kvp>
    Module        xm_kvp
    KVDelimiter   =
    KVPDelimiter  \n
</Extension>

<Extension _json>
    Module  xm_json
</Extension>

<Input in>
    Module        im_file
    File          "/tmp/input"
    ReadFromLast  FALSE
    SavePos       FALSE
    InputType     radius
    <Exec>
        if $raw_event =~ /^(.+)\s*([\s\S]+)/
        {
            $EventTime = parsedate($1);
            kvp->parse_kvp($2);
            $Timestamp = datetime(integer($Timestamp) * 1000000);
        }
        else log_info("no match for " + $raw_event);
        delete($EventReceivedTime);
        delete($SourceModuleName);
        delete($SourceModuleType);
    </Exec>
</Input>

<Output out>
    Module  om_file
    File    "/tmp/output"
    Exec    to_json();
</Output>

Below are the event samples before and after processing.

Event Sample
 Tue May 21 00:00:03 2013↵
  Acct-Session-Id = "1/3/0/3_00FA2701"↵
  Framed-Protocol = PPP↵
  Framed-IP-Address = 1.2.3.4↵
  Cisco-AVPair = "ppp-disconnect-cause=Received LCP TERMREQ from peer"↵
  User-Name = "user"↵
  Acct-Authentic = RADIUS↵
  Cisco-AVPair = "connect-progress=LAN Ses Up"↵
  Cisco-AVPair = "nas-tx-speed=1410065408"↵
  Cisco-AVPair = "nas-rx-speed=1410065408"↵
  Acct-Session-Time = 384↵
  Acct-Input-Octets = 4497↵
  Acct-Output-Octets = 7951↵
  Acct-Input-Packets = 64↵
  Acct-Output-Packets = 64↵
  Acct-Terminate-Cause = User-Request↵
  Cisco-AVPair = "disc-cause-ext=PPP Receive Term"↵
  Acct-Status-Type = Stop↵
  NAS-Port-Type = Ethernet↵
  NAS-Port = 402653187↵
  NAS-Port-Id = "1/3/0/3"↵
  Cisco-AVPair = "client-mac-address=fe00.5104.01ae"↵
  Service-Type = Framed-User↵
  NAS-IP-Address = 1.2.3.4↵
  X-Ascend-Session-Svr-Key = "DCCE87A5"↵
  Acct-Delay-Time = 0↵
  Proxy-State = 0x313133↵
  Proxy-State = 0x323339↵
  Client-IP-Address = 1.2.3.4↵
  Acct-Unique-Session-Id = "3ff5a50a3cea9cba"↵
  Timestamp = 1369087203↵

Output Sample
{
  "EventTime": "2013-05-21T00:00:03.000000+00:00",
  "Acct-Session-Id": "1/3/0/3_00FA2701",
  "Framed-Protocol": "PPP",
  "Framed-IP-Address": "1.2.3.4",
  "Cisco-AVPair": "client-mac-address=fe00.5104.01ae",
  "User-Name": "user",
  "Acct-Authentic": "RADIUS",
  "Acct-Session-Time": 384,
  "Acct-Input-Octets": 4497,
  "Acct-Output-Octets": 7951,
  "Acct-Input-Packets": 64,
  "Acct-Output-Packets": 64,
  "Acct-Terminate-Cause": "User-Request",
  "Acct-Status-Type": "Stop",
  "NAS-Port-Type": "Ethernet",
  "NAS-Port": 402653187,
  "NAS-Port-Id": "1/3/0/3",
  "Service-Type": "Framed-User",
  "NAS-IP-Address": "1.2.3.4",
  "X-Ascend-Session-Svr-Key": "DCCE87A5",
  "Acct-Delay-Time": 0,
  "Proxy-State": 3289913,
  "Client-IP-Address": "1.2.3.4",
  "Acct-Unique-Session-Id": "3ff5a50a3cea9cba",
  "Timestamp": "2013-05-20T22:00:03.000000+00:00"
}

Chapter 64. Graylog
Graylog is a popular open source log management tool with a GUI that uses Elasticsearch as a backend. It
provides centralized log collection, analysis, searching, visualization, and alerting features. NXLog can be
configured as a collector for Graylog, using one of the output writers provided by the xm_gelf module. In such a
setup, NXLog acts as a forwarding agent on the client machine, sending messages to a Graylog node.

See the Graylog documentation for more information about configuring and using Graylog.

64.1. Configuring GELF UDP Collection


1. In the Graylog web interface, go to System › Inputs.

2. Select input type GELF UDP and click the [ Launch new input ] button.
3. Select the Graylog node for your input or make it global. Provide a name for the input in the Title textbox.
Change the default port if needed. Use the Bind address option to limit the input to a specific network
interface.

4. After saving, the input will appear shortly.

Example 281. Sending GELF via UDP

This configuration loads the xm_gelf extension module and uses the GELF_UDP output writer to send GELF
messages via UDP.

nxlog.conf
<Extension _gelf>
    Module  xm_gelf
</Extension>

<Input in>
    Module  im_file
    File    "/var/log/messages"
</Input>

<Output out>
    Module      om_udp
    Host        127.0.0.1
    Port        12201
    OutputType  GELF
</Output>

64.2. Configuring GELF TCP or TCP/TLS Collection


1. In the Graylog web interface, go to System › Inputs.

2. Select input type GELF TCP and click the [ Launch new input ] button.
3. Select the Graylog node for your input or make it global. Provide a name for the input in the Title textbox.
Change the default port if needed. Use the Bind address option to limit the input to a specific network
interface.

4. To use TLS configuration, provide the TLS cert file and the TLS private key file (a password is required if the
private key is encrypted). Check Enable TLS.

5. After saving, the input will appear shortly.

Example 282. Sending GELF via TCP

This configuration loads the xm_gelf extension module and uses the GELF_TCP output writer to send GELF
messages via TCP.

nxlog.conf
<Extension _gelf>
    Module  xm_gelf
</Extension>

<Input in>
    Module  im_file
    File    "/var/log/messages"
</Input>

<Output out>
    Module      om_tcp
    Host        127.0.0.1
    Port        12201
    OutputType  GELF_TCP
</Output>

Example 283. Sending GELF via TCP/TLS

This configuration loads the xm_gelf extension module and uses the GELF_TCP output writer with the
om_ssl module to send GELF messages via TLS encrypted connection.

nxlog.conf
<Extension _gelf>
    Module  xm_gelf
</Extension>

<Input in>
    Module  im_file
    File    "/var/log/messages"
</Input>

<Output out>
    Module          om_ssl
    Host            127.0.0.1
    Port            12201
    CertFile        %CERTDIR%/graylog.crt
    AllowUntrusted  TRUE
    OutputType      GELF_TCP
</Output>

64.3. Collector Sidecar Configuration


Graylog Collector Sidecar is a lightweight configuration management system for different log collectors. It can be
used to manage NXLog from the Graylog console. It supports GELF output via UDP, TCP, and TCP/TLS. The main
advantage of using Sidecar is that everything is orchestrated from a single Graylog console.

1. Stop and disable the NXLog system service, as the NXLog process will be managed by Graylog. Install and
configure the collector sidecar for the target system. The details can be found in the Graylog Collector Sidecar
documentation.

collector_sidecar.yml
server_url: http://10.0.2.2:9000/api/
update_interval: 30
tls_skip_verify: true
send_status: true
list_log_files:
  - /var/log
node_id: graylog-collector-sidecar
collector_id: file:/etc/graylog/collector-sidecar/collector-id
log_path: /var/log/graylog/collector-sidecar
log_rotation_time: 86400
log_max_age: 604800
tags:
  - linux
  - apache
  - redis
backends:
  - name: nxlog
    enabled: true
    binary_path: /usr/bin/nxlog
    configuration_path: /etc/graylog/collector-sidecar/generated/nxlog.conf

2. Go to System › Collectors. After a successful sidecar installation, a new collector should appear.

3. Click the [ Create configuration ] button.

4. Apply a tag for the configuration.

5. Create a new output of the required type. See the Configuring GELF UDP Collection and Configuring GELF
TCP or TCP/TLS Collection sections above.
6. Create an input for NXLog (for example, a file input).

7. Go back to System › Collectors to verify the setup. If everything is fine, the collector should be in the
Running state.

Chapter 65. HP ProCurve
HP ProCurve switches are capable of sending their logs to a remote Syslog destination via UDP or TCP. When
sending logs over the network, it is recommended to use TCP as the more reliable protocol. With UDP there is a
risk of losing entries, especially when there is a high volume of messages. It is also possible to send logs via
TLS if additional security is required.

ProCurve Log Sample


I 03/17/17 18:06:15 ports: port B3 is Blocked by STP↵
I 03/17/17 18:06:15 ports: port B3 is now on-line↵
I 03/17/17 18:24:57 SNTP: updated time by -4 seconds↵
I 03/17/17 21:03:04 ports: port B3 is now off-line↵
I 03/18/17 02:00:53 SNTP: updated time by -4 seconds↵
I 03/18/17 09:36:49 SNTP: updated time by -4 seconds↵
I 03/18/17 17:00:45 SNTP: updated time by -4 seconds↵
I 03/18/17 23:34:25 mgr: SME TELNET from 192.168.9.78 - MANAGER Mode↵

The HP ProCurve web interface does not provide a way to configure an external Syslog server, so this must be
done via the command line (see the following sections). For more details on configuring logging for HP ProCurve
switches, refer to the HP ProCurve Management and Configuration Guide available from HP Enterprise Support.
The actual document depends on the model and firmware version in use.

WARNING In case of multiple switches running in redundancy mode (such as VRRP or similar), each
device must be configured separately, as failover happens per VLAN and the logging
configuration is not synchronized.

NOTE The steps below have been tested with HP 4000 series switches but should also work for
2000, 6000, and 8000 series devices.

1. Configure NXLog to receive log entries over the network and process them as Syslog (see Accepting Syslog
via UDP, TCP, or TLS and the TCP example below). Then restart NXLog.
2. Make sure the NXLog agent is accessible from the switch.
3. Connect to the switch via SSH or Telnet.
4. Run the following commands to configure Syslog logging. Replace LEVEL with the logging level (debug, major,
error, warning, or info). Replace FACILITY with the Syslog facility to be used for the logs. Replace
IP_ADDRESS with the address of the NXLog agent; PROTOCOL with udp, tcp, or tls; and PORT with the
required port. If PORT is omitted, the default will be used (514 for UDP, 1470 for TCP, or 6514 for TLS).

# configure
(config)# logging severity LEVEL
(config)# logging facility FACILITY
(config)# logging IP_ADDRESS PROTOCOL PORT
(config)# write memory

Example 284. Configuring Syslog Forwarding via TCP

The following commands configure the switch to send logs to 192.168.6.143 via the default TCP port.
Only logs with info severity level and higher will be sent, and the local5 Syslog facility will be used.

# configure
(config)# logging severity info
(config)# logging facility local5
(config)# logging 192.168.6.143 tcp
(config)# write memory

Example 285. Receiving ProCurve Logs via TCP

This example shows HP ProCurve logs as received and processed by NXLog.

nxlog.conf
<Extension _syslog>
    Module  xm_syslog
</Extension>

<Extension _json>
    Module  xm_json
</Extension>

<Input in_syslog_tcp>
    Module  im_tcp
    Host    0.0.0.0
    Port    1470
    Exec    parse_syslog();
</Input>

<Output file>
    Module  om_file
    File    "/var/log/hp.log"
    Exec    to_json();
</Output>

Events like those at the beginning of the chapter will result in the following output.

Output Sample
{
  "MessageSourceAddress": "192.168.10.3",
  "EventReceivedTime": "2017-03-18 19:32:02",
  "SourceModuleName": "in_syslog_tcp",
  "SourceModuleType": "im_tcp",
  "SyslogFacilityValue": 21,
  "SyslogFacility": "LOCAL5",
  "SyslogSeverityValue": 6,
  "SyslogSeverity": "INFO",
  "SeverityValue": 2,
  "Severity": "INFO",
  "Hostname": "192.168.10.3",
  "EventTime": "2017-03-19 00:27:27",
  "SourceName": "mgr",
  "Message": " SME TELNET from 192.168.9.78 - MANAGER Mode"
}

Chapter 66. IBM QRadar SIEM
IBM QRadar Security Information and Event Management (SIEM) collects event data and uses analytics,
correlation, and threat intelligence features to identify known or potential threats, provide alerting and reports,
and aid in incident investigations. For more information, see IBM QRadar SIEM on IBM.com.

NXLog can be configured to collect events and forward them to QRadar SIEM. This chapter provides information
about setting up this integration, both for generic structured logs and for several specific log types. The last
section shows output examples for forwarding the processed logs to QRadar.

NOTE The instructions and examples in this chapter were tested with QRadar 7.3.1.

66.1. Setting up the QRadar Appliance


Several tasks may be required to prepare IBM QRadar for receiving events from NXLog.

66.1.1. QRadar Dependencies and System Configuration


• The WinCollect agent SFS bundle may need to be installed in order to provide parsing capabilities for the
specific log types documented below. See Installing and upgrading the WinCollect application on QRadar
appliances in the IBM Knowledge Center.
• To parse DNS Server Debug logs, the Microsoft DNS Device Support Module (DSM) package must be installed
on the QRadar appliance. Look for the QRADAR-DSM-MicrosoftDNS package on IBM Fix Central.
• To send logs to QRadar using TLS, the TLS Syslog protocol must be installed. Look for the
QRADAR-PROTOCOL-TLSSyslog package on IBM Fix Central.

• Some events may exceed QRadar’s default Syslog payload length. Consider setting the maximum payload
length to 8,192 bytes. For instructions, see QRadar: How to increase the maximum TCP payload size for event
data on IBM Support.
• The QRadar appliance should be fully updated with recent patches and fixes.

66.1.2. Adding a TLS Syslog Log Source


Events can be sent to QRadar securely with TLS. With these instructions, the NXLog agent(s) will verify the
authenticity of the QRadar receiver and encrypt event data in transit. This requires that appropriate certificates
be created and a separate TLS Syslog "listener" log source be added on QRadar.

This log source will act as a gateway, passing each event on to another matching log source. Only one TLS listener
is required per port; see Configuring multiple log sources over TLS syslog on IBM Knowledge Center.

First, prepare the TLS certificate and key files (for more information, see OpenSSL Certificate Creation):

1. Locate a certificate authority (CA) certificate and private key, or generate and sign a new one. The CA
certificate (for example, rootCA.pem) will be used by the NXLog agent to authenticate the QRadar receiver in
Forwarding Logs below.
2. Create a certificate and private key for QRadar TLS Syslog (for example, server.crt and server.key).

3. Convert the QRadar private key to a DER-encoded PKCS8 key (see QRadar: TLS Syslog support of
DER-encoded PKCS8 custom certificates):

$ openssl pkcs8 -topk8 -inform PEM -outform DER -in server.key \
    -out server.key.der -nocrypt

4. Copy the private key and certificate files to QRadar (the steps below assume the files are copied to
/root/server.*).
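For reference, steps 1 and 2 could be performed with OpenSSL roughly as follows. This is only a sketch: the subject names are placeholders, and a production CA would typically use encrypted keys and proper certificate extensions.

```shell
# 1. Generate a self-signed CA certificate and key (rootCA.pem is later
#    used by the NXLog agent to authenticate the QRadar receiver).
openssl req -x509 -newkey rsa:2048 -nodes -days 365 \
    -subj "/CN=Example Root CA" \
    -keyout rootCA.key -out rootCA.pem

# 2. Create a key and certificate signing request for the QRadar TLS
#    Syslog listener, then sign it with the CA.
openssl req -newkey rsa:2048 -nodes \
    -subj "/CN=qradar.example.com" \
    -keyout server.key -out server.csr
openssl x509 -req -in server.csr -CA rootCA.pem -CAkey rootCA.key \
    -CAcreateserial -days 365 -out server.crt

# 3. Convert the private key to a DER-encoded PKCS8 key, as above.
openssl pkcs8 -topk8 -inform PEM -outform DER -in server.key \
    -out server.key.der -nocrypt
```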

Then add the log source on QRadar:

1. In the QRadar web interface, go to Menu > Admin > Data Sources > Events > Log Sources.

2. Click Add to add a new log source. The Add a log source window appears.

3. Enter a Log Source Name and, optionally, a Log Source Description.


4. For the Log Source Type, select Universal DSM.
5. For the Protocol Configuration, select TLS Syslog.
6. As the Log Source Identifier, enter the source device IP address or hostname. For multiple log sources, any
identifier can be used here.
7. For Certificate Type, select Provide Certificate.
8. Set Provided Server Certificate Path to the path of the server certificate (for example, /root/server.crt).

9. Set Provided Private Key Path to the path of the DER-encoded server key (for example,
/root/server.key.der).

10. Select the Target Event Collector. Use this to poll for and process events using the specified event collector,
rather than on the Console appliance.
11. Make any other changes required, and then click Save.

12. Go to Menu > Admin and click Advanced > Deploy Full Configuration after making all required log source
changes.

66.1.3. Adding a QRadar Log Source


Follow these steps to add a new log source to QRadar SIEM. This will need to be done once for each log source,
using the correct Log Source Type for each.

1. In the QRadar web interface, go to Menu > Admin > Data Sources > Events > Log Sources.

2. Click Add to add a new log source. The Add a log source window appears.

3. Enter a Log Source Name and, optionally, a Log Source Description.


4. Select a Log Source Type. Consult the sections below for the correct log type to use for each source.
5. For the Protocol Configuration, select Syslog.
6. As the Log Source Identifier, enter the source system’s IP address.

NOTE The Syslog hostname field is used by QRadar as the log source identifier to associate events
with a particular log source when received. This value can be adjusted by changing the
$Hostname = host_ip(); line in the examples below: keep the line as-is to use the
system’s first non-loopback IP address, remove the line to use the system hostname, or set
the line to a custom value (for example, $Hostname = "myhostname";).

7. Select the Target Event Collector. Use this to poll for and process events using the specified event collector,
rather than on the Console appliance.
8. Make any other changes required, and then click Save.
9. Go to Menu > Admin and click Advanced > Deploy Full Configuration after making all required log source
changes.

66.2. Sending Generic Structured Logs to QRadar


NXLog can be configured to send generic structured logs to QRadar using Log Event Extended Format (LEEF). The
xm_leef to_leef() procedure will generate LEEF events using certain NXLog fields for the event header and all
remaining fields as event attributes.

LEEF has several predefined event attributes that should be used where applicable—see LEEF event components
and Predefined LEEF event attributes on IBM Knowledge Center. These fields can be set during parsing, set to
static values manually ($usrName = "john";), renamed using the rename_field() procedure, or renamed using the
xm_rewrite Rename directive (NXLog Enterprise Edition only). Additionally, to_leef() will set several predefined
attributes automatically.
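As a small sketch of the renaming approach, the following input parses Syslog and maps an assumed $AccountName field (a placeholder for a field produced by earlier parsing) to the predefined usrName attribute before conversion:

```
<Extension _syslog>
    Module  xm_syslog
</Extension>

<Extension _leef>
    Module  xm_leef
</Extension>

<Input in>
    Module  im_file
    File    '/var/log/app.log'
    <Exec>
        parse_syslog();
        # $AccountName is an assumed field from earlier parsing;
        # usrName is the predefined LEEF attribute for user names
        if defined($AccountName) rename_field("AccountName", "usrName");
        to_leef();
    </Exec>
</Input>
```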

Use Universal LEEF as QRadar’s Log Source Type. Once LEEF events have been received by QRadar, specific
fields can be selected for extraction as described in Writing an expression for structured data in LEEF format (in
the QRadar Security Intelligence Platform documentation). LEEF events can also be mapped to QRadar Identifiers
(QIDs). For more information, see the Universal LEEF section in the QRadar DSM Guide.

Example 286. Sending LEEF Logs to QRadar

This example reads Syslog messages from file, parses them, and sets some additional fields. Then the
xm_leef to_leef() procedure is used to convert the event to LEEF (and write it to the $raw_event field).
Because the event is converted in the scope of this input instance, it is not necessary to do additional
processing in the corresponding output instance—see Forwarding Logs for output examples that could be
used to send the events to QRadar.

NOTE This example is intended as a starting point for a configuration that provides a specific set
of fields to QRadar. For logs that are already structured, it may only be necessary to
rename a few fields according to the predefined LEEF attribute names.

Input Sample (auth.log)


Jul 31 07:17:01 debian CRON[968]: pam_unix(cron:session): session opened for user root by
(uid=0)↵
Aug 11 22:43:26 debian sshd[5584]: Invalid user baduser from 10.80.0.1 port 33122↵

nxlog.conf (truncated)
 1 <Extension _leef>
 2 Module xm_leef
 3 </Extension>
 4
 5 <Extension _syslog>
 6 Module xm_syslog
 7 </Extension>
 8
 9 <Input auth>
10 Module im_file
11 File '/var/log/auth.log'
12 <Exec>
13 # Parse Syslog event and set fields in the event record
14 parse_syslog();
15
16 # Set event category and event ID (for QID mapping)
17 if $Message =~ /^Invalid/
18 {
19 $Category = "Failed";
20 $EventID = "Logon Failure";
21 }
22 else
23 {
24 $Category = "Success";
25 $EventID = "Logon Success";
26 }
27
28 # Extract user name for "usrName" event attribute
29 [...]

Output Sample
<13>Jul 31 07:17:01 10.80.1.49 CRON[968]: LEEF:1.0|NXLog|CRON|4.4.4347|Logon
Success|EventReceivedTime=2019-08-11 22:48:59 ⇥ SourceModuleName=file ⇥
SourceModuleType=im_file ⇥ SyslogFacilityValue=1 ⇥ SyslogFacility=USER ⇥ SyslogSeverityValue=5
⇥ SyslogSeverity=NOTICE ⇥ sev=2 ⇥ Severity=INFO ⇥ identHostName=debian ⇥ devTime=2019-07-31
07:17:01 ⇥ vSrcName=CRON ⇥ ProcessID=968 ⇥ Message=pam_unix(cron:session): session opened for
user root by (uid=0) ⇥ cat=Success ⇥ EventID=Logon Success ⇥ usrName=root ⇥ role=Administrator
⇥ devTimeFormat=yyyy-MM-dd HH:mm:ss↵
<13>Aug 11 22:43:26 10.80.1.49 sshd[5584]: LEEF:1.0|NXLog|sshd|4.4.4347|Logon
Failure|EventReceivedTime=2019-08-11 22:48:59 ⇥ SourceModuleName=file ⇥
SourceModuleType=im_file ⇥ SyslogFacilityValue=1 ⇥ SyslogFacility=USER ⇥ SyslogSeverityValue=5
⇥ SyslogSeverity=NOTICE ⇥ sev=2 ⇥ Severity=INFO ⇥ identHostName=debian ⇥ devTime=2019-08-11
22:43:26 ⇥ vSrcName=sshd ⇥ ProcessID=5584 ⇥ Message=Invalid user baduser from 10.80.0.1 port
33122 ⇥ cat=Failed ⇥ EventID=Logon Failure ⇥ usrName=baduser ⇥ role=User ⇥
devTimeFormat=yyyy-MM-dd HH:mm:ss↵

66.3. Sending Specific Log Types for QRadar to Parse


To take full advantage of QRadar’s parsing of specific log types, NXLog can be configured to send logs using the
specific format expected by the corresponding QRadar DSM. In each case, events are collected, parsed, and
converted to a tab-delimited key-value pair format that QRadar expects.
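As a rough sketch of this shared pattern (the input path, field names, and AgentDevice value here are placeholder assumptions; each subsection below supplies the actual format expected by its DSM):

```
<Extension _syslog>
    Module  xm_syslog
</Extension>

<Input source>
    Module  im_file
    File    'C:\path\to\source.log'
    <Exec>
        # Build the tab-delimited key-value payload for the DSM
        # (placeholder keys and fields shown here)
        $Message = "AgentDevice=ExampleDevice" +
                   "\tAgentLogFile=example.log" +
                   "\tKey=" + $SomeField;
        # Prepend a BSD Syslog header before forwarding
        to_syslog_bsd();
    </Exec>
</Input>
```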

66.3.1. DHCP Server


To send DHCP Server audit log events to QRadar SIEM, set up DHCP Audit Logging and use the NXLog
configuration shown below. If QRadar does not auto-discover the log source, add one manually. The Log Source
Type should be set to Microsoft DHCP Server and the Protocol Configuration should be set to Syslog—see
Adding a QRadar Log Source.

For more information, see DHCP Server Audit Logging and the Microsoft DHCP Server page in the QRadar DSM
Guide.

Example 287. Sending Windows DHCP Events to QRadar

In this example, NXLog is configured to read logs from the following paths:

• C:\Windows\System32\dhcp\DhcpSrvLog-*.log

• C:\Windows\System32\dhcp\DhcpV6SrvLog-*.log

NXLog parses the events and converts the structured data for forwarding to QRadar.

Input Sample (DhcpSrvLog)


13,07/31/19,07:18:29,Conflict,10.80.2.1,BAD_ADDRESS,,,0,6,,,,,,,,,0↵

Input Sample (DhcpV6SrvLog)


11004,07/31/19,07:32:34,DHCPV6
Renew,2001:db8::667a:1521:96ab:5f50,QRADARWIN.nxlog.org,,14,00010001244AC14F5254005DF4CC,,,,,↵

nxlog.conf (truncated)
 1 <Extension _syslog>
 2 Module xm_syslog
 3 </Extension>
 4
 5 <Extension dhcp_csv_parser>
 6 Module xm_csv
 7 Fields ID, Date, Time, Description, IPAddress, LogHostname, MACAddress, \
 8 UserName, TransactionID, QResult, ProbationTime, CorrelationID, \
 9 DHCID, VendorClassHex, VendorClassASCII, UserClassHex, \
10 UserClassASCII, RelayAgentInformation, DnsRegError
11 </Extension>
12
13 <Extension dhcpv6_csv_parser>
14 Module xm_csv
15 Fields ID, Date, Time, Description, IPAddress, LogHostname, MACAddress, \
16 UserName, TransactionID, QResult, ProbationTime, CorrelationID, \
17 DHCID, VendorClassHex
18 </Extension>
19
20 <Input dhcp>
21 Module im_file
22 File 'C:\Windows\System32\dhcp\DhcpSrvLog-*.log'
23 File 'C:\Windows\System32\dhcp\DhcpV6SrvLog-*.log'
24 <Exec>
25 # Only process lines that begin with an event ID
26 if $raw_event =~ /^\d+,/
27 {
28 if file_name() =~ /^.*\\(.*)$/ $FileName = $1;
29 [...]

Output Sample (DhcpSrvLog)


<13>Jul 31 07:18:29 10.80.1.49 AgentDevice=WindowsDHCP ⇥ AgentLogFile=DhcpSrvLog-Wed.log ⇥
ID=13 ⇥ Date=07/31/19 ⇥ Time=07:18:29 ⇥ Description=Conflict ⇥ IP Address=10.80.2.1 ⇥ Host
Name=BAD_ADDRESS ⇥ MAC Address= ⇥ User Name= ⇥ TransactionID=0 ⇥ QResult=6 ⇥ Probationtime= ⇥
CorrelationID= ⇥ Dhcid= ⇥ VendorClass(Hex)= ⇥ VendorClass(ASCII)= ⇥ UserClass(Hex)= ⇥
UserClass(ASCII)= ⇥ RelayAgentInformation= ⇥ DnsRegError=0↵

66.3.2. DNS Debug Log
To send DNS debug log events to QRadar, enable debug logging and use the NXLog configuration shown below.

WARNING Do not enable Details in the DNS Server Debug Logging dialog.

If QRadar does not auto-discover the log source, add one manually. The Log Source Type should be set to
Microsoft DNS Debug and the Protocol Configuration should be set to Syslog—see Adding a QRadar Log
Source. If the Microsoft DNS Debug log source type is not available, see Setting up the QRadar Appliance above.

For more information, see Windows DNS Server and the Microsoft DNS Debug page in the QRadar DSM Guide.

Example 288. Sending DNS Debug Logs to QRadar

This configuration uses the xm_msdns extension module to parse the Windows DNS debug log.

nxlog.conf (truncated)
 1 <Extension _syslog>
 2 Module xm_syslog
 3 </Extension>
 4
 5 <Extension dns_parser>
 6 Module xm_msdns
 7 </Extension>
 8
 9 <Input dns>
10 Module im_file
11 File 'C:\logs\dns.log'
12 InputType dns_parser
13 <Exec>
14 $raw_event =~ /(?x)^(?<Date>\d+\/\d+\/\d+)\s(?<Time>\d+:\d+:\d+\s+\w{2})/;
15 if file_name() =~ /^.*\\(.*)$/ $FileName = $1;
16 $Message = "AgentDevice=WindowsDNS" +
17 "\tAgentLogFile=" + $FileName +
18 "\tDate=" + $Date +
19 "\tTime=" + $Time +
20 "\tThread ID=" + $ThreadID;
21 if $Context == "EVENT"
22 {
23 $EventDescription =~ s/,//g;
24 $Message = $Message +
25 "\tContext=EVENT" +
26 "\tMessage=" + $EventDescription;
27 }
28 else if $Context == "Note"
29 [...]

Output Sample
<13>Jul 20 08:42:07 10.80.1.49 AgentDevice=WindowsDNS ⇥ AgentLogFile=debug.log ⇥
Date=7/20/2019 ⇥ Time=8:42:07 AM ⇥ Thread ID=0710 ⇥ Context=EVENT ⇥ Message=The DNS server has
finished the background loading of zones. All zones are now available for DNS updates and zone
transfers as allowed by their individual zone configuration.↵

66.3.3. Microsoft Exchange Server


Microsoft Exchange Server logs can be collected and sent to QRadar SIEM as shown below.

QRadar does not support auto-discovery for Exchange Server logs, so it is necessary to add a log source
manually. The Log Source Type should be set to Microsoft Exchange Server and the Protocol Configuration
should be set to Syslog—see Adding a QRadar Log Source.

For more information, see the Microsoft Exchange chapter and the Microsoft Exchange Server pages in the
QRadar DSM Guide.

Example 289. Sending Exchange Server Logs to QRadar

The following configuration uses the im_file module to read message tracking, Outlook web access (OWA),
and SMTP logs from various paths. The logs are parsed and converted for forwarding to QRadar.

NOTE Make sure to use the correct ID for the Exchange Back End site. This can be verified using
the Internet Information Services (IIS) Manager. The following example collects logs from
the site with ID 2 (W3SVC2/).

nxlog.conf (truncated)
 1 <Extension _syslog>
 2 Module xm_syslog
 3 </Extension>
 4
 5 <Extension w3c_parser>
 6 Module xm_w3c
 7 </Extension>
 8
 9 <Extension w3c_comma_parser>
10 Module xm_w3c
11 Delimiter ,
12 </Extension>
13
14 <Input exchange_OWA>
15 Module im_file
16 File 'C:\inetpub\logs\LogFiles\W3SVC2\u_ex*.log'
17 InputType w3c_parser
18 <Exec>
19 if file_name() =~ /^.*\\(.*)$/ $FileName = $1;
20 if ${cs-uri-query} == undef ${cs-uri-query} = "-";
21 if ${cs-username} == undef ${cs-username} = "-";
22 if ${cs(Referer)} == undef ${cs(Referer)} = "-";
23 $Message = "AgentDevice=MicrosoftExchange" +
24 "\tAgentLogFile=" + $FileName +
25 "\tAgentLogFormat=W3C" +
26 "\tAgentLogProtocol=OWA" +
27 "\tdate=" + $date +
28 "\ttime=" + $time +
29 [...]

Output Sample (SMTPReceive)


<13>Jul 27 23:35:09 10.80.1.49 AgentDevice=MicrosoftExchange ⇥ AgentLogFile=RECV2019072723-
1.LOG ⇥ AgentLogFormat=SMTP ⇥ AgentLogProtocol=SMTP ⇥ date-time=2019-07-27T23:35:09.647Z ⇥
connector-id=QRADARWIN\Default QRADARWIN ⇥ session-id=08D7122B7BADF0F4 ⇥ sequence-number=1 ⇥
local-endpoint=10.80.1.49:2525 ⇥ remote-endpoint=10.80.1.49:21408 ⇥ event=> ⇥ data=220
QRADARWIN.nxlog.org Microsoft ESMTP MAIL Service ready at Sat, 27 Jul 2019 23:35:08 +0000 ⇥
context=↵

66.3.4. Microsoft IIS
Microsoft IIS logs can be collected using the W3C Extended Log File Format. The W3C logging should be
configured as described in the Configuring Microsoft IIS by using the IIS Protocol page of the QRadar DSM Guide.

NOTE For NXLog Community Edition, the xm_csv module can be used instead of xm_w3c.

If QRadar does not auto-discover the log source, add one manually. The Log Source Type should be set to
Microsoft IIS and the Protocol Configuration should be set to Syslog—see Adding a QRadar Log Source.

For more information, see the Microsoft IIS chapter and the Microsoft IIS Server pages in the QRadar DSM Guide.

Example 290. Sending Windows IIS Events to QRadar

This configuration uses the xm_w3c extension module to parse the IIS log, and converts the events to a tab-
delimited format for QRadar.

Input Sample
2019-07-24 09:21:55 127.0.0.1 POST /OWA/auth.owa &CorrelationID=<empty>;&cafeReqId=4b9353b7-
e17b-4bc5-9e54-bc6b4733d6dd;&encoding=; 443
HealthMailboxa733ff32a90d44bb970f7a147fb3f328@nxlog.org 127.0.0.1 AMProbe/Local/ClientAccess -
302 0 0 10171↵

nxlog.conf (truncated)
 1 <Extension _syslog>
 2 Module xm_syslog
 3 </Extension>
 4
 5 <Extension w3c_parser>
 6 Module xm_w3c
 7 </Extension>
 8
 9 <Input iis>
10 Module im_file
11 File 'C:\inetpub\logs\LogFiles\W3SVC1\u_ex*.log'
12 InputType w3c_parser
13 <Exec>
14 if file_name() =~ /^.*\\(.*)$/ $FileName = $1;
15 if ${cs-uri-query} == undef ${cs-uri-query} = "-";
16 if ${cs-username} == undef ${cs-username} = "-";
17 if ${cs(Referer)} == undef ${cs(Referer)} = "-";
18 $Message = "AgentDevice=MSIIS" +
19 "\tAgentLogFile=" + $FileName +
20 "\tAgentLogFormat=W3C" +
21 "\tAgentLogProtocol=W3C" +
22 "\tdate=" + $date +
23 "\ttime=" + $time +
24 "\ts-ip=" + ${s-ip} +
25 "\tcs-method=" + ${cs-method} +
26 "\tcs-uri-stem=" + ${cs-uri-stem} +
27 "\tcs-uri-query=" + ${cs-uri-query} +
28 "\ts-port=" + ${s-port} +
29 [...]

Output Sample
<13>Jul 24 09:21:55 10.80.1.49 AgentDevice=MSIIS ⇥ AgentLogFile=u_ex190724.log ⇥
AgentLogFormat=W3C ⇥ AgentLogProtocol=W3C ⇥ date=2019-07-24 ⇥ time=09:21:55 ⇥ s-ip=127.0.0.1
⇥ cs-method=POST ⇥ cs-uri-stem=/OWA/auth.owa ⇥ cs-uri-
query=&CorrelationID=<empty>;&cafeReqId=4b9353b7-e17b-4bc5-9e54-bc6b4733d6dd;&encoding=; ⇥ s-
port=443 ⇥ cs-username=HealthMailboxa733ff32a90d44bb970f7a147fb3f328@nxlog.org ⇥ c-
ip=127.0.0.1 ⇥ cs(User-Agent)=AMProbe/Local/ClientAccess ⇥ cs(Referer)=- ⇥ sc-status=302 ⇥ sc-
substatus=0 ⇥ sc-win32-status=0 ⇥ time-taken=10171↵

66.3.5. Microsoft SQL


Microsoft SQL logs can be collected using the xm_charconv and im_file modules.

If QRadar does not auto-discover the log source, add one manually. The Log Source Type should be set to
Microsoft SQL Server and the Protocol Configuration should be set to Syslog—see Adding a QRadar Log
Source.

For configuration information, see the Microsoft SQL Server section in the QRadar DSM Guide.

Example 291. Sending Microsoft SQL Logs to QRadar

This example reads and parses events from the SQL Server log file, then converts the events to a tab-
delimited format for QRadar.

nxlog.conf (truncated)
 1 <Extension _syslog>
 2 Module xm_syslog
 3 </Extension>
 4
 5 <Extension charconv>
 6 Module xm_charconv
 7 LineReader UTF-16LE
 8 </Extension>
 9
10 define ERRORLOG_EVENT /(?x)(?<Date>\d+-\d+-\d+)\s \
11 (?<Time>\d+:\d+:\d+.\d+)\s \
12 (?<Source>\S+)\s+ \
13 (?<Payload>.+)$/s
14
15 <Input sql>
16 Module im_file
17 File 'C:\Program Files\Microsoft SQL Server\' + \
18 'MSSQL14.MSSQLSERVER\MSSQL\Log\ERRORLOG'
19 InputType charconv
20 <Exec>
21 # Attempt to match regular expression
22 if $raw_event =~ %ERRORLOG_EVENT%
23 {
24 # Check if previous lines were saved
25 if defined(get_var('saved'))
26 {
27 $tmp = $raw_event;
28 $raw_event = get_var('saved');
29 [...]

Output Sample
<13>Aug 21 22:55:36 10.80.1.49 AgentDevice=MSSQL ⇥ AgentLogFile=ERRORLOG ⇥ Date=2019-08-21 ⇥
Time=22:55:36.23 ⇥ Source=spid16s ⇥ Message=The Service Broker endpoint is in disabled or
stopped state.↵

66.3.6. Windows EventLog


To send Windows EventLog data to QRadar, use the im_msvistalog module and convert the events to a tab-
delimited key-value pair format supported by the corresponding QRadar DSM.

NOTE This format is recommended instead of Snare or Log Event Extended Format (LEEF) in order to
take full advantage of the parsing provided by the QRadar DSM. Otherwise, additional parsing
and/or mappings would be required to translate Windows EventLog fields to QRadar fields.

If QRadar does not auto-discover the log source, add one manually. The Log Source Type should be set to
Microsoft Windows Security Event Log and the Protocol Configuration should be set to Syslog—see Adding a
QRadar Log Source.

For more information, see the Windows Event Log chapter and the Microsoft Windows Security Event Log section
in the QRadar DSM Guide.

Example 292. Sending Windows EventLog to QRadar

This configuration will collect from the Windows EventLog using im_msvistalog, convert the $Message field
to a specific tab-delimited format, and add a BSD Syslog header with xm_syslog.

NOTE This example does not filter events, but forwards all events to QRadar. Only a subset of
those events will be recognized and parsed by the QRadar DSM. For more information about
using EventLog queries to limit collected events, see Windows Event Log.

nxlog.conf (truncated)
 1 <Extension syslog>
 2 Module xm_syslog
 3 </Extension>
 4
 5 <Input eventlog>
 6 Module im_msvistalog
 7 <Exec>
 8 if $Category == undef $Category = 0;
 9 $EventTimeStr = strftime($EventTime, "YYYY-MM-DDThh:mm:ss.sUTC");
10 if $EventType == 'CRITICAL'
11 {
12 $EventTypeNum = 1;
13 $EventTypeStr = "Critical";
14 }
15 else if $EventType == 'ERROR'
16 {
17 $EventTypeNum = 2;
18 $EventTypeStr = "Error";
19 }
20 else if $EventType == 'INFO'
21 {
22 $EventTypeNum = 4;
23 $EventTypeStr = "Informational";
24 }
25 else if $EventType == 'WARNING'
26 {
27 $EventTypeNum = 3;
28 $EventTypeStr = "Warning";
29 [...]

Output Sample
<13>Jul 15 20:24:43 10.80.1.49 AgentDevice=WindowsLog ⇥ AgentLogFile=System ⇥ Source=Service
Control Manager ⇥ Computer=QRW.nxlog.org ⇥ OriginatingComputer=10.80.1.49 ⇥ User= ⇥ Domain= ⇥
EventIDCode=7036 ⇥ EventType=4 ⇥ EventCategory=0 ⇥ RecordNumber=9830 ⇥ TimeGenerated=2019-07-
15T20:24:43.296533Z ⇥ TimeWritten=2019-07-15T20:24:43.296533Z ⇥ Level=Informational ⇥
Keywords=9259400833873739776 ⇥ Task=None ⇥ Opcode=Info ⇥ Message=The WinCollect service
entered the stopped state.↵

66.4. Forwarding Logs


Use an output instance to forward the processed logs to QRadar SIEM. The configurations shown here can be
used with any of the above input instances. Because all event formatting is done in the input instances above,
the output instances here do not require any Exec directives (the $raw_event field is passed without any further
modification).

Example 293. Forwarding Logs via TCP

This om_tcp instance sends logs to QRadar via TCP. In this example, events are sent from the Microsoft IIS
and Windows EventLog sources.

nxlog.conf
1 <Output qradar>
2 Module om_tcp
3 Host 10.0.0.2
4 Port 514
5 </Output>
6
7 <Route r>
8 Path iis, eventlog => qradar
9 </Route>

Forwarding logs with TLS requires adding a TLS Syslog listener, as described in Adding a TLS Syslog Log Source
above. The root certificate authority (CA) certificate, which is used to verify the authenticity of the QRadar
receiver’s certificate, should be provided to om_ssl with either CADir or CAFile.

Example 294. Forwarding Logs With TLS

In this example, the om_ssl module is used to send logs to QRadar securely, with TLS encryption.

nxlog.conf
1 <Output qradar>
2 Module om_ssl
3 Host 10.0.0.2
4 Port 6514
5 CAFile C:\Program Files\cert\rootCA.pem
6 </Output>

Chapter 67. Linux Audit System
The Linux Audit system provides fine-grained logging of security-related events. The system administrator
configures rules to specify what events are logged. For example, rules may be configured for logging of:

• access of a specific file or directory,


• specific system calls,
• commands executed by a user,
• authentication events, or
• network access.

The Audit system architecture includes:

• a kernel component which generates events,


• the auditd daemon which collects events from the kernel component and writes them to a log file,

• the audisp dispatcher daemon which relays events to other applications for additional processing, and

• the auditctl control utility which provides configuration of the kernel component.

These tools are provided for reading the Audit log files:

• aulast prints out a listing of the last logged in users,

• aulastlog prints out the last login for all users of a machine,

• aureport produces summary reports of the Audit logs,

• ausearch searches Audit logs for events fitting given criteria, and

• auvirt prints a list of virtual machine sessions found in the Audit logs.

For more information about the Audit system, see the System Auditing chapter of the Red Hat Enterprise Linux
Security Guide, the installed manual pages, and the Linux Audit Documentation Project.

67.1. Audit Rules


The Audit system generates events according to Audit rules. These rules can be set dynamically with auditctl or
stored persistently in /etc/audit/rules.d. Persistent rule files in /etc/audit/rules.d are automatically
compiled to /etc/audit/audit.rules when auditd is initialized.

There are three types of rules: a control rule modifies Audit’s behavior, a file system rule watches a file or
directory, and a system call rule generates a log event for a particular system call. For more details about Audit
rules, see the Defining Audit Rules page of the Red Hat Enterprise Linux Security Guide.

Common control rules include the following.

• -b backlog: Set the maximum number of audit buffers. This should be higher for busier systems or for
heavy log volumes.
• -D: Delete all rules and watches. Normally used as the first rule.

• -e [0..2]: Temporarily disable auditing with 0, enable it with 1, or lock the configuration until the next
reboot with 2 (used as the last rule).

Example 295. Control Rules

This is a set of basic rules, some form of which is likely to be found in any ruleset.

# Delete all rules (normally used first)
-D

# Increase buffers from default 64
-b 320

# Lock Audit rules until reboot (used last)
-e 2

To create a file system rule, use -w path -p permissions -k key_name.

• The path argument defines the file or directory to be watched.

• The permissions argument sets the kinds of accesses that are logged, and is a string containing one or
more of r (read access), w (write access), x (execute access), and a (attribute change).
• The key_name argument is an optional tag for identifying the rule.

Example 296. A File System Rule

This rule watches /etc/passwd for modifications and tags these events with passwd.

-w /etc/passwd -p wa -k passwd

To create a system call rule, use -a action,filter -S system_call -F field=value -k key_name.

• The action argument can be either always (to generate a log entry) or never (to suppress a log entry).
Generally, use never rules before always rules, because rules are matched from first to last.
• The filter argument is one of task (when a task is created), exit (when a system call exits), user (when a
call originates from user space) or exclude (to filter events).
• The system_call argument specifies the system call by name, and can be repeated by using multiple -S
flags.
• The field=value pair can be used to specify additional match options, and can also be used more than
once.
• The key_name argument is an optional tag for identifying the rule.

Example 297. A System Call Rule

This rule generates a log entry when the system time is changed.

-a always,exit -F arch=b64 -S adjtimex -S settimeofday -k system_time

System call rules can also monitor activities around files, such as:

• creation,
• modification,
• deletion,
• access, permission, and owner modifications.

Example 298. Deletion rule

This rule generates a log entry when a file is deleted or renamed via the unlink or rename system calls:

-a always,exit -F arch=b64 -S unlink,unlinkat,rename,renameat -F success=1 -F auid>=1000 -F auid!=unset -F key=successful-delete

External connections can be monitored with the example below.

Example 299. Networking rule

This rule checks whether an incoming or outgoing external network connection has been established.

-a always,exit -F arch=b64 -S accept,connect -F key=external-access

The different types of rules are combined to form a ruleset.

Example 300. An Audit Rules File

This is a simple Audit ruleset based on the above examples.

/etc/audit/rules.d/audit.rules
# Delete all rules
-D

# Increase buffers from default 64
-b 320

# Watch /etc/passwd for modifications and tag with 'passwd'
-w /etc/passwd -p wa -k passwd

# Generate a log entry when the system time is changed
-a always,exit -F arch=b64 -S adjtimex -S settimeofday -k system_time

# Lock Audit rules until reboot
-e 2

For more examples of rules, see The Linux Audit Project and auditd-attack repositories on GitHub.

67.2. Logging Audit Messages to Local Syslog


The Audit system can also be customized to forward log messages to Syslog.

Example 301. Logging Audit Messages to Local Syslog

The Audit system’s syslog plugin must be enabled to forward logs to the /dev/log socket. To do this, edit
the /etc/audisp/plugins.d/syslog.conf file to match the sample below.

1 active = yes
2 direction = out
3 path = builtin_syslog
4 type = builtin
5 args = LOG_INFO
6 format = string

A sample rule can be created in the /etc/audit/rules.d/audit.rules file to monitor modifications of
the /tmp/audit_syslog file.

1 -w /tmp/audit_syslog -p wa

NXLog needs to be configured to accept Syslog messages from the /dev/log socket.

NOTE By default, NXLog cannot bind to the /dev/log socket due to the limitations of the nxlog
user. See the Running Under a Non-Root User on Linux section for ways to handle this.

The configuration below accepts logs from the socket using the im_uds module and the Exec block selects
only messages which contain the audit_syslog string. These messages are parsed with the
parse_syslog_bsd() procedure of the xm_syslog module and converted to JSON using the to_json() procedure
of the xm_json module.

 1 <Extension _syslog>
 2 Module xm_syslog
 3 </Extension>
 4
 5 <Extension json>
 6 Module xm_json
 7 </Extension>
 8
 9 <Input from_uds>
10 Module im_uds
11 UDS /dev/log
12 <Exec>
13 if not ($raw_event =~/.+audit_syslog.+/) drop();
14 parse_syslog_bsd();
15 to_json();
16 </Exec>
17 </Input>

After the configuration is complete, restart auditd and NXLog:

# systemctl restart auditd
# systemctl restart nxlog

Below is an output sample of a JSON-formatted log entry which can be obtained using this configuration.

{
  "EventReceivedTime": "2020-04-28T21:09:13.959876+00:00",
  "SourceModuleName": "from_uds",
  "SourceModuleType": "im_uds",
  "SyslogFacilityValue": 1,
  "SyslogFacility": "USER",
  "SyslogSeverityValue": 6,
  "SyslogSeverity": "INFO",
  "SeverityValue": 2,
  "Severity": "INFO",
  "Hostname": "administrator",
  "EventTime": "2020-04-28T21:09:13.000000+00:00",
  "SourceName": "audispd",
  "Message": "node=administrator type=SYSCALL msg=audit(1588108153.953:1246): arch=c000003e
syscall=257 success=yes exit=3 a0=ffffff9c a1=55a4e3921110 a2=41 a3=1a4 items=2 ppid=2417
pid=3374 auid=1000 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=pts0 ses=3
comm=\"vim\" exe=\"/usr/bin/vim.basic\" key=\"audit_syslog\""
}

67.3. Using im_linuxaudit
NXLog Enterprise Edition includes an im_linuxaudit module for directly accessing the kernel component of the
Audit system. With this module, NXLog can set Audit rules and collect logs directly, without requiring auditd
or any other userspace software.

WARNING If an im_linuxaudit module instance is suspended and the Audit backlog limit is exceeded,
all processes that generate Audit messages will be blocked. For this reason, it is
recommended in most cases that FlowControl be disabled for im_linuxaudit module
instances. With flow control disabled, a blocked route will cause Audit messages to be
discarded. To reduce the risk of log data being discarded, make sure the route’s processing
is fast enough to handle the Audit messages by adjusting the LogQueueSize directives of the
following modules and/or adding a pm_buffer instance.
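For example, a memory-based pm_buffer instance could sit between an im_linuxaudit input named audit and an output; the out instance name and buffer sizes here are illustrative assumptions:

```
<Processor buffer>
    Module     pm_buffer
    # Buffer up to 100 MB of Audit messages in memory (sizes in KB)
    Type       Mem
    MaxSize    102400
    WarnLimit  51200
</Processor>

<Route audit_route>
    Path audit => buffer => out
</Route>
```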

Example 302. Auditing With im_linuxaudit

This configuration uses a <Rules> block to specify a rule set.

nxlog.conf
 1 <Input audit>
 2 Module im_linuxaudit
 3 FlowControl FALSE
 4 <Rules>
 5 # Delete all rules (This rule has no effect; it is performed
 6 # automatically by im_linuxaudit)
 7 -D
 8
 9 # Increase buffers from default 64
10 -b 320
11
12 # Watch /etc/passwd for modifications and tag with 'passwd'
13 -w /etc/passwd -p wa -k passwd
14
15 # Generate a log entry when the system time is changed
16 -a always,exit -F arch=b64 -S adjtimex -S settimeofday -k system_time
17
18 # Lock Audit rules until reboot
19 -e 2
20 </Rules>
21 </Input>

Example 303. Using a Separate Rules File With im_linuxaudit

This configuration is the same as the previous, but it uses a separate rules file. The referenced
audit.rules file is identical to the one shown in the above example, but it is stored in a different location
(because auditd is not required).

nxlog.conf
1 <Input audit>
2 Module im_linuxaudit
3 FlowControl FALSE
4 LoadRule '/opt/nxlog/etc/audit.rules'
5 </Input>

67.4. Using auditd Userspace
There are also several ways to collect Audit logs via the regular Audit userspace tools, including from auditd logs
and by network via audispd.

67.4.1. Setting up auditd


First, the Audit userspace components must be installed and configured.

1. Install the Audit package. Include the audispd-plugins package if required for use with audispd (see the
Collecting via Network With audispd section below).
◦ For RedHat/CentOS:

# yum install audit

◦ For Debian/Ubuntu:

# apt-get install auditd

2. Configure Auditd by editing the /etc/audit/auditd.conf configuration file, which contains parameters for
auditd. See the Configuring the Audit Service page in the Red Hat Enterprise Linux Security Guide and the
auditd.conf(5) man page.
3. After modifying the configuration or rules, enable or restart the auditd service to reload the configuration
and update the rules (if they are not locked).
◦ For RedHat/CentOS:

# service auditd start
# systemctl enable auditd

◦ For Debian/Ubuntu:

# systemctl restart auditd

67.4.2. Reading auditd Logs


By default, auditd logs events to /var/log/audit/audit.log with root ownership. NXLog can be configured to
read logs from that file.

1. NXLog cannot read logs owned as root when running as the nxlog user. Either omit the User option in
nxlog.conf to run NXLog as root, or adjust the permissions as follows (see Reading Rsyslog Log Files for
more information about /var/log permissions):
a. use the log_group option in /etc/audit/auditd.conf to set the group ownership for Audit log files,

b. change the current ownership of the log directory and files with chgrp -R adm /var/log/audit, and

c. add the nxlog user to the adm group with usermod -a -G adm nxlog.

2. Configure NXLog (see the example below) and restart.

Example 304. Reading From audit.log

In the Input block of this configuration, Audit logs are read from file, the key-value pairs are parsed with
xm_kvp, and then some additional fields are added. In the Output block, the messages are converted to
JSON format, BSD Syslog headers are added, and the logs are sent to another host via TCP.

nxlog.conf
 1 <Extension _json>
 2 Module xm_json
 3 </Extension>
 4
 5 <Extension _syslog>
 6 Module xm_syslog
 7 </Extension>
 8
 9 <Extension audit_parser>
10 Module xm_kvp
11 KVPDelimiter ' '
12 KVDelimiter =
13 EscapeChar '\'
14 </Extension>
15
16 <Input in>
17 Module im_file
18 File "/var/log/audit/audit.log"
19 <Exec>
20 audit_parser->parse_kvp();
21 $Hostname = hostname();
22 $FQDN = hostname_fqdn();
23 $Tag = "audit";
24 $SourceName = "selinux";
25 </Exec>
26 </Input>
27
28 <Output out>
29 Module om_tcp
30 Host 192.168.1.1
31 Port 1514
32 Exec to_json(); to_syslog_bsd();
33 </Output>

67.4.3. Collecting via Network With audispd


The Audit dispatcher (audispd) can be configured to forward log events to a remote server using the
audisp-remote plugin included in the audispd-plugins package.

1. Configure the audisp-remote plugin. Use appropriate values for the remote_server and format directives.

/etc/audisp/audisp-remote.conf
remote_server = 127.0.0.1
port = 60
transport = tcp
queue_file = /var/spool/audit/remote.log
mode = immediate
queue_depth = 2048
format = ascii
network_retry_time = 1
max_tries_per_record = 3
max_time_per_record = 5
heartbeat_timeout = 0

network_failure_action = stop
disk_low_action = ignore
disk_full_action = ignore
disk_error_action = syslog
remote_ending_action = reconnect
generic_error_action = syslog
generic_warning_action = syslog
overflow_action = syslog

2. Activate the plugin by editing /etc/audisp/plugins.d/au-remote.conf and setting active = yes.

3. Optionally, auditd may be configured to forward logs only (and not write to log files). Edit
/etc/audit/auditd.conf and set write_logs = no (this option replaces log_format = NOLOG).

4. Configure NXLog (see the example below), then restart NXLog.


5. Restart the auditd service.

Example 305. Collecting via Network

With the following configuration, NXLog will accept Audit logs via TCP from audispd on the local host, parse
the key-value pairs with xm_kvp, and add some additional fields to the event record.

nxlog.conf
 1 <Extension audit_parser>
 2 Module xm_kvp
 3 KVPDelimiter ' '
 4 KVDelimiter =
 5 EscapeChar '\'
 6 </Extension>
 7
 8 <Input in>
 9 Module im_tcp
10 Host 127.0.0.1
11 Port 60
12 <Exec>
13 audit_parser->parse_kvp();
14 $Hostname = hostname();
15 $FQDN = hostname_fqdn();
16 $Tag = "audit";
17 $SourceName = "auditd";
18 </Exec>
19 </Input>

Chapter 68. Linux System Logs
NXLog can be used to collect and process logs from a Linux system.

Linux distributions normally use a "Syslog" system logging agent to retrieve events from the kernel (/proc/kmsg)
and accept log messages from user-space applications (/dev/log). Originally, this logger was syslogd; later
syslog‑ng added additional features, and finally Rsyslog is the logger in common use today. For more
information about Syslog, see Syslog.

Many modern Linux distributions also use the Systemd init system, which includes a journal component for
handling log messages. All messages generated by Systemd-controlled processes are sent to the journal. The
journal also handles messages written to /dev/log. The journal stores logs in a binary format, either in memory
or on disk; the logs can be accessed with the journalctl tool. Systemd can also be configured to forward logs via
a socket to a local logger like Rsyslog or NXLog.

There are several ways that NXLog can be configured to collect Linux logs. See Replacing Rsyslog for details
about replacing Rsyslog altogether, handling all logs with NXLog instead. See Forwarding Messages via Socket for
a simple way to forward all logs to NXLog without disabling Rsyslog (this is the least intrusive option). Finally, it is
also possible to read the log files written by Rsyslog; see Reading Rsyslog Log Files.

68.1. Replacing Rsyslog


Follow these steps to disable Rsyslog and configure NXLog to collect logs in its place.

1. Configure NXLog to collect events from the kernel, the Systemd journal socket, and the /dev/log socket. See
the example below.
2. Configure Systemd to forward log messages to a socket by enabling the ForwardToSyslog option.

/etc/systemd/journald.conf
[Journal]
ForwardToSyslog=yes

3. Stop and disable Rsyslog by running systemctl stop rsyslog and systemctl disable rsyslog as root.

4. Restart NXLog.
5. Reload the journald configuration by running systemctl force-reload systemd-journald.

Example 306. Replacing Rsyslog With NXLog

This example configures NXLog to read kernel events with the im_kernel module, read daemon messages
from the Systemd journal socket with the im_uds module, and accept other user-space messages from the
/dev/log socket with im_uds. In the om_tcp module instance, all of the logs are converted to JSON format,
BSD Syslog headers are added, and the logs are forwarded to another host via TCP.

nxlog.conf (truncated)
 1 <Extension _json>
 2 Module xm_json
 3 </Extension>
 4
 5 <Extension _syslog>
 6 Module xm_syslog
 7 </Extension>
 8
 9 <Input kernel>
10 Module im_kernel
11 Exec parse_syslog_bsd();
12 </Input>
13
14 <Input journal>
15 Module im_uds
16 UDS /run/systemd/journal/syslog
17 Exec parse_syslog_bsd();
18 </Input>
19
20 <Input devlog>
21 Module im_uds
22 UDS /dev/log
23 FlowControl FALSE
24 Exec $raw_event =~ s/\s+$//; parse_syslog_bsd();
25 </Input>
26
27 <Output out>
28 Module om_tcp
29 [...]

NOTE  Some local Syslog sources will add a trailing newline (\n) to each log message. The
      $raw_event =~ s/\s+$//; statement in the devlog input section above will automatically
      remove this and any other trailing whitespace before processing the message.

68.2. Forwarding Messages via Socket


By adding a short configuration file, Rsyslog can be configured to forward messages to NXLog via a Unix domain
socket. This is the least intrusive of the options documented here.

NOTE  By default, SELinux blocks communication via Unix domain sockets on CentOS 7. To enable
      socket communication, run the following commands:

      audit2allow -i /var/log/messages -M nxlog-fix
      semodule -i nxlog-fix.pp

Follow these steps to configure Rsyslog to forward log messages to NXLog.

1. Configure NXLog to accept log messages from Rsyslog via a socket. See the example below.
2. Configure Rsyslog to write to the socket by adding the following configuration file. See the Rsyslog
documentation for more information about configuring what is forwarded to NXLog.

/etc/rsyslog.d/nxlog.conf
# Load omuxsock module
$ModLoad omuxsock

# Set socket path


$OMUxSockSocket /opt/nxlog/var/spool/nxlog/rsyslog_sock

# Configure template to preserve PRI part (must be on a single line)


$template SyslogWithPRI,"<%PRI%>%timegenerated% %HOSTNAME% %syslogtag%%msg:::drop-last-lf%"

# Forward all log messages


*.* :omuxsock:;SyslogWithPRI

# Only forward log messages of "notice" priority and higher


#*.notice :omuxsock:;SyslogWithPRI

3. Restart NXLog and Rsyslog in that order to create and use the socket (NXLog must create the socket before
Rsyslog will write to it). Run systemctl restart nxlog and systemctl restart rsyslog.

Example 307. Collecting Logs via Socket From Rsyslog

With this example configuration, NXLog will create the socket and accept log messages from Rsyslog
through the socket. The messages will then be parsed as Syslog, converted to JSON format, prefixed with a
BSD Syslog header, and forwarded to another host via TLS.

nxlog.conf
 1 <Extension _json>
 2 Module xm_json
 3 </Extension>
 4
 5 <Extension _syslog>
 6 Module xm_syslog
 7 </Extension>
 8
 9 <Input in>
10 Module im_uds
11 UDS /opt/nxlog/var/spool/nxlog/rsyslog_sock
12 Exec parse_syslog();
13 </Input>
14
15 <Output out>
16 Module om_ssl
17 Host 192.168.1.1
18 Port 6514
19 CAFile %CERTDIR%/ca.pem
20 CertFile %CERTDIR%/client-cert.pem
21 CertKeyFile %CERTDIR%/client-key.pem
22 Exec $Message = to_json(); to_syslog_bsd();
23 </Output>

68.3. Reading Rsyslog Log Files
NXLog can be configured to read the log files written by Rsyslog, such as /var/log/messages. This is a slightly
more intrusive option than the steps given in Forwarding Messages via Socket.

NOTE  NXLog will not have access to the facility and severity codes because Rsyslog, by default,
      follows the BSD Syslog convention of not writing the PRI code to the /var/log/messages file.

By default, NXLog runs as user nxlog and does not have permission to read files in /var/log. The simplest
solution for this is to run NXLog as root by omitting the User option, but it is more secure to provide the
necessary permissions explicitly.

1. Check the user or group ownership of the files in /var/log and configure if necessary. Some distributions
use a group for the log files by default. On Debian/Ubuntu, for example, Rsyslog is configured to use the adm
group. Otherwise, modify the Rsyslog configuration to use different ownership for log files as shown below.

/etc/rsyslog.conf or /etc/rsyslog.d/nxlog.conf
$FileOwner root
$FileCreateMode 0640
$DirCreateMode 0755
$Umask 0022

# Default on Debian/Ubuntu
$FileGroup adm

# Or use the "nxlog" group directly


#$FileGroup nxlog

2. Run NXLog under a user or group that has permission to read the log files. Either use a user or group directly
with the User or Group option in nxlog.conf, or add the nxlog user to a group that has permission. For
example, on Debian/Ubuntu add the nxlog user to the adm group by running usermod -a -G adm nxlog.
3. If necessary, fix permissions for any files NXLog will be reading from that already exist (use the correct group
for your system).

# chgrp adm /var/log/messages


# chmod g+r /var/log/messages

4. Configure NXLog to read from the required file(s) (see the example below). Then restart NXLog.
5. If the Rsyslog configuration has been modified, restart Rsyslog (systemctl restart rsyslog).

Example 308. Reading Rsyslog Log Files

With the following configuration, NXLog will read logs from /var/log/messages, parse the events as
Syslog, convert them to JSON, and forward the plain JSON to another host via TCP.

nxlog.conf
 1 <Extension _json>
 2 Module xm_json
 3 </Extension>
 4
 5 <Extension _syslog>
 6 Module xm_syslog
 7 </Extension>
 8
 9 <Input in>
10 Module im_file
11 File '/var/log/messages'
12 Exec parse_syslog();
13 </Input>
14
15 <Output out>
16 Module om_tcp
17 Host 192.168.1.1
18 Port 1514
19 Exec $raw_event = to_json();
20 </Output>

Chapter 69. Log Event Extended Format (LEEF)
NXLog Enterprise Edition can be configured to collect or forward logs in the LEEF format.

The LEEF log format is used by IBM Security QRadar products and supports Syslog as a transport. It describes an
event using key-value pairs, and provides a list of predefined event attributes. Additional attributes can be used
for specific applications.

Basic LEEF Syntax


SYSLOG_HEADER LEEF_HEADER|EVENT_ATTRIBUTES↵

The LEEF_HEADER part contains the following pipe-delimited fields.

• LEEF version
• Vendor
• Product name
• Product version
• Event ID
• Optional delimiter character, as the character or its hexadecimal value prefixed by 0x or x (LEEF version 2.0)

The EVENT_ATTRIBUTES part contains a list of key-value pairs separated by a tab or the delimiter specified in the
LEEF header.

Full LEEF Syntax


Oct 11 11:27:23 myserver LEEF:Version|Vendor|Product|Version|EventID|Delimiter|src=192.168.1.1 ⇥
dst=10.0.0.1↵

69.1. Collecting LEEF Logs


NXLog Enterprise Edition can parse LEEF logs with the xm_leef module’s parse_leef() procedure.

Example 309. Accepting LEEF Logs via TCP

With the following configuration, NXLog will accept LEEF logs via TCP, convert them to JSON, and output the
result to file.

nxlog.conf
 1 <Extension _json>
 2 Module xm_json
 3 </Extension>
 4
 5 <Extension _leef>
 6 Module xm_leef
 7 </Extension>
 8
 9 <Input in>
10 Module im_tcp
11 Host 0.0.0.0
12 Port 1514
13 Exec parse_leef();
14 </Input>
15
16 <Output out>
17 Module om_file
18 File '/var/log/json'
19 Exec to_json();
20 </Output>

Input Sample
Oct 11 11:27:23 myserver LEEF:2.0|Microsoft|MSExchange|2013 SP1|15345|src=10.50.1.1 ⇥
dst=2.10.20.20 ⇥ spt=1200↵

Output Sample
{
  "EventReceivedTime": "2016-10-11 11:27:24",
  "SourceModuleName": "in",
  "SourceModuleType": "im_tcp",
  "Hostname": "myserver",
  "LEEFVersion": "LEEF:2.0",
  "Vendor": "Microsoft",
  "SourceName": "MSExchange",
  "Version": "2013 SP1",
  "EventID": "15345"
}

69.2. Generating LEEF Logs


NXLog Enterprise Edition can also generate LEEF logs, using the to_leef() procedure provided by the xm_leef
extension module.

Example 310. Sending LEEF Logs via TCP

With this configuration, NXLog will parse the input JSON format from file and forward it as LEEF via TCP.

nxlog.conf
 1 <Extension _json>
 2 Module xm_json
 3 </Extension>
 4
 5 <Extension _leef>
 6 Module xm_leef
 7 </Extension>
 8
 9 <Input in>
10 Module im_file
11 File '/var/log/json'
12 Exec parse_json();
13 </Input>
14
15 <Output out>
16 Module om_tcp
17 Host 10.12.0.1
18 Port 514
19 Exec to_leef();
20 </Output>

Input Sample
{
  "EventTime": "2016-09-13 11:23:11",
  "Hostname": "myserver",
  "Purpose": "test",
  "Message": "This is a test log message."
}

Output Sample
<13>Sep 13 11:23:11 myserver LEEF:1.0|NXLog|in|3.0.1775|unknown|EventReceivedTime=2016-09-13
11:23:12 ⇥ SourceModuleName=in ⇥ SourceModuleType=im_file ⇥ devTime=2016-09-13 11:23:11 ⇥
identHostName=myserver ⇥ Purpose=test ⇥ Message=This is a test log message. ⇥
devTimeFormat=yyyy-MM-dd HH:mm:ss↵

Chapter 70. McAfee Enterprise Security Manager
(ESM)
McAfee Enterprise Security Manager (ESM) is a security information and event management (SIEM) solution that
can collect logs from various sources and correlate events for investigation and incident response. For more
information, see McAfee Enterprise Security Manager on McAfee.com.

NXLog can be configured to collect events and forward them to ESM. This chapter provides information about
setting up NXLog to forward events from several types of log sources.

NOTE The instructions and examples in this chapter were tested with ESM 11.2.0.

70.1. Configuring McAfee ESM


The following steps may be required to prepare ESM for receiving events from NXLog.

70.1.1. Set up TLS Transport


NXLog can send logs to ESM securely with TLS. This can be set up as follows. For more information about
generating certificate and key files, see OpenSSL Certificate Creation.

1. Create or locate a certificate authority (CA) certificate and private key. The CA certificate (for example,
rootCA.pem) will be used by the NXLog agent to authenticate the ESM receiver in Forwarding Logs below.

2. Create a certificate and private key for ESM (for example, server.crt and server.key).

3. Upload the server.crt and server.key files to ESM (for more information, see Install SSL certificate on
McAfee.com):
a. On the McAfee web interface, open the menu in the upper left corner, click on System Properties, and
choose ESM Management in the left panel.
b. Open the Key Management tab and click Certificate.
c. Select Upload Certificate, click Upload, acknowledge the notification, and upload the certificate files.

4. When adding or editing a log source, check Require syslog TLS (see Adding a Log Source below).
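Steps 1 and 2 can be performed with OpenSSL, for example as follows. This is only a sketch: the subject names and validity period are placeholders, and a production setup should follow the full procedure in OpenSSL Certificate Creation.

```shell
# Sketch: create a CA (rootCA.pem) and an ESM server certificate
# (server.crt, server.key). Subject names below are placeholders.
openssl req -x509 -newkey rsa:2048 -nodes -days 365 \
    -subj "/CN=Example CA" -keyout rootCA.key -out rootCA.pem

# Generate the ESM server key and a certificate signing request
openssl req -newkey rsa:2048 -nodes \
    -subj "/CN=esm.example.com" -keyout server.key -out server.csr

# Sign the server certificate with the CA
openssl x509 -req -in server.csr -CA rootCA.pem -CAkey rootCA.key \
    -CAcreateserial -days 365 -out server.crt
```

The resulting rootCA.pem is referenced by the NXLog om_ssl CAFile directive in Forwarding Logs, while server.crt and server.key are uploaded to ESM.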

70.1.2. Adding a Log Source


Each log source type must have a corresponding data source (or parent source) configured in the ESM local
receiver.

1. On the McAfee web interface, open the menu in the upper left corner and click on More Settings.
2. Select the Local Receiver-ELM in the left panel and click on Add Data Source.

3. Choose a Data Source Vendor, Data Source Model, Data Format, and Data Retrieval. Consult the
sections below for the correct values to use for each log source type.

4. Enable Parsing, and ELM storage if required.


5. Enter appropriate Name, IP Address, and Host Name values.
6. For Syslog Relay, select None.
7. Enter a Mask to use an IP address range, if required.
8. To require TLS transport, check Require syslog TLS (see Set up TLS Transport).
9. For Port, use the default of 514 or click Interface to change the available Syslog ports.
10. For Support Generic Syslogs, select Log "unknown syslog" event.
11. Click OK to save the changes. When the Apply Data Source Settings dialog appears, click Yes. Then click OK
on the Rollout window to deploy the changes.

70.2. Sending Specific Log Types for ESM to Parse
To take full advantage of ESM’s log parsing and rules, NXLog can be configured to send log types in a format
expected by ESM. A few common log types are shown here.

70.2.1. DHCP Server


In order to send DHCP Server audit log events to ESM, set up DHCP Audit Logging and use the NXLog
configuration below. When adding an ESM data source, use the following parsing configuration (see Adding a Log
Source):

Field Value
Data Source Vendor Microsoft

Data Source Model Windows DHCP

Data Format Default

Data Retrieval SYSLOG (Default)

For more information, see DHCP Server Audit Logging and the Microsoft DHCP Server page in the McAfee ESM
Data Source Configuration Reference Guide.

Example 311. Sending Windows DHCP Events to McAfee ESM

In this example, NXLog is configured to read logs from the DhcpSrvLog and DhcpV6SrvLog log files. NXLog
then adds a Syslog header with xm_syslog to prepare the events for forwarding to ESM.

Input Sample
64,08/31/19,14:38:17,No static IP address bound to DHCP server,,,,,0,6,,,,,,,,,0↵

nxlog.conf
 1 <Extension _syslog>
 2 Module xm_syslog
 3 </Extension>
 4
 5 <Input dhcp>
 6 Module im_file
 7 File 'C:\Windows\System32\dhcp\DhcpSrvLog-*.log'
 8 File 'C:\Windows\System32\dhcp\DhcpV6SrvLog-*.log'
 9 <Exec>
10 # Discard header lines
11 if $raw_event !~ /^\d+,/ drop();
12
13 # Add Syslog header
14 $Message = $raw_event;
15 to_syslog_bsd();
16 </Exec>
17 </Input>

Output Sample
<13>Aug 31 14:38:17 Host 64,08/31/19,14:38:17,No static IP address bound to DHCP
server,,,,,0,6,,,,,,,,,0↵

70.2.2. DNS Debug Log


In order to send DNS debug log events to ESM, enable debug logging and use the NXLog configuration below.

When adding an ESM data source, use the following parsing configuration (see Adding a Log Source):

Field Value
Data Source Vendor Microsoft

Data Source Model Windows DNS

Data Format Default

Data Retrieval SYSLOG (Default)

For more information, see Windows DNS Server and the Microsoft DNS Debug page in the McAfee ESM Data
Source Configuration Reference Guide.

Example 312. Sending DNS Debug Logs to McAfee ESM

The following configuration uses im_file to read from the Windows DNS debug log. A Syslog header is
added with the xm_syslog to_syslog_bsd() procedure.

Input Sample
8/31/2019 15:17:04 PM 2AE8 PACKET 00000005D03B4CE0 UDP Snd 192.168.1.42 fdd7 R Q [8081 DR
NOERROR] A (9)imap-mail(7)outlook(3)com(0)↵

nxlog.conf
 1 <Extension _syslog>
 2 Module xm_syslog
 3 </Extension>
 4
 5 <Input in>
 6 Module im_file
 7 File 'C:\logs\dns.log'
 8 <Exec>
 9 # Discard header lines
10 if $raw_event !~ /^\d+\/\d+\/\d+/ drop();
11
12 # Add Syslog header
13 $Message = $raw_event;
14 to_syslog_bsd();
15 </Exec>
16 </Input>

Output Sample
<13>Aug 31 15:17:04 Host 8/31/2019 15:17:04 PM 2AE8 PACKET 00000005D03B4CE0 UDP Snd
192.168.1.42 fdd7 R Q [8081 DR NOERROR] A (9)imap-mail(7)outlook(3)com(0)↵

70.2.3. Windows Event Log


Microsoft Windows Event Log data can be collected and sent to McAfee ESM with the NXLog configuration below.
When adding an ESM data source, use the following parsing configuration (see Adding a Log Source):

Field Value
Data Source Vendor Microsoft

Data Source Model Windows Event Log – CEF

Data Format Default

Data Retrieval SYSLOG (Default)

For more information about collecting Windows Event Log, see the Windows Event Log chapter.

Example 313. Sending Windows Event Log Data to ESM

In this configuration, Windows Event Log data is collected from the Security channel with im_msvistalog and
converted to CEF with a Syslog header.

nxlog.conf
 1 <Extension _cef>
 2 Module xm_cef
 3 </Extension>
 4
 5 <Extension _syslog>
 6 Module xm_syslog
 7 </Extension>
 8
 9 <Input eventlog>
10 Module im_msvistalog
11 Channel Security
12 <Exec>
13 $Message = to_cef();
14 to_syslog_bsd();
15 </Exec>
16 </Input>

Output Sample
<14>Sep 25 23:25:53 WINSERV Microsoft-Windows-Security-Auditing[568]:
CEF:0|NXLog|NXLog|4.99.5128|0|-|7|end=1569453953000 dvchost=WINSERV
Keywords=9232379236109516800 outcome=AUDIT_SUCCESS SeverityValue=2 Severity=INFO
externalId=4801 SourceName=Microsoft-Windows-Security-Auditing ProviderGuid={54849625-5478-
4994-A5BA-3E3B0328C30D} Version=0 TaskValue=12551 OpcodeValue=0 RecordNumber=395661
ActivityID={61774D29-73EB-0000-4B4D-7761EB73D501} ExecutionProcessID=568 ExecutionThreadID=3164
deviceFacility=Security msg=The workstation was unlocked.\r\n\r\nSubject:\r\n\tSecurity
ID:\t\tS-1-5-21-2262720663-2632382095-2856924348-500\r\n\tAccount
Name:\t\tAdministrator\r\n\tAccount Domain:\t\tWINSERV\r\n\tLogon ID:\t\t0x112FE1\r\n\tSession
ID:\t1 cat=Other Logon/Logoff Events Opcode=Info duid=S-1-5-21-2262720663-2632382095-
2856924348-500 duser=Administrator dntdom=WINSERV TargetLogonId=0x112fe1 SessionId=1
EventReceivedTime=1569453953949 SourceModuleName=eventlog SourceModuleType=im_msvistalog↵

70.3. Forwarding Logs


Use an output instance to forward the processed logs to McAfee ESM. The configurations shown below can be
used with any of the above input instances. Because all event formatting is done in the input sections, the output
instances here do not require any Exec directives (the $raw_event field is passed without any further
modification).

Example 314. Forwarding Logs via TCP

This om_tcp instance sends logs to ESM via TCP. In this example, events are sent from the Windows Event
Log source.

nxlog.conf
1 <Output esm>
2 Module om_tcp
3 Host 10.10.1.10
4 Port 514
5 </Output>
6
7 <Route r>
8 Path eventlog => esm
9 </Route>

Forwarding logs with TLS requires adding a certificate to ESM and setting Require syslog TLS on the data
source(s), as described in the Set up TLS Transport section.

Example 315. Forwarding Logs With TLS

The om_ssl module is used here to send logs to ESM securely, with TLS encryption.

nxlog.conf
1 <Output esm>
2 Module om_ssl
3 Host 10.10.1.10
4 Port 6514
5 CAFile C:\Program Files\cert\rootCA.pem
6 </Output>

Chapter 71. McAfee ePolicy Orchestrator
McAfee® ePolicy Orchestrator® (McAfee® ePO™) enables centralized policy management and enforcement for
endpoints and enterprise security products. McAfee ePO monitors and manages the network, detecting threats
and protecting endpoints against these threats.

NXLog can be configured to collect events and audit logs from the ePO SQL databases.

NOTE  The instructions and examples in this section were tested with ePolicy Orchestrator 5.10.0
      and NXLog running on the same server.

NOTE  ePO must have the associated packages installed before logs can be collected from these
      sources. For example, VirusScan Enterprise or Host Intrusion Prevention Content must be
      installed.

71.1. Collecting ePO Audit Logs


The Audit log contains McAfee ePO user actions and action details, which can be viewed from the ePO
dashboard.

Figure 3. Queries and Reports Dashboard for Audit Entries

ePO stores these logs in the dbo.OrionAuditLog table in the SQL database. The following configuration
queries dbo.OrionAuditLog with the im_odbc module to collect these audit log events, then formats
them as JSON with xm_json.

nxlog.conf
 1 <Extension _json>
 2 Module xm_json
 3 </Extension>
 4
 5 <Input in>
 6 Module im_odbc
 7 ConnectionString DSN=MQIS;database=ePO_Host; \
 8 uid=user;pwd=password;
 9 IdType timestamp
10 # With ReadFromLast and MaxIdSQL, NXLog will start reading from the last
11 # record when reading from the database for the first time.
12 ReadFromLast TRUE
13 MaxIdSQL SELECT MAX(StartTime) AS maxid FROM dbo.OrionAuditLog
14 SQL SELECT StartTime as id,StartTime as EventTime, \
15 * FROM dbo.OrionAuditLog \
16 WHERE StartTime > CAST(? AS datetime)
17 Exec delete($id);to_json();
18 </Input>

Raw Audit Log Sample of a Successful Logon
EventTime: 2020-02-12 18:36:00↵
AutoId: 7↵
UserId: 1↵
UserName: admin↵
Priority: 3↵
CmdName: Logon Attempt↵
Message: Successful Logon for user "admin" from IP address: 10.0.0.4↵
Success: TRUE↵
StartTime: 2020-02-12 18:36:00↵
EndTime: 2020-02-12 18:36:00↵
RemoteAddress: 10.0.0.4↵
LocalAddress: 2001:0:34f1:8072:2c3a:3f1e:f5ff:fffb↵
TenantId: 1↵
DetailMessage: NULL↵
AdditionalDetailsURI: NULL↵
2020-02-12 18:37:28 McAfeeEPO INFO↵
id: 2020-02-12 18:37:28↵

Audit Event Sample in JSON of a Successful Logon


{
  "EventTime": "2019-07-27T09:51:08.630000+02:00",
  "AutoId": 83147,
  "UserId": 1,
  "UserName": "admin",
  "Priority": 3,
  "CmdName": "Logon Attempt",
  "Message": "Successful Logon for user \"admin\" from IP address: 192.168.134.165",
  "Success": true,
  "StartTime": "2019-07-27T09:51:08.630000+02:00",
  "EndTime": "2019-07-27T09:51:08.630000+02:00",
  "RemoteAddress": "192.168.134.165",
  "LocalAddress": "192.168.134.165",
  "TenantId": 1,
  "DetailMessage": null,
  "AdditionalDetailsURI": null,
  "EventReceivedTime": "2019-07-27T11:51:09.641428+02:00",
  "SourceModuleName": "in",
  "SourceModuleType": "im_odbc"
}

71.2. Collecting VirusScan Enterprise (VSE) Events


McAfee VirusScan Enterprise provides virus protection with low maintenance requirements and zero-impact
scans, protecting endpoints against malware. These events are stored in the dbo.EPOEvents SQL view.

The following configuration uses the im_odbc module to collect VirusScan events from the dbo.EPOEvents
SQL view. The AnalyzerName column identifies the product that generated each event in the view, so the
query contains the conditional clause AnalyzerName LIKE 'VirusScan%'.

nxlog.conf
 1 <Extension _json>
 2 Module xm_json
 3 </Extension>
 4
 5 <Input in>
 6 Module im_odbc
 7 ConnectionString DSN=MQIS;database=ePO_Host; \
 8 uid=user;pwd=password;
 9 IdType timestamp
10 # With ReadFromLast and MaxIdSQL, NXLog will start reading from the last
11 # record when reading from the database for the first time.
12 #ReadFromLast TRUE
13 #MaxIdSQL SELECT MAX(ReceivedUTC) AS maxid FROM dbo.EPOEvents
14 SQL SELECT ReceivedUTC as id,ReceivedUTC as EventTime,AutoID,ServerID,\
15 AnalyzerName,AnalyzerHostName,\
16 dbo.RSDFN_ConvertIntToIPString \
17 (cast (AnalyzerIPV4 as varchar(15))) as 'IPv4',\
18 AnalyzerDetectionMethod,SourceHostName,\
19 dbo.RSDFN_ConvertIntToIPString \
20 (cast (SourceIPV4 as varchar(15))) as 'Source IPv4',\
21 SourceProcessName,TargetHostName,\
22 dbo.RSDFN_ConvertIntToIPString \
23 (cast (TargetIPV4 as varchar(15))) as 'Target IPv4',\
24 TargetUserName,TargetFileName,ThreatCategory,ThreatEventID,\
25 ThreatSeverity,ThreatName,ThreatType,ThreatActionTaken,TenantID\
26 FROM dbo.EPOEvents\
27 WHERE ReceivedUTC > CAST(? AS datetime) AND AnalyzerName LIKE 'VirusScan%'
28 Exec delete($id);to_json();
29 </Input>

VirusScan Enterprise Event Sample in JSON of an EICAR Test File


{
  "EventTime": "2019-07-30T14:17:22.067000+02:00",
  "AutoID": 22113,
  "ServerID": "HOST",
  "AnalyzerName": "VirusScan Enterprise",
  "AnalyzerHostName": "HOST",
  "IPv4": "192.168.134.189",
  "AnalyzerDetectionMethod": "OAS",
  "SourceHostName": null,
  "Source IPv4": "192.168.134.189",
  "SourceProcessName": "C:\\Windows\\explorer.exe",
  "TargetHostName": "HOST",
  "Target IPv4": "192.168.134.189",
  "TargetUserName": "DOMAIN\\admin",
  "TargetFileName": "C:\\Users\\admin\\Desktop\\eicar.com",
  "ThreatCategory": "av.detect",
  "ThreatEventID": 1278,
  "ThreatSeverity": 1,
  "ThreatName": "EICAR test file",
  "ThreatType": "test",
  "ThreatActionTaken": "deleted",
  "TenantID": 1,
  "EventReceivedTime": "2019-07-30T16:18:15.279397+02:00",
  "SourceModuleName": "in",
  "SourceModuleType": "im_odbc"
}

71.3. Collecting Data Loss Prevention (DLP) Events
The McAfee Data Loss Prevention (DLP) Endpoint is a content-based agent solution that inspects user actions. It
scans data-in-use on endpoints, blocks transfers of sensitive data, and can store its findings as evidence.

The configuration below uses the im_odbc module to collect Data Loss Prevention events from the
dbo.EPOEvents SQL view. The AnalyzerName column identifies the product that generated each event in the
view, so the query contains the conditional clause AnalyzerName LIKE 'Data%'.

nxlog.conf
 1 <Extension _json>
 2 Module xm_json
 3 </Extension>
 4
 5 <Input in>
 6 Module im_odbc
 7 ConnectionString DSN=MQIS;database=ePO_Host; \
 8 uid=user;pwd=password;
 9 IdType timestamp
10 # With ReadFromLast and MaxIdSQL, NXLog will start reading from the last
11 # record when reading from the database for the first time.
12 #ReadFromLast TRUE
13 #MaxIdSQL SELECT MAX(ReceivedUTC) AS maxid FROM dbo.EPOEvents
14 SQL SELECT ReceivedUTC as id,ReceivedUTC as EventTime,AutoID,ServerID,\
15 AnalyzerName,AnalyzerHostName,\
16 dbo.RSDFN_ConvertIntToIPString \
17 (cast (AnalyzerIPV4 as varchar(15))) as 'IPv4',\
18 AnalyzerDetectionMethod,SourceHostName,\
19 dbo.RSDFN_ConvertIntToIPString \
20 (cast (SourceIPV4 as varchar(15))) as 'Source IPv4',\
21 SourceProcessName,TargetHostName,\
22 dbo.RSDFN_ConvertIntToIPString \
23 (cast (TargetIPV4 as varchar(15))) as 'Target IPv4',\
24 TargetUserName,TargetFileName,ThreatCategory,ThreatEventID,\
25 ThreatSeverity,ThreatName,ThreatType,ThreatActionTaken,TenantID\
26 FROM dbo.EPOEvents\
27 WHERE ReceivedUTC > CAST(? AS datetime) AND AnalyzerName LIKE 'Data%'
28 Exec delete($id);to_json();
29 </Input>

Data Loss Prevention Event Sample of a USB Plugin


{
  "EventTime": "2019-08-24T12:46:15.603000+02:00",
  "AutoID": 94123,
  "ServerID": "HOST",
  "AnalyzerName": "Data Loss Prevention",
  "AnalyzerHostName": "HOST",
  "IPv4": "192.168.134.198",
  "AnalyzerDetectionMethod": "DLP for Windows",
  "SourceHostName": "HOST",
  "Source IPv4": "192.168.134.198",
  "SourceProcessName": "",
  "TargetHostName": "HOST",
  "Target IPv4": "192.168.134.198",
  "TargetUserName": "DOMAIN\\admin",
  "TargetFileName": null,
  "ThreatCategory": "policy",
  "ThreatEventID": 19115,
  "ThreatSeverity": 1,
  "ThreatName": "USB",
  "ThreatType": "DEVICE_PLUG",
  "ThreatActionTaken": "BL|MON|ON",
  "TenantID": 1,
  "EventReceivedTime": "2019-08-24T14:46:16.066322+02:00",
  "SourceModuleName": "in",
  "SourceModuleType": "im_odbc"
}

Chapter 72. Microsoft Active Directory Domain
Controller
Active Directory (AD) is a directory service developed by Microsoft for Windows domain networks. An AD domain
controller responds to security authentication requests within a Windows domain. Most domain controller
logging, especially for security related activity, is done via the Windows EventLog.

72.1. Active Directory Security Events


Windows Server generates events for suspicious activities, including attempts to change Active Directory modes
and attempted replay attacks. Security events can be monitored through the Windows EventLog. Events specific to
domain controller security are stored under the EventLog event source ActiveDirectory_DomainService.

For a full list of Active Directory events that should be monitored, see Events to Monitor on Microsoft Docs.
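Domain controllers also write directory service diagnostics to a dedicated Directory Service EventLog channel, which NXLog can collect directly. A minimal sketch is shown below; the instance name ds_events is arbitrary, and the channel name assumes the Windows Server default.

```
<Input ds_events>
    Module  im_msvistalog
    <QueryXML>
        <QueryList>
            <Query Id="0" Path="Directory Service">
                <Select Path="Directory Service">*</Select>
            </Query>
        </QueryList>
    </QueryXML>
</Input>
```

The wildcard select collects every event from the channel; in production the query would normally be narrowed to specific levels or Event IDs, as in the examples below.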

Table 58. Active Directory Events With High Potential Criticality

Event ID   Description
4618 A monitored security event pattern has occurred.

4649 A replay attack was detected. May be a harmless false positive due to a misconfiguration error.

4719 System audit policy was changed.

4765 SID History was added to an account.

4766 An attempt to add SID History to an account failed.

4794 An attempt was made to set the Directory Services Restore Mode.

4897 Role separation was enabled.

4964 Special groups have been assigned to a new logon.

5124 A security setting was updated on OCSP Responder Service.

1102 The audit log was cleared.

Example 316. Collecting Active Directory Security Events

In this example, im_msvistalog is used to capture the most important security-related events on a Windows
Server 2012/2016 domain controller.

NOTE  The EventLog supports a limited number of Event IDs in a query. Due to this limitation, an
      Exec block is used to match the required Event IDs rather than listing every Event ID in the
      query.

nxlog.conf (truncated)
 1 define HighEventIds 4618, 4649, 4719, 4765, 4766, 4794, 4897, 4964, 5124, 1102
 2
 3 define MediumEventIds 4621, 4675, 4692, 4693, 4706, 4713, 4714, 4715, 4716, 4724, \
 4 4727, 4735, 4737, 4739, 4754, 4755, 4764, 4764, 4780, 4816, \
 5 4865, 4866, 4867, 4868, 4870, 4882, 4885, 4890, 4892, 4896, \
 6 4906, 4907, 4908, 4912, 4960, 4961, 4962, 4963, 4965, 4976, \
 7 4977, 4978, 4983, 4984, 5027, 5028, 5029, 5030, 5035, 5037, \
 8 5038, 5120, 5121, 5122, 5123, 5376, 5377, 5453, 5480, 5483, \
 9 5484, 5485, 6145, 6273, 6274, 6275, 6276, 6277, 6278, 6279, \
10 6280, 24586, 24592, 24593, 24594
11
12 define LowEventIds 4608, 4609, 4610, 4611, 4612, 4614, 4615, 4616, 4624, 4625, \
13 4634, 4647, 4648, 4656, 4657, 4658, 4660, 4661, 4662, 4663, \
14 4672, 4673, 4674, 4688, 4689, 4690, 4691, 4696, 4697, 4698, \
15 4699, 4700, 4701, 4702, 4704, 4705, 4707, 4717, 4718, 4720, \
16 4722, 4723, 4725, 4726, 4728, 4729, 4730, 4731, 4732, 4733, \
17 4734, 4738, 4740, 4741, 4742, 4743, 4744, 4745, 4746, 4747, \
18 4748, 4749, 4750, 4751, 4752, 4753, 4756, 4757, 4758, 4759, \
19 4760, 4761, 4762, 4767, 4768, 4769, 4770, 4771, 4772, 4774, \
20 4775, 4776, 4778, 4779, 4781, 4783, 4785, 4786, 4787, 4788, \
21 4789, 4790, 4869, 4871, 4872, 4873, 4874, 4875, 4876, 4877, \
22 4878, 4879, 4880, 4881, 4883, 4884, 4886, 4887, 4888, 4889, \
23 4891, 4893, 4894, 4895, 4898, 5136, 5137
24
25 <Input events>
26 Module im_msvistalog
27 <QueryXML>
28 <QueryList>
29 [...]

72.2. Advanced Security Audit Policy


Additional logging can be enabled via the Group Policy Advanced Audit Policy. This policy provides a more
granular level of information about security changes. To enable the Advanced Audit Policy on Windows Server
2012 and above, follow these steps:

1. Log in to the server as Domain Administrator.


2. Load the Group Policy Management Editor from Server Manager > Tools.
3. Expand the Domain Controllers organizational unit (OU), right-click on Default Domain Controllers Policy,
and click Edit.

4. Go to Computer Configuration > Policies > Windows Settings > Security Settings > Advanced Audit
Policy Configuration > Audit Policies > DS Access.

5. Enable the four listed policies to provide access to security auditing events.

For more information on configuring the Advanced Security Auditing Policy, and descriptions of event IDs, please
view Step-By-Step: Enabling Advanced Security Audit Policy via DS Access on Microsoft TechNet.
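As an alternative to the Group Policy GUI, the same four DS Access subcategories can be enabled from an elevated command prompt with the built-in auditpol tool. This is a sketch only; note that settings applied locally with auditpol may be overridden by Group Policy.

```
auditpol /set /subcategory:"Directory Service Access" /success:enable /failure:enable
auditpol /set /subcategory:"Directory Service Changes" /success:enable /failure:enable
auditpol /set /subcategory:"Directory Service Replication" /success:enable /failure:enable
auditpol /set /subcategory:"Detailed Directory Service Replication" /success:enable /failure:enable
```

Run auditpol /get /category:"DS Access" afterward to confirm the subcategories are enabled.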

Example 317. Collecting Auditing Policy events via im_msvistalog

Once security auditing has been enabled, the related events in the EventLog can be queried and collected
by NXLog with the im_msvistalog module. This configuration collects all Windows Security Auditing events
that have an Event Level of critical, warning, or error.

nxlog.conf
 1 <Input SecurityAuditEvents>
 2 Module im_msvistalog
 3 <QueryXML>
 4 <QueryList>
 5 <Query Id="0" Path="Security">
 6 <Select Path="Security">*[System[Provider[@Name='Microsoft-Windows
 7 -Security-Auditing'] and (Level=1 or Level=2 or Level=3) and
 8 ((EventID &gt;= 4928 and EventID &lt;= 4931) or (EventID &gt;= 4932 and
 9 EventID &lt;= 4937) or EventID=4662 or (EventID &gt;= 5136 and EventID &lt;= 5141))]]</Select>
10 </Query>
11 </QueryList>
12 </QueryXML>
13 </Input>
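The Level values used in such filters are defined by the Windows EventLog schema (1 = Critical, 2 = Error, 3 = Warning, 4 = Information, 5 = Verbose). As an illustration, a small Python sketch that builds the same Level clause used in the query above:

```python
# Windows EventLog Level values, as used in QueryXML <Select> XPath filters.
EVENT_LEVELS = {
    0: 'LogAlways',   # matches any level when used in a query
    1: 'Critical',
    2: 'Error',
    3: 'Warning',
    4: 'Information',
    5: 'Verbose',
}

def level_filter(levels):
    """Build the Level part of an XPath filter, e.g. '(Level=1 or Level=2)'."""
    return '(' + ' or '.join('Level=%d' % lv for lv in sorted(levels)) + ')'
```

For example, `level_filter({1, 2, 3})` produces the critical/error/warning clause used in the configuration above.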

72.3. Troubleshooting Domain Controller Promotions and Installations

The %systemroot%\debug\dcpromo.log log file stores information about installations, promotions, and
demotions of domain controllers. Successive runs of dcpromo will write to other log files at
%systemroot%\debug\dcpromo.001.log, etc.

For more information on troubleshooting domain controller promotions and installations, please view
Troubleshooting Domain Controller Deployment.

Example 318. Collecting dcpromo Log Messages via im_file

This configuration uses the im_file module to read from all dcpromo log files. Each event is parsed with a
regular expression, and then the timestamp is parsed with the parsedate() function.

Log Sample
10/02/2018 04:43:47 [INFO] Creating directory partition: CN=Configuration,DC=nxlog,DC=org; 1270
objects remaining↵
10/02/2018 04:43:47 [INFO] Creating directory partition: CN=Configuration,DC=nxlog,DC=org; 1269
objects remaining↵
10/02/2018 04:43:47 [INFO] Creating directory partition: CN=Configuration,DC=nxlog,DC=org; 1268
objects remaining↵

nxlog.conf
 1 <Input dcpromo>
 2 Module im_file
 3 File "%systemroot%\debug\DCPROMO.log"
 4 File "%systemroot%\debug\DCPROMO.*.log"
 5 <Exec>
 6 if $raw_event =~ /^(\S+ \S+) \[(\S+)\] (.+)$/
 7 {
 8 $EventTime = parsedate($1);
 9 $Severity = $2;
10 $Message = $3;
11 }
12 </Exec>
13 </Input>
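For reference, the same parse can be prototyped outside NXLog. This hedged Python sketch applies the regular expression from the configuration above to the first sample line; the %m/%d/%Y ordering of the timestamp is an assumption based on the log sample:

```python
import re
from datetime import datetime

SAMPLE = ('10/02/2018 04:43:47 [INFO] Creating directory partition: '
          'CN=Configuration,DC=nxlog,DC=org; 1270 objects remaining')

def parse_dcpromo(line):
    # Same pattern as the Exec block: timestamp, [SEVERITY], message
    m = re.match(r'^(\S+ \S+) \[(\S+)\] (.+)$', line)
    if not m:
        return None
    return {
        # Month/day order assumed from the sample above
        'EventTime': datetime.strptime(m.group(1), '%m/%d/%Y %H:%M:%S'),
        'Severity': m.group(2),
        'Message': m.group(3),
    }
```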

Chapter 73. Microsoft Azure
Azure is a Microsoft-hosted cloud computing service for building and deploying applications. It supports many
different programming languages, frameworks, and integrations.

73.1. Azure Active Directory and Office 365


Office 365 offers Microsoft Office products as a cloud-based service. Office 365 can be integrated with Azure
Active Directory (AD), a cloud-based identity management service that is part of Microsoft Azure.

NXLog can be set up to collect event data from Azure AD and Office 365 APIs. This functionality is available as an
add-on. See Microsoft Azure and Office 365 for more information.

73.2. Azure Operations Management Suite (OMS)


The Azure Operations Management Suite is a set of Microsoft cloud services providing log management, backup,
automation, and high availability features. Azure Log Analytics is the part of OMS used for log collection,
correlation, and analysis.

NXLog can be configured to connect to the OMS Log Analytics service and forward or collect log data via its REST
API. See the Azure OMS and Log Analytics documentation for more information about configuring and using
Azure OMS and its log management service.

73.2.1. Forwarding Data to Log Analytics


A Python script can be used to perform REST API calls to send log data to the Log Analytics service. To configure
NXLog, complete the following steps.

1. Log in to the Azure portal and go to the Log Analytics service (for instance by typing the service name into
the search bar).
2. Select an existing OMS Workspace or create a new one by clicking the Add button.
3. From the Management section in the main workspace screen, click OMS Portal.

4. In the Microsoft Operations Management Suite, click the settings icon in the top right corner, navigate to
Settings > Connected Sources > Linux Servers, and copy the WORKSPACE ID and PRIMARY KEY values.
These are needed for API access.

5. Enable Custom Logs. As of this writing it is a preview feature, available under Settings > Preview Features >
Custom Logs.

6. Place the oms-pipe.py script in a location accessible by NXLog and make sure it is executable by NXLog.

7. Set the customer ID, shared key, and log type values in the script.
8. Configure NXLog to execute the script with the om_exec module. The contents of the $raw_event field will
be forwarded.

Example 319. Sending Raw Syslog Events

This configuration reads raw events from file and forwards them to Azure OMS.

nxlog.conf
1 <Input messages>
2 Module im_file
3 File '/var/log/messages'
4 </Input>
5
6 <Output azure_oms>
7 Module om_exec
8 Command oms-pipe.py
9 </Output>

oms-pipe.py (truncated)
#!/usr/bin/env python

# This is a PoC script that can be used with 'om_exec' NXLog module to
# ship logs to Microsoft Azure Cloud (Log Analytics / OMS) via REST API.

# NXLog configuration:
# -------------------
# <Output out>
# Module om_exec
# Command /tmp/samplepy
# </Output>
# -------------------

import requests
import datetime
import hashlib
import hmac
import base64
[...]
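The core of a script like oms-pipe.py is the authorization header required by the Log Analytics HTTP Data Collector API: an HMAC-SHA256 signature over a canonical string, keyed with the base64-decoded workspace key. A minimal sketch of that signing step (the function name is illustrative, not part of the shipped script):

```python
import base64
import hashlib
import hmac

def build_signature(workspace_id, shared_key, date_rfc1123, content_length,
                    method='POST', content_type='application/json',
                    resource='/api/logs'):
    # Canonical string-to-sign defined by the HTTP Data Collector API
    string_to_sign = '\n'.join([method, str(content_length), content_type,
                                'x-ms-date:' + date_rfc1123, resource])
    key = base64.b64decode(shared_key)  # the PRIMARY KEY is base64-encoded
    digest = hmac.new(key, string_to_sign.encode('utf-8'),
                      hashlib.sha256).digest()
    return 'SharedKey {0}:{1}'.format(workspace_id,
                                      base64.b64encode(digest).decode('ascii'))
```

The resulting string is sent as the Authorization header, together with an x-ms-date header carrying the same RFC 1123 date.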

Example 320. Sending JSON Log Data

With this configuration, NXLog Enterprise Edition reads W3C records from file with im_file, parses the
records with xm_w3c, converts the internal event fields to JSON format with the xm_json to_json()
procedure, and forwards the result to Azure OMS with om_exec.

nxlog.conf
 1 <Extension _json>
 2 Module xm_json
 3 </Extension>
 4
 5 <Extension w3c_parser>
 6 Module xm_w3c
 7 </Extension>
 8
 9 <Input messages>
10 Module im_file
11 File '/var/log/httpd-log'
12 InputType w3c_parser
13 </Input>
14
15 <Output azure_oms>
16 Module om_exec
17 Command oms-pipe.py
18 Exec to_json();
19 </Output>

73.2.2. Downloading Data From Log Analytics


It is also possible to download data from Log Analytics with a Python script. To set this up with NXLog, follow
these steps:

1. Register an application in Azure Active Directory and generate an access key for the application.
2. Under your Subscription, go to Access control (IAM) and assign the Log Analytics Reader role to this
application.
3. Place the oms-download.py script in a location accessible by NXLog.

4. Set the resource group, workspace, subscription ID, tenant ID, application ID, and application key values in
the script. Adjust the query details as required.

NOTE: The Tenant ID can be found as Directory ID under the Azure Active Directory Properties tab.

5. Configure NXLog to execute the script with the im_python module.

Detailed instructions on this topic can be found in the Azure documentation.

Example 321. Collecting Logs From OMS

This configuration uses the im_python module and the oms-download.py script to periodically collect log
data from the Log Analytics service.

nxlog.conf
1 <Input oms>
2 Module im_python
3 PythonCode oms-download.py
4 </Input>

oms-download.py (truncated)
import datetime
import json
import requests

import adal
import nxlog

class LogReader:

  def __init__(self, time_interval):


  # Details of workspace. Fill in details for your workspace.
  resource_group = '<YOUR_RESOURCE_GROUP>'
  workspace = '<YOUR_WORKSPACE>'

  # Details of query. Modify these to your requirements.


  query = "Type=*"
  end_time = datetime.datetime.utcnow()
  start_time = end_time - datetime.timedelta(seconds=time_interval)
[...]
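A script like oms-download.py polls over a sliding time window. This hedged sketch shows how the window and search payload might be computed; the payload shape follows the legacy Log Analytics search API and is an assumption for illustration, not the shipped script:

```python
import datetime

def build_search_payload(time_interval_sec, query='Type=*', top=2000, now=None):
    # Sliding window ending "now", matching the start/end logic in the
    # script header above.
    end_time = now or datetime.datetime.utcnow()
    start_time = end_time - datetime.timedelta(seconds=time_interval_sec)
    iso = '%Y-%m-%dT%H:%M:%SZ'
    return {
        'query': query,
        'top': top,
        'start': start_time.strftime(iso),
        'end': end_time.strftime(iso),
    }
```

On each polling cycle the payload would be POSTed to the workspace's search endpoint with the bearer token obtained for the registered application.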

73.3. Azure SQL Database


The Azure SQL Database is a managed cloud database service that shares the SQL Server 2016 engine. Azure
SQL Database includes scalability, high availability, data protection, and other features. For more information,
see the Azure SQL Database Documentation on Microsoft Docs.

Azure SQL Database includes auditing features that can be used to generate events based on audit policies.
NXLog can be used as a collector for audit data from an Azure SQL Database instance.

NOTE: It is also possible to send SQL audit logs directly to OMS Log Analytics. This can be configured on
the Azure portal; see Get started with SQL database auditing on Microsoft Docs. In this case, see
Azure Operations Management Suite (OMS) for information about integrating NXLog with OMS
Log Analytics.

To start with, auditing for an instance must be enabled; see Get started with SQL database auditing in the Azure
documentation for detailed steps. Once this is done, NXLog can be configured to periodically download the audit
logs using either PowerShell or Python.

73.3.1. Using a PowerShell Script


The im_exec module can be used with the azure-sql.ps1 PowerShell script to download Azure SQL audit logs.
The script logs in to the Azure account and downloads the latest audit file from the blob storage container. Then
it reads the file from disk and prints the events for the latest time period (default is one hour). Finally, the
program waits until the next execution.

• The script requires Microsoft.SqlServer.XE.Core.dll and Microsoft.SqlServer.XEvent.Linq.dll to
run. These libraries are distributed with Microsoft SQL Server installations (including XE edition).
• Azure PowerShell needs to be installed as well; this can be done by executing Install-Module AzureRM
-AllowClobber in PowerShell. For detailed documentation about installing Azure PowerShell, see Install and
configure Azure PowerShell in the Azure documentation.
• There are several variables in the script header that need to be set.

NOTE: The procedure for non-interactive Azure authentication might vary, depending on the account
type. This example assumes that a service principal to access resources has been created. For
detailed information about creating an identity for unattended script execution, see Use Azure
PowerShell to create a service principal with a certificate in the Azure documentation.
Alternatively, Save-AzureRmContext can be used to store account information in a JSON file
and it can be loaded later with Import-AzureRmContext.

Example 322. Collecting Azure SQL Audit Logs With PowerShell

This configuration uses im_exec to run the azure-sql.ps1 PowerShell script. The xm_json module is used
to parse the JSON event data into NXLog fields.

nxlog.conf
 1 <Extension _json>
 2 Module xm_json
 3 </Extension>
 4
 5 envvar systemroot
 6 <Input azure_sql>
 7 Module im_exec
 8 Command "%systemroot%\System32\WindowsPowerShell\v1.0\powershell.exe"
 9 # Bypass the system execution policy for this session only.
10 Arg "-ExecutionPolicy"
11 Arg "Bypass"
12 # Skip loading the local PowerShell profile.
13 Arg "-NoProfile"
14 # This specifies the path to the PowerShell script.
15 Arg "-File"
16 Arg "%systemroot%\azure_sql.ps1"
17 <Exec>
18 # Parse JSON
19 parse_json();
20
21 # Convert $EventTime field to datetime
22 $EventTime = parsedate($event_time);
23 </Exec>
24 </Input>

azure-sql.ps1 (truncated)
# If running 32-bit on a 64-bit system, run 64-bit PowerShell instead.
if ( $env:PROCESSOR_ARCHITEW6432 -eq "AMD64" ) {
  Write-Output "Running 64-bit PowerShell."
  &"$env:SYSTEMROOT\SysNative\WindowsPowerShell\v1.0\powershell.exe" `
  -NonInteractive -NoProfile -ExecutionPolicy Bypass `
  -File "$($myInvocation.InvocationName)" $args
  exit $LASTEXITCODE
}
################################################################################

# Update these parameters.

# The path to MSSQL Server DLLs


$SharedPath = "C:\Program Files\Microsoft SQL Server\140\Shared";

# The path to local working directory


$localTargetDirectory = "C:\temp\"
[...]

73.3.2. Using a Python Script


The azure-sql.py script can be used with the im_python module to query the audit file from the database level,
save rows into objects, and pass them to NXLog as events.

• The script requires installation of the Microsoft ODBC Driver; see Installing the Microsoft ODBC Driver for
SQL Server on Linux and macOS on Microsoft Docs.

• The azure-storage and pyodbc Python packages are also required.
• There are several variables in the script header that need to be set.

Example 323. Collecting Azure SQL Audit Logs With Python

This configuration uses the im_python module to execute the azure-sql.py Python script. The script logs
in to Azure, collects audit logs, and creates NXLog events.

nxlog.conf
1 <Input sql>
2 Module im_python
3 PythonCode azure_sql.py
4 Exec $EventTime = parsedate($EventTime);
5 </Input>

azure-sql.py (truncated)
import binascii, collections, datetime, nxlog, pyodbc
from azure.storage.blob import PageBlobService

################################################################################

# Update these parameters.

# MSSQL details
DRIVER = "{ODBC Driver 13 for SQL Server}"
SERVER = 'tcp:XXXXXXXX.database.windows.net'
DATABASE = 'XXXXXXXX'
USERNAME = 'XXXXXXXX@XXXXXXXX'
PASSWORD = 'XXXXXXXX'

# Azure Storage details


STORAGE_ACCOUNT = 'XXXXXXXX'
STORAGE_KEY = 'XXXXXXXX=='
CONTAINER_NAME = 'sqldbauditlogs'
[...]
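At the database level, the audit file can be read with the sys.fn_get_audit_file T-SQL function over an ODBC connection. This hedged sketch shows only the connection-string and query construction; the commented pyodbc calls require the Microsoft ODBC Driver and network access, and the storage URL and credentials are placeholders:

```python
DRIVER = '{ODBC Driver 13 for SQL Server}'

def build_conn_str(server, database, username, password, driver=DRIVER):
    # Standard ODBC connection string for Azure SQL Database
    return ('DRIVER={driver};SERVER={server};PORT=1433;DATABASE={database};'
            'UID={username};PWD={password}').format(
                driver=driver, server=server, database=database,
                username=username, password=password)

# T-SQL function for reading .xel audit files at the database level;
# the blob container URL is illustrative.
AUDIT_QUERY = ("SELECT event_time, action_id, statement, server_principal_name "
               "FROM sys.fn_get_audit_file('https://XXXXXXXX.blob.core.windows.net"
               "/sqldbauditlogs/', DEFAULT, DEFAULT)")

# import pyodbc
# conn = pyodbc.connect(build_conn_str('tcp:XXXXXXXX.database.windows.net',
#                                      'XXXXXXXX', 'XXXXXXXX@XXXXXXXX', 'XXXXXXXX'))
# for row in conn.cursor().execute(AUDIT_QUERY):
#     print(row)
```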

Chapter 74. Microsoft Exchange
Microsoft Exchange is a widely used enterprise level email server running on Windows Server operating systems.
The following sections describe various logs generated by Exchange and provide solutions for collecting logs
from these sources with NXLog.

Exchange stores most of its operational logs in a comma-delimited format similar to W3C. These files can be read
with im_file and the xm_w3c extension module. For NXLog Community Edition, the xm_csv extension module can
be used instead, with the fields listed explicitly and the header lines skipped. In some of the log files, the W3C
header is prepended by an additional CSV header line enumerating the same fields as the #Fields directive;
NXLog must be configured to skip that line also. See the sections under Transport Logs for examples.
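The skip condition used in the xm_csv examples below can be prototyped in Python. This sketch drops a possible UTF-8 byte-order mark, the '#' W3C directives, and the duplicate CSV header line (which starts with the first field name, date-time):

```python
import re

# Optional UTF-8 BOM, then either the duplicate CSV header line or a '#'
# W3C directive line -- the same pattern used in the Exec blocks below.
HEADER_RE = re.compile(rb'^(\xEF\xBB\xBF)?(date-time,|#)')

def is_header(raw_line):
    """Return True for lines that should be dropped; raw_line is bytes."""
    return HEADER_RE.match(raw_line) is not None
```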

The information provided here is not intended to be comprehensive, but rather provides a general overview of
NXLog integration with some of the major log mechanisms used by Exchange. Other logs generated by Exchange
can be found in the Logging and other subdirectories of the installation directory.

NOTE: This Guide focuses on Exchange Server 2010 SP1 and later versions. Older versions are either
not supported by Microsoft or are being decommissioned. Apart from passing their end of life
date, these versions also lack the audit logging feature.

74.1. Transport Logs


Exchange Server writes various transport logs. Three of those logs are covered in the following sections. For
more information about additional Exchange transport logs, see the Transport logs in Exchange 2016 TechNet
article.

74.1.1. Configuring Transport Logs


Message tracking, connectivity, and protocol logs are enabled by default and written to comma-delimited log
files, in a format similar to W3C. The logs can be enabled or disabled, and the log file locations modified, through
the Exchange Admin Center (EAC).

1. Log in to the Exchange Admin Center (at https://server/ecp).

2. Click servers in the list on the left.


3. Select the server and click the Edit icon.

4. Click transport logs in the list on the left.

5. Modify the logging configuration as required, then click [ Save ].

74.1.2. Message Tracking Logs
Message tracking logs provide a detailed record of message activity as mail flows through the transport pipeline
on an Exchange server.

Log Sample
#Software: Microsoft Exchange Server↵
#Version: 15.01.1034.026↵
#Log-type: Message Tracking Log↵
#Date: 2017-09-15T20:01:45.863Z↵
#Fields: date-time,client-ip,client-hostname,server-ip,server-hostname,source-context,connector-
id,source,event-id,internal-message-id,message-id,network-message-id,recipient-address,recipient-
status,total-bytes,recipient-count,related-recipient-address,reference,message-subject,sender-
address,return-path,message-info,directionality,tenant-id,original-client-ip,original-server-
ip,custom-data,transport-traffic-type,log-id,schema-version↵
2017-09-15T20:01:45.863Z,,,,WINEXC,No suitable shadow
servers,,SMTP,HAREDIRECTFAIL,34359738369,<49b4b9a2781a45cba555008075f7bffa@test.com>,8e1061b7-a376-
497c-3172-
08d4fc7497bf,test1@test.com,,6533,1,,,test,Administrator@test.com,Administrator@test.com,,Originatin
g,,,,S:DeliveryPriority=Normal;S:AccountForest=test.com,Email,63dc9d79-5b4e-4f6c-1358-
08d4fc7497c3,15.01.1034.026↵

NXLog can be configured to collect these logs with the im_file module, and to parse them with xm_w3c.

Example 324. Collecting Message Tracking Logs With xm_w3c

This configuration collects message tracking logs from the defined BASEDIR and parses them using the
xm_w3c module. The logs are then converted to JSON format and forwarded via TCP.

nxlog.conf
 1 define BASEDIR C:\Program Files\Microsoft\Exchange Server\V15
 2
 3 <Extension _json>
 4 Module xm_json
 5 </Extension>
 6
 7 <Extension w3c_parser>
 8 Module xm_w3c
 9 Delimiter ,
10 </Extension>
11
12 <Input messagetracking>
13 Module im_file
14 File '%BASEDIR%\TransportRoles\Logs\MessageTracking\MSGTRK*.LOG'
15 InputType w3c_parser
16 </Input>
17
18 <Output tcp>
19 Module om_tcp
20 Host 10.0.0.1
21 Port 1514
22 Exec to_json();
23 </Output>

For NXLog Community Edition, the xm_csv module can be configured to parse these files.

Example 325. Using xm_csv for Message Tracking Logs

This configuration uses the xm_csv module to parse the message tracking logs.

nxlog.conf
 1 define BASEDIR C:\Program Files\Microsoft\Exchange Server\V15
 2
 3 <Extension csv_parser>
 4 Module xm_csv
 5 Fields date-time, client-ip, client-hostname, server-ip, server-hostname, \
 6 source-context, connector-id, source, event-id, \
 7 internal-message-id, message-id, network-message-id, \
 8 recipient-address, recipient-status, total-bytes, recipient-count, \
 9 related-recipient-address, reference, message-subject, \
10 sender-address, return-path, message-info, directionality, \
11 tenant-id, original-client-ip, original-server-ip, custom-data, \
12 transport-traffic-type, log-id, schema-version
13 </Extension>
14
15 <Input messagetracking>
16 Module im_file
17 File '%BASEDIR%\TransportRoles\Logs\MessageTracking\MSGTRK*.LOG'
18 <Exec>
19 if $raw_event =~ /^(\xEF\xBB\xBF)?(date-time,|#)/ drop();
20 else
21 {
22 csv_parser->parse_csv();
23 $EventTime = parsedate(${date-time});
24 }
25 </Exec>
26 </Input>
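For reference, the same parsing steps can be sketched in Python: skip the header lines, zip the field names with the comma-separated values, and parse the ISO 8601 timestamp. The abbreviated field list here is for illustration only; the real logs carry the full list shown in the configuration above:

```python
import csv
import io
from datetime import datetime

SAMPLE = (
    '#Software: Microsoft Exchange Server\n'
    '#Fields: date-time,source,event-id\n'
    'date-time,source,event-id\n'
    '2017-09-15T20:01:45.863Z,SMTP,HAREDIRECTFAIL\n'
)

def parse_tracking(text, fields=('date-time', 'source', 'event-id')):
    events = []
    for row in csv.reader(io.StringIO(text)):
        # Skip the '#' W3C directives and the duplicate CSV header line
        if not row or row[0].startswith('#') or row[0] == 'date-time':
            continue
        event = dict(zip(fields, row))
        event['EventTime'] = datetime.strptime(event['date-time'],
                                               '%Y-%m-%dT%H:%M:%S.%fZ')
        events.append(event)
    return events
```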

74.1.3. Connectivity Logs


Connectivity logging records outbound message transmission activity by the transport services on the Exchange
server.

Log Sample
#Software: Microsoft Exchange Server↵
#Version: 15.0.0.0↵
#Log-type: Transport Connectivity Log↵
#Date: 2017-09-15T03:09:34.541Z↵
#Fields: date-time,session,source,Destination,direction,description↵
2017-09-15T03:09:33.526Z,,Transport,,*,service started; #MaxConcurrentSubmissions=20;
MaxConcurrentDeliveries=20; MaxSmtpOutConnections=Unlimited↵

NXLog can be configured to collect these logs with the im_file module, and to parse them with xm_w3c.

Example 326. Collecting Connectivity Logs With xm_w3c

This configuration collects connectivity logs from the defined BASEDIR and parses them using the xm_w3c
module.

nxlog.conf
 1 define BASEDIR C:\Program Files\Microsoft\Exchange Server\V15
 2
 3 <Extension w3c_parser>
 4 Module xm_w3c
 5 Delimiter ,
 6 </Extension>
 7
 8 <Input connectivity>
 9 Module im_file
10 File '%BASEDIR%\TransportRoles\Logs\Hub\Connectivity\CONNECTLOG*.LOG'
11 InputType w3c_parser
12 </Input>

For NXLog Community Edition, the xm_csv module can be configured to parse these files.

Example 327. Using xm_csv for Connectivity Logs

This configuration uses the xm_csv module to parse the connectivity logs.

nxlog.conf
 1 define BASEDIR C:\Program Files\Microsoft\Exchange Server\V15
 2
 3 <Extension csv_parser>
 4 Module xm_csv
 5 Fields date-time, session, source, Destination, direction, description
 6 </Extension>
 7
 8 <Input connectivity>
 9 Module im_file
10 File '%BASEDIR%\TransportRoles\Logs\Hub\Connectivity\CONNECTLOG*.LOG'
11 <Exec>
12 if $raw_event =~ /^(\xEF\xBB\xBF)?(date-time,|#)/ drop();
13 else
14 {
15 csv_parser->parse_csv();
16 $EventTime = parsedate(${date-time});
17 }
18 </Exec>
19 </Input>

74.1.4. Protocol/SMTP Logs


Protocol logging records the SMTP conversations that occur on Send and Receive connectors during message
delivery.

Log Sample
#Software: Microsoft Exchange Server↵
#Version: 15.0.0.0↵
#Log-type: SMTP Send Protocol Log↵
#Date: 2017-09-20T21:00:47.866Z↵
#Fields: date-time,connector-id,session-id,sequence-number,local-endpoint,remote-
endpoint,event,data,context↵
2017-09-20T21:00:47.167Z,internet,08D5006A392BE443,0,,64.8.70.48:25,*,,attempting to connect↵

NXLog can be configured to collect these logs with the im_file module, and to parse them with xm_w3c.

Example 328. Collecting Protocol Logs With xm_w3c

This configuration collects protocol logs from the defined BASEDIR and parses them using the xm_w3c
module.

nxlog.conf
 1 define BASEDIR C:\Program Files\Microsoft\Exchange Server\V15
 2
 3 <Extension w3c_parser>
 4 Module xm_w3c
 5 Delimiter ,
 6 </Extension>
 7
 8 <Input smtp_receive>
 9 Module im_file
10 File '%BASEDIR%\TransportRoles\Logs\Hub\ProtocolLog\SmtpReceive\RECV*.LOG'
11 InputType w3c_parser
12 </Input>
13
14 <Input smtp_send>
15 Module im_file
16 File '%BASEDIR%\TransportRoles\Logs\Hub\ProtocolLog\SmtpSend\SEND*.LOG'
17 InputType w3c_parser
18 </Input>

For NXLog Community Edition, the xm_csv module can be configured to parse these files.

Example 329. Using xm_csv for Protocol Logs

This configuration uses the xm_csv module to parse the protocol logs.

nxlog.conf
 1 define BASEDIR C:\Program Files\Microsoft\Exchange Server\V15
 2
 3 <Extension csv_parser>
 4 Module xm_csv
 5 Fields date-time, connector-id, session-id, sequence-number, \
 6 local-endpoint, remote-endpoint, event, data, context
 7 </Extension>
 8
 9 <Input smtp_receive>
10 Module im_file
11 File '%BASEDIR%\TransportRoles\Logs\Hub\ProtocolLog\SmtpReceive\RECV*.LOG'
12 <Exec>
13 if $raw_event =~ /^(\xEF\xBB\xBF)?(date-time,|#)/ drop();
14 else
15 {
16 csv_parser->parse_csv();
17 $EventTime = parsedate(${date-time});
18 }
19 </Exec>
20 </Input>
21
22 <Input smtp_send>
23 Module im_file
24 File '%BASEDIR%\TransportRoles\Logs\Hub\ProtocolLog\SmtpSend\SEND*.LOG'
25 <Exec>
26 if $raw_event =~ /^(\xEF\xBB\xBF)?(date-time,|#)/ drop();
27 else
28 {
29 csv_parser->parse_csv();
30 $EventTime = parsedate(${date-time});
31 }
32 </Exec>
33 </Input>

74.2. EventLog
Exchange Server also logs events to Windows EventLog. Events are logged to the Application and System
channels, as well as multiple Exchange-specific crimson channels (see your server’s Event Viewer). For more
information about events generated by Exchange, see the following TechNet articles.

• Error and Event Reference for Client Access Servers


• Error and Event Reference for Mailbox Servers
• Error and Event Reference for Transport Servers
• Error and Event Reference for Unified Messaging Servers
• Manage Diagnostic Logging Levels
• Managed Availability
• Messaging records management errors and events
• Monitoring database availability groups

See also Windows Event Log for more information about using NXLog to collect logs from Windows EventLog.

Example 330. Collecting Exchange Events From the EventLog

With this configuration, NXLog will use the im_msvistalog module to subscribe to the Application and
System channels (Critical, Error, and Warning event levels only) and the MSExchange Management crimson
channel (all event levels). Note that the Application and System channels will include other non-Exchange
events.

nxlog.conf
 1 <Input eventlog>
 2 Module im_msvistalog
 3 <QueryXML>
 4 <QueryList>
 5 <Query Id="0" Path="Application">
 6 <Select Path="Application">
 7 *[System[(Level=1 or Level=2 or Level=3)]]</Select>
 8 <Select Path="System">
 9 *[System[(Level=1 or Level=2 or Level=3)]]</Select>
10 <Select Path="MSExchange Management">*</Select>
11 </Query>
12 </QueryList>
13 </QueryXML>
14 </Input>

74.3. IIS Logs


Exchange is closely integrated with the Internet Information Server (IIS), which itself logs Outlook Web Access
(OWA) and Exchange Admin Center (EAC) events.

See the Microsoft IIS chapter for more information about collecting events from IIS with NXLog.

74.4. Audit Logs (nxlog-xchg)


Exchange also provides two types of audit logs: administrator audit logs and mailbox audit logs. For more
information, see Administrator audit logging in Exchange 2016 and Mailbox audit logging in Exchange 2016 on
TechNet.

The nxlog-xchg utility can be used to retrieve these logs. See the Exchange (nxlog-xchg) add-on documentation.

Chapter 75. Microsoft IIS
Microsoft Internet Information Server supports several logging formats. This chapter provides information about
configuring IIS logging and NXLog collection. The recommended W3C format is documented below as well as
other supported IIS formats.

This chapter also includes sections about collecting logs from the SMTP Server and about Automatic Retrieval of
IIS Site Log Locations.

75.1. Configuring Logging


IIS logging can be configured at the site level or server level as follows. For more detailed information, see
Configure Logging in IIS on Microsoft Docs.

1. Open IIS Manager, which can be accessed from the Tools menu in the Server Manager or from
Administrative Tools.
2. In the Connections pane on the left, select the server or site for which to configure logging. Select a server
to configure logging server-wide, or a site to configure logging for that specific site.
3. Double-click the Logging icon in the center pane.

4. Modify the logging configuration as required. The W3C format is recommended.

The resulting logs can be collected by NXLog as shown in the following sections.

75.2. W3C Extended Log File Format


IIS can write logs in the W3C format, and the logged fields can be configured via the [ Select Fields… ] button
(see the Configuring Logging section). W3C is the recommended format for use with NXLog.

Log Sample
#Software: Microsoft Internet Information Services 10.0↵
#Version: 1.0↵
#Date: 2017-10-02 17:11:27↵
#Fields: date time s-ip cs-method cs-uri-stem cs-uri-query s-port cs-username c-ip cs(User-Agent)
cs(Referer) sc-status sc-substatus sc-win32-status time-taken↵
2017-10-02 17:11:27 fe80::b5d8:132c:cec9:daef%6 RPC_IN_DATA /rpc/rpcproxy.dll 1d4026cb-6730-43bf-
91eb-df80f41c050f@test.com:6001&CorrelationID=<empty>;&RequestId=11d6a78a-7c34-4f43-9400-
ad23b114aa62&cafeReqId=11d6a78a-7c34-4f43-9400-ad23b114aa62; 80 TEST\HealthMailbox418406e
fe80::b5d8:132c:cec9:daef%6 MSRPC - 500 0 0 7990↵
2017-10-02 17:12:57 fe80::a425:345a:7143:3b15%2 POST /powershell
clientApplication=ActiveMonitor;PSVersion=5.1.14393.1715 80 - fe80::a425:345a:7143:3b15%2
Microsoft+WinRM+Client - 500 0 0 11279↵

Note that field names with special characters must be referenced with curly braces (for example, ${s-ip} and
${cs(User-Agent)}).

See also the W3C Extended Log File Format section and the W3C Extended Log File Format (IIS 6.0) and W3C
Extended Log File Examples (IIS 6.0) articles on Microsoft TechNet.
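Conceptually, a W3C parser reads the field names from the #Fields: directive and splits each record on spaces, treating '-' as an empty value. A rough Python sketch of that logic (a simplification of what xm_w3c does; IIS W3C field values contain no embedded spaces):

```python
def parse_w3c(lines):
    """Parse W3C extended log lines, reading the field list from '#Fields:'."""
    fields, events = None, []
    for line in lines:
        if line.startswith('#Fields:'):
            fields = line[len('#Fields:'):].split()
        elif line.startswith('#') or not line.strip():
            continue  # other directives: #Software, #Version, #Date
        elif fields:
            values = line.split(' ')
            # '-' is the W3C placeholder for an empty field
            events.append({f: (None if v == '-' else v)
                           for f, v in zip(fields, values)})
    return events
```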

Example 331. Collecting W3C Format Logs With xm_w3c

This configuration reads from file with im_file and parses with xm_w3c.

nxlog.conf
1 <Extension w3c_parser>
2 Module xm_w3c
3 </Extension>
4
5 <Input iis_w3c>
6 Module im_file
7 File 'C:\inetpub\logs\LogFiles\W3SVC*\u_ex*.log'
8 InputType w3c_parser
9 </Input>

For NXLog Community Edition, the xm_csv module can be used instead for parsing the records.

Example 332. Collecting W3C Format Logs With xm_csv

This configuration parses the logs with the xm_csv module. The header lines are discarded and the $date
and $time fields are parsed in order to set an $EventTime field.

WARNING: The field list must be set according to the configured IIS fields. The fields shown here
correspond with the default field selection in IIS versions 8.5 and 10.

nxlog.conf
 1 <Extension w3c_parser>
 2 Module xm_csv
 3 Fields date, time, s-ip, cs-method, cs-uri-stem, cs-uri-query, \
 4 s-port, cs-username, c-ip, cs(User-Agent), cs(Referer), \
 5 sc-status, sc-substatus, sc-win32-status, time-taken
 6 FieldTypes string, string, string, string, string, string, integer, \
 7 string, string, string, string, integer, integer, integer, \
 8 integer
 9 Delimiter ' '
10 EscapeChar '"'
11 QuoteChar '"'
12 EscapeControl FALSE
13 UndefValue -
14 </Extension>
15
16 <Input iis_w3c>
17 Module im_file
18 File 'C:\inetpub\logs\LogFiles\W3SVC*\u_ex*.log'
19 <Exec>
20 if $raw_event =~ /^#/ drop();
21 else
22 {
23 w3c_parser->parse_csv();
24 $EventTime = parsedate($date + "T" + $time + ".000Z");
25 }
26 </Exec>
27 </Input>

75.3. Configuring IIS HTTP API Error Logs


IIS can be configured to write HTTP Server API error logs. Three registry values control HTTP API
error logging; these values are located under the following registry key:

HKEY_LOCAL_MACHINE\System\CurrentControlSet\Services\HTTP\Parameters

For detailed information about this registry key’s specific values, please see Error logging in HTTP APIs on
Microsoft Support.

Log Sample
#Software: Microsoft HTTP API 2.0↵
#Version: 1.0↵
#Date: 2018-10-01 22:10:02↵
#Fields: date time c-ip c-port s-ip s-port cs-version cs-method cs-uri sc-status s-siteid s-reason
s-queuename↵
2018-10-01 22:10:02 ::1%0 49211 ::1%0 47001 - - - - - Timer_ConnectionIdle -↵
2018-10-01 22:10:02 ::1%0 49212 ::1%0 47001 - - - - - Timer_ConnectionIdle -↵
2018-10-01 23:45:09 172.31.77.6 2094 172.31.77.6 80 HTTP/1.1 GET /qos/1kbfile.txt 503 – ConnLimit↵

Example 333. Collecting IIS HTTP API Logs With xm_w3c

This configuration reads the HTTP API error log with the im_file module and parses it with the xm_w3c
module.

nxlog.conf
1 <Extension w3c_parser>
2 Module xm_w3c
3 </Extension>
4
5 <Input iis_http>
6 Module im_file
7 File 'C:\Windows\System32\LogFiles\HTTPERR\httperr1.log'
8 InputType w3c_parser
9 </Input>

NOTE: The xm_w3c module is not included in NXLog Community Edition, so the xm_csv module should
be used instead.

75.4. IIS Log File Format


The IIS format is line-based, with comma-separated fields and no header. See IIS Log File Format (IIS 6.0) on
TechNet for more information.

Log Sample
::1, HealthMailbox418406e8ac5b4b61a6b731ac4c660553@test.com, 9/28/2017, 14:49:00, W3SVC1, WINEXC,
::1, 7452, 592, 2538, 302, 0, POST, /OWA/auth.owa, &CorrelationID=<empty>;&cafeReqId=728beb5e-98de-
4680-acb2-45968bef533c;&encoding=;,
127.0.0.1, -, 9/28/2017, 14:49:01, W3SVC1, WINEXC, 127.0.0.1, 6798, 2502, 682, 302, 0, GET, /ecp/,
&CorrelationID=<empty>;&cafeReqId=0ed28871-4083-492f-99c2-
2fbdb06a9466;&LogoffReason=NoCookiesGetOrE14AuthPost,

Example 334. Collecting Logs From the IIS Format

This configuration reads from file with im_file and parses the fields with xm_csv. The $Date and $Time
fields are parsed in order to set an $EventTime field.

nxlog.conf
 1 <Extension iis_parser>
 2 Module xm_csv
 3 Fields ClientIPAddress, UserName, Date, Time, ServiceAndInstance, \
 4 ServerName, ServerIPAddress, TimeTaken, ClientBytesSent, \
 5 ServerBytesSent, ServerStatusCode, WindowsStatusCode, RequestType, \
 6 TargetOfOperation, Parameters
 7 FieldTypes string, string, string, string, string, string, string, integer, \
 8 integer, integer, integer, integer, string, string, string
 9 UndefValue -
10 </Extension>
11
12 <Input iis>
13 Module im_file
14 File 'C:\inetpub\logs\LogFiles\W3SVC*\u_in*.log'
15 <Exec>
16 iis_parser->parse_csv();
17 $EventTime = strptime($Date + " " + $Time, "%m/%d/%Y %H:%M:%S");
18 </Exec>
19 </Input>
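A rough Python sketch of the same approach: split on the comma-space delimiter, map the values onto the field list, and combine the date and time fields. The trailing comma on each record (visible in the log sample) is stripped first:

```python
from datetime import datetime

FIELDS = ('ClientIPAddress', 'UserName', 'Date', 'Time', 'ServiceAndInstance',
          'ServerName', 'ServerIPAddress', 'TimeTaken', 'ClientBytesSent',
          'ServerBytesSent', 'ServerStatusCode', 'WindowsStatusCode',
          'RequestType', 'TargetOfOperation', 'Parameters')

def parse_iis_line(line):
    # IIS-format fields are separated by a comma and a space; '-' marks
    # an empty field, as in the UndefValue directive above.
    values = [v if v != '-' else None
              for v in line.rstrip(',\n').split(', ')]
    event = dict(zip(FIELDS, values))
    event['EventTime'] = datetime.strptime(
        event['Date'] + ' ' + event['Time'], '%m/%d/%Y %H:%M:%S')
    return event
```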

75.5. NCSA Common Log File Format


The NCSA format is a line-based plain text format that separates fields with spaces and uses hyphens (-) as
placeholders for empty fields. See the Common & Combined Log Formats section for more information about
this format. See NCSA Common Log File Format (IIS 6.0) on Microsoft TechNet for more information about this
format as used by IIS.

Log Sample
fe80::a425:345a:7143:3b15%2 - - [02/Oct/2017:13:16:18 -0700] "POST
/mapi/emsmdb/?useMailboxOfAuthenticatedUser=true HTTP/1.1" 401 7226
fe80::a425:345a:7143:3b15%2 - TEST\HealthMailboxc0bafd1 [02/Oct/2017:13:16:20 -0700] "POST
/mapi/emsmdb/?useMailboxOfAuthenticatedUser=true HTTP/1.1" 200 1482

Example 335. Collecting NCSA Format Logs

This configuration reads from file with the im_file module and uses a regular expression to parse each
record.

nxlog.conf
 1 <Input iis_ncsa>
 2 Module im_file
 3 File 'C:\inetpub\logs\LogFiles\W3SVC*\u_nc*.log'
 4 <Exec>
 5 if $raw_event =~ /(?x)^(\S+)\ -\ (\S+)\ \[([^\]]+)\]\ \"(\S+)\ (.+)
 6 \ HTTP\/\d\.\d\"\ (\S+)\ (\S+)/
 7 {
 8 $RemoteHostAddress = $1;
 9 if $2 != '-' $UserName = $2;
10 $EventTime = parsedate($3);
11 $HTTPMethod = $4;
12 $HTTPURL = $5;
13 $HTTPResponseStatus = $6;
14 $BytesSent = $7;
15 }
16 </Exec>
17 </Input>

75.6. SMTP Server


IIS 6.0 in Windows Server 2008 R2 includes an SMTP server. This SMTP server has been deprecated beginning
with Windows Server 2012, but it is still available in Windows Server 2016.

WARNING During operation, the IIS SMTP Server pads the W3C log to 64 KiB with NUL characters. When the
SMTP Server stops, it truncates the file to remove the padding, causing im_file to re-read the log file and
generate duplicate events.

IIS SMTP Server logging can be configured as follows.

1. Open Internet Information Services (IIS) 6.0 Manager from Administrative Tools.
2. Right click on the corresponding SMTP Virtual Server and click Properties.

3. Check Enable logging and choose the logging format from the Active log format drop-down menu. The
W3C format is recommended.

4. Click the [ Properties… ] button to configure the log location and other options.

5. If using the W3C format, adjust the logged fields under the Advanced tab. Include the Date and Time fields
and whatever extended properties are required.

Example 336. Collecting W3C Logs From the IIS SMTP Server

The following configuration retrieves W3C logs and parses them using the xm_w3c module.

nxlog.conf
1 <Extension w3c_parser>
2 Module xm_w3c
3 </Extension>
4
5 <Input smtp>
6 Module im_file
7 File 'C:\Windows\System32\LogFiles\SmtpSvc1\ex*.log'
8 InputType w3c_parser
9 </Input>

See the preceding sections for more information about processing the other log formats or using xm_csv for
processing W3C logs with NXLog Community Edition.

75.7. Automatic Retrieval of IIS Site Log Locations


The IIS per-site log file locations can be automatically fetched with a batch/PowerShell polyglot script via the
include_stdout directive. For more details, see the PowerShell Generating Configuration section.

Example 337. Retrieving Log Locations via Script

The following polyglot script should be installed in the NXLog installation (or ROOT) directory. It uses the
WebAdministration PowerShell module to return the configured log path for each site. If IIS is configured to
use one log file per server, the path should instead be configured manually.

WARNING If there are multiple log formats in the log directory due to configuration changes, the wildcard
path should be adjusted to match only those files that are in the corresponding format. For example, for W3C
logging use u_ex*.log in the last line of the script.

get_iis_log_paths.cmd
@( Set "_= (
Rem " ) <#
)
@Echo Off
SetLocal EnableExtensions DisableDelayedExpansion
if defined PROCESSOR_ARCHITEW6432 (
set powershell=%SystemRoot%\SysNative\WindowsPowerShell\v1.0\powershell.exe
) else (
set powershell=powershell.exe
)
%powershell% -ExecutionPolicy Bypass -NoProfile ^
-Command "iex ((gc '%~f0') -join [char]10)"
EndLocal & Exit /B %ErrorLevel%
#>
Import-Module -Name WebAdministration
foreach($Site in $(get-website)) {
$LogDir=$($Site.logFile.directory.replace("%SystemDrive%",$env:SystemDrive))

# WARNING: adjust path to match format (for example, for W3C use `u_ex*.log`).
Write-Output "File '$LogDir\W3SVC$($Site.id)\*.log'" }

nxlog.conf
1 <Extension w3c_parser>
2 Module xm_w3c
3 </Extension>
4
5 <Input iis>
6 Module im_file
7 include_stdout %ROOT%\get_iis_log_paths.cmd
8 InputType w3c_parser
9 </Input>

Chapter 76. Microsoft SharePoint
Microsoft SharePoint Server provides many different types of logs, many of which are configurable. Logs are
written to files, databases, and the Windows EventLog. NXLog can be configured to collect these logs, as is shown
in the following sections.

See Monitoring and Reporting in SharePoint Server on TechNet for more information about SharePoint logging.

76.1. Diagnostic Logs


SharePoint diagnostic logs are handled by the Unified Logging Service (ULS), the primary logging mechanism in
SharePoint. The ULS writes events to the Windows EventLog and to trace log files. The EventLog and trace log
levels of each category or subcategory can be adjusted individually.

The trace log files are generated by and stored locally on each server running SharePoint in the farm, using file
names containing the server hostname and timestamp (HOSTNAME-YYYYMMDD-HHMM.log). SharePoint trace logs
are created at regular intervals and whenever there is an IISRESET. It is common for many trace logs to be
generated within a 24-hour period.

If configured in the farm settings, each SharePoint server also writes trace logs to the logging database. These
logs are written by the Diagnostic Data Provider: Trace Log job. NXLog can be configured to collect these logs
from the logging database.

For more information about diagnostic logging, see Configure diagnostic logging in SharePoint Server on
TechNet.

76.1.1. ULS Trace Log Format


The Unified Logging Service (ULS) trace log files are tab-delimited.

Trace Log Sample


Timestamp ⇥ Process ⇥ TID ⇥ Area
⇥ Category ⇥ EventID ⇥ Level ⇥ Message ⇥ Correlation↵
10/12/2017 16:02:18.30 ⇥ hostcontrollerservice.exe (0x0948) ⇥ 0x191C ⇥ SharePoint Foundation
⇥ Topology ⇥ aup1c ⇥ Medium ⇥ Current app domain: hostcontrollerservice.exe
(1)↵
10/12/2017 16:02:18.30 ⇥ OWSTIMER.EXE (0x11B8) ⇥ 0x1AB4 ⇥ SharePoint Foundation
⇥ Config DB ⇥ azcxo ⇥ Medium ⇥ SPPersistedObjectCollectionCache: Missed
memory and file cache, falling back to SQL query. CollectionType=Children,
ObjectType=Microsoft.SharePoint.Administration.SPWebApplication, CollectionParentId=30801f0f-cca6-
40bc-9f30-5a4608bbb420, Object Count=1, Stack= at
Microsoft.SharePoint.Administration.SPPersistedObjectCollectionCache.Get[T](SPPersistedObjectCollect
ion`1 collection) at
Microsoft.SharePoint.Administration.SPConfigurationDatabase.Microsoft.SharePoint.Administration.ISPP
ersistedStoreProvider.GetBackingList[U](SPPersistedObjectCollection`1 persistedCollection) at
Microsoft.SharePoint.Administration.SPPersistedObjectCollection`1.get_BackingList() at
Microsoft.SharePoint.Administration.SPPersistedObjectCollection`1.<GetEnumeratorImpl>d__0.MoveNext()
at Microsoft.Sh...↵
10/12/2017 16:02:18.30* ⇥ OWSTIMER.EXE (0x11B8) ⇥ 0x1AB4 ⇥ SharePoint Foundation
⇥ Config DB ⇥ azcxo ⇥ Medium ⇥
...arePoint.Utilities.SPServerPerformanceInspector.GetLocalWebApplications() at
Microsoft.SharePoint.Utilities.SPServerPerformanceInspector..ctor() at
Microsoft.SharePoint.Utilities.SPServerPerformanceInspector..cctor() at
Microsoft.SharePoint.Administration.SPTimerStore.InitializeTimer(Int64& cacheVersion, Object&
jobDefinitions, Int32& timerMode, Guid& serverId, Boolean& isServerBusy) at
Microsoft.SharePoint.Administration.SPNativeConfigurationProvider.InitializeTimer(Int64&
cacheVersion, Object& jobDefinitions, Int32& timerMode, Guid& serverId, Boolean& isServerBusy)↵

The ULS log file contains the following fields.

• Timestamp: When the event was logged, in local time


• Process: Image name of the process logging its activity followed by its process ID (PID) inside parentheses
• TID: Thread ID
• Area: Component that produced the event (SharePoint Portal Server, SharePoint Server Search, etc.)
• Category: Detailed category of the event (Topology, Taxonomy, User Profiles, etc.)
• EventID: Internal Event ID
• Level: Log level of the message (Critical, Unexpected, High, etc.)
• Message: The message from the application
• Correlation: Unique GUID-based ID, generated for each request received by the SharePoint server (unique
to each request, not each error)

As shown by the second and third events in the log sample above, long messages span multiple records. In this
case, the timestamp of each subsequent record is followed by an asterisk (*). However, trace log messages are
not guaranteed to appear consecutively within the trace log. See Writing to the Trace Log on MSDN.
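
A minimal Python sketch (illustrative only; not part of NXLog) shows how such continuation records could be merged back into a single message. It assumes a continuation immediately follows its parent record, which, as noted above, the trace log does not strictly guarantee.

```python
# Illustrative sketch only (not part of NXLog): merge ULS trace records whose
# Timestamp ends with an asterisk (*) into the preceding record's Message.
# This simple version assumes a continuation immediately follows its parent.
def join_uls_records(lines):
    """lines: tab-delimited ULS trace records in file order, header removed."""
    records = []
    for line in lines:
        fields = [field.strip() for field in line.rstrip('\n').split('\t')]
        timestamp, message = fields[0], fields[7]
        if timestamp.endswith('*') and records:
            # Strip the '...' continuation markers before appending, since
            # ULS splits long messages mid-word.
            records[-1]['Message'] = (records[-1]['Message'].rstrip('.')
                                      + message.lstrip('.'))
        else:
            records.append({'Timestamp': timestamp, 'Message': message})
    return records
```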

76.1.2. Configuring Diagnostic Logging


Adjust the log levels, trace log retention policy, and trace log location as follows.

WARNING The diagnostic logging settings are farm-wide.

1. Log in to Central Administration and go to Monitoring › Reporting › Configure diagnostic logging.

2. In the Event Throttling section, use the checkboxes to select a set of categories or subcategories for which
to modify the logging level. Expand categories as necessary to view the corresponding subcategories.

3. Set the event log and trace log levels for the selected categories or subcategories.

WARNING Only select the verbose level for troubleshooting, as a large number of logs will be generated.

4. To set other levels for other categories or subcategories, click [ OK ] and repeat from step 1.
5. In the Trace Log section, adjust the trace log path and retention policy as required. The specified log location
must exist on all servers in the farm.

6. Click [ OK ] to apply the settings.

Further steps are required to enable writing trace logs to the logging database. For configuring the logging
database itself (server, name, and authentication), see the Configuring Usage Logging section.

1. Log in to Central Administration and go to Monitoring › Timer Jobs › Review job definitions.

2. Click on the Diagnostic Data Provider: Trace Log job.


3. Click the [ Enable ] button to enable the job.
4. Open the Diagnostic Data Provider: Trace Log job again and click [ Run Now ] to run the job immediately.

76.1.3. Collecting Diagnostic Logs


The xm_csv module can be used to parse the tab-delimited trace log files on the local server.

Example 338. Reading the Trace Log Files

This configuration collects logs from the ULS trace log files and uses xm_csv to parse them. $EventTime
and $Hostname fields are added to the event record. Each event is converted to JSON format and written to
file.

NOTE The defined SHAREPOINT_LOGS path should be set to the trace log file directory configured in the
Configuring Diagnostic Logging section.

nxlog.conf (truncated)
 1 define SHAREPOINT_LOGS C:\Program Files\Common Files\microsoft shared\Web Server \
 2 Extensions\16\LOGS
 3
 4 <Extension json>
 5 Module xm_json
 6 </Extension>
 7
 8 <Extension uls_parser>
 9 Module xm_csv
10 Fields Timestamp, Process, TID, Area, Category, EventID, Level, Message, \
11 Correlation
12 Delimiter \t
13 </Extension>
14
15 <Input trace_file>
16 Module im_file
17 # Use a file mask to read from ULS trace log files only
18 File '%SHAREPOINT_LOGS%\*-????????-????.log'
19 <Exec>
20 # Drop header lines and empty lines
21 if $raw_event =~ /^(\xEF\xBB\xBF|Timestamp)/ drop();
22 else
23 {
24 # Remove extra spaces
25 $raw_event =~ s/ +(?=\t)//g;
26
27 # Parse with uls_parser instance defined above
28 uls_parser->parse_csv();
29 [...]

Output Sample
{
  "EventReceivedTime": "2017-10-12 16:02:20",
  "SourceModuleName": "uls",
  "SourceModuleType": "im_file",
  "Timestamp": "10/12/2017 16:02:18.30",
  "Process": "hostcontrollerservice.exe (0x0948)",
  "TID": "0x191C",
  "Area": "SharePoint Foundation",
  "Category": "Topology",
  "EventID": "aup1c",
  "Level": "Medium",
  "Message": "Current app domain: hostcontrollerservice.exe (1)",
  "EventTime": "2017-10-12 16:02:18",
  "Hostname": "WIN-SHARE.test.com"
}

The im_odbc module can be used to collect diagnostic logs from the farm-wide logging database.

Example 339. Collecting Trace Logs From Database

The following Input configuration collects logs from the ULSTraceLog view in the WSS_UsageApplication
database.

NOTE The datetime data type is not timezone-aware, and the timestamps are stored in UTC. Therefore, an
offset is applied when setting the $EventTime field in the configuration below.

nxlog.conf
 1 <Input trace_db>
 2 Module im_odbc
 3 ConnectionString Driver={ODBC Driver 13 for SQL Server};\
 4 SERVER=SHARESERVE1;DATABASE=WSS_UsageApplication;\
 5 Trusted_Connection=yes
 6 IdType timestamp
 7
 8 # With ReadFromLast and MaxIdSQL, NXLog will start reading from the last
 9 # record when reading from the database for the first time.
10 #ReadFromLast TRUE
11 #MaxIdSQL SELECT MAX(LogTime) AS maxid FROM dbo.ULSTraceLog
12
13 SQL SELECT LogTime AS id, * FROM dbo.ULSTraceLog \
14 WHERE LogTime > CAST(? AS datetime)
15 <Exec>
16 # Set $EventTime with correct time zone, remove incorrect fields
17 $EventTime = parsedate(strftime($id, '%Y-%m-%d %H:%M:%SZ'));
18 delete($id);
19 delete($LogTime);
20 </Exec>
21 </Input>
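
The timestamp handling in this configuration formats the naive UTC value with a trailing Z so that parsedate() interprets it as UTC and converts it to local time. The equivalent operation, sketched in Python for illustration:

```python
from datetime import datetime, timezone

# Illustrative sketch only: SQL Server's `datetime` values carry no time zone,
# but the logging database stores UTC wall-clock time. Attach UTC explicitly,
# then convert to the local zone, mirroring the strftime('...%SZ')/parsedate()
# approach in the NXLog configuration.
def utc_naive_to_local(naive):
    return naive.replace(tzinfo=timezone.utc).astimezone()

stored = datetime(2017, 10, 17, 23, 15, 26)  # value as stored (UTC, naive)
local = utc_naive_to_local(stored)
```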

See the Windows EventLog section below for an example configuration that reads events from the Windows
EventLog.

76.2. Usage and Health Data Logs


SharePoint also collects usage and health data to show how it is used. The system generates health and
administrative reports from these logs. Usage and health data logs are written as tab-delimited data to various
*.usage files in the configured log location path, and also to the logging database.

Log Sample
FarmId ⇥ UserLogin ⇥ SiteSubscriptionId ⇥ TimestampUtc ⇥ CorrelationId ⇥ Action ⇥ Target ⇥
Details↵
42319181-e881-44f1-b422-d7ab5f8b0117 ⇥ TEST\Administrator ⇥ 00000000-0000-0000-0000-000000000000 ⇥
2017-10-17 23:15:26.667 ⇥ 00000000-0000-0000-0000-000000000000 ⇥ Administration.Feature.Install ⇥
AccSrvRestrictedList ⇥ {"Id":"a4d4ee2c-a6cb-4191-ab0a-21bb5bde92fb"}↵
42319181-e881-44f1-b422-d7ab5f8b0117 ⇥ TEST\Administrator ⇥ 00000000-0000-0000-0000-000000000000 ⇥
2017-10-17 23:15:26.839 ⇥ 00000000-0000-0000-0000-000000000000 ⇥ Administration.Feature.Install ⇥
ExpirationWorkflow ⇥ {"Id":"c85e5759-f323-4efb-b548-443d2216efb5"}↵

For more information, see Overview of monitoring in SharePoint Server on TechNet.

76.2.1. Configuring Usage Logging
Usage and health data collection can be enabled and configured as follows. For more information about
configuring usage and health data logging, see Configure usage and health data collection in SharePoint Server
on TechNet.

WARNING The usage and health data collection settings are farm-wide.

1. Log in to Central Administration and go to Monitoring › Reporting › Configure usage and health data
collection.
2. In the Usage Data Collection section, check Enable usage data collection to enable it.
3. In the Event Selection section, use the checkboxes to select the required event categories. It is
recommended that only those categories be enabled for which regular reports are required.

4. In the Usage Data Collection Settings section, specify the path for the usage log files. The specified log
location must exist on all servers in the farm.
5. In the Health Data Collection section, check Enable health data collection to enable it. Click Health
Logging Schedule to edit the job definitions for the Microsoft SharePoint Foundation Timer service.
6. Click the Log Collection Schedule link to edit the job definitions for the Microsoft SharePoint Foundation
Usage service.
7. In the Logging Database Server section, adjust the authentication method as required. To change the
database server and name, see Log usage data in a different logging database by using Windows PowerShell
on TechNet.

8. Click [ OK ] to apply the settings.

76.2.2. Collecting Usage Logs
The xm_csv module can be used to parse the tab-delimited usage and health log files on the local server.

Example 340. Reading Usage Log Files

This configuration collects logs from the AdministrativeActions usage log file (see Using Administrative
Actions logging in SharePoint Server 2016 on TechNet) and uses xm_csv to parse them. $EventTime and
$Hostname fields are added to the event record. Each event is converted to JSON format and written to file.

NOTE The defined SHAREPOINT_LOGS path should be set to the usage log file directory configured in the
Configuring Usage Logging section.

NOTE Unlike the diagnostic/trace logs, the various usage/health data categories generate logs with differing
field sets. Therefore it is not practical to parse multiple types of usage/health logs with a single xm_csv
parser.

nxlog.conf (truncated)
 1 define SHAREPOINT_LOGS C:\Program Files\Common Files\microsoft shared\Web Server \
 2 Extensions\16\LOGS
 3
 4 <Extension json>
 5 Module xm_json
 6 </Extension>
 7
 8 <Extension admin_actions_parser>
 9 Module xm_csv
10 Fields FarmId, UserLogin, SiteSubscriptionId, TimestampUtc, \
11 CorrelationId, Action, Target, Details
12 Delimiter \t
13 </Extension>
14
15 <Input admin_actions_file>
16 Module im_file
17 # Use a file mask to read from the USAGE files only
18 File '%SHAREPOINT_LOGS%\AdministrativeActions\*.usage'
19 <Exec>
20 # Drop header lines and empty lines
21 if $raw_event =~ /^(\xEF\xBB\xBF|FarmId)/ drop();
22 else
23 {
24 # Parse with parser instance defined above
25 admin_actions_parser->parse_csv();
26
27 # Set $EventTime field
28 $EventTime = parsedate($TimestampUtc + "Z");
29 [...]

Output Sample
{
  "EventReceivedTime": "2017-10-17 20:46:14",
  "SourceModuleName": "admin_actions",
  "SourceModuleType": "im_file",
  "FarmId": "42319181-e881-44f1-b422-d7ab5f8b0117",
  "UserLogin": "TEST\\Administrator",
  "SiteSubscriptionId": "00000000-0000-0000-0000-000000000000",
  "TimestampUtc": "2017-10-17 23:15:26.667",
  "CorrelationId": "00000000-0000-0000-0000-000000000000",
  "Action": "Administration.Feature.Install",
  "Target": "AccSrvRestrictedList",
  "Details": {
  "Id": "a4d4ee2c-a6cb-4191-ab0a-21bb5bde92fb"
  },
  "EventTime": "2017-10-17 16:15:26",
  "Hostname": "WIN-SHARE.test.com"
}

The im_odbc module can be used to collect usage and health logs from the farm-wide logging database.

Example 341. Collecting Usage Logs From Database

The following Input configuration collects Administrative Actions logs from the AdministrativeActions view
in the WSS_UsageApplication database.

NOTE The datetime data type is not timezone-aware, and the timestamps are stored in UTC. Therefore, an
offset is applied when setting the $EventTime field in the configuration below.

nxlog.conf
 1 <Input admin_actions_db>
 2 Module im_odbc
 3 ConnectionString Driver={ODBC Driver 13 for SQL Server};\
 4 SERVER=SHARESERVE1;DATABASE=WSS_UsageApplication;\
 5 Trusted_Connection=yes
 6 IdType timestamp
 7
 8 # With ReadFromLast and MaxIdSQL, NXLog will start reading from the last
 9 # record when reading from the database for the first time.
10 #ReadFromLast TRUE
11 #MaxIdSQL SELECT MAX(LogTime) AS maxid FROM dbo.AdministrativeActions
12
13 SQL SELECT LogTime AS id, * FROM dbo.AdministrativeActions \
14 WHERE LogTime > CAST(? AS datetime)
15 <Exec>
16 # Set $EventTime with correct time zone, remove incorrect fields
17 $EventTime = parsedate(strftime($id, '%Y-%m-%d %H:%M:%SZ'));
18 delete($id);
19 delete($LogTime);
20 </Exec>
21 </Input>

See the Windows EventLog section for an example configuration that reads events from the Windows EventLog.

76.3. Audit Logs
SharePoint Information Management provides an audit feature that allows tracking of user actions on a site’s
content. The audit events are stored in the dbo.AuditData table in the WSS_Content database. The events can
be collected via the SharePoint API or by reading the database directly.

Audit logging is disabled by default, and can be enabled on a per-site basis. To enable audit logging, follow these
steps. For more details, see the Configure audit settings for a site collection article on Office Support.

1. Log in to Central Administration and go to Security › Information policy › Configure Information
Management Policy.
2. Verify that the Auditing policy is set to Available.
3. On the site collection home page, click Site actions (gear icon), then Site settings.

4. On the Site Settings page, in the Site Collection Administration section, click Site collection audit
settings.

NOTE If the Site Collection Administration section is not shown, make sure you have adequate permissions.

5. Set audit log trimming settings, select the events to audit, and click [ OK ].

76.3.1. Reading Audit Logs via the API


A PowerShell script can be used to collect audit logs via SharePoint’s API.

In order for NXLog to have SharePoint Shell access when running as a service, run the following PowerShell
commands. This will add the NT AUTHORITY\SYSTEM user to the SharePoint_Shell_Access role for the
SharePoint configuration database.

PS> Add-PSSnapin Microsoft.SharePoint.Powershell
PS> Add-SPShellAdmin -UserName "NT AUTHORITY\SYSTEM"

Example 342. Collecting Audit Logs via the SharePoint API

This configuration collects audit events via SharePoint’s API with the auditlog.ps1 PowerShell script. The
script also adds the following fields (performing lookups as required): $ItemName, $Message, $SiteURL, and
$UserName. Audit logs are collected from all available sites and the site list is updated each time the logs
are collected. See the options in the script header.

nxlog.conf
 1 <Extension _json>
 2 Module xm_json
 3 </Extension>
 4
 5 envvar systemroot
 6 <Input audit_powershell>
 7 Module im_exec
 8 Command "%systemroot%\System32\WindowsPowerShell\v1.0\powershell.exe"
 9 Arg "-ExecutionPolicy"
10 Arg "Bypass"
11 Arg "-NoProfile"
12 Arg "-File"
13 Arg "C:\auditlog.ps1"
14 <Exec>
15 parse_json();
16 $EventTime = parsedate($EventTime);
17 </Exec>
18 </Input>

Event Sample
{
  "EventReceivedTime": "2018-03-01 02:12:45",
  "SourceModuleName": "audit_ps",
  "SourceModuleType": "im_exec",
  "UserID": 18,
  "LocationType": 0,
  "EventName": null,
  "MachineName": null,
  "ItemName": null,
  "EventData": "<Version><AllVersions/></Version><Recycle>1</Recycle>",
  "Event": 4,
  "UserName": "i:0#.w|test\\test",
  "SourceName": null,
  "SiteURL": "http://win-share",
  "EventTime": "2018-03-01 02:12:12",
  "EventSource": 0,
  "Message": "The audited object is deleted.",
  "DocLocation": "Shared Documents/document.txt",
  "ItemID": "48341996-7844-4842-bef6-94b43ace0582",
  "SiteID": "51108732-0903-4721-aae7-0f9fb5aebfc2",
  "MachineIP": null,
  "AppPrincipalID": 0,
  "ItemType": 1
}

auditlog.ps1 (truncated)
# This script can be used with NXLog to fetch Audit logs via the SharePoint
# API. See the configurable options below. Based on:
# <http://shokochino-sharepointexperience.blogspot.ch/2013/05/create-auditing-reports-in-
sharepoint.html>

#Requires -Version 3

# The timestamp is saved to this file for resuming.


$CacheFile = 'C:\nxlog_sharepoint_auditlog_position.txt'

# The database is queried at this interval in seconds.


$PollInterval = 10

# Allow this many seconds for new logs to be written to database.


$ReadDelay = 30

# Use this to enable debug logging (for testing outside of NXLog).


#$DebugPreference = 'Continue'
################################################################################
[...]

76.3.2. Reading Audit Logs From the Database


It is also possible to read the audit logs directly from the SharePoint database.

Example 343. Collecting Audit Logs Directly From Database

This configuration collects audit events from the AuditData table in the WSS_Content database.

NOTE The datetime data type is not timezone-aware, and the timestamps are stored in UTC. Therefore, an
offset is applied when setting the $EventTime field in the configuration below.

nxlog.conf
 1 <Input audit_db>
 2 Module im_odbc
 3 ConnectionString Driver={ODBC Driver 13 for SQL Server}; \
 4 Server=SHARESERVE1; Database=WSS_Content; \
 5 Trusted_Connection=yes
 6 IdType timestamp
 7
 8 # With ReadFromLast and MaxIdSQL, NXLog will start reading from the last
 9 # record when reading from the database for the first time.
10 #ReadFromLast TRUE
11 #MaxIdSQL SELECT MAX(Occurred) AS maxid FROM dbo.AuditData
12
13 SQL SELECT Occurred AS id, * FROM dbo.AuditData \
14 WHERE Occurred > CAST(? AS datetime)
15 <Exec>
16 # Set $EventTime with correct time zone, remove incorrect fields
17 $EventTime = parsedate(strftime($id, '%Y-%m-%d %H:%M:%SZ'));
18 delete($id);
19 delete($Occurred);
20 </Exec>
21 </Input>

76.4. Windows EventLog
SharePoint will generate Windows event logs according to the diagnostic log levels configured (see the Diagnostic
Logs section). NXLog can be configured to collect logs from the Windows EventLog as shown below. For more
information about collecting Windows EventLog events with NXLog, see the Windows Event Log chapter.

Example 344. Collecting SharePoint Events From the EventLog

This configuration uses the im_msvistalog module to collect all logs from four SharePoint crimson channels,
as well as Application and System channel events of Warning level or higher. The Application channel will
include other non-SharePoint events. There may be other SharePoint events generated which will not be
collected with this query, depending on the configuration and the channels used.

nxlog.conf
 1 <Input eventlog>
 2 Module im_msvistalog
 3 <QueryXML>
 4 <QueryList>
 5 <Query Id="0" Path="Application">
 6 <Select Path="Application">
 7 *[System[(Level=1 or Level=2 or Level=3)]]</Select>
 8 <Select Path="System">
 9 *[System[(Level=1 or Level=2 or Level=3)]]</Select>
10 <Select Path="Microsoft-Office Server-Search/Operational">
11 *</Select>
12 <Select Path="Microsoft-Office-EduServer Diagnostics">*</Select>
13 <Select Path="Microsoft-SharePoint Products-Shared/Operational">
14 *</Select>
15 <Select Path="Microsoft-SharePoint Products-Shared/Audit">*</Select>
16 </Query>
17 </QueryList>
18 </QueryXML>
19 </Input>

76.5. IIS Logs


SharePoint uses the Internet Information Server (IIS) to serve the configured sites as well as the Central
Administration site. IIS generates its own logs.

See the Microsoft IIS chapter for more information about collecting events from IIS with NXLog.

Chapter 77. Microsoft SQL Server
NXLog can be integrated with SQL Server in several ways. The server error log file can be read and parsed. SQL
Server Auditing can be configured for a database and the logs collected. It is also possible to read logs from or
write logs to databases hosted by SQL Server. The last section provides some additional information about
setting up ODBC for connecting to a database.

77.1. Error Log


Microsoft SQL Server writes its error logs to a UTF-16LE encoded file using a line-based format. Log messages
may span multiple lines. It is recommended to normalize the encoding to UTF-8 as shown in the examples below.
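
For reference, the shape of an ERRORLOG record can be sketched with a Python regular expression mirroring the ERRORLOG_EVENT pattern used in the examples below (illustrative only; the sample line here is hypothetical). A line that does not begin with a timestamp is a continuation of the previous message.

```python
import re

# Illustrative sketch only, mirroring the ERRORLOG_EVENT expression used in
# the NXLog examples: timestamp, source column, then the message. The \ufeff
# accounts for a possible byte-order mark at the start of the decoded file.
ERRORLOG_EVENT = re.compile(
    r'^\ufeff?'
    r'(?P<EventTime>\d+-\d+-\d+ \d+:\d+:\d+\.\d+)'
    r' (?P<Source>\S+)\s+(?P<Message>.+)$',
    re.S,
)

line = ('2017-10-12 16:02:18.30 Server      '
        'Microsoft SQL Server starting up.')  # hypothetical sample line
match = ERRORLOG_EVENT.match(line)
```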

Example 345. Reading From the SQL Server Error Log

This example uses an xm_charconv input reader (via the LineReader directive) to convert the input to UTF-8 encoding. Events
spanning multiple lines are joined and each event is parsed into $EventTime, $Source, and $Message
fields.

nxlog.conf (truncated)
 1 <Extension charconv>
 2 Module xm_charconv
 3 LineReader UTF-16LE
 4 </Extension>
 5
 6 define ERRORLOG_EVENT /(?x)^(\xEF\xBB\xBF)? \
 7 (?<EventTime>\d+-\d+-\d+\ \d+:\d+:\d+.\d+) \
 8 \ (?<Source>\S+)\s+(?<Message>.+)$/s
 9 <Input mssql_errorlog>
10 Module im_file
11 File 'C:\Program Files\Microsoft SQL Server\' + \
12 'MSSQL14.MSSQLSERVER\MSSQL\Log\ERRORLOG'
13 InputType charconv
14 <Exec>
15 # Attempt to match regular expression
16 if $raw_event =~ %ERRORLOG_EVENT%
17 {
18 # Check if previous lines were saved
19 if defined(get_var('saved'))
20 {
21 $tmp = $raw_event;
22 $raw_event = get_var('saved');
23 set_var('saved', $tmp);
24 delete($tmp);
25 # Process and send previous event
26 $raw_event =~ %ERRORLOG_EVENT%;
27 $EventTime = parsedate($EventTime);
28 }
29 [...]

NOTE Because there is no closing/footer line for the events, a log message is kept in the buffers, and not
forwarded, until a new log message is read.

Example 346. Reading From the SQL Server Error Log (NXLog Community Edition)

This example uses the xm_charconv module’s convert() function to convert the character set to UTF-8. For log
messages that span multiple lines, an event is created for each line. Variables are used to retain the same
$EventTime and $Source values for subsequent events in this case.

nxlog.conf (truncated)
 1 <Extension _charconv>
 2 Module xm_charconv
 3 </Extension>
 4
 5 <Input mssql_errorlog>
 6 Module im_file
 7 File 'C:\Program Files\Microsoft SQL Server\' + \
 8 'MSSQL14.MSSQLSERVER\MSSQL\Log\ERRORLOG'
 9 <Exec>
10 # Convert character encoding
11 $raw_event = convert($raw_event, 'UTF-16LE', 'UTF-8');
12 # Discard empty lines
13 if $raw_event == '' drop();
14 # Attempt to match regular expression
15 else if $raw_event =~ /(?x)^(?<EventTime>\d+-\d+-\d+\ \d+:\d+:\d+.\d+)
16 \ (?<Source>\S+)\s+(?<Message>.+)$/s
17 {
18 # Convert $EventTime field to datetime type
19 $EventTime = parsedate($EventTime);
20 # Save $EventTime and $Source; may be needed for next event
21 set_var('last_EventTime', $EventTime);
22 set_var('last_Source', $Source);
23 }
24 # If regular expression does not match, this is a multi-line event
25 else
26 {
27 # Use the entire line for the $Message field
28 $Message = $raw_event;
29 [...]

77.2. Audit Log


Microsoft SQL Server 2008 introduced a new feature that provided a much-needed solution for
security-oriented customers: SQL Server Auditing. With this feature, the server records all changes to the
database and access groups. These logs are stored in a proprietary file format or in the Application/Security
EventLog.

While in earlier versions these logs had to be generated by SQL Trace or a custom monitoring process, it is now
possible to start recording audit logs with a few clicks in Management Studio or a relatively simple SQL script.

The following instructions require a Microsoft SQL Server with auditing support and the Microsoft SQL
Management Studio. Consult the relevant documentation below to determine whether "Fine Grained Auditing" is
available for your SQL Server version and edition.

• Features Supported by the Editions of SQL Server 2008 R2


• Features Supported by the Editions of SQL Server 2012
• Features Supported by the Editions of SQL Server 2014
• Editions and supported features of SQL Server 2016
• Editions and supported features of SQL Server 2017

For more information, see SQL Server Audit (Database Engine) on Microsoft Docs.

77.2.1. Configuring SQL Server for Auditing


To set up SQL auditing, create a Server Audit object that describes the target for audit data (a binary file or
EventLog channel). Then add either a Server Audit Specification object or a Database Audit Specification
object (or both) so SQL Server can start producing meaningful data into the defined Server Audit object.
Generally, to log SQL statements, set up a Database Audit Specification object. To trace server events, such as
logon attempts or server principal changes, define a Server Audit Specification.

77.2.1.1. Creating a Server Audit Object


GUI
In Management Studio, after connecting to the database instance:

1. Click on the plus (+) next to Security.

2. Right-click on Audits and select New Audit. The Create Audit dialog box appears. Choose a name for
the audit object.
3. In the Audit destination drop-down list, choose Security log or File (for security reasons, Application
log is not recommended as a target). For File, enter a file path and configure log rotation.
4. Click OK. The Server Audit object is created. Note the red arrow next to the newly created object’s name
indicating this is a disabled object. To enable it, right click on the audit object and select Enable audit (in
case of an error, see Checking SQL Audit Generation below).

SQL script
To instead create the Server Audit object via SQL, run the CREATE SERVER AUDIT and ALTER SERVER AUDIT
commands. For example:

CREATE SERVER AUDIT myaudit
TO <SECURITY LOG|FILE>
WITH (QUEUE_DELAY=1000, ON_FAILURE=CONTINUE);
ALTER SERVER AUDIT myaudit WITH (STATE=ON)

77.2.1.2. Creating a Server Audit Specification


GUI
In Management Studio, after connecting to the database instance:

1. Click on the plus (+) next to Security.

2. Right-click on Server Audit Specifications and select New Audit. The Create Audit dialog box appears.
3. Choose a Server Audit object (the one defined earlier) and select the actions to be reported.
4. Click OK. The Server Audit Specification object is created. Note the red arrow next to the newly created
object’s name indicating this is a disabled object. To enable it, right click on the audit object and select
Enable audit.

SQL script
Alternatively, use the CREATE SERVER AUDIT SPECIFICATION and ALTER SERVER AUDIT SPECIFICATION
commands. For example:

CREATE SERVER AUDIT SPECIFICATION srv_audit_spec
FOR SERVER AUDIT myaudit
  ADD (FAILED_LOGIN_GROUP);
ALTER SERVER AUDIT SPECIFICATION srv_audit_spec
FOR SERVER AUDIT myaudit
WITH (STATE=ON);

77.2.1.3. Creating a Database Audit Specification
GUI
In Management Studio, after connecting to the database instance:

1. Click on the plus (+) next to Databases.

2. Click on the plus (+) next to the database to be audited, then click on the plus (+) next to Security under
the database.
3. Right-click on Database Audit Specifications and select New Audit. The Create Audit dialog box
appears.
4. Choose a Server Audit object (the one defined earlier) and select the actions to be reported.
5. Click OK. The Database Audit Specification object is created.

SQL script
Alternatively, use the CREATE DATABASE AUDIT SPECIFICATION and ALTER DATABASE AUDIT
SPECIFICATION commands. For example:

CREATE DATABASE AUDIT SPECIFICATION mydb_audit_spec
FOR SERVER AUDIT myaudit
  ADD (SELECT ON OBJECT::[Production].[Product] BY [Peter]);
ALTER DATABASE AUDIT SPECIFICATION mydb_audit_spec
FOR SERVER AUDIT myaudit
  ADD (SELECT
    ON OBJECT::dbo.Table1
    BY dbo)
  WITH (STATE = ON);
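To review which audit specifications exist and whether they are enabled, the corresponding catalog views can be queried. This is a sketch using the standard SQL Server catalog views; note that sys.database_audit_specifications must be queried in the context of the audited database.

SELECT name, is_state_enabled FROM sys.server_audit_specifications;
SELECT name, is_state_enabled FROM sys.database_audit_specifications;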

77.2.2. Checking SQL Audit Generation


Audit file
If File was chosen as the audit target, check if the file is created and grows when the audit criteria are met.
Other than incorrect NTFS permissions, there should not be any issue with this type of log target.
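The file contents can also be inspected directly with the sys.fn_get_audit_file function. This is a sketch; the path pattern is an example and must match the FILEPATH configured for the audit.

SELECT TOP 10 event_time, action_id, statement
  FROM sys.fn_get_audit_file('C:\audit_log\*.sqlaudit', default, default)
 ORDER BY event_time DESC;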

EventLog
Check the Security and Application EventLogs to see if SQL Auditing is working properly. If there are no
related events in the Security log (though it was set as the destination), check the Application log too. Look for
event ID 33204 in the Application log indicating SQL Server’s failure to write to the Security log.

This is a registry-related permission error: the account running the SQL Server instance is unable to create an
entry under HKLM\SYSTEM\CurrentControlSet\Services\EventLog\Security and fails with event ID 33204.

This error can be fixed as follows.

1. Run regedit.

2. Grant Full Control permission for the account running the SQL Server instance (for example, Network
Service or a named account) to HKLM\SYSTEM\CurrentControlSet\Services\EventLog\Security.
3. Disable, then re-enable the Server Audit; this creates a sub-key, MSSQLSERVER$AUDIT.

4. Optionally, remove the Full Control permission that was just added. This permission is no longer
required now that the sub-key has been created.

77.2.3. Configuring Collection of SQL Audit Logs


After a working SQL audit is in place, NXLog can be configured to read the logs from the Server Audit object. If it
is configured with a Security log destination, the events can be read from the EventLog. If it is configured with a
File target, the events can be queried via ODBC.

77.2.3.1. Reading From Windows EventLog


The im_msvistalog module can be used to read events from the Security log.

Example 347. Reading Audit Events From the EventLog

In this example, events with ID 33205 are retrieved and some additional fields are parsed from $Message.

Sample Event
2011-11-11 11:00:00 sql2008-ent AUDIT_SUCCESS 33205 Audit event: event_time:2011-11-11
11:00:00.0000000↵
sequence_number:1↵
action_id:SL↵
succeeded:true↵
permission_bitmask:1↵
is_column_permission:true↵
session_id:57↵
server_principal_id:264↵
database_principal_id:1↵
target_server_principal_id:0↵
target_database_principal_id:0↵
object_id:2105058535↵
class_type:U↵
session_server_principal_name:SQL2008-ENT\myuser↵
server_principal_name:SQL2008-ENT\myuser↵
server_principal_sid:0105000000000002120000001aaaaaabbbbcccccddddeeeeffffffff↵
database_principal_name:dbo↵
target_server_principal_name:↵
target_server_principal_sid:↵
target_database_principal_name:↵
server_instance_name:SQL2008-ENT↵
database_name:logindb↵
schema_name:dbo↵
object_name:users↵
statement:select username nev from dbo.users;↵
additional_information:↵

nxlog.conf
 1 <Input in>
 2 Module im_msvistalog
 3 <QueryXML>
 4 <QueryList>
 5 <Query Id="0" Path="Security">
 6 <Select Path="Security">*[System[(EventID=33205)]]</Select>
 7 </Query>
 8 </QueryList>
 9 </QueryXML>
10 <Exec>
11 if $Message =~ /action_id:(.*)/ $ActionId = $1;
12 if $Message =~ /session_server_principal_name:(.*)/ $SessionSPN = $1;
13 if $Message =~ /database_principal_name:(.*)/ $DBPrincipal = $1;
14 if $Message =~ /server_instance_name:(.*)/ $ServerInstance = $1;
15 if $Message =~ /database_name:(.*)/ $DBName = $1;
16 if $Message =~ /schema_name:(.*)/ $SchemaName = $1;
17 if $Message =~ /object_name:(.*)/ $ObjectName = $1;
18 if $Message =~ /statement:(.*)/ $Statement = $1;
19 </Exec>
20 </Input>

77.2.3.2. Reading From the Audit File
The audit file is stored in a binary format and is read with the sys.fn_get_audit_file function. NXLog can be
configured to collect the audit logs via ODBC with the im_odbc module. For more information about ODBC (and
the ConnectionString directive), see the Setting up ODBC section.

Example 348. Reading Audit Events From SQL Audit File

The configuration below uses the im_odbc module to collect audit logs via ODBC. A corresponding name
for the action_id is included via a lookup performed on the sys.dm_audit_actions table (see Translating
the action_id Field below for more information).

NOTE This configuration has been tested with SQL Server 2017.

nxlog.conf
 1 <Input in>
 2 Module im_odbc
 3 ConnectionString Driver={ODBC Driver 17 for SQL Server}; Server=MSSQL-HOST; \
 4 Trusted_Connection=yes; DATABASE=TESTDB_doc81;
 5 PollInterval 5
 6 IdType timestamp
 7 SQL SELECT event_time AS 'id', f.*, a.name AS action_name \
 8 FROM fn_get_audit_file('C:\audit_log\Audit-*.sqlaudit', default, \
 9 default) AS f \
10 INNER JOIN sys.dm_audit_actions AS a \
11 ON f.action_id = a.action_id \
12 WHERE event_time > ?
13 <Exec>
14 delete($id);
15 rename_field($event_time, $EventTime);
16 </Exec>
17 </Input>

77.2.3.3. Translating the action_id Field


The action_id field in the received events contains the ID of the logged operation (see SQL Server Audit Records
on Microsoft Docs). The sys.dm_audit_actions view returns a row for every audit action that can be reported,
including a related action_id field and a human-readable action name. The Reading Audit Events From SQL
Audit File example above includes the action name field for each audit event. To get a complete list of audit
actions and associated details, execute this query and save the result for further reference.

SELECT DISTINCT action_id, name, class_desc, parent_class_desc
  FROM sys.dm_audit_actions;

77.3. Reading Logs From a Database


NXLog provides the im_odbc module for reading logs from a database via ODBC. For more information about
ODBC (and the ConnectionString directive), see the Setting up ODBC section.

The SQL directive requires a SELECT statement for collecting logs. An id field must be returned, and must be used
to limit the results of the SELECT statement. Also, some data types may need special handling in order to be used
with NXLog. Continue to the following sections for more details.

77.3.1. Configuring the ID


The id field is used to track the position while collecting logs. It allows the im_odbc module to repeatedly poll for
new log records without collecting records more than once. In a simple scenario, the id is an auto-increment
integer field in a table, but several other data types are supported too (see the IdType directive). It is also
possible to generate the id field in the SELECT statement rather than using a field directly.
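As a sketch of a generated id (the dbo.logs table and its columns are hypothetical; DATEDIFF_BIG requires SQL Server 2016 or later, and IdType would be set to integer):

SELECT DATEDIFF_BIG(ms, '1970-01-01', EventTime) AS id, *
  FROM dbo.logs
 WHERE DATEDIFF_BIG(ms, '1970-01-01', EventTime) > ?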

Writing a working SELECT statement for the SQL directive requires consideration of the id field in two ways.

1. The SELECT statement must return an id field. While there could be a field named id in a table, it is more
common to alias a field as id with the AS clause.
2. The SELECT statement must limit the results by including a WHERE clause. The WHERE clause should include
a question mark (?) which will be substituted with the highest value of the id that was previously seen by the
module instance.

The ways that the id can be generated are limited only by the database and the SQL language. However, the
following examples show the basic use of the int and datetime2 data types, as well as three which may require
special handling: datetimeoffset, datetime, and timestamp (or rowversion).

Example 349. Reading Logs by int ID

In this example, im_odbc collects logs from a table with an auto-increment (identity) int ID field.

Sample Table
CREATE TABLE dbo.test1 (
  RecordID int IDENTITY(1,1) NOT NULL,
  EventTime datetime2 NOT NULL,
  Message varchar(100) NOT NULL
);

INSERT INTO dbo.test1 (EventTime, Message)
VALUES ('2018-01-01T00:00:00', 'This is a test message');
GO

nxlog.conf
1 <Input reading_integer_id>
2 Module im_odbc
3 ConnectionString Driver={ODBC Driver 17 for SQL Server}; Server=MSSQL-HOST; \
4 Trusted_Connection=yes; Database=TESTDB
5 IdType integer
6 SQL SELECT RecordID AS id, * FROM dbo.test1 WHERE RecordID > ?
7 Exec delete($id);
8 </Input>

Event Fields
{
  "RecordID": 1,
  "EventTime": "2017-12-31T23:00:00.000000Z",
  "Message": "This is a test message",
  "EventReceivedTime": "2018-04-01T10:40:54.313071Z",
  "SourceModuleName": "reading_integer_id",
  "SourceModuleType": "im_odbc"
}

Example 350. Reading Logs by datetime2 ID

This example shows a table with a datetime2 timestamp field, which im_odbc is configured to use as the id.

Sample Table
CREATE TABLE dbo.test1 (
  EventTime datetime2 NOT NULL,
  Message varchar(100) NOT NULL
);

INSERT INTO dbo.test1 (EventTime, Message)
VALUES ('2018-01-01T00:00:00', 'This is a test message');
GO

nxlog.conf
1 <Input reading_datetime2_id>
2 Module im_odbc
3 ConnectionString Driver={ODBC Driver 17 for SQL Server}; Server=MSSQL-HOST; \
4 Trusted_Connection=yes; Database=TESTDB
5 IdType timestamp
6 SQL SELECT EventTime AS id, * FROM dbo.test1 WHERE EventTime > ?
7 Exec delete($id);
8 </Input>

Example 351. Reading Logs by datetimeoffset ID

This example collects logs from a table with a datetimeoffset field used as the id. The datetimeoffset type
stores both a timestamp and an associated time-zone offset, and is not directly supported by im_odbc.
Thus, the CAST() function is used to convert the value to a datetime2 type.

Sample Table
CREATE TABLE dbo.test1 (
  EventTime datetimeoffset NOT NULL,
  Message varchar(100) NOT NULL
);

INSERT INTO dbo.test1 (EventTime, Message)
VALUES ('2018-01-01T00:00:00+01:00', 'This is a test message');
GO

nxlog.conf
1 <Input reading_datetimeoffset_id>
2 Module im_odbc
3 ConnectionString Driver={ODBC Driver 17 for SQL Server}; Server=MSSQL-HOST; \
4 Trusted_Connection=yes; Database=TESTDB
5 IdType timestamp
6 SQL SELECT CAST(EventTime AS datetime2) AS id, Message FROM dbo.test1 \
7 WHERE EventTime > ?
8 Exec delete($id);
9 </Input>

Example 352. Reading Logs by datetime ID

This example shows a table with a datetime type timestamp which will be used as the id. The datetime type
has been deprecated, and due to a change in the internal representation of datetime values in SQL Server,
some timestamp values (such as the one shown below) cannot be compared correctly without an explicit
casting in the WHERE clause. Without the CAST(), SQL Server may return certain records repeatedly (at
each PollInterval) until a later datetime value is added to the table.

Sample Table
CREATE TABLE dbo.test1 (
  EventTime datetime NOT NULL,
  Message varchar(100) NOT NULL
);

INSERT INTO dbo.test1 (EventTime, Message)
VALUES ('2018-01-01T00:00:00.333', 'This is a test message');
GO

nxlog.conf
1 <Input reading_datetime_id>
2 Module im_odbc
3 ConnectionString Driver={ODBC Driver 17 for SQL Server}; Server=MSSQL-HOST; \
4 Trusted_Connection=yes; Database=TESTDB
5 IdType timestamp
6 SQL SELECT EventTime AS id, * FROM dbo.test1 \
7 WHERE EventTime > CAST(? as datetime)
8 Exec delete($id);
9 </Input>

Example 353. Reading Logs by timestamp (rowversion) ID

This example shows a table with a timestamp (or rowversion, see rowversion (Transact-SQL) on Microsoft
Docs) type field which is used as the id. Notice that the IdType directive is set to integer rather than
timestamp, because the timestamp type is not actually a timestamp.

Sample Table
CREATE TABLE dbo.test1 (
  RowVersion timestamp NOT NULL,
  Message varchar(100) NOT NULL
);

INSERT INTO dbo.test1 (Message)
VALUES ('This is a test message');
GO

nxlog.conf
1 <Input reading_rowversion_id>
2 Module im_odbc
3 ConnectionString Driver={ODBC Driver 17 for SQL Server}; Server=MSSQL-HOST; \
4 Trusted_Connection=yes; Database=TESTDB
5 IdType integer
6 SQL SELECT RowVersion AS id, * FROM dbo.test1 WHERE RowVersion > ?
7 Exec delete($id);
8 </Input>

77.3.2. Handling Unsupported Data Types
Some of SQL Server’s data types are not directly supported by im_odbc. If an im_odbc instance is configured to
read one of these types, it will log an unsupported odbc type error to the internal log. In this case, the CAST()
function should be used in the SELECT statement to convert the field to a type that im_odbc supports.

Example 354. Reading the datetimeoffset Type

In this example, a datetimeoffset type field is read as two distinct fields: $EventTime for the timestamp value
and $TZOffset for the time-zone offset value (in minutes).

Sample Table
CREATE TABLE dbo.test1 (
  RecordID int IDENTITY(1,1) NOT NULL,
  LogTime datetimeoffset NOT NULL,
  Message varchar(100) NOT NULL
);

INSERT INTO dbo.test1 (LogTime, Message)
VALUES ('2018-01-01T00:00:00+01:00', 'This is a test message');
GO

nxlog.conf
 1 <Input reading_datetimeoffset>
 2 Module im_odbc
 3 ConnectionString Driver={ODBC Driver 17 for SQL Server}; Server=MSSQL-HOST; \
 4 Trusted_Connection=yes; Database=TESTDB
 5 IdType integer
 6 SQL SELECT RecordID AS id, \
 7 CAST(LogTime AS datetime2) AS EventTime, \
 8 DATEPART(tz, LogTime) AS TZOffset, \
 9 Message \
10 FROM dbo.test1 WHERE RecordID > ?
11 Exec rename_field($id, $RecordID);
12 </Input>

Event Fields
{
  "RecordID": 1,
  "EventTime": "2017-12-31T23:00:00.000000Z",
  "TZOffset": 60,
  "Message": "This is a test message",
  "EventReceivedTime": "2018-04-01T10:40:54.313071Z",
  "SourceModuleName": "odbcdrv17_in",
  "SourceModuleType": "im_odbc"
}

77.4. Writing Logs to a Database


NXLog provides the om_odbc module for writing logs to a database via ODBC. For more information about
setting up ODBC (and setting the ConnectionString directive), see the Setting up ODBC section.

The om_odbc sql_exec() function can be used to execute INSERT statements.
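As a minimal sketch of the sql_exec() form, assuming the same dbo.test1 table and connection parameters as in the example below (the SQL-directive form shown there is equivalent for simple inserts):

<Output mssql_exec>
    Module            om_odbc
    ConnectionString  Driver={ODBC Driver 17 for SQL Server}; Server=MSSQL-HOST; \
                      Trusted_Connection=yes; Database=TESTDB
    # Execute one parameterized INSERT per event record
    Exec              sql_exec("INSERT INTO dbo.test1 (EventTime, Message) VALUES (?,?)", \
                               $EventTime, $Message);
</Output>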

Example 355. Writing Events to an SQL Server Database

The following configuration inserts records into the dbo.test1 table of the specified database. The
$EventTime and $Message fields in the event record are used for the EventTime and Message fields in the
table.

Sample Table
CREATE TABLE dbo.test1 (
  RecordID int IDENTITY(1,1) NOT NULL,
  EventTime datetime2 NOT NULL,
  Message varchar(100) NOT NULL
);

nxlog.conf
1 <Output mssql>
2 Module om_odbc
3 ConnectionString Driver={ODBC Driver 17 for SQL Server}; Server=MSSQL-HOST; \
4 Trusted_Connection=yes; Database=TESTDB
5 SQL "INSERT INTO dbo.test1 (EventTime, Message) VALUES (?,?)", \
6 $EventTime, $Message
7 </Output>

77.5. Setting up ODBC


To connect to a database, an ODBC connection string must be provided via the im_odbc module’s
ConnectionString directive. This section provides instructions for setting up a DSN-less ODBC connection to a
database from a Linux or Windows system. All connection parameters are given in the connection string and it is
not necessary to set up an ODBC DSN (data source name).

To use a DSN instead, consult either the ODBC Data Source Administrator and Data Source
NOTE Wizard sections on Microsoft Docs (for Windows) or the unixODBC documentation (for Linux), in
addition to the content below.

Connections to an SQL Server database can use either Windows Authentication (also called "trusted connection")
or SQL Server Authentication. For more information, see Choose an Authentication Mode on Microsoft Docs.

When connecting to an SQL Server database with SQL Server Authentication, the
connection string stored in the NXLog configuration file will need to include UID and PWD
keywords for username and password, respectively (this is true for both DSN and DSN-less
WARNING connections). Because these credentials are stored in plain text, it is important to verify
that the configuration file permissions are set correctly. It is also possible to fetch the
connection string from another file with the include directive or via a script with
include_stdout.
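As a sketch of the include approach (the file path and the CONNSTRING define are examples, not product defaults), the connection string can be kept in a separate file with restrictive permissions and referenced from the main configuration:

connstring.conf
define CONNSTRING Driver={ODBC Driver 17 for SQL Server}; Server=MSSQL-HOST; \
    UID=test; PWD=testpass; Database=TESTDB

nxlog.conf
include C:\nxlog-conf\connstring.conf

<Input in>
    Module            im_odbc
    ConnectionString  %CONNSTRING%
    IdType            integer
    SQL               SELECT RecordID AS id, * FROM dbo.test1 WHERE RecordID > ?
</Input>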

77.5.1. ODBC Driver for SQL Server


Download and install the ODBC driver version and package appropriate for the platform and requirements; see
Download ODBC Driver for SQL Server on Microsoft Docs. For more detailed instructions regarding installation
on Linux, see Installing the Microsoft ODBC Driver for SQL Server on Linux and macOS on Microsoft Docs.

Example 356. Using ODBC Driver 17 for SQL Server With Windows Authentication

This example uses the "ODBC Driver 17 for SQL Server" driver to connect to the specified server and
database. Windows Authentication is used to authenticate (the Trusted_Connection keyword). The UID
and PWD keywords are not required in this case. The user account under which NXLog is running must have
permission to access the database.

nxlog.conf
1 <Input win_auth>
2 Module im_odbc
3 ConnectionString Driver={ODBC Driver 17 for SQL Server}; Server=MSSQL-HOST; \
4 Trusted_Connection=yes; Database=TESTDB
5 IdType integer
6 SQL SELECT Record AS id, * FROM dbo.test1 WHERE Record > ?
7 </Input>

Example 357. Using ODBC Driver 13 for SQL Server With SQL Server Authentication

This example uses the "ODBC Driver 13 for SQL Server" driver to connect to the specified server and
database. In this case, SQL Server Authentication is used to authenticate. The UID and PWD keywords must
be used to provide the SQL Server login account and password, respectively.

nxlog.conf
1 <Input sql_auth>
2 Module im_odbc
3 ConnectionString Driver={ODBC Driver 13 for SQL Server}; Server=MSSQL-HOST; \
4 UID=test; PWD=testpass; Database=TESTDB
5 IdType integer
6 SQL SELECT Record AS id, * FROM dbo.test1 WHERE Record > ?
7 </Input>

77.5.2. FreeTDS
It is also possible to use the FreeTDS driver on Linux.

• Run the following commands to install the FreeTDS driver on RHEL 7.

$ sudo yum install epel-release
$ sudo yum install freetds
$ sudo odbcinst -i -d -r <<EOF
[FreeTDS]
Description = TDS driver (Sybase/MS SQL)
Driver = libtdsodbc.so.0
Setup = libtdsS.so
EOF

• Run these commands to install the FreeTDS driver on Debian 9.

$ sudo apt-get install tdsodbc unixodbc
$ sudo dpkg-reconfigure tdsodbc

For more information about using FreeTDS, see the FreeTDS User Guide.

Example 358. Using FreeTDS With SQL Server Authentication

This example uses the FreeTDS driver to connect to the specified server and database.

nxlog.conf
1 <Input freetds>
2 Module im_odbc
3 ConnectionString Driver={FreeTDS}; Server=MSSQL-HOST; Port=1433; UID=test; \
4 PWD=testpass; Database=TESTDB
5 IdType integer
6 SQL SELECT Record AS id, * FROM dbo.test1 WHERE Record > ?
7 </Input>

Chapter 78. Microsoft System Center Endpoint
Protection
Microsoft System Center Endpoint Protection (SCEP) is an anti-virus and anti-malware product for Windows
environments that includes a Windows Firewall manager. SCEP (formerly called Forefront) is integrated into
System Center, an enterprise system management product comprising multiple modules that manage a
Windows-based enterprise IT environment. For more information, see the Endpoint Protection documentation
on Microsoft Docs.

Because the SCEP client logs events to Windows Event Log, it is possible to collect these events with NXLog.

78.1. EventData Field from Event Log


Some of the event data is stored as custom data in the EventData field of the events, as shown below. The
values are not labeled, but this data can be parsed using regular expressions, if the proper field names are
known.

EventData Field (Excerpt with Line Breaks Added)


<Data>%%830</Data>↵
<Data>1.5.1937.0</Data>↵
<Data>{92224018-9446-4C2D-AFCB-EC4456B8859E}</Data>↵
<Data>10</Data>↵
<Data>%%843</Data>↵
<Data></Data>↵
<Data>C:\\Program Files\\Mozilla Firefox\\firefox.exe</Data>↵
<Data>DOMAIN</Data>↵
<Data>admin</Data>↵
<Data>S-1-5-21-314323950-2314161084-4234690932-1002</Data>↵
<Data>EICAR_Test_File</Data>↵
<Data>2147519003</Data>↵
<Data>5</Data>↵
<Data>42</Data>↵
<Data>http://go.microsoft.com/fwlink/?linkid=37020&amp;name=EICAR_Test_File&amp;threatid=2147519003<
/Data>↵
<Data>file:C:\\Users\\admin\\Downloads\\eicar.com(2).txt</Data>↵
<Data></Data>↵
<Data></Data>↵
<Data>4</Data>↵
<Data>%%814</Data>↵
<Data>0</Data>↵
<Data>%%823</Data>↵
<Data></Data>↵
<Data></Data>↵
<Data>Severe</Data>↵
<Data>Virus</Data>↵
<Data></Data>↵
<Data></Data>↵

Example 359. Collecting and Parsing Forefront (FCSAM) Events From Windows Event Log

This configuration uses the im_msvistalog module to collect FCSAM client events from Windows Event Log.
This will result in an $EventData field in the event record containing <Data> entries similar to the previous
example.

To extract values from the $EventData field, a regular expression is selected based on the event ID. Then
each <Data> entry is identified by a combination of its position in the list and a pattern match on its value.
For example, the <Data>1.5.1937.0</Data> portion of the EventData string is extracted and saved to the
NXLog $ClientVersion field.

This example includes regular expressions for parsing event IDs 3004, 3005, 5007, 5008, 1000, 1001, 1002,
1006, and 1007. Some fields that are empty or otherwise do not contain useful data are skipped. The
configuration could be extended to parse other events logged by the FCSAM client by adding more regular
expressions, parsing multiple event IDs with a single expression, and/or dividing the parsing into multiple
expressions for a single event.

nxlog.conf (truncated)
 1 <Extension _json>
 2 Module xm_json
 3 </Extension>
 4
 5 define FCSAMEvents 3004, 3005, 5007, 5008, 1000, 1001, 1002, 1006, 1007
 6
 7 define EventID_3004_REGEX /(?x) \
 8 <Data>(?<ClientVersion>(\d+\.\d+\.\d+\.\d+))<\/Data> \
 9 <Data>(?<ScanID>(\{[\d\w\-]+\}))<\/Data> \
10 <Data>\d+<\/Data> \
11 <Data>\%\%\d{3}<\/Data> \
12 <Data><\/Data> \
13 <Data>(?<ProcessName>(\w{1}:\\.*\.exe))<\/Data> \
14 <Data>(?<Domain>([\w\d]+))<\/Data> \
15 <Data>(?<User>([\w\d]+))<\/Data> \
16 <Data>(?<SID>(S-[\d\-]+))<\/Data> \
17 <Data>(?<Filename>.*)<\/Data> \
18 <Data>(?<ID>(\d{9,11}))<\/Data> \
19 <Data>(?<SeverityID>(\d{1,2}))<\/Data> \
20 <Data>(?<CategoryID>(\d{1,3}))<\/Data> \
21 <Data>(?<FWLink>(http.*id=\d{10}))<\/Data> \
22 <Data>(?<PathFound>(file:\w{1}:.*\.\w{2,4}))<\/Data> \
23 <Data><\/Data> \
24 <Data><\/Data> \
25 <Data>\d+<\/Data> \
26 <Data>\%\%\d+<\/Data> \
27 <Data>\d+<\/Data> \
28 <Data>\%\%\d+<\/Data> \
29 [...]

Event Sample
{
  "EventTime": "2019-01-11T12:19:22.000000+01:00",
  "Hostname": "Host.DOMAIN.local",
  "Keywords": "36028797018963968",
  "EventType": "WARNING",
  "SeverityValue": 3,
  "Severity": "Severe",
  "EventID": 3004,
  "SourceName": "FCSAM",
  "TaskValue": 0,
  "RecordNumber": 11595,
  "ExecutionProcessID": 0,
  "ExecutionThreadID": 0,
  "Channel": "System",
  "Message": "Microsoft Forefront Client Security Real-Time Protection agent has detected
changes. Microsoft recommends you analyze the software that made these changes for potential
risks. You can use information about how these programs operate to choose whether to allow them
to run or remove them from your computer. Allow changes only if you trust the program or the
software publisher. Microsoft Forefront Client Security can't undo changes that you allow.\r\n
For more information please see the following:
\r\nhttp://go.microsoft.com/fwlink/?linkid=37020&name=EICAR_Test_File&threatid=2147519003\r\n
\tScan ID: {92224018-9446-4C2D-AFCB-EC4456B8859E}\r\n \tAgent: On Access\r\n \tUser: DOMAIN
\\admin\r\n \tName: EICAR_Test_File\r\n \tID: 2147519003\r\n \tSeverity: Severe\r\n \tCategory:
Virus\r\n \tPath Found: file:C:\\Users\\admin\\Downloads\\eicar.com(2).txt\r\n \tAlert Type:
\r\n \tProcess Name: C:\\Program Files\\Mozilla Firefox\\firefox.exe\r\n \tDetection Type:
Concrete\r\n \tStatus: Suspend",
  "Opcode": "Info",
  "EventData": "<Data>%%830</Data><Data>1.5.1937.0</Data><Data>{92224018-9446-4C2D-AFCB-
EC4456B8859E}</Data><Data>10</Data><Data>%%843</Data><Data></Data><Data>C:\\Program Files
\\Mozilla Firefox\\firefox.exe</Data><Data>DOMAIN</Data><Data>admin</Data><Data>S-1-5-21-
314323950-2314161084-4234690932-
1002</Data><Data>EICAR_Test_File</Data><Data>2147519003</Data><Data>5</Data><Data>42</Data><Dat
a>http://go.microsoft.com/fwlink/?linkid=37020&amp;name=EICAR_Test_File&amp;threatid=2147519003
</Data><Data>file:C:\\Users\\admin\\Downloads\\eicar.com(2).txt</Data><Data></Data><Data></Data
><Data>4</Data><Data>%%814</Data><Data>0</Data><Data>%%823</Data><Data></Data><Data></Data><Dat
a>Severe</Data><Data>Virus</Data><Data></Data><Data></Data>",
  "EventReceivedTime": "2019-01-11T12:19:22.883100+01:00",
  "SourceModuleName": "in",
  "SourceModuleType": "im_msvistalog",
  "Category": "Virus",
  "CategoryID": "42",
  "ClientVersion": "1.5.1937.0",
  "FWLink":
"http://go.microsoft.com/fwlink/?linkid=37020&amp;name=EICAR_Test_File&amp;threatid=2147519003"
,
  "Filename": "EICAR_Test_File",
  "ID": "2147519003",
  "PathFound": "file:C:\\Users\\admin\\Downloads\\eicar.com(2).txt",
  "ProcessName": "C:\\Program Files\\Mozilla Firefox\\firefox.exe",
  "SID": "S-1-5-21-314323950-2314161084-4234690932-1002",
  "ScanID": "{92224018-9446-4C2D-AFCB-EC4456B8859E}",
  "SeverityID": "5",
  "User": "DOMAIN \\ admin"
}

78.2. Collecting and Parsing SCEP Data from Log Files
SCEP client log files are located in the %allusersprofile%\Microsoft\Microsoft Antimalware\Support
directory.

These logs contain the following client actions:

• Definition updates
• Malware detections
• Monitoring alerts

Input Sample - MPDetection


2019-06-08T13:35:31.153Z Service started - System Center Endpoint Protection \↵
(DDEFDD14-250E-4DC8-A0B3-9D667EC5D8EB)↵

Input Sample - MPLog


2019-05-31T17:15:17.383Z Process scan (postsignatureupdatescan) started.↵
Signature updated via MMPC on 05-31-2019 19:15:17↵

SCEP Client Installation Logs Location


%allusersprofile%\Microsoft\Microsoft Security Client\Support

Input Sample - EppSetup


SUCCESS ⇥ 2019/05/31 19:12:05:782 TID:4700 PID:4692↵
Setup ended successfully with result: The operation completed successfully. [00000000]↵

Input Sample - MSSecurityClient_Setup


=== Verbose logging stopped: 5/31/2019 19:11:59 ===↵
MSI (s) (28:2C) [19:11:59:329]: Destroying RemoteAPI object.↵

The following configuration collects events from SCEP files with the im_file module. Logs are written in the
UTF-16LE character encoding, so the xm_charconv extension module is used to convert the input.

nxlog.conf (truncated)
 1 <Extension charconv>
 2 Module xm_charconv
 3 LineReader UTF-16LE
 4 </Extension>
 5
 6 <Extension _json>
 7 Module xm_json
 8 </Extension>
 9
10 <Input Antimalware>
11 Module im_file
12 File 'C:\ProgramData\Microsoft\Microsoft Antimalware\Support\' + \
13 'MPDetection-*.log'
14 File 'C:\ProgramData\Microsoft\Microsoft Antimalware\Support\' + \
15 'MPLog-*.log'
16 File 'C:\ProgramData\Microsoft\Microsoft Security Client\Support\' + \
17 'EppSetup.log'
18 File 'C:\ProgramData\Microsoft\Microsoft Security Client\Support\' + \
19 'MSSecurityClient_Setup*.log'
20 ReadFromLast TRUE
21 InputType charconv
22 <Exec>
23 file_name() =~ /(?<Filename>[^\\]+)$/;
24 if $FileName =~ /MPLog|MPDetection/
25 if $raw_event =~ /(.*\.\d{3}Z)\s+(.*)/
26 {
27 $EventTime = $1;
28 [...]

Event Sample - MPDetection


{
  "EventReceivedTime": "2019-06-16T14:24:51.746591+02:00",
  "SourceModuleName": "Antimalware",
  "SourceModuleType": "im_file",
  "Filename": "MPDetection-05312019-191154.log",
  "EventTime": "2019-06-08T13:35:31.153Z",
  "Message": "Service started - System Center Endpoint Protection (DDEFDD14-250E-4DC8-A0B3-
9D667EC5D8EB)"
}

Event Sample - MPLog


{
  "EventReceivedTime": "2019-06-16T14:36:04.642769+02:00",
  "SourceModuleName": "Antimalware",
  "SourceModuleType": "im_file",
  "Filename": "MPLog-05312019-191154.log",
  "Message": "************************************************************"
}
{
  "EventReceivedTime": "2019-06-16T14:36:04.642769+02:00",
  "SourceModuleName": "Antimalware",
  "SourceModuleType": "im_file",
  "Filename": "MPLog-05312019-191154.log",
  "EventTime": "2019-05-31T17:15:17.383Z",
  "Message": "Process scan (postsignatureupdatescan) started."
}

Event Sample - EppSetup
{
  "EventReceivedTime": "2019-06-16T14:39:07.127660+02:00",
  "SourceModuleName": "Antimalware",
  "SourceModuleType": "im_file",
  "Filename": "EppSetup.log",
  "Status": "SUCCESS",
  "EventTime": "2019/05/31 19:12:05:782",
  "TID": "4700",
  "PID": "4692"
}
{
  "EventReceivedTime": "2019-06-16T14:39:07.127660+02:00",
  "SourceModuleName": "Antimalware",
  "SourceModuleType": "im_file",
  "Filename": "EppSetup.log",
  "Message": "Setup ended successfully with result: The operation completed successfully."
}

Event Sample - MSSecurityClient_Setup


{
  "EventReceivedTime": "2019-06-16T14:22:17.824508+02:00",
  "SourceModuleName": "Antimalware",
  "SourceModuleType": "im_file",
  "Filename": "MSSecurityClient_Setup_4.7.213.0_epp_Install.log",
  "Message": "=== Verbose logging stopped: 5/31/2019 19:11:59 ==="
}
{
  "EventReceivedTime": "2019-06-16T14:22:17.824508+02:00",
  "SourceModuleName": "Antimalware",
  "SourceModuleType": "im_file",
  "Filename": "MSSecurityClient_Setup_4.7.213.0_epp_Install.log",
  "EventTime": "19:11:59:329",
  "Message": " Destroying RemoteAPI object."
}

78.3. Collecting and Parsing SCEP Data from an SQL Database


SCEP (SCCM) also logs data to an SQL database.

The following configuration queries the SCCM database with the im_odbc module. This example contains
two SQL queries collecting Last Malware alerts and AV Detection alerts.

nxlog.conf
 1 <Extension _json>
 2 Module xm_json
 3 </Extension>
 4
 5 <Input last_malware>
 6 Module im_odbc
 7 ConnectionString DSN=SMS;database=CM_CND;uid=user;pwd=password;
 8 IdType timestamp
 9 SQL SELECT DetectionTime as id,* \
10 FROM vEP_LastMalware \
11 WHERE DetectionTime > CAST(? AS datetime)
12 Exec to_json();
13 </Input>
14
15 <Input av_detections>
16 Module im_odbc
17 ConnectionString DSN=SMS;database=CM_CND;uid=user;pwd=password;
18 IdType timestamp
19 SQL SELECT DetectionTime as id,* \
20 FROM v_GS_Threats \
21 INNER JOIN v_R_System \
22 ON v_GS_Threats.ResourceID=v_R_System.ResourceID \
23 WHERE DetectionTime > CAST(? AS datetime)
24 Exec to_json();
25 </Input>

Event Sample - Last Malware


{
  "id": "2019-06-20T18:21:14.050000+02:00",
  "RecordID": 72057594037997950,
  "MachineID": 16777219,
  "LastMessageTime": "2019-06-20T18:21:22.597000+02:00",
  "LastMessageSerialNumber": 102,
  "DetectionTime": "2019-06-20T18:21:14.050000+02:00",
  "ActionTime": "2019-06-20T18:21:22.573000+02:00",
  "ProductVersion": "4.7.213.0",
  "DetectionID": "6A70D85D-1AB0-4F20-BCAB-9B9CCEEA5ED5",
  "DetectionSource": 1,
  "PendingActions": 0,
  "Process": "Unknown",
  "UserID": 16777217,
  "ThreatName": "Virus:DOS/EICAR_Test_File",
  "ThreatID": 2147519003,
  "SeverityID": 5,
  "CategoryID": 42,
  "Path": "file:_C:\\Users\\admin\\Downloads\\eicar.com;file:_C:\\Users\\admin\\Downloads
\\eicar.com.txt",
  "CleaningAction": 2,
  "ExecutionStatus": 0,
  "ActionSuccess": true,
  "ErrorCode": 0,
  "RemainingActions": 0,
  "LastRemainingActionsCleanTime": null,
  "EventReceivedTime": "2019-06-20T20:22:28.050844+02:00",
  "SourceModuleName": "last_malware",
  "SourceModuleType": "im_odbc"
}

Chapter 79. Microsoft System Center Configuration Manager
System Center Configuration Manager (SCCM) is a software management suite that enables administrators to
manage the deployment and security of devices, applications and operating system patches across a corporate
network. SCCM is part of the Microsoft System Center suite. NXLog can collect and forward the log data created
by SCCM.

79.1. SCCM Log Types


SCCM log files can be organized into three categories:

Client log files


Logs related to client operation and installation.

Server log files


Logs on the server or related to specific system roles.

Log files by functionality


Logs related to application management, endpoint protection, software updates and so on.

SCCM stores log files in various locations depending on the process originator and system configuration.

79.2. Collecting from Log Files


SCCM client and server components record process information in log files. These log files are useful for initial
troubleshooting.

SCCM enables logging for client and server components by default. NXLog can collect these events with the
im_file module.

Example 360. Configuration for File Based Logs

The following configuration uses the im_file module to collect the log files and parses the contents with
regular expressions to extract the fields. Two custom regular expressions are defined, one for each of the
log formats SCCM uses.

nxlog.conf (truncated)
 1 <Extension json>
 2 Module xm_json
 3 </Extension>
 4
 5 define type1 /(?x)^(?<Message>.*)\$\$\<\
 6 (?<Component>.*)\>\<\
 7 (?<EventTime>.*).\d{3}-\d{2}\>\<thread=\
 8 (?<Thread>\d+)/s
 9
10 define type2 /(?x)^\<\!\[LOG\[(?<Message>.*)\]LOG\]\!\>\<time=\"\
11 (?<Time>.*).\d{3}-\d{2}\"\s+date=\"\
12 (?<Date>.*)\"\s+component=\"\
13 (?<Component>.*)\"\s+context=\"\
14 (?<Context>.*)\"\s+type=\"\
15 (?<Type>.*)\"\s+thread=\"\
16 (?<Thread>.*)\"\s+file=\"\
17 (?<File>.*)\"\>/s
18
19
20 <Input in>
21 Module im_file
22 File 'C:\WINDOWS\SysWOW64\CCM\Logs\*'
23 File 'C:\WINDOWS\System32\CCM\Logs\*'
24 File 'C:\Program Files\Microsoft Configuration Manager\Logs\*'
25 File 'C:\Program Files\SMS_CCM\Logs\*'
26 <Exec>
27 if file_name() =~ /^.*\\(.*)$/ $Filename = $1;
28 if $raw_event =~ %type1%;
29 [...]

Output sample from MP_Framework.log


{
  "EventReceivedTime": "2019-11-06T21:29:38.585187+01:00",
  "SourceModuleName": "in",
  "SourceModuleType": "im_file",
  "Filename": "MP_Framework.log",
  "Component": "MpFramework",
  "Context": "",
  "File": "mpstartuptask.cpp:122",
  "Message": "Policy request file doesn't exist.",
  "Thread": "7824",
  "Type": "1",
  "EventTime": "2019-11-06T21:29:38.000000+01:00"
}
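The type2 pattern corresponds to the `<![LOG[...]LOG]!>` line format written by SCCM components. Purely as an illustration, the same extraction can be sketched in Python; the character classes are simplified, and the sample line below is constructed to mirror the output above rather than copied from a real log.

```python
import re

# Simplified Python equivalent of the type2 pattern above.
TYPE2 = re.compile(
    r'^<!\[LOG\[(?P<Message>.*)\]LOG\]!>'
    r'<time="(?P<Time>[^"]*)"\s+date="(?P<Date>[^"]*)"'
    r'\s+component="(?P<Component>[^"]*)"\s+context="(?P<Context>[^"]*)"'
    r'\s+type="(?P<Type>[^"]*)"\s+thread="(?P<Thread>[^"]*)"'
    r'\s+file="(?P<File>[^"]*)">', re.S)

# Constructed sample line (not from a real log file).
line = ('<![LOG[Policy request file doesn\'t exist.]LOG]!>'
        '<time="21:29:38.585-60" date="11-06-2019" component="MpFramework" '
        'context="" type="1" thread="7824" file="mpstartuptask.cpp:122">')

fields = TYPE2.match(line).groupdict()
```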

79.3. Collecting from a Microsoft SQL Database


SCCM logs events into a Microsoft SQL Server database. NXLog can collect these events with the im_odbc
module.

For this, an ODBC System Data Source needs to be configured either on the server running NXLog or, if log
data is to be collected via ODBC remotely, on a remote server.

For more information, consult the relevant ODBC documentation: the Microsoft ODBC Data Source
Administrator guide or the unixODBC Project.
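As a sketch only, a System DSN for unixODBC could be declared in /etc/odbc.ini along these lines. The driver name, server address, and database here are placeholder assumptions that must match the actual environment, and the DSN name must match the one referenced in the ConnectionString directive.

```ini
; /etc/odbc.ini -- sketch of a System DSN for unixODBC; adjust the
; driver, server, and database to the actual environment.
[SMS SQL]
Driver   = ODBC Driver 17 for SQL Server
Server   = sccm-db.example.com,1433
Database = CM_CND
```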

The configuration example below contains two im_odbc module instances to fetch data from the following two
views:

• V_SMS_Alert — lists information about built-in and user-created alerts, which might be displayed in the
SCCM console.
• V_StatMsgWithInsStrings — lists information about status messages returned by each SCCM component.

NOTE SCCM provides an overview of audit-related information in the Monitoring > Overview >
System Status > Status Message Queries list in the GUI. SCCM stores audit-related
information in the V_StatMsgWithInsStrings view of the SQL database.

NOTE Audit-related messages are vital for tracking which accounts have modified or deleted
settings in the SCCM environment. These messages are purged from the database after 180 days.

Queries are based on the Microsoft System Center Configuration Manager Schema. For more information, see
the Status and alert views section in the SCCM documentation.

Example 361. Configuration with two SQL Queries and a Combined Output

nxlog.conf
 1 <Extension _json>
 2 Module xm_json
 3 </Extension>
 4
 5 <Input sccm_alerts>
 6 Module im_odbc
 7 ConnectionString DSN=SMS SQL;database=CM_CND;uid=user;pwd=password;
 8 SQL SELECT ID,TypeID,TypeInstanceID,Name,FeatureArea, \
 9 ObjectWmiClass,Severity FROM V_SMS_Alert
10 </Input>
11
12 <Input sccm_audit>
13 Module im_odbc
14 ConnectionString DSN=SMS SQL;database=CM_CND;uid=user;pwd=password;
15 SQL SELECT * FROM v_StatMsgWithInsStrings
16 </Input>
17
18 <Output outfile>
19 Module om_file
20 File 'C:\logs\out.log'
21 Exec to_json();
22 </Output>
23
24 <Route sccm>
25 Path sccm_alerts, sccm_audit => outfile
26 </Route>

Output.log (Audit Query)


{
  "RecordID": 72057594037934110,
  "ModuleName": "SMS Provider",
  "Severity": 1073741824,
  "MessageID": 30063,
  "ReportFunction": 0,
  "SuccessfulTransaction": 0,
  "PartOfTransaction": 0,
  "PerClient": 0,
  "MessageType": 768,
  "Win32Error": 0,
  "Time": "2019-02-28T20:35:59.010000+01:00",
  "SiteCode": "CND",
  "TopLevelSiteCode": "",
  "MachineName": "Host.DOMAIN.local",
  "Component": "Microsoft.ConfigurationManagement.exe",
  "ProcessID": 1236,
  "ThreadID": 6112,
  "InsString1": "DOMAIN\\admin",
  "InsString2": "CND00001",
  "InsString3": "NXLog",
  "InsString4": "SMS_R_System",
  "EventReceivedTime": "2019-02-28T21:36:04.986375+01:00",
  "SourceModuleName": "sccm_audit",
  "SourceModuleType": "im_odbc"
}

Chapter 80. Microsoft System Center Operations Manager
Microsoft System Center Operations Manager (SCOM) provides infrastructure monitoring across various services,
devices, and operations from a single console. The activities related to these systems are recorded in SCOM’s
databases, and these databases can be queried using SQL. The resulting data can be collected and forwarded by
NXLog.

80.1. Log Types


Collected event logs
These events are collected by filtering rules in configured management packs.

Alert logs
Alerts are significant events generated by rules and monitors.

SCOM administrative event logs


Collecting administrative actions executed in SCOM is currently either unsupported by Microsoft (it requires
SQL triggers in the OM database, which voids the warranty) or too performance-heavy, with little meaningful
data to retrieve.

NOTE The default retention time for resolved alerts and collected events is seven days, after
which the database entries are groomed. To configure database grooming settings, read the
TechNet article How to Configure Grooming Settings for the Operations Manager Database.

80.2. Collecting Logs


For NXLog to collect logs, the following prerequisites must be completed.

• Create a Windows/SQL account with read permissions for the Operations Manager database.
• Configure an ODBC 32-bit System Data Source on the server running NXLog. For more information, consult
the relevant ODBC documentation: the Microsoft ODBC Data Source Administrator guide or the unixODBC
Project.
• Set an appropriate firewall rule on the database server that accepts connections from the server running
NXLog. Open TCP port 1433, or whichever port SQL Server is configured to accept connections on.
For further information read the Configure Firewall for Database Engine Access guide.

NXLog can then be configured with one or more im_odbc input modules, each with an SQL query that produces
the fields to be logged.

NOTE The configured SQL query must contain a way to serialize the result set, enabling NXLog
to resume reading logs where it left off after a restart. This is easily achieved by using an
auto-increment-like solution or a timestamp field. See the example below.

Example 362. Collecting Event and Alert Logs

This example queries the database for event logs and unresolved alert logs, then sends the results in JSON
format to a plain text file. Note the Exec directive in the scom_alerts input instance. It is used to extract
the content of the AlertParameters field that is itself a composite (XML) structure. You should define your
own regular expressions to extract data you are interested in from the alerts' AlertParameters and Context
fields and the events' EventData and EventParameters fields.

This example uses the DATEDIFF SQL function to generate a timestamp from an SQL datetime field with
millisecond precision. The timestamp is used to serialize the result set as required by NXLog. Starting with
SQL Server 2016, the DATEDIFF_BIG T-SQL function can be used instead (see DATEDIFF_BIG (Transact-SQL) at
MSDN).
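The combined DATEDIFF expression is meant to produce milliseconds since the Unix epoch without overflowing the 32-bit result of a single DATEDIFF call: minutes from 1970-01-01 to the date part are scaled to milliseconds, and the milliseconds elapsed since midnight are added. For illustration only, the same arithmetic can be sketched in Python (the real computation happens inside the SQL query):

```python
from datetime import datetime

def scom_row_id(ts):
    """Minutes from 1970-01-01 to the date part, times 60000,
    plus milliseconds since midnight -- i.e. epoch milliseconds."""
    midnight = datetime(ts.year, ts.month, ts.day)
    minutes = int((midnight - datetime(1970, 1, 1)).total_seconds()) // 60
    ms_in_day = (ts - midnight).seconds * 1000 + ts.microsecond // 1000
    return minutes * 60000 + ms_in_day

# The 'id' field in the event log output sample: 2016-05-12 06:47:43.720
scom_row_id(datetime(2016, 5, 12, 6, 47, 43, 720000))  # -> 1463035663720
```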

nxlog.conf (truncated)
 1 <Extension _json>
 2 Module xm_json
 3 </Extension>
 4
 5 <Input scom_events>
 6 Module im_odbc
 7 ConnectionString DSN=scom;uid=username@mydomain.local;pwd=mypassword;\
 8 database=OperationsManager
 9 SQL SELECT CAST(DATEDIFF(minute, '19700101', CAST(EV.TimeGenerated AS DATE)) \
10 AS BIGINT) * 60000 + DATEDIFF(ms, '19000101', \
11 CAST(EV.TimeGenerated AS TIME)) AS 'id', \
12 EV.TimeGenerated AS 'EventTime', \
13 EV.TimeAdded AS 'EventAddedTime', \
14 EV.Number AS 'EventID', \
15 EV.MonitoringObjectDisplayName AS 'Source', \
16 R.DisplayName AS 'RuleName', \
17 EV.EventData, EV.EventParameters \
18 FROM EventView EV JOIN RuleView R WITH (NOLOCK) ON \
19 EV.RuleId = R.id \
20 WHERE CAST(DATEDIFF(minute, '19700101', CAST(EV.TimeGenerated \
21 AS DATE)) AS BIGINT) * 60000 + DATEDIFF(ms, '19000101', \
22 CAST(EV.TimeGenerated AS TIME)) > ?
23 PollInterval 30
24 IdIsTimeStamp FALSE
25 </Input>
26
27 <Input scom_alerts>
28 Module im_odbc
29 [...]

Output Sample (Event Log)


{
  "id": 1463035663720,
  "EventTime": "2016-05-12 06:47:43",
  "EventAddedTime": "2016-05-12 06:48:15",
  "WindowsID": 4776,
  "Source": "dc01.nxlog.local",
  "RuleName": "Windows log collection test",
  "EventData": "<DataItem type=\"System.XmlData\" time=\"2016-05-12T08:47:44.7224395+02:00\"
sourceHealthServiceId=\"F767895D-A408-0F91-42A3-87565E1D9D85\"><EventData xmlns=
\"http://schemas.microsoft.com/win/2004/08/events/event\"><Data Name=\"PackageName
\">MICROSOFT_AUTHENTICATION_PACKAGE_V1_0</Data><Data Name=\"TargetUserName
\">SCOM01$</Data><Data Name=\"Workstation\">SCOM01</Data><Data Name=\"Status
\">0x0</Data></EventData></DataItem>",
  "EventParameters":
"<Param>MICROSOFT_AUTHENTICATION_PACKAGE_V1_0</Param><Param>SCOM01$</Param><Param>SCOM01</Param
><Param>0x0</Param>",
  "EventReceivedTime": "2016-05-12 10:28:50",
  "SourceModuleName": "scom_events",
  "SourceModuleType": "im_odbc"
}

Output Sample (Alert Log)
{
  "id": 1462887688220,
  "Alert Name": "Failed to Connect to Computer",
  "Category": "StateCollection",
  "Alert Description": "The computer {0} was not accessible.",
  "EventTime": "2016-05-10 13:41:28",
  "EventAddedTime": "2016-05-10 13:41:28",
  "Context": "<DataItem type=\"MonitorTaskDataType\" time=\"2016-05-10T15:41:28.1932994+02:00\"
sourceHealthServiceId=\"00000000-0000-0000-0000-000000000000\"><StateChange><DataItem time=
\"2016-05-10T15:41:25.5592943+02:00\" type=\"System.Health.MonitorStateChangeData\"
sourceHealthServiceId=\"D53BAD42-4C93-6634-E610-BDC3E38ABD5B\" MonitorExists=\"true\"
DependencyInstanceId=\"00000000-0000-0000-0000-000000000000\" DependencyMonitorId=\"00000000-
0000-0000-0000-000000000000\"><ManagedEntityId>CC7109D1-9177-090D-AC3A-
18781CFFF898</ManagedEntityId><EventOriginId>9B02AB65-FDB5-40AE-863F-
6FAD232E06F9</EventOriginId><MonitorId>B59F78CE-C42A-8995-F099-
E705DBB34FD4</MonitorId><ParentMonitorId>A6C69968-61AA-A6B9-DB6E-
83A0DA6110EA</ParentMonitorId><HealthState>3</HealthState><OldHealthState>1</OldHealthState><Ti
meChanged>2016-05-10T15:41:25.5592943+02:00</TimeChanged><Context><DataItem type=
\"System.Availability.StateData\" time=\"2016-05-10T15:41:25.5542835+02:00\"
sourceHealthServiceId=\"D53BAD42-4C93-6634-E610-BDC3E38ABD5B\"><ManagementGroupId>{1457194C-
D3B4-6685-5D3B-E4F7DAB158AD}</ManagementGroupId><HealthServiceId>72704AC7-4FDF-6006-1BB0-
C74868E173D5</HealthServiceId><HostName>member2012r2-01.nxlog.local</HostName><Reachability
ThruServer=\"false\"><State>0</State></Reachability></DataItem></Context></DataItem></StateChan
ge><Diagnostic><DataItem type=\"System.PropertyBagData\" time=\"2016-05-
10T15:41:25.6342865+02:00\" sourceHealthServiceId=\"D53BAD42-4C93-6634-E610-BDC3E38ABD5B
\"><Property Name=\"StatusCode\" VariantType=\"8\">11003</Property><Property Name=
\"ResponseTime\" VariantType=\"8\"></Property></DataItem></Diagnostic></DataItem>",
  "AlertParameters": "<AlertParameters><AlertParameter1>member2012r2-
01.nxlog.local</AlertParameter1></AlertParameters>",
  "EventReceivedTime": "2016-05-12 10:33:38",
  "SourceModuleName": "scom_alerts",
  "SourceModuleType": "im_odbc",
  "AlertMessage": "member2012r2-01.nxlog.local"
}

Chapter 81. MongoDB
MongoDB is a document-oriented database system.

NXLog can be configured to collect data from a MongoDB database. A proof-of-concept Perl script is shown in
the example below.

Example 363. Collecting Data From MongoDB

This configuration uses im_perl to execute a Perl script which reads data from a MongoDB database. The
generated events are written to file with om_file.

When new documents are available in the database, the script sorts them by ObjectId and processes them
sequentially. Each document is passed to NXLog by calling Log::Nxlog::add_input_data(). The script will
poll the database continuously with Log::Nxlog::set_read_timer(). In the event that the MongoDB
server is unreachable, the timer delay will be increased to attempt reconnection later.

WARNING After processing, documents are deleted from the collection.

NOTE The Perl script shown here is a proof of concept only. It must be modified to correspond
to the data to be collected from MongoDB.

nxlog.conf
1 <Input perl>
2 Module im_perl
3 PerlCode mongodb-input.pl
4 </Input>
5
6 <Output file>
7 Module om_file
8 File '/tmp/output.log'
9 </Output>

mongodb-input.pl (truncated)
#!/usr/bin/perl

use strict;
use warnings;

use FindBin;
use lib $FindBin::Bin;
use Log::Nxlog;
use MongoDB;
use Try::Tiny;

my $counter;
my $client;
my $collection;
my $cur;
my $count;
my $logfile;
[...]

For this example, a JSON data set of US ZIP (postal) codes was used. The data set was fed to MongoDB with
mongoimport -d zips -c zips --file zips.json.

Input Sample
{ "_id" : "01001", "city" : "AGAWAM", "loc" : [ -72.622739, 42.070206 ], "pop" : 15338, "state"
: "MA" }
{ "_id" : "01008", "city" : "BLANDFORD", "loc" : [ -72.936114, 42.182949 ], "pop" : 1240,
"state" : "MA" }
{ "_id" : "01010", "city" : "BRIMFIELD", "loc" : [ -72.188455, 42.116543 ], "pop" : 3706,
"state" : "MA" }
{ "_id" : "01011", "city" : "CHESTER", "loc" : [ -72.988761, 42.279421 ], "pop" : 1688, "state"
: "MA" }
{ "_id" : "01020", "city" : "CHICOPEE", "loc" : [ -72.576142, 42.176443 ], "pop" : 31495,
"state" : "MA" }

Output Sample
ID: 01001 City: AGAWAM Loc: -72.622739,42.070206 Pop: 15338 State: MA↵
ID: 01008 City: BLANDFORD Loc: -72.936114,42.182949 Pop: 1240 State: MA↵
ID: 01010 City: BRIMFIELD Loc: -72.188455,42.116543 Pop: 3706 State: MA↵
ID: 01011 City: CHESTER Loc: -72.988761,42.279421 Pop: 1688 State: MA↵
ID: 01020 City: CHICOPEE Loc: -72.576142,42.176443 Pop: 31495 State: MA↵
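The per-document formatting performed by the script can be sketched as follows; Python is used here purely to illustrate the transformation, while the actual script runs under im_perl:

```python
def format_zip(doc):
    # Render one MongoDB document as the flat line shown in the
    # output sample above.
    loc = ",".join(str(c) for c in doc["loc"])
    return (f"ID: {doc['_id']} City: {doc['city']} Loc: {loc} "
            f"Pop: {doc['pop']} State: {doc['state']}")

doc = {"_id": "01001", "city": "AGAWAM", "loc": [-72.622739, 42.070206],
       "pop": 15338, "state": "MA"}
format_zip(doc)
# -> 'ID: 01001 City: AGAWAM Loc: -72.622739,42.070206 Pop: 15338 State: MA'
```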

Chapter 82. Nagios Log Server
Nagios Log Server provides centralized management, monitoring, and analysis of logging data. It utilizes the ELK
(Elasticsearch, Logstash, and Kibana) stack. NXLog can be configured to send log data to the Nagios Log Server
over TCP, UDP, and TLS/SSL protocols.

82.1. Installation and Configuration of Nagios Log Server


To learn more about installation and configuration of Nagios Log Server, see the Manual Installation Instructions
and Administrator Guide on the Nagios website.

By default, Nagios Log Server does not require any post-installation configuration, which means logs can be
received from NXLog right away.

82.2. NXLog Configuration


NXLog can be configured to send the logs it collects to Nagios Log Server.

To see the IP address and ports of the Nagios Log Server instance, open the Configure page and find the
Configuration Editor section.

This address and these ports will be used in the examples below.

Example 364. Collecting Systemd Logs

The configuration below reads systemd messages using the im_systemd module and selects only those
entries which contain the string sshd. The selected messages are processed with the
xm_kvp module and converted to JSON using the xm_json module. Sending over TCP is carried out using
the om_tcp module.

 1 <Extension kvp>
 2 Module xm_kvp
 3 KVDelimiter =
 4 KVPDelimiter " "
 5 </Extension>
 6
 7 <Extension json>
 8 Module xm_json
 9 </Extension>
10
11 <Input systemd>
12 Module im_systemd
13 ReadFromLast TRUE
14 Exec if not ($raw_event =~ /sshd/) drop();
15 </Input>
16
17 <Output out>
18 Module om_tcp
19 Host 192.168.31.179
20 Port 3515
21 <Exec>
22 kvp->parse_kvp();
23 to_json();
24 </Exec>
25 </Output>

Below is a sample of a log message sent over TCP.

{
  "Severity": "info",
  "SeverityValue": 6,
  "Facility": "syslog",
  "FacilityValue": 4,
  "Message": "Accepted password for administrator from 192.168.31.179 port 46534 ssh2",
  "SourceName": "sshd",
  "ProcessID": 3168,
  "User": "root",
  "Group": "root",
  "ProcessName": "sshd",
  "ProcessExecutable": "/usr/sbin/sshd",
  "ProcessCmdLine": "sshd: administrator [priv]",
  "Capabilities": "3fffffffff",
  "SystemdCGroup": "/system.slice/ssh.service",
  "SystemdUnit": "ssh.service",
  "SystemdSlice": "system.slice",
  "SelinuxContext": "unconfined\n",
  "EventTime": "2020-03-25 18:59:53",
  "BootID": "1eb2f28ae8064c7a954e2420be54a7f2",
  "MachineID": "0823d4a95f464afeb0021a7e75a1b693",
  "SysInvID": "984c8a16fd20462a9ac8c0682081979c",
  "Hostname": "ubuntu",
  "Transport": "syslog",
  "EventReceivedTime": "2020-03-25T18:59:53.565177+00:00",
  "SourceModuleName": "systemd",
  "SourceModuleType": "im_systemd"
}

Example 365. Collecting Windows Event Logs

The configuration below uses the im_msvistalog module to read Windows Event Log entries and selects only
those with event IDs 4624 or 4625. The collected logs are then converted to JSON using the
xm_json module after the Message field is deleted from the entry. Sending over UDP is carried out using the
om_udp module.

 1 <Extension json>
 2 Module xm_json
 3 </Extension>
 4
 5 <Input in_eventlog>
 6 Module im_msvistalog
 7 <QueryXML>
 8 <QueryList>
 9 <Query Id="0">
10 <Select Path="Security">
11 *[System[Level=0 and (EventID=4624 or EventID=4625)]]</Select>
12 </Query>
13 </QueryList>
14 </QueryXML>
15 <Exec>
16 delete($Message);
17 json->to_json();
18 </Exec>
19 </Input>
20
21 <Output out>
22 Module om_udp
23 Host 192.168.31.179
24 Port 5544
25 Exec to_json();
26 </Output>

Below is a sample of a log message sent over UDP.

{
  "EventTime": "2020-03-22T13:48:55.455545-07:00",
  "Hostname": "WIN-IVR26CIVSF6",
  "Keywords": "9232379236109516800",
  "EventType": "AUDIT_SUCCESS",
  "SeverityValue": 2,
  "Severity": "INFO",
  "EventID": 4624,
  "SourceName": "Microsoft-Windows-Security-Auditing",
  "ProviderGuid": "{54849625-5478-4994-A5BA-3E3B0328C30D}",
  "Version": 2,
  "TaskValue": 12544,
  "OpcodeValue": 0,
  "RecordNumber": 15033,
  "ActivityID": "{CFEB8893-00D2-0000-E289-EBCFD200D601}",
  "ExecutionProcessID": 532,
  "ExecutionThreadID": 572,
  "Channel": "Security",
  "Category": "Logon",
  "Opcode": "Info",
  "SubjectUserSid": "S-1-5-18",
  "SubjectUserName": "WIN-IVR26CIVSF6$",
  "SubjectDomainName": "WORKGROUP",
  "SubjectLogonId": "0x3e7",
  "TargetUserSid": "S-1-5-90-0-6",
  "TargetUserName": "DWM-6",
  "TargetDomainName": "Window Manager",
  "TargetLogonId": "0x1c8f13",
  "LogonType": "2",
  "LogonProcessName": "Advapi ",
  "AuthenticationPackageName": "Negotiate",
  "WorkstationName": "-",
  "LogonGuid": "{00000000-0000-0000-0000-000000000000}",
  "TransmittedServices": "-",
  "LmPackageName": "-",
  "KeyLength": "0",
  "ProcessId": "0x848",
  "ProcessName": "C:\\Windows\\System32\\winlogon.exe",
  "IpAddress": "-",
  "IpPort": "-",
  "ImpersonationLevel": "%%1833",
  "RestrictedAdminMode": "-",
  "TargetOutboundUserName": "-",
  "TargetOutboundDomainName": "-",
  "VirtualAccount": "%%1842",
  "TargetLinkedLogonId": "0x1c8f24",
  "ElevatedToken": "%%1842",
  "EventReceivedTime": "2020-03-22T13:48:56.870657-07:00",
  "SourceModuleName": "in_eventlog",
  "SourceModuleType": "im_msvistalog"
}

Configuration of NXLog for sending logs over SSL/TLS is already described in the Sending NXLogs With SSL/TLS
section on the Nagios website.

To read more about encrypted transfer of data, see the Encrypted Transfer and TLS/SSL (om_ssl) chapters on
the NXLog website.

Other examples of sending log data using NXLog from the Nagios website:

• Configuring NXLog To Send Additional Log Files
• Configuring NXLog To Send Multi-Line Log Files

82.3. Verifying Data Collection


To verify successful collection by the Nagios Log Server, open the Home page and add the relevant log source.

On the log source page, find the Verify Incoming Logs section, type in the IP address of the NXLog server and
click the Verify button. The verification should show a number of log entries which have already been accepted
by the Log Server from the specified IP address.

To observe the collected entries, go to the Reports page and click the required IP address (hostname) in the
table.

The table with log entries will open. To expand information about the specified entry, click its line in the table.

Each entry contains structured information about its fields and values.

Chapter 83. Nessus Vulnerability Scanner
The results of a Nessus scan, saved as XML, can be collected and parsed with NXLog Enterprise Edition.

Scan Sample
<?xml version="1.0" ?>
<NessusClientData_v2>
  <Report xmlns:cm="http://www.nessus.org/cm" name="Scan Testbed">
  <ReportHost name="192.168.1.112">
  <HostProperties>
  <tag name="HOST_END">Wed Jun 18 04:20:45 2014</tag>
  <tag name="patch-summary-total-cves">1</tag>
  <tag name="traceroute-hop-1">?</tag>
  <tag name="traceroute-hop-0">10.10.10.20</tag>
  <tag name="operating-system">Linux Kernel</tag>
  <tag name="host-ip">192.168.1.112</tag>
  <tag name="HOST_START">Wed Jun 18 04:19:21 2014</tag>
  </HostProperties>
  <ReportItem port="6667" svc_name="irc" protocol="tcp" severity="0" pluginID="22964"
  pluginName="Service Detection" pluginFamily="Service detection">
  <description>It was possible to identify the remote service by its banner or by
looking at the error
  message it sends when it receives an HTTP request.
  </description>
  <fname>find_service.nasl</fname>
  <plugin_modification_date>2014/06/03</plugin_modification_date>
  <plugin_name>Service Detection</plugin_name>
  <plugin_publication_date>2007/08/19</plugin_publication_date>
  <plugin_type>remote</plugin_type>
  <risk_factor>None</risk_factor>
  <script_version>$Revision: 1.137 $</script_version>
  <solution>n/a</solution>
  <synopsis>The remote service could be identified.</synopsis>
  <plugin_output>An IRC server seems to be running on this port is running on this
port.</plugin_output>
  </ReportItem>
  </ReportHost>
  </Report>
</NessusClientData_v2>

NOTE While the above sample illustrates the correct syntax, it is not a complete Nessus report.
For more information, refer to the Nessus v2 File Format document on tenable.com.

The preferred approach for parsing Nessus scans is with im_perl and a Perl script; this provides fine-grained
control over the collected information. If Perl is not available, the xm_multiline and xm_xml extension modules
can be used instead. Both methods require NXLog Enterprise Edition.

Example 366. Parsing Events With Perl

In this example, the im_perl input module executes the nessus.pl Perl script which reads the Nessus scan.
The script generates an event for each ReportItem, and includes details from Report and ReportHost in
each event. Furthermore, normalized $EventTime, $Severity, and $SeverityValue fields are added to
the event record.

nxlog.conf
1 <Input perl>
2 Module im_perl
3 PerlCode nessus.pl
4 </Input>

Event Sample
{
  "EventTime": "2014-06-18 04:20:45",
  "Report": "Scan Testbed",
  "ReportHost": "192.168.1.112",
  "port": "6667",
  "svc_name": "irc",
  "protocol": "tcp",
  "NessusSeverityValue": 0,
  "NessusSeverity": "INFO",
  "SeverityValue": 2,
  "Severity": "INFO",
  "pluginID": "22964",
  "pluginName": "Service Detection",
  "pluginFamily": "Service detection",
  "description": "It was possible to identify the remote service by its banner or by looking at
the error\nmessage it sends when it receives an HTTP request.\n",
  "fname": "find_service.nasl",
  "plugin_modification_date": "2014/06/03",
  "plugin_name": "Service Detection",
  "plugin_publication_date": "2007/08/19",
  "plugin_type": "remote",
  "risk_factor": "None",
  "script_version": "$Revision: 1.137 $",
  "solution": "n/a",
  "synopsis": "The remote service could be identified.",
  "plugin_output": "An IRC server seems to be running on this port is running on this port.",
  "EventReceivedTime": "2017-11-29 20:29:40",
  "SourceModuleName": "perl",
  "SourceModuleType": "im_perl"
}

nessus.pl (truncated)
#!/usr/bin/perl

use strict;
use warnings;

use FindBin;
use lib $FindBin::Bin;
use Log::Nxlog;
use XML::LibXML;

sub read_data {
  my $doc = XML::LibXML->load_xml( location => 'scan.nessus' );
  my $report = $doc->findnodes('/NessusClientData_v2/Report');
  my @nessus_sev = ("INFO","LOW","MEDIUM","HIGH","CRITICAL");
  my @nxlog_sev_val = (2,3,4,5,5);
  my @nxlog_sev = ("INFO","WARNING","ERROR","CRITICAL","CRITICAL");
  my %mon2num = qw(
  Jan 01 Feb 02 Mar 03 Apr 04 May 05 Jun 06
[...]
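The severity tables at the top of the script map the numeric Nessus severity (0–4) onto the NessusSeverity label and the normalized NXLog Severity fields seen in the event sample. The mapping can be restated in Python as:

```python
# Severity normalization used by the script: Nessus severity 0-4 is
# mapped to both a Nessus label and a normalized NXLog severity.
NESSUS_SEV = ["INFO", "LOW", "MEDIUM", "HIGH", "CRITICAL"]
NXLOG_SEV_VAL = [2, 3, 4, 5, 5]
NXLOG_SEV = ["INFO", "WARNING", "ERROR", "CRITICAL", "CRITICAL"]

def normalize_severity(nessus_severity):
    i = int(nessus_severity)
    return {"NessusSeverityValue": i,
            "NessusSeverity": NESSUS_SEV[i],
            "SeverityValue": NXLOG_SEV_VAL[i],
            "Severity": NXLOG_SEV[i]}
```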

Example 367. Parsing Events With xm_multiline

This example depicts an alternative way to collect results from Nessus XML scan files, recommended only if
Perl is not available. This configuration generates an event for each ReportItem found in the scan report.

nxlog.conf
 1 <Extension multiline_parser>
 2 Module xm_multiline
 3 HeaderLine /^<ReportItem/
 4 EndLine /^<\/ReportItem>/
 5 </Extension>
 6
 7 <Extension _xml>
 8 Module xm_xml
 9 ParseAttributes TRUE
10 </Extension>
11
12 <Input in>
13 Module im_file
14 File "nessus_report.xml"
15 InputType multiline_parser
16 <Exec>
17 # Discard everything that doesn't seem to be an xml event
18 if $raw_event !~ /^<ReportItem/ drop();
19
20 # Parse the xml event
21 parse_xml();
22 </Exec>
23 </Input>

Event Sample
{
  "EventReceivedTime": "2017-11-09 10:22:58",
  "SourceModuleName": "in",
  "SourceModuleType": "im_file",
  "ReportItem.port": "6667",
  "ReportItem.svc_name": "irc",
  "ReportItem.protocol": "tcp",
  "ReportItem.severity": "0",
  "ReportItem.pluginID": "22964",
  "ReportItem.pluginName": "Service Detection",
  "ReportItem.pluginFamily": "Service detection",
  "ReportItem.description": "It was possible to identify the remote service by its banner or by
looking at the error\nmessage it sends when it receives an HTTP request.\n",
  "ReportItem.fname": "find_service.nasl",
  "ReportItem.plugin_modification_date": "2014/06/03",
  "ReportItem.plugin_name": "Service Detection",
  "ReportItem.plugin_publication_date": "2007/08/19",
  "ReportItem.plugin_type": "remote",
  "ReportItem.risk_factor": "None",
  "ReportItem.script_version": "$Revision: 1.137 $",
  "ReportItem.solution": "n/a",
  "ReportItem.synopsis": "The remote service could be identified.",
  "ReportItem.plugin_output": "An IRC server seems to be running on this port is running on
this port."
}

Chapter 84. NetApp
NetApp storage is capable of sending logs to a remote Syslog destination via UDP as well as saving audit logs
directly to a network share.

Log Sample
4/14/2017 15:40:25 p-netapp1 DEBUG repl.engine.error: replStatus="8",
replFailureMsg="5898503", replFailureMsgDetail="0", functionName="repl_util::Result
repl_core::Instance::endTransfer(spinnp_uuid_t*)", lineNumber="738"↵

For more details about configuring logging on NetApp storage, please refer to the Product Documentation
section of the NetApp Support site. Search for your ONTAP version, which can be determined by running
version -b from the command line.

Example 368. Checking the ONTAP Version

This example shows the output from ONTAP 8.3.

> version -b
/cfcard/x86_64/freebsd/image1/kernel: OS 8.3.1P2

84.1. Sending Logs in Syslog Format


The NetApp web interface does not provide a way to configure an external Syslog server, but it is possible to
configure this on the command line. This is a cluster-level change that only needs to be performed once per
cluster, and will automatically be applied to all members.

NOTE The steps below have been tested with ONTAP 8 and should work for earlier versions.
Exact commands for newer versions may vary.

1. Configure NXLog to receive log entries via UDP and process them as Syslog (see the examples below). Then
restart NXLog.
2. Make sure the NXLog agent is accessible from each member of the cluster.
3. Log in to the cluster address with SSH.
4. Run the following command to configure the Syslog destination. Replace NAME and IP_ADDRESS with the
required values. The default port for UDP is 514.

> event destination create -name NAME -syslog IP_ADDRESS

5. Now select the messages to be sent. Use the same NAME as in the previous step and set MSGS to the required
value.

> event route add-destinations -destinations NAME -messagename MSGS

A list of messages can be obtained by running the command with a question mark (?) as the argument.

> event route add-destinations -destinations NAME -messagename ?

It is also possible to specify a severity level in addition to message types. The severity levels are EMERGENCY,
ALERT, CRITICAL, ERROR, WARNING, NOTICE, INFORMATIONAL, and DEBUG.

> event route add-destinations -destinations NAME -messagename MSGS


  -severity SEVERITY

Example 369. Sending Messages at Informational Level to 192.168.6.143

The following commands send all messages with Informational severity level (including higher
severities) to 192.168.6.143 in Syslog format via UDP port 514.

> event destination create -name nxlog -syslog 192.168.6.143


> event route add-destinations -destinations nxlog -messagename *
  -severity <=INFORMATIONAL

Example 370. Receiving Syslog Logs From NetApp

This example shows NetApp Syslog logs as received and processed by NXLog.

nxlog.conf
 1 <Extension _syslog>
 2 Module xm_syslog
 3 </Extension>
 4
 5 <Extension _json>
 6 Module xm_json
 7 </Extension>
 8
 9 <Input in_syslog_udp>
10 Module im_udp
11 Host 0.0.0.0
12 Port 514
13 Exec parse_syslog();
14 </Input>
15
16 <Output file>
17 Module om_file
18 File "/var/log/netapp.log"
19 Exec to_json();
20 </Output>

Output Sample
{
  "MessageSourceAddress": "192.168.5.61",
  "EventReceivedTime": "2017-04-14 15:38:58",
  "SourceModuleName": "in_syslog_udp",
  "SourceModuleType": "im_udp",
  "SyslogFacilityValue": 0,
  "SyslogFacility": "KERN",
  "SyslogSeverityValue": 7,
  "SyslogSeverity": "DEBUG",
  "SeverityValue": 1,
  "Severity": "DEBUG",
  "Hostname": "192.168.5.61",
  "EventTime": "2017-04-14 15:40:25",
  "Message": "[p-netapp1:repl.engine.error:debug]: replStatus=\"8\", replFailureMsg=\"5898503
\", replFailureMsgDetail=\"0\", functionName=\"repl_util::Result
repl_core::Instance::endTransfer(spinnp_uuid_t*)\", lineNumber=\"738\""
}

Example 371. Extracting Additional Fields From the Syslog Messages

Messages that contain key-value pairs, like the example at the beginning of the section, can be parsed with
the xm_kvp module to extract more fields if required.

nxlog.conf
 1 <Output out>
 2 Module om_null
 3 </Output>
 4
 5 <Extension _syslog>
 6 Module xm_syslog
 7 </Extension>
 8
 9 <Extension kvp>
10 Module xm_kvp
11 KVPDelimiter ,
12 KVDelimiter =
13 EscapeChar \\
14 </Extension>
15
16 <Input in_syslog_udp>
17 Module im_udp
18 Host 0.0.0.0
19 Port 514
20 <Exec>
21 parse_syslog();
22 if $Message =~ /(?x)^\[([a-z-A-Z0-9-]*):([a-z-A-Z.]*):([a-z-A-Z]*)\]:
23 \ ([a-zA-Z]+=.+)/
24 {
25 $NAUnit = $1;
26 $NAMsgName = $2;
27 $NAMsgSev = $3;
28 $NAMessage = $4;
29 kvp->parse_kvp($4);
30 }
31 </Exec>
32 </Input>

Output Sample
{
  "MessageSourceAddress": "192.168.5.63",
  "EventReceivedTime": "2017-04-15 23:13:45",
  "SourceModuleName": "in_syslog_udp",
  "SourceModuleType": "im_udp",
  "SyslogFacilityValue": 0,
  "SyslogFacility": "KERN",
  "SyslogSeverityValue": 7,
  "SyslogSeverity": "DEBUG",
  "SeverityValue": 1,
  "Severity": "DEBUG",
  "Hostname": "192.168.5.63",
  "EventTime": "2017-04-15 23:15:14",
  "Message": "[p-netapp3:repl.engine.error:debug]: replStatus=\"5\", replFailureMsg=\"5898500
\", replFailureMsgDetail=\"0\", functionName=\"void
repl_volume::Query::_queryResponse(repl_spinnp::Request&, const spinnp_repl_result_t&,
repl_spinnp::Response*)\", lineNumber=\"149\"",
  "NAUnit": "p-netapp3",
  "NAMsgName": "repl.engine.error",
  "NAMsgSev": "debug",
  "NAMessage": "replStatus=\"5\", replFailureMsg=\"5898500\", replFailureMsgDetail=\"0\",
functionName=\"void repl_volume::Query::_queryResponse(repl_spinnp::Request&, const
spinnp_repl_result_t&, repl_spinnp::Response*)\", lineNumber=\"149\"",
  "replStatus": "5",
  "replFailureMsg": "5898500",
  "replFailureMsgDetail": "0",
  "functionName": "void repl_volume::Query::_queryResponse(repl_spinnp::Request&, const
spinnp_repl_result_t&, repl_spinnp::Response*)",
  "lineNumber": "149"
}
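
As an illustration (not part of the NXLog configuration), the parsing performed by the Exec block above can be sketched in standalone Python. The pattern is a simplified equivalent of the one in the configuration, and the sample message is shortened to omit the functionName pair, because this naive split, unlike xm_kvp with its EscapeChar setting, cannot handle delimiters inside quoted values:

```python
import re

# Shortened NetApp message, as produced by parse_syslog() (functionName omitted
# because its value contains commas, which this simplified split cannot handle)
message = ('[p-netapp3:repl.engine.error:debug]: replStatus="5", '
           'replFailureMsg="5898500", replFailureMsgDetail="0", lineNumber="149"')

# Simplified equivalent of the [unit:msgname:severity]: header pattern above
header = re.match(r'^\[([a-zA-Z0-9-]*):([a-zA-Z.]*):([a-zA-Z]*)\]: ([a-zA-Z]+=.+)',
                  message)
na_unit, na_msg_name, na_msg_sev, kvp_part = header.groups()

# Rough equivalent of xm_kvp with KVPDelimiter "," and KVDelimiter "="
fields = {}
for pair in kvp_part.split(', '):
    key, _, value = pair.partition('=')
    fields[key] = value.strip('"')

print(na_unit, na_msg_sev, fields)
```

The resulting dictionary corresponds to the replStatus, replFailureMsg, and similar fields seen in the output sample.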

84.2. Sending Logs to a Remote File Share


NetApp saves its logs in the Windows EventLog (EVTX) format. In the case of a standalone unit, these logs are
available over the network in the \etc$ share and can be parsed by the im_msvistalog module. However, in
cluster mode, starting from ONTAP 7, this share is not accessible. Instead, audit logs from each virtual server can
be sent to a CIFS share where NXLog can access and read them. This configuration must be performed separately
for each virtual server.

To accomplish this, create and enable an audit policy for each virtual server.

> vserver audit create -vserver <VIRTUAL_SERVER> -destination <SHARE>
  -rotate-size <SIZE> -rotate-limit <NUMBER>
> vserver audit enable -vserver <VIRTUAL_SERVER>

Example 372. Sending NetApp Logs to a CIFS Share

These commands set up an audit policy that sends logs to the specified share, rotates log files at 100 MB,
and retains the last 10 rotated log files.

> vserver audit create -vserver vs_p12_cifs
  -destination /p-GRT -rotate-size 100M -rotate-limit 10
> vserver audit enable -vserver vs_p12_cifs

Example 373. Reading Logs From a NetApp EventLog File

This example shows NetApp events as collected and processed by NXLog from an EventLog file.

nxlog.conf
 1 <Input in_file_evt>
 2 Module im_msvistalog
 3 File C:\Temp\NXLog\audit_vs_p12_cifs_last.evtx
 4 </Input>
 5
 6 <Output file_from_eventlog>
 7 Module om_file
 8 File "C:\Temp\evt.log"
 9 Exec to_json();
10 </Output>

Output Sample
{
  "EventTime": "2017-05-10 21:17:12",
  "Hostname": "e3864b4d-8937-11e5-b812-00a098831757/bf4a40a5-9216-11e5-8d9a-00a098831757",
  "Keywords": -9214364837600035000,
  "EventType": "AUDIT_SUCCESS",
  "SeverityValue": 2,
  "Severity": "INFO",
  "EventID": 4624,
  "SourceName": "NetApp-Security-Auditing",
  "ProviderGuid": "{3CB2A168-FE19-4A4E-BDAD-DCF422F13473}",
  "Version": 101,
  "OpcodeValue": 0,
  "RecordNumber": 0,
  "ProcessID": 0,
  "ThreadID": 0,
  "Channel": "Security",
  "ERROR_EVT_UNRESOLVED": true,
  "IpAddress' IPVersion='4": "192.168.17.151",
  "IpPort": "49421",
  "TargetUserSID": "S-1-5-21-4103495029-501085275-2219630704-2697",
  "TargetUserName": "App_Service",
  "TargetUserIsLocal": "false",
  "TargetDomainName": "DOMAIN",
  "AuthenticationPackageName": "KRB5",
  "LogonType": "3",
  "EventReceivedTime": "2017-05-10 22:33:00",
  "SourceModuleName": "in_file_evt",
  "SourceModuleType": "im_msvistalog"
}

Chapter 85. .NET Application Logs
NXLog can be used to capture logs directly from Microsoft .NET™ applications using third-party utilities. This
guide demonstrates how to set up these utilities with a sample .NET application and a corresponding NXLog
configuration.

This guide uses the SharpDevelop IDE, but Microsoft Visual Studio™ on Windows, or MonoDevelop on Linux
could also be used. The log4net package and log4net.Ext.Json extension are also required.

NOTE The following instructions were tested with SharpDevelop 5.1.0, .NET 4.5, log4net 2.0.5, and
log4net.Ext.Json 1.2.15.14586. To use NuGet packages without the NuGet package manager,
simply download the nupkg file using the "Download" link, add a .zip extension to the file
name, and extract.

1. Create a new Solution in SharpDevelop by selecting File › New › Solution and choosing the Console
Application option. Enter a name and click [ Create ].
2. Place the log4net and log4net.Ext.Json DLL files in the bin\Debug directory of your project.

3. Select Project › Add Reference. Open the .NET Assembly Browser tab and click [ Browse ]. Add the two
DLL files so that they appear in the Selected References list, then click [ OK ].

4. Edit the AssemblyInfo.cs file (under Properties in the Projects sidebar) and add the following line.

[assembly: log4net.Config.XmlConfigurator(ConfigFile = "App.config", Watch = true)]

5. Click the Refresh icon in the Projects sidebar to show all project files.
6. Create a file named App.config in the bin\Debug folder, open it for editing, and add the following code.
Update the remoteAddress value with the IP address (or host name) of the NXLog instance.

App.config
<configuration>
  <configSections>
  <section name="log4net"
  type="log4net.Config.Log4NetConfigurationSectionHandler, log4net" />
  </configSections>

  <log4net>
  <appender name="UdpAppender" type="log4net.Appender.UdpAppender">
  <remoteAddress value="192.168.56.103" />
  <remotePort value="514" />
  <layout type="log4net.Layout.SerializedLayout, log4net.Ext.Json" />
  </appender>

  <root>
  <level value="DEBUG"/>
  <appender-ref ref="UdpAppender"/>
  </root>
  </log4net>
</configuration>

7. Edit the Program.cs file, and replace its contents with the following code. This loads the log4net module and
creates some sample log messages.

Program.cs
using System;
using log4net;

namespace demo
{
  class Program
  {
  private static readonly log4net.ILog mylog = log4net.LogManager.GetLogger(typeof(
Program));
  public static void Main(string[] args)
  {
  log4net.Config.BasicConfigurator.Configure();
  mylog.Debug("This is a debug message");
  mylog.Warn("This is a warn message");
  mylog.Error("This is an error message");
  mylog.Fatal("This is a fatal message");
  Console.ReadLine();
  }
  }
}

8. Configure NXLog.

nxlog.conf
 1 <Extension _json>
 2 Module xm_json
 3 </Extension>
 4
 5 <Input in>
 6 Module im_udp
 7 Host 0.0.0.0
 8 Port 514
 9 <Exec>
10 $raw_event =~ s/\s+$//;
11
12 # Parse JSON into fields for later processing if required
13 parse_json();
14 </Exec>
15 </Input>
16
17 <Output out>
18 Module om_file
19 File "/tmp/output"
20 </Output>
21
22 <Route r>
23 Path in => out
24 </Route>

9. In SharpDevelop, press the F5 key to build and run the application. The following output should appear.

Demo Output
4301 [1] DEBUG demo.Program (null) - This is a debug message↵
4424 [1] WARN demo.Program (null) - This is a warn message↵
4425 [1] ERROR demo.Program (null) - This is an error message↵
4426 [1] FATAL demo.Program (null) - This is a fatal message↵

10. Examine the /tmp/output file. It should show the sample log entries produced by the .NET application.

NXLog Output
{"date":"2014-03-
19T09:41:08.7231787+01:00","Level":"DEBUG","AppDomain":"demo.exe","Logger":"demo.Program","Threa
d":"1","Message":"This is a debug message","Exception":""}↵
{"date":"2014-03-
19T09:41:08.8456254+01:00","Level":"WARN","AppDomain":"demo.exe","Logger":"demo.Program","Thread
":"1","Message":"This is a warn message","Exception":""}↵
{"date":"2014-03-
19T09:41:08.8466327+01:00","Level":"ERROR","AppDomain":"demo.exe","Logger":"demo.Program","Threa
d":"1","Message":"This is an error message","Exception":""}↵
{"date":"2014-03-
19T09:41:08.8476223+01:00","Level":"FATAL","AppDomain":"demo.exe","Logger":"demo.Program","Threa
d":"1","Message":"This is a fatal message","Exception":""}↵
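
As a rough illustration of what the Exec block above does, the following standalone Python sketch trims trailing whitespace (mirroring the s/\s+$// substitution) and parses one of the JSON lines shown in the output:

```python
import json

# One datagram as produced by the log4net SerializedLayout (first line of the
# NXLog output above); rstrip() mirrors the s/\s+$// substitution
raw_event = ('{"date":"2014-03-19T09:41:08.7231787+01:00","Level":"DEBUG",'
             '"AppDomain":"demo.exe","Logger":"demo.Program","Thread":"1",'
             '"Message":"This is a debug message","Exception":""}\r\n').rstrip()

# parse_json() similarly turns each JSON member into an event field
fields = json.loads(raw_event)
print(fields['Level'], fields['Logger'], fields['Message'])
```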

Chapter 86. Nginx
The Nginx web server supports error and access logging. Both types of logs can be written to a file, forwarded
as Syslog via UDP, or written as Syslog to a Unix domain socket. The sections below provide a brief overview; see
the Logging section of the Nginx documentation for more detailed information.

86.1. Error Log


The error_log directive configures the destination and log level for the error log. This directive can be given in
the main (top-level) configuration context to override the default. It can also be specified at the http, stream,
server, and location levels, where it will override the inherited setting from the higher levels.

Example 374. Collecting Error Logs From File

With the following directive, Nginx will log all messages of "warn" severity or higher to the specified log file.

nginx.conf
error_log /var/log/nginx/error.log warn;

Following is a log message generated by Nginx, an NXLog configuration for parsing it, and the resulting
JSON.

Log Sample
2017/08/07 04:37:16 [emerg] 17479#17479: epoll_create() failed (24: Too many open files)↵

nxlog.conf
 1 <Input nginx_error>
 2 Module im_file
 3 File '/var/log/nginx/error.log'
 4 <Exec>
 5 if $raw_event =~ /^(\S+ \S+) \[(\S+)\] (\d+)\#(\d+): (\*(\d+) )?(.+)$/
 6 {
 7 $EventTime = strptime($1, '%Y/%m/%d %H:%M:%S');
 8 $NginxLogLevel = $2;
 9 $NginxPID = $3;
10 $NginxTID = $4;
11 if $6 != '' $NginxCID = $6;
12 $Message = $7;
13 }
14 </Exec>
15 </Input>

Output Sample
{
  "EventReceivedTime": "2017-08-07T04:37:16.245375+02:00",
  "SourceModuleName": "nginx_error",
  "SourceModuleType": "im_file",
  "EventTime": "2017-08-07T04:37:16.000000+02:00",
  "NginxLogLevel": "emerg",
  "NginxPID": "17479",
  "NginxTID": "17479",
  "Message": "epoll_create() failed (24: Too many open files)"
}
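
The same parsing logic can be tried outside NXLog. This standalone Python sketch, a simplified equivalent of the Exec block above, applies the pattern to the log sample:

```python
import re
from datetime import datetime

# The Nginx error log sample from above
line = ('2017/08/07 04:37:16 [emerg] 17479#17479: '
        'epoll_create() failed (24: Too many open files)')

# Same pattern as in the Exec block; "#" needs no escaping in Python
m = re.match(r'^(\S+ \S+) \[(\S+)\] (\d+)#(\d+): (\*(\d+) )?(.+)$', line)

event_time = datetime.strptime(m.group(1), '%Y/%m/%d %H:%M:%S')  # like strptime()
level, pid, tid = m.group(2), m.group(3), m.group(4)
cid = m.group(6)      # connection ID; None when the "*N " prefix is absent
message = m.group(7)

print(level, pid, message)
```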

Example 375. Collecting Error Logs via Syslog

With this directive, Nginx will forward all messages of "warn" severity or higher to the specified Syslog
server. The messages will be generated with the "local7" facility.

nginx.conf
error_log syslog:server=192.168.1.1:514,facility=local7 warn;

This NXLog configuration can be used to parse the logs.

nxlog.conf
 1 <Input nginx_error>
 2 Module im_udp
 3 Host 0.0.0.0
 4 Port 514
 5 <Exec>
 6 parse_syslog();
 7 if $Message =~ /^\S+ \S+ \[\S+\] (\d+)\#(\d+): (\*(\d+) )?(.+)$/
 8 {
 9 $NginxPID = $1;
10 $NginxTID = $2;
11 if $4 != '' $NginxCID = $4;
12 $Message = $5;
13 }
14 </Exec>
15 </Input>

Output Sample
{
  "MessageSourceAddress": "192.168.1.12",
  "EventReceivedTime": "2017-08-07T04:37:16.441368+02:00",
  "SourceModuleName": "nginx_error",
  "SourceModuleType": "im_udp",
  "SyslogFacilityValue": 23,
  "SyslogFacility": "LOCAL7",
  "SyslogSeverityValue": 1,
  "SyslogSeverity": "ALERT",
  "SeverityValue": 5,
  "Severity": "CRITICAL",
  "Hostname": "nginx-host",
  "EventTime": "2017-08-07T04:37:16.000000+02:00",
  "SourceName": "nginx",
  "Message": "epoll_create() failed (24: Too many open files)",
  "NginxPID": "17479",
  "NginxTID": "17479"
}

Example 376. Collecting Error Logs via Unix Domain Socket

With this directive, Nginx will forward all messages of "warn" severity or higher to the specified Unix
domain socket. The messages will be sent in Syslog format with the "local7" Syslog facility.

nginx.conf
error_log syslog:server=unix:/var/log/nginx/error.sock,facility=local7 warn;

nxlog.conf
 1 <Input nginx_error>
 2 Module im_uds
 3 UDS /var/log/nginx/error.sock
 4 <Exec>
 5 parse_syslog();
 6 if $Message =~ /^\S+ \S+ \[\S+\] (\d+)\#(\d+): (\*(\d+) )?(.+)$/
 7 {
 8 $NginxPID = $1;
 9 $NginxTID = $2;
10 if $4 != '' $NginxCID = $4;
11 $Message = $5;
12 }
13 </Exec>
14 </Input>

86.2. Access Log


By default, Nginx writes access logs to logs/access.log in the Combined Log Format. An NXLog configuration
example for parsing this can be found in the Common & Combined Log Formats section. Access logs can also be
forwarded in Syslog format via UDP or a Unix domain socket, as shown below.

The log format can be customized by setting the log_format directive; see the Nginx documentation for more
information.

Example 377. Collecting Access Logs via Syslog

With this directive, Nginx will forward access logs to the specified Syslog server. The messages will be
generated with the "local7" facility and the "info" severity.

nginx.conf
access_log syslog:server=192.168.1.1:514,facility=local7,severity=info;

This NXLog configuration can be used to parse the logs.

nxlog.conf
 1 <Input nginx_access>
 2 Module im_udp
 3 Host 0.0.0.0
 4 Port 514
 5 <Exec>
 6 parse_syslog();
 7 if $Message =~ /(?x)^(\S+)\ \S+\ (\S+)\ \[([^\]]+)\]\ \"(\S+)\ (.+)
 8 \ HTTP\/\d\.\d\"\ (\S+)\ (\S+)\ \"([^\"]+)\"
 9 \ \"([^\"]+)\"/
10 {
11 $Hostname = $1;
12 if $2 != '-' $AccountName = $2;
13 $EventTime = parsedate($3);
14 $HTTPMethod = $4;
15 $HTTPURL = $5;
16 $HTTPResponseStatus = $6;
17 if $7 != '-' $FileSize = $7;
18 if $8 != '-' $HTTPReferer = $8;
19 if $9 != '-' $HTTPUserAgent = $9;
20 delete($Message);
21 }
22 </Exec>
23 </Input>

Output Sample
{
  "MessageSourceAddress": "192.168.1.12",
  "EventReceivedTime": "2017-08-07T06:15:55.662319+02:00",
  "SourceModuleName": "nginx_access",
  "SourceModuleType": "im_udp",
  "SyslogFacilityValue": 23,
  "SyslogFacility": "LOCAL7",
  "SyslogSeverityValue": 6,
  "SyslogSeverity": "INFO",
  "SeverityValue": 2,
  "Severity": "INFO",
  "Hostname": "192.168.1.12",
  "EventTime": "2017-08-07T06:15:55.000000+02:00",
  "SourceName": "nginx",
  "HTTPMethod": "GET",
  "HTTPURL": "/",
  "HTTPResponseStatus": "304",
  "FileSize": "0",
  "HTTPUserAgent": "Mozilla/5.0 (X11; Linux x86_64; rv:52.0) Gecko/20100101 Firefox/52.0"
}
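
For illustration, here is a standalone Python sketch of the same Combined Log Format parsing. Note that the message line below is reconstructed to match the output sample, not captured from a live server:

```python
import re

# A Combined Log Format message as it would appear in $Message after
# parse_syslog(); reconstructed from the output sample above
message = ('192.168.1.12 - - [07/Aug/2017:06:15:55 +0200] "GET / HTTP/1.1" '
           '304 0 "-" "Mozilla/5.0 (X11; Linux x86_64; rv:52.0) '
           'Gecko/20100101 Firefox/52.0"')

# The same pattern as the Exec block, written without the (?x) spacing
pattern = (r'^(\S+) \S+ (\S+) \[([^\]]+)\] "(\S+) (.+)'
           r' HTTP/\d\.\d" (\S+) (\S+) "([^"]+)" "([^"]+)"')
m = re.match(pattern, message)
hostname, account, timestamp, method, url, status, size, referer, agent = m.groups()

# "-" marks an absent value, hence the if $N != '-' guards in the Exec block
print(method, url, status, referer)
```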

Example 378. Collecting Access Logs via Unix Domain Socket

With this directive, Nginx will forward access logs to the specified Unix domain socket. The messages will
be sent in Syslog format with the "local7" facility and the "info" severity.

nginx.conf
access_log syslog:server=unix:/var/log/nginx/access.sock,facility=local7,severity=info;

nxlog.conf
 1 <Input nginx_access>
 2 Module im_uds
 3 UDS /var/log/nginx/access.sock
 4 <Exec>
 5 parse_syslog();
 6 if $Message =~ /(?x)^(\S+)\ \S+\ (\S+)\ \[([^\]]+)\]\ \"(\S+)\ (.+)
 7 \ HTTP\/\d\.\d\"\ (\S+)\ (\S+)\ \"([^\"]+)\"
 8 \ \"([^\"]+)\"/
 9 {
10 $Hostname = $1;
11 if $2 != '-' $AccountName = $2;
12 $EventTime = parsedate($3);
13 $HTTPMethod = $4;
14 $HTTPURL = $5;
15 $HTTPResponseStatus = $6;
16 if $7 != '-' $FileSize = $7;
17 if $8 != '-' $HTTPReferer = $8;
18 if $9 != '-' $HTTPUserAgent = $9;
19 delete($Message);
20 }
21 </Exec>
22 </Input>

Chapter 87. Okta
Okta provides identity management cloud software services.

NXLog can be set up to pull events from Okta using their REST API. For more information, see the Okta add-on.

Chapter 88. Osquery
Osquery provides easy access to operating system information via SQL queries, as it exposes operating system
data in a relational data model.

NXLog can be integrated with osquery when deployed on Windows, MacOS, Linux, and FreeBSD. Osquery does
not provide a mechanism to forward logs; it relies on software such as NXLog to do so.

88.1. Using Osquery


Osquery utilizes SQL queries to retrieve information.

Example 379. Using osquery

The following simple SELECT statement lists process information:

SELECT pid, name, path FROM processes;

Table 59. Sample Query Result on Linux

pid  name     path
1    bash     /usr/bin/bash
162  nxlog    /opt/nxlog/bin/nxlog
178  osquery  /usr/bin/osqueryd
22   bash     /usr/bin/bash
37   vim      /usr/bin/vim

Table 60. Sample Query Result on Windows

pid  name              path
0    [System Process]
4    System
244  smss.exe          C:\Windows\System32\smss.exe
324  csrss.exe         C:\Windows\System32\csrss.exe
404  csrss.exe         C:\Windows\System32\csrss.exe
412  wininit.exe       C:\Windows\System32\wininit.exe
596  svchost.exe       C:\Windows\System32\svchost.exe

For more information about osquery commands, see the osqueryi (shell) and SQL Introduction sections on the
osquery website.

88.2. Configuring Osquery


The osqueryd daemon allows scheduling queries and provides two types of logging:

• differential — logs changes in the system between the previous and the current query executions.
• snapshot — logs the data set obtained at a certain point in time.

For more information on installing osquery, see the Getting Started section on the osquery website.

Osquery can be configured via the osquery.conf file using a JSON format. This file should be located under the
following paths:

• Linux: /etc/osquery/

• Windows: C:\Program Files\osquery\

• FreeBSD: /usr/local/etc/

• MacOS: /private/var/osquery/

Example 380. Configuring Osquery for the Differential Mode

The following configuration is an example of a differential logging configuration. The schedule object
contains the nested processes object, which contains two fields:

• query — This key specifies the SQL statement. In this case, it selects all entries from the processes
table.
• interval — This key contains the number of seconds after which the statement is executed again. In
this example, the query is executed every 10 seconds.

osquery.conf
{
  "schedule": {
  "processes": {
  "query": "SELECT pid, name, path FROM processes;",
  "interval": 10
  }
  }
}

Example 381. Configuring Osquery for the Snapshot Mode

The following configuration is an example of the snapshot logging configuration.

The processes object contains the additional snapshot key, which is a boolean flag to enable the snapshot
logging mode.

osquery.conf
{
  "schedule": {
  "processes": {
  "query": "SELECT pid, name, path FROM processes;",
  "interval": 10,
  "snapshot": true
  }
  }
}

For more information, see the Configuration section on the osquery website.

88.3. Log Samples


Osquery creates status logs of its own execution for both differential and snapshot logging.

Execution logs are stored in the following files:

• osqueryd.INFO,
• osqueryd.WARNING,
• osqueryd.ERROR.

By default, all osquery log files are available under the following paths:

• On Unix-like systems: /var/log/osquery/

• On Windows: C:\Program Files\osquery\log\

Example 382. Execution Logs

Below are samples of the execution logs from Ubuntu and Windows.

osqueryd.INFO on Ubuntu
Log file created at: 2019/11/25 10:07:54↵
Running on machine: ubuntu↵
Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg↵
I1125 10:07:54.233732 28060 events.cpp:863] Event publisher not enabled: auditeventpublisher:
Publisher disabled via configuration↵
I1125 10:07:54.233835 28060 events.cpp:863] Event publisher not enabled: syslog: Publisher
disabled via configuration↵

osqueryd.INFO on Windows
Log file created at: 2019/11/28 10:57:00↵
Running on machine: WIN-SFULD4GOF4H↵
Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg↵
I1128 10:57:00.979398 3908 scheduler.cpp:105] Executing scheduled query processes: SELECT pid,
name, path FROM processes;↵
E1128 10:57:01.009029 3908 processes.cpp:312] Failed to lookup path information for process 4↵
E1128 10:57:01.024600 3908 processes.cpp:332] Failed to get cwd for 4 with 31↵
I1128 10:58:01.649113 3908 scheduler.cpp:105] Executing scheduled query processes: SELECT pid,
name, path FROM processes;↵
E1128 10:58:01.681404 3908 processes.cpp:312] Failed to lookup path information for process 4↵
E1128 10:58:01.712568 3908 processes.cpp:332] Failed to get cwd for 4 with 31↵

The osqueryd.results.log file stores differential log entries.

Example 383. Differential Logs

Below are samples of the differential logs from Ubuntu and Windows.

osqueryd.results.log on Ubuntu
{"name":"users","hostIdentifier":"ubuntu","calendarTime":"Mon Nov 25 09:11:40 2019
UTC","unixTime":1574673100,"epoch":0,"counter":0,"logNumericsAsNumbers":false,"columns":{"direc
tory":"/","uid":"111","username":"kernoops"},"action":"removed"}↵
{"name":"users","hostIdentifier":"ubuntu","calendarTime":"Mon Nov 25 09:11:40 2019
UTC","unixTime":1574673100,"epoch":0,"counter":0,"logNumericsAsNumbers":false,"columns":{"direc
tory":"/bin","uid":"2","username":"bin"},"action":"removed"}↵

osqueryd.results.log on Windows
{"name":"processes","hostIdentifier":"WIN-SFULD4GOF4H","calendarTime":"Fri Nov 29 18:18:00 2019
UTC","unixTime":1575051480,"epoch":0,"counter":23,"logNumericsAsNumbers":false,"columns":{"name
":"conhost.exe","path":"C:\\Windows\\System32\\conhost.exe","pid":"2936"},"action":"removed"}↵
{"name":"processes","hostIdentifier":"WIN-SFULD4GOF4H","calendarTime":"Fri Nov 29 18:18:00 2019
UTC","unixTime":1575051480,"epoch":0,"counter":23,"logNumericsAsNumbers":false,"columns":{"name
":"dllhost.exe","path":"C:\\Windows\\System32\\dllhost.exe","pid":"3784"},"action":"removed"}↵

The osqueryd.snapshots.log file stores snapshot logs.

Example 384. Snapshot Logs

Below are samples of the snapshot logs from Ubuntu and Windows.

osqueryd.snapshots.log on Ubuntu
{"snapshot":[{"name":"gsd-rfkill","path":"/usr/lib/gnome-settings-daemon/gsd-
rfkill","pid":"944"},{"name":"gsd-screensaver","path":"/usr/lib/gnome-settings-daemon/gsd-
screensaver-proxy","pid":"947"},{"name":"gsd-sharing","path":"/usr/lib/gnome-settings-
daemon/gsd-sharing","pid":"949"},{"name":"gsd-smartcard","path":"/usr/lib/gnome-settings-
daemon/gsd-smartcard","pid":"955"},{"name":"gsd-sound","path":"/usr/lib/gnome-settings-
daemon/gsd-sound","pid":"962"},{"name":"gsd-wacom","path":"/usr/lib/gnome-settings-daemon/gsd-
wacom","pid":"965"},{"name":"kstrp","path":"","pid":"98"}],"action":"snapshot","name":"users","
hostIdentifier":"ubuntu","calendarTime":"Mon Nov 25 09:14:25 2019
UTC","unixTime":1574673265,"epoch":0,"counter":0,"logNumericsAsNumbers":false}↵

osqueryd.snapshots.log on Windows
{"snapshot":[{"name":"[System
Process]","path":"","pid":"0"},{"name":"System","path":"","pid":"4"},{"name":"smss.exe","path":
"C:\\Windows\\System32\\smss.exe","pid":"244"},{"name":"csrss.exe","path":"C:\\Windows\\System3
2\\csrss.exe","pid":"328"},{"name":"wininit.exe","path":"C:\\Windows\\System32\\wininit.exe","p
id":"408"},{"name":"winlogon.exe","path":"C:\\Windows\\System32\\winlogon.exe","pid":"452"},{"n
ame":"services.exe","path":"C:\\Windows\\System32\\services.exe","pid":"512"},{"name":"RuntimeB
roker.exe","path":"C:\\Windows\\System32\\RuntimeBroker.exe","pid":"2664"},{"name":"sihost.exe"
,"path":"C:\\Windows\\System32\\sihost.exe","pid":"2700"},{"name":"svchost.exe","path":"C:\\Win
dows\\System32\\svchost.exe","pid":"2708"}],"action":"snapshot","name":"processes","hostIdentif
ier":"WIN-SFULD4GOF4H","calendarTime":"Fri Nov 29 18:13:04 2019
UTC","unixTime":1575051184,"epoch":0,"counter":0,"logNumericsAsNumbers":false}↵

For more information about the logging system of osquery, see the Logging section on the osquery website.
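
Since each osquery result entry is a single-line JSON object, any JSON parser can process it. The following standalone Python sketch reads one of the differential entries shown above, much as parse_json() does in the configurations of the next section:

```python
import json

# One differential entry from osqueryd.results.log (Windows sample above)
line = ('{"name":"processes","hostIdentifier":"WIN-SFULD4GOF4H",'
        '"calendarTime":"Fri Nov 29 18:18:00 2019 UTC","unixTime":1575051480,'
        '"epoch":0,"counter":23,"logNumericsAsNumbers":false,'
        '"columns":{"name":"conhost.exe",'
        '"path":"C:\\\\Windows\\\\System32\\\\conhost.exe","pid":"2936"},'
        '"action":"removed"}')

entry = json.loads(line)

# The nested "columns" object holds the selected row; "action" records whether
# the row was added to or removed from the previous result set
print(entry['action'], entry['columns']['name'], entry['columns']['path'])
```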

88.4. Configuring NXLog


This section provides examples on how to configure NXLog to integrate with osquery.

Example 385. Configuring NXLog for Unix-like Systems

The following configuration uses the im_file module to read the osquery log entries and process them with
the xm_json module.

nxlog.conf
 1 <Extension _json>
 2 Module xm_json
 3 </Extension>
 4
 5 <Input osquery_diff>
 6 Module im_file
 7 File "/var/log/osquery/osqueryd.results.log"
 8 Exec parse_json();
 9 </Input>
10
11 <Input osquery_snap>
12 Module im_file
13 File "/var/log/osquery/osqueryd.snapshots.log"
14 Exec parse_json();
15 </Input>

Example 386. Configuring NXLog for Windows

The following configuration uses the im_file module to read the osquery log entries and process them with
the xm_json module.

nxlog.conf
 1 <Extension _json>
 2 Module xm_json
 3 </Extension>
 4
 5 <Input osquery_diff>
 6 Module im_file
 7 File "C:\\Program Files\\osquery\\log\\osqueryd.results.log"
 8 Exec parse_json();
 9 </Input>
10
11 <Input osquery_snap>
12 Module im_file
13 File "C:\\Program Files\\osquery\\log\\osqueryd.snapshots.log"
14 Exec parse_json();
15 </Input>

Example 387. Forwarding Osquery Logs

Using an appropriate output module, NXLog can be configured to forward osquery logs to a remote system.
As an example, the om_tcp module is used.

nxlog.conf
1 <Output snap_out>
2 Module om_tcp
3 Host 192.168.1.1
4 Port 1515
5 </Output>

Chapter 89. Postfix
NXLog can be configured to collect logs from the Postfix mail server. Postfix logs its actions to the standard
system logger with the mail facility type.

Syslog/Postfix Log Format


Oct 10 01:23:45 hostname postfix/component[pid]: message↵

The component indicates the Postfix process that produced the log message. Most log entries, those relevant to
particular email messages, also include the queue ID of the email message as the first part of the message.

Log Sample
Oct 10 01:23:45 mailhost postfix/smtpd[2534]: 4F9D195432C: client=localhost[127.0.0.1]↵
Oct 10 01:23:45 mailhost postfix/cleanup[2536]: 4F9D195432C: message-
id=<20161001103311.4F9D195432C@mail.example.com>↵
Oct 10 01:23:46 mailhost postfix/qmgr[2531]: 4F9D195432C: from=<origin@other.com>, size=344, nrcpt=1
(queue active)↵
Oct 10 01:23:46 mailhost postfix/smtp[2538]: 4F9D195432C: to=<destination@example.com>,
relay=mail.example.com[216.150.150.131], delay=11, status=sent (250 Ok: queued as 8BDCA22DA71)↵

89.1. Configuring Postfix Logging


Several configuration directives, set in main.cf, can be used to adjust Postfix’s logging behavior.

lmtp_tls_loglevel
smtp_tls_loglevel
smtpd_tls_loglevel
The loglevel directives should be set to 0 (disabled, the default) or 1 during normal operation. Values of 2 or
3 can be used for troubleshooting.

debug_peer_level
Specify the increment in logging level when a remote client or server matches a pattern in the
debug_peer_list parameter (default 2).

debug_peer_list
Provide a list of remote client or server hostnames or network address patterns for which to increase the
logging level.

See the Postfix Debugging Howto and the postconf(5) man page for more information.

89.2. Collecting and Processing Postfix Logs


The local syslogd configuration determines where and how the mail facility logs are written, but normally the
logs can be found in /var/log/maillog or /var/log/mail.log. See Collecting and Parsing Syslog and Linux
System Logs for more information about collecting Syslog logs.

Example 388. Reading From Syslog Log File

This configuration reads the Postfix logs from file and forwards them via TCP to a remote host.

nxlog.conf
 1 <Input postfix>
 2 Module im_file
 3 File "/var/log/mail.log"
 4 </Input>
 5
 6 <Output out>
 7 Module om_tcp
 8 Host 192.168.1.1
 9 Port 1514
10 </Output>

It is also possible to parse individual Postfix messages into fields, providing access to more fine-grained filtering
and analysis of log data. The NXLog Exec directive can be used to apply regular expressions for this purpose.

Example 389. Extracting Additional Fields and Filtering

Here is the Input module instance again, extended to parse the Postfix messages in the example above.
Various fields are added to the event record, depending on the particular message received. Then in the
Output module instance, only those log entries that are from Postfix’s smtp component and are being
relayed through mail.example.com are logged to the output file.

nxlog.conf (truncated)
 1 <Extension _json>
 2 Module xm_json
 3 </Extension>
 4
 5 <Input postfix>
 6 Module im_file
 7 File "/var/log/mail.log"
 8 <Exec>
 9 if $raw_event =~ /(?x)^(\S+\ +\d+\ \d+:\d+:\d+)\ (\S+)
10 \ postfix\/(\S+)\[(\d+)\]:\ (.+)$/
11 {
12 $EventTime = parsedate($1);
13 $HostName = $2;
14 $SourceName = "postfix";
15 $Component = $3;
16 $ProcessID = $4;
17 $Message = $5;
18 if $Component == "smtpd" and
19 $Message =~ /(\w+): client=(\S+)\[([\d.]+)\]/
20 {
21 $QueueID = $1;
22 $ClientHostname = $2;
23 $ClientIP = $3;
24 }
25 if $Component == "cleanup" and
26 $Message =~ /(\w+): message-id=(<\S+@\S+>)/
27 {
28 $QueueID = $1;
29 [...]

Using the example log entries above, this configuration results in a single JSON entry written to the log file.

Output Sample
{
  "EventReceivedTime": "2016-10-05 16:38:57",
  "SourceModuleName": "postfix",
  "SourceModuleType": "im_file",
  "EventTime": "2016-10-10 01:23:46",
  "HostName": "mail",
  "SourceName": "postfix",
  "Component": "smtp",
  "ProcessID": "2538",
  "Message": "4F9D195432C: to=<destination@example.com>,
relay=mail.example.com[216.150.150.131], delay=11, status=sent (250 Ok: queued as
8BDCA22DA71)",
  "QueueID": "4F9D195432C",
  "Recipient": "<destination@example.com>",
  "RelayHostname": "mail.example.com",
  "RelayIP": "216.150.150.131",
  "Delay": "11",
  "Status": "sent",
  "SMTPCode": "250",
  "QueueIDDelivered": "8BDCA22DA71"
}
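
For illustration, the pattern from the Exec block above can be exercised in standalone Python against the first smtpd line of the log sample (written without the (?x) whitespace escapes):

```python
import re

# First line of the Postfix log sample above
line = ('Oct 10 01:23:45 mailhost postfix/smtpd[2534]: '
        '4F9D195432C: client=localhost[127.0.0.1]')

# Same structure as the nxlog.conf pattern; " +" allows syslog's padded day
m = re.match(r'^(\S+ +\d+ \d+:\d+:\d+) (\S+) postfix/(\S+)\[(\d+)\]: (.+)$', line)
timestamp, hostname, component, pid, message = m.groups()

# The queue ID prefixes most per-message log entries
queue = re.match(r'(\w+): client=(\S+)\[([\d.]+)\]', message)
print(component, pid, queue.group(1))
```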

Chapter 90. Promise
The Promise Storage Area Network (SAN) is capable of sending SNMP traps to remote destinations.
Unfortunately, Syslog is not supported on these units.

Log Sample
2654 Fan 4 Enc 1 Info Apr 27, 2017 19:08:48 PSU fan or blower speed is decreased↵

There is a single management interface no matter how many shelves are installed, so configuration only needs
to be performed once from the Promise web interface or the command line.

More information about configuring Promise arrays is available in the E-Class product manual. Additional
details on SNMP configuration and links to MIB files are available in the following KB article.

1. Configure NXLog for receiving SNMP traps (see the example below). Remember to place the MIB file in the
directory specified by the MIBDir directive. Then restart NXLog.
2. Make sure the NXLog agent is accessible from the unit.
3. Configure Promise by using the web interface or the command line. See the following sections.

Example 390. Receiving SNMP Traps From Promise

This example shows SNMP trap messages from Promise, as received and processed by NXLog.

nxlog.conf
 1 <Extension _syslog>
 2 Module xm_syslog
 3 </Extension>
 4
 5 <Extension _json>
 6 Module xm_json
 7 </Extension>
 8
 9 <Extension snmp>
10 Module xm_snmp
11 MIBDir /usr/share/mibs/iana
12 </Extension>
13
14 <Input in_snmp_udp>
15 Module im_udp
16 Host 0.0.0.0
17 Port 162
18 InputType snmp
19 Exec parse_syslog();
20 </Input>
21
22 <Output file_snmp>
23 Module om_file
24 File "/var/log/snmp.log"
25 Exec to_json();
26 </Output>

Output Sample
{
  "SNMP.CommunityString": "public",
  "SNMP.RequestID": 1295816642,
  "EventTime": "2017-04-27 20:44:37",
  "SeverityValue": 2,
  "Severity": "INFO",
  "OID.1.3.6.1.2.1.1.3.0": 67,
  "OID.1.3.6.1.6.3.1.1.4.1.0": "1.3.6.1.4.1.7933.1.20.0.11.0.1",
  "OID.1.3.6.1.4.1.7933.1.20.0.10.1": 2654,
  "OID.1.3.6.1.4.1.7933.1.20.0.10.2": 327683,
  "OID.1.3.6.1.4.1.7933.1.20.0.10.3": 327683,
  "OID.1.3.6.1.4.1.7933.1.20.0.10.4": 2,
  "OID.1.3.6.1.4.1.7933.1.20.0.10.5": "Fan 4 Enc 1",
  "OID.1.3.6.1.4.1.7933.1.20.0.10.6": "Apr 27, 2017 19:08:48",
  "OID.1.3.6.1.4.1.7933.1.20.0.10.7": "PSU fan or blower speed is decreased",
  "MessageSourceAddress": "192.168.10.21",
  "EventReceivedTime": "2017-04-27 20:44:37",
  "SourceModuleName": "in_snmp_udp",
  "SourceModuleType": "im_udp",
  "SyslogFacilityValue": 1,
  "SyslogFacility": "USER",
  "SyslogSeverityValue": 5,
  "SyslogSeverity": "NOTICE",
  "Hostname": "INFO",
  "Message": "OID.1.3.6.1.2.1.1.3.0=\"67\" OID.1.3.6.1.6.3.1.1.4.1.0=
\"1.3.6.1.4.1.7933.1.20.0.11.0.1\" OID.1.3.6.1.4.1.7933.1.20.0.10.1=\"2654\"
OID.1.3.6.1.4.1.7933.1.20.0.10.2=\"327683\" OID.1.3.6.1.4.1.7933.1.20.0.10.3=\"327683\"
OID.1.3.6.1.4.1.7933.1.20.0.10.4=\"2\" OID.1.3.6.1.4.1.7933.1.20.0.10.5=\"Fan 4 Enc 1\"
OID.1.3.6.1.4.1.7933.1.20.0.10.6=\"Apr 27, 2017 19:08:48\" OID.1.3.6.1.4.1.7933.1.20.0.10.7=
\"PSU fan or blower speed is decreased\""
}

NOTE The steps below have been tested on the VTrak E600 series and should work on other models as well.

90.1. Configuring via Web Interface


Follow these steps to enable sending SNMP traps through the web interface.

1. Log in to the web interface.


2. Go to Subsystems › Administrative Tools › Software Management.

3. Under the Service tab, click on [ SNMP ].


4. Under Trap Sink, specify the Trap Sink Server IP address and select the appropriate Trap Filter to choose
the logging level. Then click [ Update ].

5. Make sure Running Status is Started and Startup Type is set to Automatic.
6. Click [ Submit ] and confirm SNMP restart.

90.2. Configuring via Command Line


Follow these steps to enable sending SNMP traps through the command line interface.

1. Connect to Promise via SSH.


2. Type menu.

3. Go to Additional Info and Management › Software Management › SNMP.

4. Select Trap Sinks › Create New Trap Sink.

5. Specify the remote IP address under Trap Sink Server and the logging level under Trap Filter.
6. Select Save SNMP Trap Sink.
7. Select Return to Previous Menu and then Restart.
8. Make sure Startup Type is set to Automatic.

Chapter 91. Rapid7 InsightIDR SIEM
Rapid7 InsightIDR is an intruder analytics suite that helps detect and investigate security incidents. It works with
data collected from network logs, authentication logs, and other log sources from endpoint devices.

NXLog can be configured to collect and forward event logs to InsightIDR. It can also be used to rewrite event
fields to meet the log field name requirements of InsightIDR’s Universal Event Format (UEF).

91.1. Configuring InsightIDR for Log Collection


This topic provides information about setting up log sources for InsightIDR. This will need to be done once for
each log source, making sure that the correct details are provided for each log type collected from that source. In
addition to this guide, please see the Rapid7 InsightIDR documentation.

1. Create, deploy and activate an InsightIDR Collector. A Collector is required before adding any data sources to
InsightIDR.

Read more about the requirements in Rapid7’s InsightIDR Collector Requirements documentation before you
install and deploy the InsightIDR Collector.

2. To confirm that the Collector is running, select Data Collection in the left side panel, then under the Data
Collection Management pane, select the Collectors tab.

Here you can check the state of the Collectors. If a Collector is not running, review the Collector
Troubleshooting page in Rapid7's documentation.

3. To add a new Data Source, in the Data Collection Management pane, select the Event Sources tab, then in
the Product Types list, adjacent to Rapid7, click Add.

The Add Event source wizard opens.

4. To configure the Event Source, select the name of the Collector and the Event Source from the
corresponding dropdown lists, optionally enter the Display Name, and then select the Timezone from the
dropdown list.

For the Event Source, select either Rapid 7 Raw Data (if using JSON) or Rapid7 Generic Syslog (if using
Syslog-formatted logs).

5. For the Collection Method, select the Listen For Syslog button, enter the port number in the Port field, then
select a Protocol from the dropdown list.
6. If TCP was selected for the Protocol, optionally select Encrypted, then click Save.

The newly created Event Source is visible under the Event Sources tab of the Data Collection Management
Pane.

91.2. Configuring NXLog for Log Processing


This section shows example configurations to send event logs to InsightIDR.

91.2.1. Sending Generic Structured Logs


NXLog can be used to send structured logs (as JSON or other KVP), and generic log formats like Snare or Syslog to
InsightIDR. To illustrate this, the following examples show configurations collecting and sending event logs from
common Windows log sources.

Example 391. Event Logs Collected from Event Tracing for Windows (ETW)

This configuration uses the im_etw module to collect Windows DNS Server log data and send it to
InsightIDR as JSON.

nxlog.conf
<Input etw_in>
    Module    im_etw
    Provider  Microsoft-Windows-DNSServer
    Exec      to_json();
</Input>

Structured JSON Event Sample in InsightIDR


{
  "SourceName": "Microsoft-Windows-DNSServer",
  "ProviderGuid": "{EB79061A-A566-4698-9119-3ED2807060E7}",
  "EventId": 256,
  "EventTime": "2019-02-07T11:03:18.320983+00:00",
  "Domain": "NT AUTHORITY",
  "AccountName": "SYSTEM",
  "UserID": "S-1-5-18",
  "AccountType": "User",
  "Source": "172.31.33.197",
  "QNAME": "1.0.0.127.in-addr.arpa.",
  "QTYPE": "12",
  "XID": "2767",
  "EventReceivedTime": "2019-02-07T11:03:19.330496+00:00",
  "SourceModuleName": "etw_in",
  "SourceModuleType": "im_etw"
}

Example 392. Sending Windows Event Log Security Events

This example sends Windows Event Log collected from the Security Channel using the im_msvistalog
module. The events are sent to InsightIDR in Snare format. When sending Windows Event Log security
events, create a data source with the type Rapid7 Generic Windows Event Log.

nxlog.conf
<Input eventlog_in>
    Module  im_msvistalog
    <QueryXML>
        <QueryList>
            <Query Id='0'>
                <Select Path='Security'>*</Select>
            </Query>
        </QueryList>
    </QueryXML>
    <Exec>
        $Message = replace($Message, "\t", " ");
        $Message = replace($Message, "\n", " ");
        $Message = replace($Message, "\r", " ");
        $raw_event = $Message;
        to_syslog_snare();
    </Exec>
</Input>

Output Sample ("Snare Over Syslog")


<14>Dec 13 18:12:48 DC71.nxlog.internal MSWinEventLog ⇥ 1 ⇥ Security ⇥ 1 ⇥ Fri Dec 13 18:12:48
2019 ⇥ 4634 ⇥ Microsoft-Windows-Security-Auditing ⇥ N/A ⇥ N/A ⇥ Success Audit ⇥
DC71.nxlog.internal ⇥ Logoff ⇥ ⇥ An account was logged off. Subject: Security ID: S-1-
5-18 Account Name: DC01$ Account Domain: NXLOG Logon ID: 0x4DD51 Logon Type: 3
25885↵

Example 393. Sending Other Windows Event Log Events

In this configuration, the im_msvistalog module is configured to collect Windows DHCP events and send
them in BSD Syslog format, but other types of Windows events can be collected too. In this case, the Rapid7
log source is set as Rapid7 Generic Syslog, so the logs are indexed and parsed.

nxlog.conf
<Input dhcp_server_eventlog>
    Module  im_msvistalog
    <QueryXML>
        <QueryList>
            <Query Id="0">
                <Select Path="DhcpAdminEvents">*</Select>
                <Select Path="Microsoft-Windows-Dhcp-Server/FilterNotifications">*</Select>
                <Select Path="Microsoft-Windows-Dhcp-Server/Operational">*</Select>
            </Query>
        </QueryList>
    </QueryXML>
    Exec    to_syslog_bsd();
</Input>

DHCP Output in Syslog Format


<14>Jul 12 14:01:25 NXLog.co Microsoft-Windows-DHCP-Server[1836]: Scope: [[171.40.0.0]Test 3]
for IPv4 is Deleted by NXLOG\Administrator.↵

Figure 4. DHCP as Raw Data in Rapid7

91.2.2. Converting and Sending Logs in Universal Event Format (UEF)


Rapid7 InsightIDR allows certain types of non-InsightIDR event sources to access the same functionality as
native InsightIDR event sources, such as User Behavior Analytics. Use the previous steps to add a new Data
Source in InsightIDR, keeping in mind the following:

• The Event Source should be one of the supported Rapid7 Universal Event Format types.
• The logs need to be converted to either JSON or KVP.
• Relevant fields should be rewritten, added, and whitelisted to support the UEF specification for the log
source type.
• Confirm that the logs have no format violations and are correctly indexed by Rapid7 InsightIDR. See the
Verifying Data Collection section.

The following configuration examples are based on collecting Rapid7 Ingress Authentication events. The steps,
fields and input options will vary depending on the UEF source types. For more information, see the Universal
Event Sources section in the Rapid7 documentation.

NOTE Use the xm_rewrite module to rename raw data fields to match SIEM and dashboard field names.

NOTE Use the xm_kvp module to delete, add, and rename raw data fields. For more information, see the Universal Event Formats in InsightIDR: A Step-by-Step NXLog Guide in the Rapid7 documentation.

Example 394. Configuring xm_rewrite for Windows and Linux

Use the xm_rewrite module to specify which fields to keep and rename. The fields to rewrite will depend on
the operating system as shown below. For both, the fields $version and $event_type are added.

nxlog.conf
<Extension rewrite>
    Module  xm_rewrite
    # Fields associated with UEF are whitelisted
    Keep    EventTime, version, event_type, authentication_result, \
            IpAddress, WorkstationName, Hostname

    # Rename the following fields to the UEF specification
    Rename  EventTime, time
    Rename  Hostname, account
    Rename  IpAddress, source_ip
    Rename  WorkstationName, authentication_target
    Rename  Version, version
</Extension>

nxlog.conf
<Extension rewrite>
    Module  xm_rewrite
    # The syslog raw data needs to be parsed first
    Exec    parse_syslog();
    Keep    Hostname, account, version, user, custom_message, Message, \
            event_type, EventReceivedTime, authentication_result, raw_event, \
            authentication_target, source_ip
    Rename  HostName, authentication_target
    Rename  EventReceivedTime, time
    Rename  Message, custom_message
</Extension>

Example 395. Configuring Rapid7 Universal Ingress Authentication Log Collection

In Windows, $EventTime is converted to the required ISO 8601 format. The SUCCESS and FAILURE results
are mapped to $authentication_result based on the event ID.

nxlog.conf
<Input in_auth_windows>
    Module  im_msvistalog
    <QueryXML>
        <QueryList>
            <Query Id="0">
                <Select Path="Security">*[System[(Level&lt;=4)]]</Select>
            </Query>
        </QueryList>
    </QueryXML>
    <Exec>
        # Convert the $EventTime string to ISO 8601 extended format.
        $EventTime = strftime($EventTime, '%Y-%m-%dT%H:%M:%SZ');

        # Add the required input for $version
        $version = "v1";

        # Add the required input for $event_type
        $event_type = "INGRESS_AUTHENTICATION";

        # Add the required authentication results for EventLog IDs
        if ($EventID IN (4625)) { $authentication_result = "FAILURE"; }
        else if ($EventID IN (4624)) { $authentication_result = "SUCCESS"; }
        # Drop all other event IDs
        else drop();

        # Add the process to rewrite the fields and convert to JSON
        rewrite->process();
        to_json();
    </Exec>
</Input>

In Linux, $EventReceivedTime is used and converted to the ISO 8601 format. The SUCCESS and FAILURE
results are mapped to $authentication_result based on string results in the $raw_event field.
Additional parsing of the $raw_event field is made to obtain the string data for the $account and
$source_ip values.

nxlog.conf (truncated)
<Input in_auth_linux>
    Module  im_file
    File    "/var/log/auth.log"
    <Exec>
        # Convert the $EventReceivedTime string to ISO 8601 extended format
        $EventReceivedTime = strftime($EventReceivedTime, '%Y-%m-%dT%H:%M:%SZ');

        # Add the required input for $version
        $version = "v1";

        # Add the required input for $event_type
        $event_type = "INGRESS_AUTHENTICATION";

        # Use the xm_rewrite module for the $source_ip and $account fields
        rewrite->process();

        # Obtain the $source_ip and $account string data from $raw_event
        if ($raw_event =~ /Accepted publickey for\ (\S+)\ from\ (\S+)/)
        {
            $account = $1;
            $source_ip = $2;
        }

        # Success and failure messages based on the $raw_event field
        if ($raw_event =~ /authentication failure/)
        { $authentication_result = "FAILURE"; }
        else if ($raw_event =~ /successfully authenticated/)
        { $authentication_result = "SUCCESS"; }
        [...]

Example 396. Ingress Authentication UEF Event Samples in JSON

The following examples display the JSON output based on the NXLog configuration files above. It is
recommended to first test the input to verify that the fields are renamed and added as expected. There is an
option to provide a $custom_message, as displayed in the Linux example.

UEF Event Sample for Ingress Authentication on Windows


{
  "time": "2019-07-26T19:34:03Z",
  "version": "v1",
  "account": "ADMINISTRATOR",
  "authentication_target": "HR Workstation",
  "source_ip": "121.137.167.158",
  "event_type": "INGRESS_AUTHENTICATION",
  "authentication_result": "FAILURE"
}

UEF Event Sample for Ingress Authentication on Linux


{
  "time": "2019-08-17T18:15:27Z",
  "version": "v1",
  "event_type": "INGRESS_AUTHENTICATION",
  "account": "ubuntu",
  "source_ip": "172.5.160.165",
  "authentication_result": "SUCCESS",
  "authentication_target": "ip-172-31-17-116",
  "custom_message": "Accepted publickey for ubuntu from 172.5.160.165 port 58788 ssh2: RSA
SHA256:5kZ3eXnEIFf4orffpf924pbJCgPj57EQRHWBj7E"
}

Example 397. Full Ingress Authentication Event Sample Indexed in Rapid7

The following is an event sample in JSON format as indexed by Rapid7 InsightIDR.

Windows UEF Event Sample on Rapid7 InsightIDR Log View


{
  "timestamp": "2019-07-26T19:34:03.000Z",
  "user": "administrator",
  "account": "administrator",
  "result": "FAILED_OTHER",
  "source_ip": "121.137.167.158",
  "service": "CUSTOM UNIVERSAL EVENT",
  "geoip_organization": "Korea Telecom",
  "geoip_country_code": "KR",
  "geoip_country_name": "South Korea",
  "geoip_city": "Pyeongtaek-si",
  "geoip_region": "41",
  "authentication_target": "HR Workstation",
  "source_json": {
  "time": "2019-07-26T19:34:03Z",
  "version": "v1",
  "account": "ADMINISTRATOR",
  "authentication_target": "WorkstationName",
  "source_ip": "121.137.167.158",
  "event_type": "INGRESS_AUTHENTICATION",
  "authentication_result": "FAILURE"
  }
}

91.3. Verifying Data Collection


To verify data collection, check that the event source is collecting the raw logs, and that Rapid7 InsightIDR is
indexing them.

1. In the Data Collection Management pane, go to the Event Sources tab.


2. Select View raw log to see recent raw logs for the event source.
3. To verify log indexing, go to the Log Search pane and find the logs via the Logs or Log sets options.

Once indexed, logs collected using NXLog can be further processed in Rapid7 InsightIDR.

Chapter 92. RSA NetWitness
RSA NetWitness Platform is a threat detection and incident response suite that leverages logs and other data
sources for monitoring, reporting, and investigations. NXLog is an officially supported RSA Ready certified
product and can be configured as the log collection agent for NetWitness.

92.1. Configuring NetWitness


The following steps are also outlined in the NetWitness CEF Implementation Guide. See that document for more
information and associated warnings.

1. Make sure Syslog collection is enabled. RSA NetWitness creates Syslog listeners by default for UDP on port
514, TCP on port 514, and SSL on port 6514. See Configure Syslog Event Sources for Remote Collector on RSA
Link for further setup notes.
2. Add a Log Decoder using the "Envision Config File" resource.
a. From the NetWitness menu, select Configure > Live Content.
b. In the Keywords field, enter Envision Config File.

c. In the Matching Resources pane, check the Envision Config File entry and click Deploy in the menu
bar.

d. In the Deployment Wizard Resources pane, click Next.


e. In the Services pane, select the Log Decoder and click Next.

f. In the Review pane, review the changes and click Deploy. Click Close after the deployment task has
finished.
3. Deploy the Common Event Format.
a. From the NetWitness menu, select Live > Search.
b. In the Keywords field, enter Common Event Format.

c. In the Matching Resources pane, check the Common Event Format entry and click Deploy in the
menu bar.

d. In the Deployment Wizard Resources pane, click Next.


e. In the Services pane, select the Log Decoder and click Next.
f. In the Review pane, review the changes and click Deploy. Click Close after the deployment task is
finished.
4. Ensure that the CEF parser is enabled on the Log Decoder(s).
a. Open Admin > Services on the NetWitness dashboard.
b. Locate the Log Decoder, click the gear to the right, and select View > Config.

c. Enable the cef parser in the Service Parsers Configuration and click Apply.

5. Edit the CEF configuration to collect NXLog event times.
a. Connect via SFTP using WinSCP or another utility.
b. Locate and back up the XML file at /etc/netwitness/ng/envision/etc/devices/cef/cef.xml.

c. Edit the file, adding the following lines after the end of the preceding <MESSAGE … /> section:

<MESSAGE
  id1="NXLog_NXLog"
  id2="NXLog_NXLog"
  eventcategory="1612000000"
  functions="&lt;@msg:*PARMVAL($MSG)&gt;&lt;@event_time:*EVNTTIME($MSG,'%R %F
%Z',event_time_string)&gt;&lt;@endtime:*EVNTTIME($MSG,'%W-%D-%G
%Z',param_endtime)&gt;&lt;@starttime:*EVNTTIME($MSG,'%W-%G-%FT%Z',param_starttime)&gt;"
  content="&lt;param_endtime&gt;&lt;param_starttime&gt;&lt;msghold&gt;" />

6. If required, edit the CEF custom configuration to support custom fields as follows.
a. Connect via SFTP.
b. Locate and back up the XML file at /etc/netwitness/ng/envision/etc/devices/cef/cef-
custom.xml, if it exists.

c. Create the file with the following contents. Or if the file already exists, add only the required sections.

<?xml version="1.0" encoding="UTF-8" standalone="yes"?>


<DEVICEMESSAGES>
<!--
#
# cef-custom.xml Reference: https://community.rsa.com/docs/DOC-79189
#
-->

  <VendorProducts>
  <Vendor2Device vendor="NXlog" product="NXLog Enterprise Edition"
  device="NXLog_NXLog" group="Analysis"/>
  </VendorProducts>

  <ExtensionKeys>
  <ExtensionKey cefName="Keywords" metaName="Keywords"/>
  <ExtensionKey cefName="Severity" metaName="Severity"/>
  <ExtensionKey cefName="SeverityValue" metaName="SeverityValue"/>
  <ExtensionKey cefName="SourceName" metaName="SourceName"/>
  <ExtensionKey cefName="ProviderGuid" metaName="ProviderGuid"/>
  <ExtensionKey cefName="TaskValue" metaName="TaskValue"/>
  <ExtensionKey cefName="OpcodeValue" metaName="OpcodeValue"/>
  <ExtensionKey cefName="RecordNumber" metaName="RecordNumber"/>
  <ExtensionKey cefName="ExecutionProcessID" metaName="ExecutionProcessID"/>
  <ExtensionKey cefName="ExecutionThreadID" metaName="ExecutionThreadID"/>
  <ExtensionKey cefName="param2" metaName="param2"/>
  <ExtensionKey cefName="SourceModuleName" metaName="SourceModuleName"/>
  <ExtensionKey cefName="SourceModuleType" metaName="SourceModuleType"/>
  <ExtensionKey cefName="EventReceivedTime" metaName="param_starttime"/>

  <ExtensionKey cefName="msg" metaName="msg">


  <device2meta device="trendmicrodsa" metaName="info"/>
  <device2meta device="NXLog_NXLog" metaName="info"/>
  </ExtensionKey>
  </ExtensionKeys>
</DEVICEMESSAGES>

d. Locate and back up the XML file at /etc/netwitness/ng/envision/etc/table-map-custom.xml, if it
exists.
e. Create the file with the following contents. Or if the file already exists, add the lines between <mappings>
and </mappings>.

<?xml version="1.0" encoding="utf-8"?>


<!--
# attributes:
# envisionName: The name of the column in the universal table
# nwName: The name of the NetWitness meta field
# format: Optional. The language key data type. See LanguageManager. Defaults to
"Text".
# flags: Optional. One of None|File|Duration|Transient. Defaults to "None".
# failureKey: Optional. The name of the NW key to write data if conversion fails.
Defaults to system generated "parse.error" meta.
# nullTokens: Optional. The list of "null" tokens. Pipe separated. Default is no null
tokens.
-->

<mappings>
  <mapping envisionName="severity" nwName="severity" flags="None" format="Text"/>
  <mapping envisionName="Keywords" nwName="Keywords" flags="None" format="Text"/>
  <mapping envisionName="Severity" nwName="Severity" flags="None" format="Text"/>
  <mapping envisionName="SeverityValue" nwName="SeverityValue" flags="None" format="Text"/>
  <mapping envisionName="dvcpid" nwName="dvcpid" flags="None" format="Text"/>
  <mapping envisionName="hardware_id" nwName="hardware.id" flags="None" format="Text"/>
  <mapping envisionName="SourceName" nwName="SourceName" flags="None" format="Text"/>
  <mapping envisionName="ProviderGuid" nwName="ProviderGuid" flags="None" format="Text"/>
  <mapping envisionName="TaskValue" nwName="TaskValue" flags="None" format="Text"/>
  <mapping envisionName="OpcodeValue" nwName="OpcodeValue" flags="None" format="Text"/>
  <mapping envisionName="RecordNumber" nwName="RecordNumber" flags="None" format="Text"/>
  <mapping envisionName="ExecProcID" nwName="ExecProcID" flags="None" format="Text"/>
  <mapping envisionName="ExecThreadID" nwName="ExecThreadID" flags="None" format="Text"/>
  <mapping envisionName="cs_devfacility" nwName="deviceFacility" flags="None" format="Text"/>
  <mapping envisionName="info" nwName="info" flags="None" format="Text"/>
  <mapping envisionName="param2" nwName="param2" flags="None" format="Text"/>
  <mapping envisionName="SourceModuleName" nwName="SourceModuleName" flags="None"
format="Text"/>
  <mapping envisionName="SourceModuleType" nwName="SourceModuleType" flags="None"
format="Text"/>
  <mapping envisionName="param_endtime" nwName="end" flags="None" format="TimeT"/>
  <mapping envisionName="param_starttime" nwName="start" flags="none" format="TimeT"/>
</mappings>

7. Start collecting logs.


a. Go to Admin > Services, select the associated Log Decoder, click the gear, and select View > System.

b. Click Start Capture to start the log collection.

92.2. Configuring NXLog


NXLog can be configured to collect, convert, and send whatever log events are required. The xm_cef and
xm_syslog modules provide the necessary functionality for converting log data to CEF and adding the Syslog header.

Example 398. Converting and Forwarding EventLog Data in CEF

This example configuration reads from the Windows EventLog with im_msvistalog, converts the log data to
CEF, and forwards it to NetWitness via TCP.

The xm_cef extension module provides the to_cef() function, which generates the CEF format. The xm_syslog
extension module provides the to_syslog_bsd() procedure, which adds the BSD Syslog header.

nxlog.conf
<Extension _cef>
    Module  xm_cef
</Extension>

<Extension syslog>
    Module  xm_syslog
</Extension>

<Input eventlog>
    Module  im_msvistalog
</Input>

<Output netwitness_tcp>
    Module  om_tcp
    Host    127.0.0.1
    Port    514
    <Exec>
        $Message = to_cef();
        to_syslog_bsd();
    </Exec>
</Output>

To send logs via UDP, use this Output block instead.

nxlog.conf
<Output netwitness_udp>
    Module  om_udp
    Host    127.0.0.1
    Port    514
    <Exec>
        $Message = to_cef();
        to_syslog_bsd();
    </Exec>
</Output>

92.3. Verifying Collection on NetWitness


After deploying the NXLog configuration on the log source host and starting the capture on NetWitness, the
event log data should be available on NetWitness.

Go to Admin and select the Log Decoder. In the Events area, select an event to view its details.

It is also possible to examine the raw log to verify that the output to NetWitness is in CEF.

Output Sample
Nov 13 12:34:17 test.test.com Service_Control_Manager: CEF:0|NXLog|NXLog|4.1.4016|0|-|7|end=2018-11-
13 12:34:17 dvchost=test.test.com Keywords=9259400833873739776 outcome=INFO SeverityValue=2
Severity=INFO externalId=7036 SourceName=Service Control Manager ProviderGuid={555908D1-A6D7-4695-
8E1E-26931D2012F4} Version=0 TaskValue=0 OpcodeValue=0 RecordNumber=3037 ExecutionProcessID=496
ExecutionThreadID=2136 deviceFacility=System msg=The Windows Installer service entered the stopped
state. param1=Windows Installer param2=stopped EventReceivedTime=2018-11-13 12:40:28
SourceModuleName=eventlog SourceModuleType=im_msvistalog↵

Chapter 93. SafeNet KeySecure
SafeNet KeySecure devices are capable of sending their logs to a remote Syslog destination via UDP or TCP.
KeySecure has four different logs: System, Audit, Activity, and Client Event. Each one has a slightly different
format, and each can be configured with up to two Syslog servers. There is also an option to sign and encrypt
log messages before sending them to the remote destination. Configuration for this type of scenario is outside
the scope of this section.

Sample Audit Message


2017-03-26 18:12:04 [admin] [Login] [CLI]: Logged out from 192.168.15.231 via SSH↵

In a cluster of two or more KeySecure devices, a configuration change on one member is replicated to the
others; however, each member sends its logs separately. For more details regarding logging configuration on
SafeNet KeySecure, refer to the KeySecure Appliance User Guide.

NOTE This section covers configuration for sending logs via UDP. To use TCP, select it instead where appropriate.
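
As a sketch, an equivalent NXLog input for receiving KeySecure logs over TCP could look like the following (assuming TCP is also configured on port 514; the hedged parse_syslog() call matches the UDP examples in this section):

nxlog.conf
<Input in_syslog_tcp>
    Module  im_tcp
    Host    0.0.0.0
    Port    514
    Exec    parse_syslog();
</Input>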

1. Configure NXLog for receiving Syslog logs (see the examples below). Then restart NXLog.
2. Make sure the NXLog agent is accessible from the KeySecure device.
3. Configure Syslog logging on KeySecure using either the web interface or the command line. See the following
sections.

NOTE The steps in the following sections have been tested on KeySecure 460 and should work on other models as well.

Example 399. Receiving Logs From KeySecure

This example shows a KeySecure Audit log message as received and processed by NXLog. Use the im_tcp
module instead of im_udp to receive Syslog messages via TCP instead.

nxlog.conf
<Extension _syslog>
    Module  xm_syslog
</Extension>

<Extension _json>
    Module  xm_json
</Extension>

<Input in_syslog_udp>
    Module  im_udp
    Host    0.0.0.0
    Port    514
    Exec    parse_syslog();
</Input>

<Output file>
    Module  om_file
    File    "/var/log/keysecure.log"
    Exec    to_json();
</Output>

Output Sample
{
  "MessageSourceAddress": "192.168.5.20",
  "EventReceivedTime": "2017-03-26 18:11:36",
  "SourceModuleName": "in_syslog_udp",
  "SourceModuleType": "im_udp",
  "SyslogFacilityValue": 17,
  "SyslogFacility": "LOCAL1",
  "SyslogSeverityValue": 6,
  "SyslogSeverity": "INFO",
  "SeverityValue": 2,
  "Severity": "INFO",
  "Hostname": "p-keysecure1",
  "EventTime": "2017-03-26 18:12:26",
  "SourceName": "IngrianAudit",
  "Message": "2017-03-26 18:12:26 [admin] [Login] [CLI]: Logged in from 192.168.15.231 via SSH"
}

Example 400. Extracting Additional Fields

Additional field extraction can also be configured. Note that this depends on which particular log the
message is coming from, as each has a different format.

nxlog.conf
<Input in_syslog_udp>
    Module  im_udp
    Host    0.0.0.0
    Port    514
    <Exec>
        parse_syslog();
        if $Message =~ /(?x)^\d{4}-\d{2}-\d{2}\ \d{2}:\d{2}:\d{2}\ \[([a-zA-Z]*)\]
                       \ \[([a-zA-Z]*)\]\ \[([a-zA-Z]*)\]:\ (.*)$/
        {
            $KSUsername = $1;
            $KSEvent = $2;
            $KSSubsys = $3;
            $KSMessage = $4;
        }
    </Exec>
</Input>

Output Sample
{
  "MessageSourceAddress": "192.168.5.20",
  "EventReceivedTime": "2017-04-15 19:14:59",
  "SourceModuleName": "in_syslog_udp",
  "SourceModuleType": "im_udp",
  "SyslogFacilityValue": 17,
  "SyslogFacility": "LOCAL1",
  "SyslogSeverityValue": 6,
  "SyslogSeverity": "INFO",
  "SeverityValue": 2,
  "Severity": "INFO",
  "Hostname": "p-keysecure1",
  "EventTime": "2017-04-15 19:16:29",
  "SourceName": "IngrianAudit",
  "Message": "2017-04-15 19:16:29 [admin] [Login] [CLI]: Logged in from 192.168.15.231 via
SSH",
  "KSUsername": "admin",
  "KSEvent": "Login",
  "KSSubsys": "CLI",
  "KSMessage": "Logged in from 192.168.15.231 via SSH"
}

93.1. Configuring via the Web Interface


1. Log in to the KeySecure Management Console.
2. Go to Device › Logs & Statistics › Log Configuration › Rotation & Syslog.

3. Select a log type and click [ Edit ] to change the settings.


4. Select the Enable Syslog option and specify the required IP addresses, ports, protocols, and facility for up to
two servers.

5. Click [ Save ].
6. Repeat for the other log types as required.

93.2. Configuring via the Command Line


1. Log in to KeySecure via SSH.
2. Run the following commands. Follow the prompts to enable remote syslog with the required IP addresses,
ports, protocols, and facility for up to two servers.

# configure
# system syslog
# audit syslog
# activity syslog
# clientevent syslog

Example 401. Forwarding System Logs

The following commands enable sending System logs to 192.168.6.143 via UDP port 514.

p-keysecure1# configure
p-keysecure1 (config)# system syslog
Enable Syslog [y]:
Syslog Server #1 IP: 192.168.6.143
Syslog Server #1 Port [514]:
Server #1 Proto:
  1: udp
  2: tcp
Enter a number (1 - 2) [1]:
Syslog Server #2 IP:
Syslog Server #2 Port [514]:
Server #2 Proto:
  1: udp
  2: tcp
Enter a number (1 - 2) [1]:
Syslog Facility:
  1: local0
  2: local1
  3: local2
  4: local3
  5: local4
  6: local5
  7: local6
  8: local7
Enter a number (1 - 8) [2]:
System Log syslog settings successfully saved. Syslog is enabled.
Warning: The syslog protocol insecurely transfers logs in cleartext

Chapter 94. Salesforce
Salesforce provides customer relationship management (CRM) and other enterprise products.

NXLog can be set up to fetch Event Log Files from Salesforce using the REST API. For more information, see the
Salesforce add-on.

Chapter 95. Snare
The Snare Agent is popular log collection software for the Windows EventLog. The Snare format is supported by
many tools and SIEM vendors. It uses tab-delimited records and can use Syslog as the transport. NXLog can be
configured to collect or forward logs in the Snare format.

The Snare format can be used with or without the Syslog header.

Snare Format
HOSTNAME ⇥ MSWinEventLog ⇥ Criticality ⇥ EventLogSource ⇥ SnareCounter ⇥ SubmitTime ⇥ EventID ⇥
SourceName ⇥ UserName ⇥ SIDType ⇥ EventLogType ⇥ ComputerName ⇥ CategoryString ⇥ DataString ⇥
ExpandedString ⇥ OptionalMD5Checksum↵

"Snare Over Syslog" Format


<PRI>TIMESTAMP HOSTNAME MSWinEventLog ⇥ Criticality ⇥ EventLogSource ⇥ SnareCounter ⇥ SubmitTime ⇥
EventID ⇥ SourceName ⇥ UserName ⇥ SIDType ⇥ EventLogType ⇥ ComputerName ⇥ CategoryString ⇥
DataString ⇥ ExpandedString ⇥ OptionalMD5Checksum↵

95.1. Collecting Snare


NXLog can parse Snare logs with the parse_csv() procedure provided by the xm_csv extension module.

Example 402. Using xm_csv to Capture Snare Logs

With the following configuration, NXLog will accept Snare format logs via UDP, parse them, convert to JSON,
and output the result to file. This configuration supports both "Snare over Syslog" and the regular Snare
format.

nxlog.conf (truncated)
<Extension snare>
    Module      xm_csv
    Fields      $MSWINEventLog, $Criticality, $EventLogSource, $SnareCounter, \
                $SubmitTime, $EventID, $SourceName, $UserName, $SIDType, \
                $EventLogType, $ComputerName, $Category, $Data, $Expanded, \
                $MD5Checksum
    FieldTypes  string, integer, string, integer, datetime, integer, string, \
                string, string, string, string, string, string, string, string
    Delimiter   \t
</Extension>

<Extension json>
    Module  xm_json
</Extension>

<Extension syslog>
    Module  xm_syslog
</Extension>

<Input in>
    Module  im_udp
    Host    0.0.0.0
    Port    6161
    <Exec>
        parse_syslog_bsd();
        if $Message =~ /^((\w+)\t)?(MSWinEventLog.+)$/
        {
            if $2 != ''
    [...]

Input Sample ("Snare Over Syslog")
<13>Nov 21 11:40:27 myserver MSWinEventLog ⇥ 0 ⇥ Security ⇥ 32 ⇥ Mon Nov 21 11:40:27 2016 ⇥
592 ⇥ Security ⇥ Andy ⇥ User ⇥ Success Audit ⇥ MAIN ⇥ DetailedTracking ⇥ Process ended ⇥
Ended process ID: 2455↵

Output Sample
{
  "EventReceivedTime": "2016-11-21 11:40:28",
  "SourceModuleName": "in",
  "SourceModuleType": "im_udp",
  "SyslogFacilityValue": 1,
  "SyslogFacility": "USER",
  "SyslogSeverityValue": 5,
  "SyslogSeverity": "NOTICE",
  "SeverityValue": 2,
  "Severity": "INFO",
  "Hostname": "myserver",
  "EventTime": "2016-11-21 11:40:27",
  "Message": "Ended process ID: 2455",
  "MSWINEventLog": "MSWinEventLog",
  "Criticality": 0,
  "EventLogSource": "Security",
  "SnareCounter": 32,
  "SubmitTime": "2016-11-21 11:40:27",
  "EventID": 592,
  "SourceName": "Security",
  "UserName": "Andy",
  "SIDType": "User",
  "EventLogType": "SuccessAudit",
  "ComputerName": "MAIN",
  "CategoryString": "DetailedTracking",
  "DataString": "Process ended",
  "ExpandedString": "Ended process ID: 2455"
}
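The tab-delimited Snare payload that xm_csv splits can be illustrated outside NXLog with a short Python sketch. The field names below are taken from the configuration above; this is only an illustration of the record layout, not part of the NXLog configuration.

```python
import csv
import io

# Field names from the xm_csv configuration above. The trailing
# MD5Checksum field is optional and absent from the sample event.
FIELDS = ["MSWINEventLog", "Criticality", "EventLogSource", "SnareCounter",
          "SubmitTime", "EventID", "SourceName", "UserName", "SIDType",
          "EventLogType", "ComputerName", "Category", "Data", "Expanded",
          "MD5Checksum"]

def parse_snare(payload):
    """Split a tab-delimited Snare payload into a field dictionary.

    zip() pairs only as many values as the payload provides, so a
    record without the checksum simply omits that key."""
    row = next(csv.reader(io.StringIO(payload), delimiter="\t"))
    return dict(zip(FIELDS, row))
```

Applied to the sample event above, this yields `EventID` 592, `UserName` "Andy", and `ComputerName` "MAIN".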

95.2. Generating Snare


NXLog can also generate Snare logs in place of the original Snare agent with the to_syslog_snare() procedure
provided by the xm_syslog extension module.

Example 403. Sending EventLog in Snare Format

With this configuration, NXLog will read the Windows EventLog, convert it to Snare format, and output it via
UDP. NXLog log messages are also included (via the im_internal module). Tabs and newline sequences are
replaced with spaces.

nxlog.conf
 1 <Extension syslog>
 2 Module xm_syslog
 3 </Extension>
 4
 5 <Input internal>
 6 Module im_internal
 7 </Input>
 8
 9 <Input eventlog>
10 Module im_msvistalog
11 Exec $Message =~ s/(\t|\R)/ /g;
12 </Input>
13
14 <Output out>
15 Module om_udp
16 Host 192.168.1.1
17 Port 514
18 Exec to_syslog_snare();
19 </Output>
20
21 <Route r>
22 Path internal, eventlog => out
23 </Route>

Output Sample
<13>Nov 21 11:40:27 myserver MSWinEventLog ⇥ 0 ⇥ Security ⇥ 32 ⇥ Mon Nov 21 11:40:27 2016 ⇥
592 ⇥ Security ⇥ N/A ⇥ N/A ⇥ Success Audit ⇥ MAIN ⇥ DetailedTracking ⇥ Process ended ⇥ Ended
process ID: 2455↵

Chapter 96. Snort
NXLog can be used to capture and process logs from the Snort network intrusion prevention system.

Snort writes log entries to the /var/log/snort/alert file. Each entry contains the date and time of the event,
the packet header, a description of the type of breach that was detected, and a severity rating. Each log entry
spans multiple lines, with neither a fixed number of lines nor a separator between entries.
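The first line of each alert follows a recognizable pattern: the message is bracketed by [**] markers, and the generator ID, signature ID, and revision appear as a colon-separated triple. A minimal Python sketch (an illustration only, separate from the NXLog configuration shown later) of parsing that header line:

```python
import re

# Header line of a Snort alert, e.g. "[**] [1:477:3] ICMP Packet [**]".
# The bracketed triple is generator ID, signature ID, and revision.
HEADER = re.compile(r'^\[\*\*\] \[(\d+):(\d+):(\d+)\] (.*) \[\*\*\]$')

def parse_header(line):
    """Return (gid, sid, rev, message), or None if not a header line."""
    m = HEADER.match(line)
    if not m:
        return None
    gid, sid, rev = (int(x) for x in m.group(1, 2, 3))
    return gid, sid, rev, m.group(4)
```

Run against the samples below, the first alert parses as generator 1, signature 477, revision 3, message "ICMP Packet"; a continuation line such as `[Priority: 0]` does not match, which is what makes the header usable as a record separator.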

Example 404. Snort Rules and Log Samples

Following are three example Snort rules and corresponding log messages.

Snort Rule
alert icmp any any -> any any (msg:"ICMP Packet"; sid:477; rev:3;)

Log Sample
[**] [1:477:3] ICMP Packet [**]↵
[Priority: 0]↵
04/30-07:54:41.759229 172.25.212.245 -> 172.25.212.153↵
ICMP TTL:64 TOS:0x0 ID:0 IpLen:20 DgmLen:96 DF↵
Type:8 Code:0 ID:16348 Seq:0 ECHO↵

Snort Rule
alert tcp any any -> any any (msg:"Exploit detected"; sid:1000001; content:"exploit";)

Log Sample
[**] [1:1000001:0] Exploit detected [**]↵
[Priority: 0]↵
04/30-07:54:38.312536 172.25.212.204:80 -> 192.168.255.110:46127↵
TCP TTL:64 TOS:0x0 ID:19844 IpLen:20 DgmLen:505 DF↵
***AP*** Seq: 0xF936BE12 Ack: 0x2C9A47D8 Win: 0x7B TcpLen: 20↵

Snort Rule
alert tcp any any -> any any (msg:"Advanced exploit detected"; \
sid:1000002; content:"backdoor"; reference:myserver,myrules; \
gid:1000001; rev:1; classtype:shellcode-detect; priority:100; \
metadata:meta data;)

Log Sample
[**] [1000001:1000002:1] Advanced exploit detected [**]↵
[Classification: Executable Code was Detected] [Priority: 100]↵
04/30-07:54:35.707783 192.168.255.110:46117 -> 172.25.212.204:80↵
TCP TTL:127 TOS:0x0 ID:14547 IpLen:20 DgmLen:435 DF↵
***AP*** Seq: 0x49649AA5 Ack: 0x5BC496C0 Win: 0x40 TcpLen: 20↵
[Xref => myserver myrules]↵

Example 405. Parsing Snort Logs

This configuration uses an xm_multiline extension module instance with a HeaderLine regular expression
to parse the log entries. An Exec directive is also used to drop all empty lines.

In the Input module instance, another regular expression captures the parts of the message and adds
corresponding fields to the event record. Additional information, such as Xref data, could also be extracted
by adding (.*)\s+(.*)\s+\[Xref => (.*)\] to the expression and then $Xref = $13; below it.

Finally, the log entries are formatted as JSON with the to_json() procedure.

nxlog.conf (truncated)
 1 <Extension snort>
 2 Module xm_multiline
 3 HeaderLine /^\[\*\*\] \[\S+] (.*) \[\*\*\]/
 4 Exec if $raw_event =~ /^\s+$/ drop();
 5 </Extension>
 6
 7 <Extension _json>
 8 Module xm_json
 9 </Extension>
10
11 <Input in>
12 Module im_file
13 File "/var/log/snort/alert"
14 InputType snort
15 <Exec>
16 if $raw_event =~ /(?x)^\[\*\*\]\ \[\S+\]\ (.*)\ \[\*\*\]\s+
17 (?:\[Classification:\ ([^\]]+)\]\ )?
18 \[Priority:\ (\d+)\]\s+
19 (\d\d).(\d\d)\-(\d\d:\d\d:\d\d\.\d+)
20 \ (\d+.\d+.\d+.\d+):?(\d+)?\ ->
21 \ (\d+.\d+.\d+.\d+):?(\d+)?\s+\ /
22 {
23 $EventName = $1;
24 $Classification = $2;
25 $Priority = $3;
26 $EventTime = parsedate(year(now()) + "-" + $4 + "-" + $5 + " " + $6);
27 $SourceIPAddress = $7;
28 $SourcePort = $8;
29 [...]

Output Sample
{
  "EventReceivedTime": "2014-05-05 09:08:58",
  "SourceModuleName": "in",
  "SourceModuleType": "im_file",
  "EventName": "Advanced exploit detected",
  "Classification": "Executable Code was Detected",
  "Priority": "100",
  "EventTime": "2014-04-30 07:54:35",
  "SourceIPAddress": "192.168.255.110",
  "SourcePort": "46117",
  "DestinationIPAddress": "172.25.212.204",
  "DestinationPort": "80"
}

Chapter 97. Splunk
Splunk is a software platform for data collection, indexing, searching, and visualization. NXLog can be configured
as an agent for Splunk, collecting and forwarding logs to the Splunk instance. Splunk can accept logs forwarded
via UDP, TCP, TLS, or HTTP.

For more information, see the Splunk Enterprise documentation. See also the Sending ETW Logs to Splunk with
NXLog post.

97.1. An Alternative to the Splunk Universal Forwarder


The Splunk universal forwarder is a Splunk agent commonly used in a similar role as NXLog. However, NXLog
offers significant advantages over the Splunk universal forwarder, among them full-featured parsing and
filtering prior to forwarding, which results in faster indexing by Splunk. In controlled tests, Splunk was able
to process and index events forwarded by NXLog over 10 times faster than the same set of Windows events
forwarded by the Splunk universal forwarder, despite the overhead of renaming Windows field names and
reformatting the events to emulate Splunk’s proprietary forwarding format.

When planning a migration to NXLog, the various types of log sources being collected by Splunk universal
forwarders should be evaluated. Depending on the type of log source, it could be as simple as creating a new
TCP data input port and following some of the examples contained in this chapter, such as forwarding BSD
Syslog events. As long as the log source provides data in a standard format that Splunk can easily index, and
Splunk is retaining the original field names, no special configuration needs to be written.

In the case of Windows Event Log providers, special NXLog configurations are required to emulate the event
fields and format sent by the Splunk universal forwarder since Splunk renames at least four Windows fields and
adds some new fields to the event schema. See the comparison table below.

Table 61. Comparison of Field Names for Windows Events

Windows               NXLog                 Splunk

Channel               Channel               Logname
Computer              Hostname *            ComputerName
EventID               EventID               EventCode
Execution_ProcessID   ExecutionProcessID     — 
Execution_ThreadID    ExecutionThreadID      — 
ProviderGuid          ProviderGuid           — 
UserID                UserID                Sid
 —                     —                    Type
 —                     —                    idType

* NXLog normalizes this field name across all modules and log sources.

NOTE It should be emphasized that NXLog is capable of forwarding Windows events or any other kind
of structured logs to Splunk for indexing without any need to emulate the format or event
schema used by the Splunk universal forwarder. There is no technical requirement or advantage
in using Splunk’s proprietary format for forwarding logs to Splunk, especially for new Splunk
deployments which have no existing corpus of Windows events.

The only purpose of emulating the Splunk universal forwarder format is to maintain continuity with
previously indexed Windows events that were forwarded with the Splunk universal forwarder. Forwarding
Windows Event Log data in JSON format over TCP to Splunk is the preferred method.
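With that method, what Splunk's TCP data input receives is newline-delimited JSON, one event per line (assuming the default line-based output type of om_tcp, where to_json() rewrites $raw_event and a newline terminates each record). A minimal Python sketch of this framing, for illustration only:

```python
import json

def to_tcp_payload(events):
    """Serialize event records as newline-delimited JSON, the framing
    a Splunk TCP data input receives from an om_tcp/to_json() output."""
    return "".join(
        json.dumps(event, separators=(",", ":")) + "\n" for event in events
    ).encode()
```

Each record arrives as a complete JSON object on its own line, which is what lets a `_json`-style Source Type extract fields without further configuration.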

97.1.1. Forwarding Windows Events Using JSON
This section assumes that any preexisting Windows Event Log data currently indexed in Splunk will be managed
separately (due to some of its field names being altered from the original Windows field names) until it ages
out of the system. However, if there is a need to maintain Splunk-specific field names of Windows events, see the
next section, which provides a solution for using NXLog to forward Windows events as if they were sent by the
Splunk universal forwarder.

After defining a network data input port (see Adding a TCP or UDP Data Input in the next section for details), the
only NXLog configuration needed for forwarding events to Splunk is a simple, generic TCP (or UDP) output
module instance that converts the logs to JSON as they are being sent.

Example 406. Forwarding Windows DNS Server Events in JSON Format to Splunk

This example uses Windows ETW to collect Windows DNS Server events. The output instance defines the IP
address and port of the host where Splunk Enterprise is receiving data on TCP port 1527, which was defined
in Splunk with a Source Type of _json.

nxlog.conf
 1 <Extension json>
 2 Module xm_json
 3 </Extension>
 4
 5 <Input dns_server>
 6 Module im_etw
 7 Provider Microsoft-Windows-DNSServer
 8 </Input>
 9
10 <Output splunk>
11 Module om_tcp
12 Host 192.168.1.21
13 Port 1527
14 Exec to_json();
15 </Output>

Output Sample (whitespace added)
{
  "SourceName": "Microsoft-Windows-DNSServer",
  "ProviderGuid": "{EB79061A-A566-4698-9119-3ED2807060E7}",
  "EventId": 515,
  "Version": 0,
  "ChannelID": 17,
  "OpcodeValue": 0,
  "TaskValue": 5,
  "Keywords": "4611686018428436480",
  "EventTime": "2020-05-19T10:42:06.313322-05:00",
  "ExecutionProcessID": 1536,
  "ExecutionThreadID": 3896,
  "EventType": "INFO",
  "SeverityValue": 2,
  "Severity": "INFO",
  "Domain": "WIN-R4QHULN6KLH",
  "AccountName": "Administrator",
  "UserID": "S-1-5-21-915329490-2962477901-227355065-500",
  "AccountType": "User",
  "Flags": "EXTENDED_INFO|IS_64_BIT_HEADER|PROCESSOR_INDEX (577)",
  "Type": "5",
  "NAME": "www.example.com",
  "TTL": "3600",
  "BufferSize": "17",
  "RDATA": "0x106E73312E6578616D706C652E636F6D2E",
  "Zone": "example.com",
  "ZoneScope": "Default",
  "VirtualizationID": ".",
  "EventReceivedTime": "2020-05-19T10:42:07.313482-05:00",
  "SourceModuleName": "dns_server",
  "SourceModuleType": "im_etw",
  "MessageSourceAddress": "192.168.1.61"
}

Since Splunk readily accepts formats like JSON and XML that support highly structured data, querying JSON-
formatted logs is easily accomplished with Splunk’s spath command.

97.1.2. Forwarding Windows Events Using the Splunk Universal Forwarder Format

If it is important to retain the Splunk universal forwarder format after migrating to NXLog, then adhering to the
following procedures is imperative for Splunk to correctly ingest the logs being forwarded using this emulation
technique.

When creating configurations with NXLog for maintaining backwards compatibility with events previously
collected by the universal forwarder, only a few general principles need to be observed:

• When creating a new TCP data input in Splunk, choose the right Source Type.
• In the NXLog configuration, rename event fields to the field names Splunk associates with that Source Type.
• In the NXLog configuration, make sure the data matches the format shown in Splunk as closely as possible,
unless Splunk is failing to parse specific fields.
• In the NXLog configuration, manually parse embedded structured data as new, full-fledged fields. A common
cause of failed parsing with this technique is fields containing long strings of embedded subfields.

The following steps should be followed for each type of log source being forwarded:

1. Examine the events in Splunk and note which value is assigned to sourcetype= listed below each event. The
universal forwarder may list different values for sourcetype even when they are coming from the same
source. Try to determine which one is the best fit.

2. In Splunk, create a new TCP Data Input port for each log source type to be forwarded and set the Source
Type to the same one assigned to events that have been sent by the universal forwarder after they have
been ingested by Splunk.
3. Note which fields are being parsed and indexed after they have been received and processed by Splunk.
4. Create an NXLog configuration that will capture the log source data, rename the field names to those
associated with the Source Type, and format them to match the format that the Splunk universal forwarder
uses.

The actual format used by the Splunk universal forwarder is "cooked" data which has a binary header
component and a footer. A single line containing the date and time of the event marks the beginning of the event
data on the next line, which is generally formatted as key-value pairs, unquoted, separated by an equals sign (=),
with only one key-value pair per line. The header and footer parts are not needed for forwarding events to a TCP
Data Input port. Only the first line containing the event’s date/time and the subsequent lines containing the key-
value pairs are needed.
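The layout just described (a date/time line followed by one unquoted key=value pair per line) can be sketched in Python. The timestamp layout and field names used here are illustrative assumptions, not Splunk-confirmed values; the exact names depend on the Source Type being emulated.

```python
def emulate_forwarder(timestamp, fields):
    """Render an event in the plain-text layout described above: a
    date/time line, then one unquoted key=value pair per line.

    Illustration only; the datetime string and field names must match
    what the target Source Type expects."""
    lines = [timestamp]
    lines += ["%s=%s" % (key, value) for key, value in fields.items()]
    return "\n".join(lines) + "\n"
```

For example, an event with hypothetical fields `Logname=Security` and `EventCode=592` would be rendered as three lines: the timestamp, then the two pairs.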

Windows Event Log data can be forwarded to Splunk using NXLog in such a way that Splunk parses and indexes
them as if they were sent by the Splunk universal forwarder. Only three criteria need to be met:

1. The Splunk Add-on for Microsoft Windows has been installed where the forwarded events will be received.
See About installing Splunk add-ons on Splunk Docs for more details.
2. The NXLog configuration rewrites events to match the field names expected by the corresponding log source
in the Splunk Add-on for Microsoft Windows and formats the event to match the format of the Splunk
universal forwarder.
3. A unique TCP Data Input port is created for each type of Windows Event Provider by following the procedure
in Adding a TCP or UDP Data Input. When specifying the Source type it is imperative to choose the correct
name from the dropdown list that follows this naming convention: WinEventLog:Provider[/Channel].

NOTE When adding a new TCP Data Input, the desired Source type for Windows might not be present
in the Select Source Type dropdown menu. If so, select or manually enter WinEventLog and
create the TCP Data Input. Once created, go back to the list of TCP Data Inputs and edit it by
clicking the TCP port number. Make sure Set source type is set to Manual, then enter the
correct name in the Source type field.

NOTE The following examples have been tested with Splunk 8.0.0 and the "Splunk Add-on for
Microsoft Windows" version 8.0.0.

Example 407. Forwarding Windows DNS Server Audit Events Using the Universal Forwarder Format

This example illustrates the method for emulating the Splunk Universal Forwarder for sending Windows
DNS Server Audit events to Splunk. First, a new TCP Input on port 1515 with a Source type of
WinEventLog:Microsoft-Windows-DNSServer/Audit is created for receiving the forwarded events.

This configuration uses the im_msvistalog module to collect and parse the log data. Since there is no
need for filtering in this example, a simple File directive defines the location of the log source to be
read; otherwise, a QueryXML block would have been used to define the filters and the Provider/Channel as
the log source. The Exec block contains the necessary logic for converting the parsed data to the format
used by the Splunk universal forwarder. Since each event is formatted and output as a multi-line
record stored as a single string in the $raw_event field, the xm_rewrite module is used to delete the original
fields. Once converted, events are forwarded over TCP port 1515 to Splunk.

nxlog.conf (truncated)
 1 <Extension Drop_Fields>
 2 Module xm_rewrite
 3 Keep # Remove all
 4 </Extension>
 5
 6 <Input DNS_Server_Audit>
 7 Module im_msvistalog
 8 File %SystemRoot%\System32\Winevt\Logs\Microsoft-Windows-DNSServer%4Audit.evtx
 9 <Exec>
10 # Create a header variable for storing the Splunk datetime string
11 create_var('timestamp_header');
12 create_var('event'); # The Splunk equivalent of a $raw_event
13 create_var('message'); # For preserving the $Message field
14 create_var('vip_fields'); # Message subfields converted to fields
15
16 # Get the Splunk datetime string needed for the Header Line
17 $dts = strftime($EventTime,'YYYY-MM-DD hh:mm:ss.sTZ');
18 $hr = ""; # Hours, 2-digit
19 $ap = ""; # For either "AM" or "PM";
20 if ($dts =~ /(\d{4})-(\d{2})-(\d{2}) (\d{2}):(\d{2}):(\d{2})/ ) {
21 if (hour($EventTime) < 12) {
22 $ap = "AM";
23 $hr = $4;
24 if (hour($EventTime) == 0) $hr = "12";
25 }
26 if (hour($EventTime) > 11) {
27 $ap = "PM";
28 if (hour($EventTime) == 12) $hr = $4;;
29 [...]

A sample DNS Server Audit Event after being forwarded to Splunk.

Events should be automatically parsed by Splunk as shown below.

Example 408. Forwarding Sysmon DNS Query Events Using the Universal Forwarder Format

This example illustrates the method for emulating the Splunk Universal Forwarder for sending Windows
Sysmon DNS Query events to Splunk. First, a new TCP Input on port 1517 with a Source type of
WinEventLog:Microsoft-Windows-Sysmon/Operational is created for receiving the forwarded events.

The configuration uses the im_msvistalog module to collect and parse the log data. The QueryXML block
not only specifies the Provider/Channel, but also provides additional filtering for collecting only DNS
Query events. The Exec block contains the necessary logic for converting the data to the format used by the
Splunk universal forwarder. Since each event is formatted and output as a multi-line record stored as a
single string in the $raw_event field, the xm_rewrite module is used to delete the original fields. Once
converted, events are forwarded over TCP port 1517 to Splunk.

nxlog.conf (truncated)
 1 <Extension Drop_Fields>
 2 Module xm_rewrite
 3 Keep # Remove all
 4 </Extension>
 5
 6 <Input DNS_Sysmon>
 7 Module im_msvistalog
 8 <QueryXML>
 9 <QueryList>
10 <Query Id="0">
11 <Select Path="Microsoft-Windows-Sysmon/Operational">
12 *[System[(EventID=22)]]
13 </Select>
14 </Query>
15 </QueryList>
16 </QueryXML>
17 <Exec>
18 # Create a header variable for storing the Splunk datetime string
19 create_var('timestamp_header');
20 create_var('event'); # The Splunk equivalent of a $raw_event
21 create_var('message'); # For preserving the $Message field
22 create_var('message_fields'); # Message subfields converted to fields
23
24 # Get the Splunk datetime string needed for the Header Line
25 $dts = strftime($EventTime,'YYYY-MM-DD hh:mm:ss.sTZ');
26 $hr = ""; # Hours, 2-digit
27 $ap = ""; # For either "AM" or "PM";
28 if ($dts =~ /(\d{4})-(\d{2})-(\d{2}) (\d{2}):(\d{2}):(\d{2})/ ) {
29 [...]

A sample Sysmon DNS Query Event after being forwarded to Splunk:

Events should be automatically parsed by Splunk as shown below.

97.1.3. File and Directory-Based Forwarding
The only means available to the Splunk Universal Forwarder for selecting log sources to monitor is by manually
defining paths to files or directories on the local host. This same technique is available with NXLog. Since NXLog
is also designed to forward to other NXLog agents, this feature can be leveraged to reduce the number of open
network connections to a Splunk Enterprise server when events are forwarded from a single NXLog central
logging server.

Example 409. Forwarding File-Based Centralized Logs to Splunk

In the following example, a central NXLog server receives events for all log sources within the enterprise
and forwards each log source type via a TCP data input connection that has been preconfigured on the
Splunk Enterprise server for that Source type.

nxlog.conf (truncated)
 1 <Extension json>
 2 Module xm_json
 3 </Extension>
 4
 5 # Receive Events from ALL Enterprise Servers
 6 <Input syslog_in>
 7 Module im_tcp
 8 Host 0.0.0.0
 9 Port 1514
10 </Input>
11
12 <Input dns_audit_in>
13 Module im_tcp
14 Host 0.0.0.0
15 Port 1515
16 </Input>
17
18 # Cache the Events to Disk in case of Splunk unavailability
19 <Output syslog_cache>
20 Module om_file
21 File '/opt/nxlog/var/log/cached/syslog.bin'
22 OutputType Binary
23 </Output>
24
25 <Output dns_audit_cache>
26 Module om_file
27 File '/opt/nxlog/var/log/cached/dns-audit.bin'
28 OutputType Binary
29 [...]

97.2. Configuring Splunk


The following sections describe steps that may be required to prepare Splunk for receiving events from NXLog.

97.2.1. Adding a TCP or UDP Data Input


TCP or UDP log collection can be added from the web interface, however TLS encryption must be configured by
editing configuration files.

1. Add a new data input.


a. On the Splunk web interface, go to Settings > Data inputs.
b. In the Local inputs section, for the TCP (or UDP) input type, click Add new.
c. Enter the Port on which to listen for log data (for example, port 514).
d. Fill in the remaining values, if required, and click Next.

2. Configure the input settings.
a. Select the Source type appropriate for the logs to be sent. For more information, see the Sending
Generic Structured Logs and Sending Specific Log Types for Splunk to Parse sections below.
b. Choose an App context; for example, Search & Reporting (search).
c. Adjust the remaining default values, if required, and click Review.

3. Review the pending changes and click Submit.

97.2.2. Configuring TLS Collection


Follow these steps to set up TLS collection.

1. In order to generate certificates, issue the following commands from the server’s console. The script will ask
for a password to protect the key.

$ mkdir /opt/splunk/etc/certs
$ export OPENSSL_CONF=/opt/splunk/openssl/openssl.cnf
$ /opt/splunk/bin/genRootCA.sh -d /opt/splunk/etc/certs
$ /opt/splunk/bin/genSignedServerCert.sh -d /opt/splunk/etc/certs -n splunk -c splunk -p

2. Go to the app’s folder and edit the inputs file. For the Search & Reporting app, the path is
$SPLUNK_HOME/etc/apps/search/local/inputs.conf. Add [tcp-ssl] and [SSL] sections.

inputs.conf
[tcp-ssl://10514]
disabled = false
sourcetype = <optional>

[SSL]
serverCert = /opt/splunk/etc/certs/splunk.pem
sslPassword = <The password provided in step 1>
requireClientCert = false

3. Edit the $SPLUNK_HOME/etc/system/local/server.conf file, adding a sslRootCAPath value to the


[sslConfig] section.

server.conf
[sslConfig]
sslPassword = <Automatically generated>
sslRootCAPath = /opt/splunk/etc/certs/cacert.pem

4. Finally, restart Splunk in order to apply the new configuration.

$ $SPLUNK_HOME/bin/splunk restart splunkd

5. Setup can be tested with netstat or a similar command. If everything went correctly, the following output is
produced.

$ netstat -an | grep :10514


tcp 0 0 0.0.0.0:10514 0.0.0.0:* LISTEN

6. Copy the cacert.pem file from $SPLUNK_HOME/etc/certs to the NXLog certificate directory.

Example 410. Sending Logs via TLS

This configuration illustrates how to send a log file via a TLS-encrypted connection. The AllowUntrusted
setting is required in order to accept a self-signed certificate.

nxlog.conf
1 <Output out>
2 Module om_ssl
3 Host 127.0.0.1
4 Port 10514
5 CertFile %CERTDIR%/cacert.pem
6 AllowUntrusted TRUE
7 </Output>

97.2.3. Configuring HTTP Event Collection (HEC)


HTTP Event Collection (HEC) can gather events, as JSON-formatted or raw data, via HTTP/HTTPS. HEC is a
stateless, high-performance solution that is easy to scale with a load balancer and offers token-based
authentication. For more information about configuring and using HEC, see the following on Splunk Docs:

Set up and use HTTP Event Collector in Splunk Web, Format events for HTTP Event Collector, and Input endpoint
descriptions.

By default, HEC is disabled. To enable it, follow these steps:

1. Open Settings > Data inputs and click on the HTTP Event Collector type.
2. Click the Global Settings button (in the upper-right corner).
3. For All Tokens, click the Enabled button.
4. Optionally, set the Default Source Type, Default Index, and Default Output Group settings.
5. Check Enable SSL to require events to be sent encrypted (recommended). See Configuring TLS Collection.
6. Change the HTTP Port Number if required, or leave it set to the default port 8088.

7. Click Save.

Once HEC is enabled, add a new token as follows:

1. If not already on the HTTP Event Collector page, open Settings > Data inputs and click on the HTTP Event
Collector type.
2. Click New Token.
3. Enter a name for the token and modify any other settings if required; then click Next.
4. For the Source type, choose Automatic. The source type will be specified using an HTTP header as shown in
the examples in the following sections.
5. Choose an App context; for example, Search & Reporting (search).
6. Adjust the remaining default values, if required, and click Review.

7. Verify the information on the summary page and click Submit. The HEC token is created and its value is
presented.
8. The configuration can be tested with the following command (substitute the correct token):

$ curl -k https://<host>:8088/services/collector \
  -H 'Authorization: Splunk <token>' -d '{"event":"test"}'

If configured correctly, Splunk will respond that the test event was delivered.

{"text":"Success","code":0}

97.3. Sending Generic Structured Logs


NXLog can be configured to send generic structured logs to Splunk in JSON format.

97.3.1. Sending Structured Logs via HEC


Events can be sent to the HEC standard /services/collector endpoint using a specific nested JSON format. In
this way, multiple input instances can be used to gather log data, and everything can be forwarded using a
single output instance.

The HEC uses a JSON event format, with event data in the event key and additional metadata sent in time, host,
source, sourcetype, index, and fields keys. For details about the format, see Format events for HTTP Event
Collector on Splunk Docs and in particular, the Event metadata section there. Because the source type is
specified in the event metadata, it is not necessary to set the source type on Splunk or to use separate tokens for
different source types.
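The nested envelope described above can be sketched in Python. This is only an illustration of the documented structure; the index and fields metadata keys are omitted for brevity.

```python
import json

def hec_payload(event, time, host=None, source=None, sourcetype=None):
    """Wrap an event record in the envelope expected by the HEC
    /services/collector endpoint: event data under "event", with
    optional metadata keys alongside it."""
    payload = {"event": event, "time": time}
    for key, value in (("host", host), ("source", source),
                       ("sourcetype", sourcetype)):
        if value is not None:
            payload[key] = value
    return json.dumps(payload)
```

Because sourcetype travels in this metadata, one HEC token can serve events of several source types, which is what makes the single-output-instance design above possible.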

Example 411. Forwarding Structured Data to HEC

This example shows an output instance that uses the xm_json and om_http modules to send the data to
the HEC. Events are formatted specifically for the HEC standard /services/collector endpoint.

nxlog.conf (truncated)
 1 <Extension _json>
 2 Module xm_json
 3 </Extension>
 4
 5 <Extension clean_splunk_fields>
 6 Module xm_rewrite
 7 Keep time, host, source, sourcetype, index, fields, event
 8 </Extension>
 9
10 <Output out>
11 Module om_http
12 URL https://127.0.0.1:8088/services/collector
13 AddHeader Authorization: Splunk c6580856-29e8-4abf-8bcb-ee07f06c80b3
14 HTTPSCAFile %CERTDIR%/cacert.pem
15 <Exec>
16 # Rename event fields to what Splunk uses
17 if $Severity rename_field($Severity, $vendor_severity);
18 if $SeverityValue rename_field($SeverityValue, $severity_id);
19
20 # Convert all fields to JSON and write to $event field
21 $event = to_json();
22
23 # Convert $EventTime to decimal seconds since epoch UTC
24 $time = string(integer($EventTime));
25 $time =~ /^(?<sec>\d+)(?<ms>\d{6})$/;
26 $time = $sec + "." + $ms;
27
28 # Specify the log source type
29 [...]

Output Sample
{
  "event": {
  "EventReceivedTime": "2019-10-18 19:58:19",
  "SourceModuleName": "in",
  "SourceModuleType": "im_file",
  "SyslogFacility": "USER",
  "vendor_severity": "INFO",
  "severity_id": 2,
  "EventTime": "2019-10-18 19:58:02",
  "Hostname": "myserver2",
  "ProcessID": 14533,
  "SourceName": "sshd",
  "Message": "Failed password for invalid user"
  },
  "time": "1571428682.218749",
  "sourcetype": "_json",
  "host": "myserver2",
  "source": "sshd"
}

97.3.2. Sending Structured Logs via TCP/TLS
It is also possible to send JSON-formatted events to Splunk via TCP or TLS. To extract fields and index the event
timestamps as sent by the configuration below, add a new source type with the corresponding settings:

1. Open Settings > Source types.


2. Find the _json source type and click Clone.

3. Provide a name for the new source type, such as nxlog_json.

4. Under the Advanced tab, add the following configuration values:

Name Value

TIME_PREFIX "time":"

TIME_FORMAT %s.%6N

Then select this new source type when adding the TCP data input, as described in Adding a TCP or UDP Data Input.

Example 412. Forwarding Structured Data to Splunk via TCP

This configuration sets the $time field for Splunk, converts the event data to JSON with the xm_json
to_json() procedure, and forwards via TCP with the om_tcp module.

nxlog.conf
 1 <Extension _json>
 2 Module xm_json
 3 </Extension>
 4
 5 <Output out>
 6 Module om_tcp
 7 Host 127.0.0.1
 8 Port 514
 9 <Exec>
10 # Convert $EventTime to decimal seconds since epoch UTC
11 $time = string(integer($EventTime));
12 $time =~ /^(?<sec>\d+)(?<ms>\d{6})$/;
13 $time = $sec + "." + $ms;
14 delete($sec);
15 delete($ms);
16
17 # Write to JSON
18 to_json();
19 </Exec>
20 </Output>

Output Sample (whitespace added)


{
  "EventReceivedTime": "2019-09-30T20:00:01.448973+00:00",
  "SourceModuleName": "in",
  "SourceModuleType": "im_file",
  "SyslogFacility": "USER",
  "vendor_severity": "INFO",
  "severity_id": 2,
  "EventTime": "2019-10-03T05:36:58.190689+00:00",
  "Hostname": "myserver2",
  "ProcessID": 14533,
  "SourceName": "sshd",
  "Message": "Failed password for invalid user",
  "time": "1570081018.190689"
}
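The $time conversion in the Exec block relies on NXLog's integer() returning microseconds since the epoch for datetime values, with the regular expression splitting off the last six digits as the fractional part. The same split can be expressed in Python:

```python
def to_splunk_time(epoch_microseconds):
    """Mirror the Exec-block conversion above: split an epoch value in
    microseconds into a 'seconds.microseconds' string, zero-padding the
    fractional part to six digits."""
    return "%d.%06d" % divmod(epoch_microseconds, 1000000)
```

For the sample event above, 1570081018190689 microseconds becomes "1570081018.190689", exactly the value Splunk parses with the %s.%6N TIME_FORMAT.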

97.4. Sending Specific Log Types for Splunk to Parse


Splunk implements parsing for a variety of log formats, and apps available on Splunkbase provide support for
additional log formats. So in some cases it is most effective to send the raw logs and allow Splunk to do the
parsing.

97.4.1. Forwarding Windows Event Log as XML


Windows Event Log data can be forwarded to Splunk in XML format. The "Splunk Add-on for Microsoft Windows"
provides log source types for parsing this format.

NOTE These instructions have been tested with Splunk 7.3.1.1 and the "Splunk Add-on for Microsoft
Windows" version 6.0.0.

1. Install the Splunk Add-on for Microsoft Windows. See About installing Splunk add-ons on Splunk Docs for
more details.
2. Configure the log source type as XmlWinEventLog.
3. Optionally, add a configuration value to use the event SystemTime value as Splunk’s event _time during
indexing (otherwise Splunk will fall back to using the received time). This can be added to the specific event
source or to the XmlWinEventLog source type. To modify the XmlWinEventLog source type from the
Splunk web interface, follow these steps:
a. Open Settings > Source types.
b. Find the XmlWinEventLog source type (uncheck Show only popular) and click Edit.
c. Open the Advanced tab and add the following configuration value:

Name Value

EVAL-_time strptime(SystemTime, "'%Y-%m-%dT%H:%M:%S.%9N%Z'")

4. Use the im_msvistalog CaptureEventXML directive to capture the XML-formatted event data from the Event
Log. Forward this value to Splunk.

Example 413. Forwarding EventLog XML to Splunk via the HEC

This example reads events from the Security channel. With the CaptureEventXML directive set to TRUE, the
XML event data is stored in the $EventXML field. The contents of this field are then assigned to the
$raw_event field, which is sent to Splunk by the splunk_hec output instance.

nxlog.conf
 1 <Input eventxml>
 2 Module im_msvistalog
 3 Channel Security
 4 CaptureEventXML TRUE
 5 Exec $raw_event = $EventXML;
 6 </Input>
 7
 8 <Output splunk_hec>
 9 Module om_http
10 URL https://127.0.0.1:8088/services/collector/raw
11 AddHeader Authorization: Splunk c6580856-29e8-4abf-8bcb-ee07f06c80b3
12 </Output>

Events should be automatically parsed by Splunk as shown below.

97.4.2. Forwarding BSD Syslog Data to Splunk


Splunk can parse BSD Syslog events, so in this case it is not necessary to do any additional parsing with NXLog.
The source type should be set to syslog.

Example 414. Forwarding BSD Syslog to Splunk via TCP

In this example, events in Syslog format are read from file and sent to Splunk via TCP with no additional
processing. Because the source type is set to syslog, Splunk automatically parses the Syslog header
metadata.

nxlog.conf
 1 <Input syslog>
 2 Module im_file
 3 File '/var/log/messages'
 4 </Input>
 5
 6 <Output splunk>
 7 Module om_tcp
 8 Host 10.10.1.12
 9 Port 514
10 </Output>

Chapter 98. Symantec Endpoint Protection
The Symantec Endpoint Protection security suite provides anti-malware, anti-virus, firewall, intrusion detection,
and other features for servers and desktop computers. The product includes two main components: the
Symantec Endpoint Protection client which runs on client systems requiring protection; and the Symantec
Endpoint Protection Manager (SEPM) which communicates with clients, maintains policies, provides an
administrative console, and stores log data. For more information, see What is Symantec Endpoint Protection?
on Symantec Support.

Symantec Endpoint Protection Manager (SEPM) stores log data in an MSSQL Server database or in an
embedded database. For more details, see Managing log data in the Symantec Endpoint Protection Manager
(SEPM) on Symantec Support.

NOTE The following steps and configurations were tested with SEPM 14.2; see Released versions of Symantec Endpoint Protection on Symantec Support.

98.1. MSSQL Server Database


To collect logs from the SEPM 14.2 MSSQL 2012 database with NXLog, complete these actions:

1. Create a Windows/SQL account with read permissions for the SEPM database.
2. Configure an ODBC 32-bit System Data Source on the server running NXLog. For more information, consult
the relevant ODBC documentation: the Microsoft ODBC Data Source Administrator guide or the unixODBC
Project.
3. Set an appropriate firewall rule on the database server that accepts connections from the server running
NXLog. For more information, see Configure a Windows Firewall for Database Engine Access on Microsoft
Docs.
4. Configure NXLog to collect logs via ODBC with the im_odbc module.

TIP If a custom query is needed, it may be helpful to consult the Database schema reference for Endpoint Protection 14.x on Symantec Support.

Example 415. Collecting SEPM Logs from SQL Database

This example uses the im_odbc module to connect to the Symantec Endpoint Protection Manager server
via ODBC and collect logs from the MSSQL database. The first query below collects alerts and the second
(commented) query collects audit events.

nxlog.conf
 1 <Input in>
 2 Module im_odbc
 3 ConnectionString DSN=SymantecEndpointSecurityDSN; \
 4 database=sem5;uid=user;pwd=password;
 5
 6 # Query for Virus Alerts
 7 SQL SELECT DATEADD(s,convert(bigint,TIME_STAMP)/1000,'01-01-1970 00:00:00') \
 8 AS EventTime,IDX,ALERT_IDX,COMPUTER_IDX,SOURCE,VIRUSNAME_IDX, \
 9 FILEPATH,ALERTDATETIME,USER_NAME FROM V_ALERTS
10
11 # Alternative query for the Audit log
12 #SQL SELECT DATEADD(s,convert(bigint,TIMESTAMP)/1000,'01-01-1970 00:00:00') \
13 # AS EventTime,METHOD,ARGUMENTS,IP_ADDR FROM V_AUDIT_LOG
14 </Input>

Event Sample (Alerts Log)


{
  "EventTime": "2019-05-30T11:11:51.000000+02:00",
  "IDX": "24589CFDC0A886955DE9A4EFE7A07839",
  "ALERT_IDX": 1,
  "COMPUTER_IDX": "B657A6F2C0A88695489EE7FC3069332A",
  "SOURCE": "Real Time Scan",
  "VIRUSNAME_IDX": "70CB3DDB77EE45CD4C5765A5EF4DAFD9",
  "FILEPATH": "C:\\Windows\\Temp\\SECOH-QAD.exe",
  "ALERTDATETIME": "2019-05-30T11:10:40.000000+02:00",
  "USER_NAME": "SYSTEM",
  "EventReceivedTime": "2019-05-30T15:25:27.510937+02:00",
  "SourceModuleName": "in",
  "SourceModuleType": "im_odbc"
}

Event Sample (Audit Log)


{
  "EventTime": "2019-05-30T10:41:58.000000+02:00",
  "METHOD": "RequestHandler.handleRequest()",
  "ARGUMENTS": "Windows user:(SEPMInternal) logging in as:admin/(SEPMInternal) succeeded! at
Thu May 30 12:41:58 CEST 2019",
  "IP_ADDR": "127.0.0.1",
  "EventReceivedTime": "2019-05-30T15:23:59.651649+02:00",
  "SourceModuleName": "in",
  "SourceModuleType": "im_odbc"
}
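The DATEADD expressions in the queries above derive EventTime by treating SEPM's TIME_STAMP as Unix epoch milliseconds: the value is integer-divided by 1000 and the result is added, in seconds, to 1970-01-01. The same conversion can be sketched in Python (the sample value corresponds to the first alert event above, 2019-05-30 11:11:51 +02:00):

```python
from datetime import datetime, timezone

def sepm_timestamp(time_stamp_ms):
    # Mirrors DATEADD(s, convert(bigint, TIME_STAMP)/1000, '01-01-1970 00:00:00'):
    # integer-divide milliseconds down to seconds, then offset from the epoch.
    return datetime.fromtimestamp(time_stamp_ms // 1000, tz=timezone.utc)

print(sepm_timestamp(1559207511000))  # → 2019-05-30 09:11:51+00:00
```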

98.2. Embedded Database


Logs can be collected from the SEPM embedded database by using the SAP SQL Anywhere Database Client with
the im_odbc module. Configuring NXLog to access the logs directly is not possible due to limitations of the
embedded database.

1. Download and install the SAP SQL Anywhere Database Client.

2. Configure NXLog to collect logs via ODBC with the im_odbc module. Specify SQL Anywhere as the ODBC
Driver in the ConnectionString directive.

TIP For more technical information about querying the embedded database, check How to query the SEPM embedded database on Symantec Support.

TIP If it becomes necessary to migrate the embedded database to an MSSQL database, consult Moving from the embedded database to Microsoft SQL Server on Symantec Support.

Example 416. Collecting SEPM Logs from Embedded Database

This example uses the im_odbc module to connect to the Symantec Endpoint Protection Manager
embedded database via ODBC with the SQL Anywhere driver. The first query below collects alerts and the
second (commented) query collects audit events.

nxlog.conf
 1 <Input in>
 2 Module im_odbc
 3 ConnectionString Driver=SQL Anywhere 17;ENG=Host; \
 4 UID=user;PWD=password;DBN=sem5;LINKS=ShMem;
 5
 6 # Query for Virus Alerts
 7 SQL SELECT DATEADD(ss, TIME_STAMP/1000, '1970-01-01 00:00:00') AS EventTime, \
 8 IDX,Alert_IDX,Computer_IDX,Source,Virusname_IDX,FilePath,AlertDateTime, \
 9 User_Name,Last_Log_Session_Guid FROM V_ALERTS
10
11 # Alternative query for the Audit log
12 #SQL SELECT DATEADD(ss, TIMESTAMP/1000, '1970-01-01 00:00:00') AS EventTime, \
13 # Method,Arguments,IP_ADDR FROM V_AUDIT_LOG
14
15 Exec $EventTime = strftime($EventTime, 'YYYY-MM-DDThh:mm:ss.sTZ');
16 </Input>

Event Sample (Alerts Log)


{
  "EventTime": "2019-05-29T17:12:20.000000+02:00",
  "IDX": "9B597DD0C0A8868C6DB24C4E332BA2EB",
  "Alert_IDX": 1,
  "Computer_IDX": "D93E2505C0A8868C4AB07113C78CD110",
  "Source": "Real Time Scan",
  "Virusname_IDX": "70CB3DDB77EE45CD4C5765A5EF4DAFD9",
  "FilePath": "C:\\Windows\\SECOH-QAD.exe",
  "AlertDateTime": "2019-05-29T17:09:54.000000+02:00",
  "User_Name": "SYSTEM",
  "Last_Log_Session_Guid": "20b4e2887f1c4ea89095e2c67b1ef047",
  "EventReceivedTime": "2019-05-29T19:24:15.534487+02:00",
  "SourceModuleName": "in",
  "SourceModuleType": "im_odbc"
}

Event Sample (Audit Log)


{
  "EventTime": "2019-05-29T09:44:23.000000+02:00",
  "Method": "RequestHandler.handleRequest()",
  "Arguments": "Windows user:(SEPMInternal) logging in as:admin/(SEPMInternal) succeeded! at
Wed May 29 11:44:23 CEST 2019",
  "IP_ADDR": "127.0.0.1",
  "EventReceivedTime": "2019-05-29T18:54:51.279574+02:00",
  "SourceModuleName": "in",
  "SourceModuleType": "im_odbc"
}

Chapter 99. Synology DiskStation
The Synology DiskStation is a Linux-based network-attached storage (NAS) appliance. It runs syslog-ng and is
capable of forwarding logs to a remote Syslog server via UDP or TCP, including an option for SSL. Configuration is
performed via the web interface.

NOTE The steps below have been tested with DSM 5.2 and should work with newer versions as well.

1. Configure NXLog to receive log entries over the network and process them as Syslog (see the TCP example
below). Then restart NXLog.
2. Make sure the NXLog agent is accessible from the DiskStation device being configured.
3. Log in to the DiskStation web interface.
4. Go to Log Center › Log Sending.

5. Under the Location tab, specify the Syslog server, port, protocol, and log format. Enable and configure SSL if
required.

6. Click [ Apply ].

Example 417. Receiving DiskStation Logs via TCP

This configuration uses the im_tcp module to collect the DiskStation logs via TCP. A JSON output sample
shows the resulting logs as received and processed by NXLog.

nxlog.conf
 1 <Extension _syslog>
 2 Module xm_syslog
 3 </Extension>
 4
 5 <Extension _json>
 6 Module xm_json
 7 </Extension>
 8
 9 <Input in>
10 Module im_tcp
11 Host 0.0.0.0
12 Port 1514
13 Exec parse_syslog();
14 </Input>
15
16 <Output out>
17 Module om_file
18 File "/var/log/synology.log"
19 Exec to_json();
20 </Output>

Output Sample
{
  "MessageSourceAddress": "192.168.4.20",
  "EventReceivedTime": "2017-07-28 18:30:04",
  "SourceModuleName": "in_syslog_tcp",
  "SourceModuleType": "im_tcp",
  "SyslogFacilityValue": 1,
  "SyslogFacility": "USER",
  "SyslogSeverityValue": 6,
  "SyslogSeverity": "INFO",
  "SeverityValue": 2,
  "Severity": "INFO",
  "Hostname": "DiskStation1",
  "EventTime": "2017-07-28 18:30:02",
  "Message": "Connection PWD\\sql_psqldw1:\tCIFS client [PWD\\sql_psqldw1] from
[192.168.15.138(IP:192.168.15.138)] accessed the shared folder [db_backup]."
}
{
  "MessageSourceAddress": "192.168.4.20",
  "EventReceivedTime": "2017-07-28 18:29:48",
  "SourceModuleName": "in_syslog_tcp",
  "SourceModuleType": "im_tcp",
  "SyslogFacilityValue": 1,
  "SyslogFacility": "USER",
  "SyslogSeverityValue": 6,
  "SyslogSeverity": "INFO",
  "SeverityValue": 2,
  "Severity": "INFO",
  "Hostname": "DiskStation1",
  "EventTime": "2017-07-28 18:29:56",
  "Message": "System Test message from Synology Syslog Client from (0.240.175.244)"
}

Chapter 100. Syslog
NXLog can be configured to collect or generate log entries in the various Syslog formats. This section describes
the various Syslog protocols and discusses how to use them with NXLog.

100.1. BSD Syslog (RFC 3164)


The original Syslog was written for the Sendmail project in the 1980s. It was later adopted by many other
applications and implemented across many operating systems, and became the standard logging system for
Unix-style systems. There was no authoritative publication about Syslog until 2001, when the Internet
Engineering Task Force (IETF) published informational RFC 3164, which described the "observed behavior" among
implementations. Today, modern implementations follow RFC 3164, which attempts to accommodate the many
older implementations; even so, many undocumented variations remain among the BSD Syslog protocols produced
by various applications and devices.

Log Sample
<30>Nov 21 11:40:27 myserver sshd[26459]: Accepted publickey for john from 192.168.1.1 port 41193
ssh2↵

BSD Syslog defines both the log entry format and the transport. The message format is free-form, allowing for
the payload to be JSON or another structured data format.

100.1.1. BSD Syslog Format


BSD Syslog uses a simple format composed of three parts.

Base BSD Syslog Format


<PRI>HEADER MSG↵

NOTE While this is the common and recommended format for a BSD Syslog message, there are no set requirements, and a device may send a BSD Syslog message containing only a free-form message, without the PRI or HEADER parts.

The PRI part, or "priority", is calculated from the facility and severity codes. The facility code indicates the type of
program that generated the message, and the severity code indicates the severity of the message (see the Syslog
Facilities and Syslog Severities tables below). The priority code is calculated by multiplying the facility code by
eight and then adding the severity code.
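For example, the log sample above starts with <30>, which decodes to facility 3 (system daemons) and severity 6 (informational): 3 × 8 + 6 = 30. The calculation and its inverse can be sketched in a few lines of Python:

```python
def encode_pri(facility, severity):
    # PRI = facility * 8 + severity, per RFC 3164
    return facility * 8 + severity

def decode_pri(pri):
    # The inverse: facility is the quotient, severity the remainder
    return divmod(pri, 8)

# The log sample above carries <30>: daemon (3), informational (6).
assert encode_pri(3, 6) == 30
assert decode_pri(30) == (3, 6)
```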

NOTE The PRI part is not written to file by many Syslog loggers. In that case, each log entry begins with the HEADER.

The HEADER part contains two fields: TIMESTAMP and HOSTNAME. The TIMESTAMP provides the local time when
the message was generated in Mmm dd hh:mm:ss format, with no year or time zone specified; the HOSTNAME is
the name of the host where the message was generated.

The MSG part contains two fields: TAG and CONTENT. The TAG is the name of the program or process that
generated the message, and contains only alphanumeric characters. Any other character will represent the
beginning of the CONTENT field. The CONTENT field often contains the process ID enclosed by brackets ([]), a
colon (:), a space, and then the actual message. In the log sample above, the MSG part begins with
sshd[26459]: Accepted publickey; in this case, the TAG is sshd and the CONTENT field begins with [26459].
The CONTENT field can contain only ASCII printable characters (32-126).

Fields Commonly Used in the BSD Syslog Format


<PRI>TIMESTAMP HOSTNAME TAG[PID]: MESSAGE↵
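A minimal parser for this common layout can be sketched in Python. The regular expression and group names below are illustrative only, not how NXLog's parse_syslog_bsd() is implemented:

```python
import re

# <PRI>TIMESTAMP HOSTNAME TAG[PID]: MESSAGE -- PRI and [PID] may be absent.
BSD_SYSLOG = re.compile(
    r'^(?:<(?P<pri>\d{1,3})>)?'                               # optional <PRI>
    r'(?P<timestamp>[A-Z][a-z]{2} [ \d]\d \d\d:\d\d:\d\d) '   # Mmm dd hh:mm:ss
    r'(?P<hostname>\S+) '
    r'(?P<tag>\w+)(?:\[(?P<pid>\d+)\])?: '
    r'(?P<message>.*)$')

line = ('<30>Nov 21 11:40:27 myserver sshd[26459]: '
        'Accepted publickey for john from 192.168.1.1 port 41193 ssh2')
fields = BSD_SYSLOG.match(line).groupdict()
assert fields['pri'] == '30'
assert fields['tag'] == 'sshd'
assert fields['pid'] == '26459'
```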

Table 62. Syslog Facilities

Facility Code    Description
0 kernel messages

1 user-level messages

2 mail system

3 system daemons

4 security/authorization messages

5 messages generated internally by syslogd

6 line printer subsystem

7 network news subsystem

8 UUCP subsystem

9 clock daemon

10 security/authorization messages

11 FTP daemon

12 NTP subsystem

13 log audit

14 log alert

15 scheduling daemon

16 local use 0 (local0)

17 local use 1 (local1)

18 local use 2 (local2)

19 local use 3 (local3)

20 local use 4 (local4)

21 local use 5 (local5)

22 local use 6 (local6)

23 local use 7 (local7)

Table 63. Syslog Severities

Severity Code    Description
0 Emergency: system is unusable

1 Alert: action must be taken immediately

2 Critical: critical conditions

3 Error: error conditions

4 Warning: warning conditions

5 Notice: normal but significant condition

6 Informational: informational messages

7 Debug: debug-level messages

100.1.2. BSD Syslog Transport
According to RFC 3164, the BSD Syslog protocol uses UDP as its transport layer. Each UDP packet carries a single
log entry. BSD Syslog implementations often also support plain TCP and TLS transports, though these are not
covered by RFC 3164.

100.1.3. Disadvantages of BSD Syslog


There are several disadvantages associated with the BSD Syslog protocol.

• The transport defined by RFC 3164 uses UDP and provides no mechanism to ensure reliable delivery,
integrity, or confidentiality of log messages.
• Many undocumented variations exist among implementations.
• The timestamp indicates neither the year nor the timezone, and does not provide precision greater than the
second.
• The PRI field (and therefore the facility and severity codes) is not retained by many Syslog loggers when
writing to log files.
• The entire length of the log entry is limited to 1024 bytes.
• Only ASCII characters 32-126 are allowed, no Unicode or line breaks.

100.2. IETF Syslog (RFCs 5424-5426)


In 2009, the IETF released RFCs 5424, 5425, and 5426 as Proposed Standards intended to replace the "legacy"
BSD Syslog. RFC 5424 defines a timestamp with year, timezone, and fractional seconds; provides a "structured
data" field for key-value pairs; and offers UTF-8 encoding. RFC 5425 defines the use of TLS transport and
supports multi-line log messages. RFC 5426 describes the use of UDP transport.

Log Sample
<165>1 2003-10-11T22:14:15.003Z mymachine.example.com evntslog - ID47 [exampleSDID@32473 iut="3"
eventSource="Application" eventID="1011"] An application event log entry...↵

100.2.1. IETF Syslog Format


IETF Syslog uses a base format similar to that of BSD Syslog.

Base IETF Syslog Format


HEADER STRUCTURED-DATA MSG↵

The HEADER part contains seven fields.

• PRI: message priority (same as BSD Syslog)


• VERSION: Syslog format version (always "1" for RFC 5424 logs)
• TIMESTAMP: derived from RFC 3339 (YYYY-MM-DDTHH:MM:SS.000000Z, or with the time zone specified)

• HOSTNAME
• APP-NAME: device or application that generated the message
• PROCID: ID of the process that generated the message
• MSGID: message type (for example, "TCPIN" for incoming TCP traffic and "TCPOUT" for outgoing)

The PRI field is not written to file by many Syslog loggers. In that case, each log entry begins with
NOTE
the VERSION field.

The STRUCTURED-DATA part is optional. If it is omitted, then a hyphen acts as a placeholder. Otherwise, it is
surrounded by brackets. It contains an ID of the block and a list of space-separated "key=value" pairs.

The MSG part is optional and contains a free-form, single-line message. If the message is encoded in UTF-8, then
it may be preceded by a Unicode byte order mark (BOM).

Fields in the IETF Syslog Format


<PRI>VERSION TIMESTAMP HOSTNAME APP-NAME PROCID MSGID [SD-ID STRUCTURED-DATA] MESSAGE↵
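As an illustration, the sample entry from the beginning of this section can be split into these fields with a simplified regular expression. This is a sketch only, not NXLog's parse_syslog_ietf() implementation; a full RFC 5424 parser must also handle escaped characters inside STRUCTURED-DATA and the optional BOM:

```python
import re

# Simplified RFC 5424 pattern: assumes no escaped `]` inside STRUCTURED-DATA.
IETF_SYSLOG = re.compile(
    r'^<(?P<pri>\d{1,3})>(?P<version>\d+) (?P<timestamp>\S+) (?P<hostname>\S+) '
    r'(?P<appname>\S+) (?P<procid>\S+) (?P<msgid>\S+) '
    r'(?P<sd>-|\[.*\])(?: (?P<message>.*))?$')

line = ('<165>1 2003-10-11T22:14:15.003Z mymachine.example.com evntslog - ID47 '
        '[exampleSDID@32473 iut="3" eventSource="Application" eventID="1011"] '
        'An application event log entry...')
fields = IETF_SYSLOG.match(line).groupdict()
assert fields['appname'] == 'evntslog'
assert fields['msgid'] == 'ID47'
assert fields['message'] == 'An application event log entry...'
```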

100.2.2. IETF Syslog Transport


IETF Syslog can use UDP, plain TCP, or TLS transport. UDP transport is described by RFC 5426, TLS transport by
RFC 5425.

RFC 5425 also documents the octet-framing method that is used for TLS transport and provides support for
multi-line messages. Octet-framing can also be used with plain TCP; TLS is not required. The message length is
prepended, as in the following example showing the raw data that is sent over TCP/TLS.

Log Sample With Octet Framing


101 <13>1 2012-01-01T17:15:52.873750+01:00 myhost - - - [NXLOG@14506 TestField="test value"] test
message↵

In practice, IETF Syslog is commonly transferred without octet-framing over TCP or TLS. In this case, the newline
(\n) character is used as the record separator, similar to how BSD Syslog is transferred over TCP or TLS.
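The framing itself is simple to reproduce: the frame is the decimal byte length of the message, a space, and then the message. A Python sketch, checked against the sample above:

```python
def octet_frame(msg):
    # RFC 5425 framing: decimal byte length, a space, then the message.
    data = msg.encode('utf-8')
    return str(len(data)).encode('ascii') + b' ' + data

msg = ('<13>1 2012-01-01T17:15:52.873750+01:00 myhost - - - '
       '[NXLOG@14506 TestField="test value"] test message')
framed = octet_frame(msg)
assert framed.startswith(b'101 ')  # matches the "101" prefix in the sample above
```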

100.3. Collecting and Parsing Syslog


NXLog can be configured to collect Syslog logs by:

• reading Syslog files written by another local Syslog agent,


• accepting Syslog via the local /dev/log Unix domain socket, or

• accepting Syslog over the network (via UDP, TCP, or TLS).

100.3.1. Reading Syslog Log Files


Configuring NXLog to read Syslog from file allows another local Syslog agent to continue its logging operations as
before. Note that NXLog will likely not have access to the facility and severity codes because most Syslog loggers
do not write the PRI field to log files.

Make sure NXLog has permission to read log files in /var/log. See Reading Rsyslog Log Files for more
information.

Example 418. Reading Syslog From File

This configuration reads log messages from file and parses them using the parse_syslog() procedure.

nxlog.conf
1 <Extension _syslog>
2 Module xm_syslog
3 </Extension>
4
5 <Input in>
6 Module im_file
7 File '/var/log/messages'
8 Exec parse_syslog();
9 </Input>

NOTE The parse_syslog() procedure parses the log entry as either BSD or IETF format (the parse_syslog_bsd() and parse_syslog_ietf() procedures can be used alternatively).

100.3.2. Accepting Syslog via /dev/log


Many applications support logging by sending log messages to the /dev/log Unix domain socket. It is the
responsibility of the system logger to accept these messages and then store them as configured. NXLog can be
configured to directly accept logs that are sent to the /dev/log Unix domain socket, in place of the stock Syslog
logger.

1. Configure NXLog (see the example below).


2. Disable the stock Syslog agent’s collection of /dev/log messages, if necessary. See also Replacing Rsyslog.
Either
◦ disable the service entirely (for example, systemctl --now disable rsyslogd) or

◦ modify the configuration to disable reading from /dev/log (for example, remove $ModLoad imuxsock
from /etc/rsyslog.conf and restart Rsyslog).
3. Restart NXLog.

Example 419. Reading From /dev/log

With this configuration, NXLog uses the im_uds module to read messages from /dev/log, and the
parse_syslog() procedure to parse them.

WARNING FlowControl should be disabled when collecting from /dev/log. Otherwise the syslog() system call will block if the Output queue becomes full, resulting in an unresponsive system.

nxlog.conf
 1 <Extension _syslog>
 2 Module xm_syslog
 3 </Extension>
 4
 5 <Input in>
 6 Module im_uds
 7 UDS /dev/log
 8 FlowControl FALSE
 9 Exec parse_syslog();
10 </Input>

100.3.3. Accepting Syslog via UDP, TCP, or TLS


NXLog can be configured to listen on a port and collect Syslog over the network. A port can be used to receive
messages with UDP, TCP, or TLS transport. The local Syslog agent may already be configured to listen on port 514
for UDP log messages from local applications.

1. Configure NXLog with im_udp, im_tcp, or im_ssl. See the examples below.
2. For NXLog to listen for messages on port 514, the local Syslog agent must not be listening on that port. It
may be necessary to either
◦ disable the service entirely (for example, systemctl disable rsyslogd) or

◦ modify the configuration to disable listening on port 514 (for example, remove input(type="imudp"
port="514") from /etc/rsyslog.conf and restart Rsyslog).

3. Restart NXLog.

Example 420. Receiving Syslog via UDP

This configuration accepts either BSD or IETF Syslog from the local system only, via UDP.

WARNING The UDP transport can lose log entries and is therefore not recommended for receiving logs over the network.

nxlog.conf
 1 <Extension _syslog>
 2 Module xm_syslog
 3 </Extension>
 4
 5 <Input in>
 6 Module im_udp
 7 Host localhost
 8 Port 514
 9 Exec parse_syslog();
10 </Input>

Example 421. Receiving Syslog via TCP

This configuration accepts either BSD or IETF Syslog via TCP, without supporting octet-framing.

nxlog.conf
 1 <Extension _syslog>
 2 Module xm_syslog
 3 </Extension>
 4
 5 <Input in>
 6 Module im_tcp
 7 Host 0.0.0.0
 8 Port 1514
 9 Exec parse_syslog();
10 </Input>

Example 422. Receiving IETF Syslog via TCP With Octet-Framing

This configuration accepts IETF Syslog via TCP, with support for octet-framing.

NOTE Though this is for plain TCP, the Syslog_TLS directive is required because it refers to the octet-framing method described by RFC 5425.

nxlog.conf
 1 <Extension _syslog>
 2 Module xm_syslog
 3 </Extension>
 4
 5 <Input in>
 6 Module im_tcp
 7 Host 0.0.0.0
 8 Port 1514
 9 InputType Syslog_TLS
10 Exec parse_syslog_ietf();
11 </Input>

Example 423. Receiving IETF Syslog via TLS

This configuration accepts IETF Syslog via TLS, with support for octet-framing.

nxlog.conf
 1 <Extension _syslog>
 2 Module xm_syslog
 3 </Extension>
 4
 5 <Input in>
 6 Module im_ssl
 7 Host 0.0.0.0
 8 Port 6514
 9 CAFile %CERTDIR%/ca.pem
10 CertFile %CERTDIR%/client-cert.pem
11 CertKeyFile %CERTDIR%/client-key.pem
12 InputType Syslog_TLS
13 Exec parse_syslog_ietf();
14 </Input>

100.4. Filtering Syslog
Filtering Syslog messages means keeping or discarding messages based on their contents. Filtering can be
carried out using conditional statements and values from event record fields. For more details about fields, see
the Event Records and Fields section.

Example 424. Filtering Messages by Severity

The configuration below reads user-space messages from the /dev/log socket using the im_uds module.
In the Exec block, messages are parsed using the parse_syslog_bsd() procedure from the xm_syslog
module.

Using the conditional statement, the value of the $SyslogSeverityValue field is checked, and messages
with a severity level of 6 or higher (informational and debug) are discarded using the drop() procedure.

The remaining messages are converted to JSON using the to_json() procedure from the xm_json module.

nxlog.conf
 1 <Extension json>
 2 Module xm_json
 3 </Extension>
 4
 5 <Input from_uds>
 6 Module im_uds
 7 UDS /dev/log
 8 <Exec>
 9 parse_syslog_bsd();
10 if NOT ($SyslogSeverityValue < 6)
11 {
12 drop();
13 }
14 to_json();
15 </Exec>
16 </Input>

Messages can also be filtered using the values of multiple fields.

Example 425. Filtering by Various Values

The configuration below reads log messages from the /dev/log socket using the im_uds module. In the
Exec block, messages are parsed using the parse_syslog_bsd() procedure from the xm_syslog module.

Using the conditional statement, complex filtering is carried out as per the following parameters:

• severity, using the value from the $SyslogSeverityValue field,

• facility, using the value from the $SyslogFacility field,

• source name, using the value from the $SourceName field.

If a message does not meet at least one filtering condition, it is discarded using the drop() procedure.
Otherwise, it is converted to JSON using the to_json() procedure from the xm_json module.

nxlog.conf
 1 <Extension json>
 2 Module xm_json
 3 </Extension>
 4
 5 <Input from_uds>
 6 Module im_uds
 7 UDS /dev/log
 8 <Exec>
 9 parse_syslog_bsd();
10 if NOT (
11 ($SyslogSeverityValue < 6) OR
12 ($SyslogFacility IN ('AUTHPRIV', 'AUTH', 'MAIL', 'CRON')) OR
13 ($SourceName IN ('apt','nxlog','osqueryd'))
14 )
15 {
16 drop();
17 }
18 to_json();
19 </Exec>
20 </Input>

Syslog messages can be filtered by the $Message field values using regular expressions.

Example 426. Filtering by Message Field

The configuration below reads log messages from the Linux kernel using the im_kernel module. In the Exec
block, messages are parsed using the parse_syslog_bsd() procedure from the xm_syslog module.

Using the conditional statement, messages without the mount options string in the $Message field are
discarded using the drop() procedure.

The remaining messages are converted to JSON using the to_json() procedure from the xm_json module.

nxlog.conf
 1 <Extension json>
 2 Module xm_json
 3 </Extension>
 4
 5 <Input from_kernel>
 6 Module im_kernel
 7 <Exec>
 8 parse_syslog_bsd();
 9 if NOT ($Message =~ /mount options/)
10 {
11 drop();
12 }
13 to_json();
14 </Exec>
15 </Input>

100.5. Generating Syslog


NXLog can be configured to generate BSD or IETF Syslog and:

• write it to file,
• send it to the local syslog daemon via the /dev/log Unix domain socket, or

• forward it to another destination over the network (via UDP, TCP, or TLS).

In each case, the to_syslog_bsd() and to_syslog_ietf() procedures are used to generate the $raw_event field from
the corresponding fields in the event record.
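Conceptually, to_syslog_bsd() rebuilds the BSD wire format (<PRI>TIMESTAMP HOSTNAME TAG[PID]: MESSAGE) from fields such as $SyslogFacilityValue, $Hostname, and $Message. A rough Python analogue of that assembly, offered as a sketch only and not NXLog's implementation:

```python
from datetime import datetime

def to_syslog_bsd(record):
    # Assemble a BSD Syslog line from an event-record dict (illustrative only;
    # field names mirror NXLog's Syslog fields).
    pri = record['SyslogFacilityValue'] * 8 + record['SyslogSeverityValue']
    t = record['EventTime']
    # BSD timestamp: abbreviated month, space-padded day, hh:mm:ss
    timestamp = '{0} {1:>2} {2}'.format(
        t.strftime('%b'), t.day, t.strftime('%H:%M:%S'))
    return '<{0}>{1} {2} {3}[{4}]: {5}'.format(
        pri, timestamp, record['Hostname'], record['SourceName'],
        record['ProcessID'], record['Message'])

record = {
    'SyslogFacilityValue': 3,   # daemon
    'SyslogSeverityValue': 6,   # informational
    'EventTime': datetime(2020, 11, 21, 11, 40, 27),
    'Hostname': 'myserver',
    'SourceName': 'sshd',
    'ProcessID': 26459,
    'Message': 'Accepted publickey for john from 192.168.1.1 port 41193 ssh2',
}
print(to_syslog_bsd(record))
```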

100.5.1. Writing Syslog to File


The om_file module is used to write logs to file.

Example 427. Writing BSD Syslog to File

This configuration writes logs to the specified file in the BSD Syslog format.

nxlog.conf
1 <Extension _syslog>
2 Module xm_syslog
3 </Extension>
4
5 <Output out>
6 Module om_file
7 File "/var/log/syslog"
8 Exec to_syslog_bsd();
9 </Output>

NXLog can be configured to write BSD Syslog to a file without the PRI part, emulating traditional Syslog
implementations.

Example 428. Writing BSD Syslog Without the PRI

This configuration includes a regular expression for removing the PRI part from the $raw_event field after
it is generated by the to_syslog_bsd() procedure.

nxlog.conf
1 <Extension _syslog>
2 Module xm_syslog
3 </Extension>
4
5 <Output out>
6 Module om_file
7 File "/var/log/syslog"
8 Exec to_syslog_bsd(); $raw_event =~ s/^\<\d+\>//;
9 </Output>

100.5.2. Sending Syslog to the Local Syslog Daemon via /dev/log


The om_uds module can be used for sending logs to a Unix domain socket.

Example 429. Sending Syslog to /dev/log

This configuration sends BSD Syslog to the Syslog daemon via /dev/log.

nxlog.conf
1 <Extension _syslog>
2 Module xm_syslog
3 </Extension>
4
5 <Output out>
6 Module om_uds
7 UDS /dev/log
8 Exec to_syslog_bsd();
9 </Output>

100.5.3. Sending Syslog to a Remote Logger via UDP, TCP, or TLS
The om_udp, om_tcp, and om_ssl modules can be used for sending Syslog over the network.

Example 430. Forwarding BSD Syslog via UDP

This configuration sends logs in BSD Syslog format to the specified host, via UDP port 514.

nxlog.conf
 1 <Extension _syslog>
 2 Module xm_syslog
 3 </Extension>
 4
 5 <Output out>
 6 Module om_udp
 7 Host 192.168.1.1
 8 Port 514
 9 Exec to_syslog_bsd();
10 </Output>

Example 431. Forwarding BSD Syslog via TCP

This configuration sends logs in BSD format to the specified host, via TCP port 1514.

nxlog.conf
 1 <Extension _syslog>
 2 Module xm_syslog
 3 </Extension>
 4
 5 <Output out>
 6 Module om_tcp
 7 Host 192.168.1.1
 8 Port 1514
 9 Exec to_syslog_bsd();
10 </Output>

Example 432. Forwarding IETF Syslog via TLS

With this configuration, NXLog sends logs in IETF format to the specified host, via port 6514.

nxlog.conf
 1 <Extension _syslog>
 2 Module xm_syslog
 3 </Extension>
 4
 5 <Output out>
 6 Module om_ssl
 7 Host 192.168.1.1
 8 Port 6514
 9 CAFile %CERTDIR%/ca.pem
10 CertFile %CERTDIR%/client-cert.pem
11 CertKeyFile %CERTDIR%/client-key.pem
12 OutputType Syslog_TLS
13 Exec to_syslog_ietf();
14 </Output>

NOTE The OutputType Syslog_TLS directive is necessary if octet-framing is required. The name was chosen to refer to the octet-framing method described by RFC 5425.

100.6. Extending Syslog


BSD Syslog uses a free-form message field, and does not provide a standard way to include key-value pairs in log
messages. This section documents ways that structured data has been implemented using BSD Syslog as
transport.

100.6.1. IETF Syslog Structured-Data


The Structured-Data part of the IETF Syslog format, as documented above, provides a syntax for key-value pairs.

Log Sample
<13>1 2016-10-13T14:23:11.000000-06:00 myserver - - - [NXLOG@14506 Purpose="test"] This is a test
message.↵

NXLog can parse IETF Syslog with the parse_syslog() procedure provided by the xm_syslog extension module.

Example 433. Parsing IETF Syslog With Structured-Data

With this configuration, NXLog will parse the input IETF Syslog format from file, convert it to JSON, and
output the result to file.

nxlog.conf
 1 <Extension _json>
 2 Module xm_json
 3 </Extension>
 4
 5 <Extension _syslog>
 6 Module xm_syslog
 7 </Extension>
 8
 9 <Input in>
10 Module im_file
11 File '/var/log/messages'
12 Exec parse_syslog();
13 </Input>
14
15 <Output out>
16 Module om_file
17 File '/var/log/json'
18 Exec to_json();
19 </Output>
20
21 <Route r>
22 Path in => out
23 </Route>

Output Sample
{
  "EventReceivedTime": "2016-10-13 15:23:12",
  "SourceModuleName": "in",
  "SourceModuleType": "im_file",
  "SyslogFacilityValue": 1,
  "SyslogFacility": "USER",
  "SyslogSeverityValue": 5,
  "SyslogSeverity": "NOTICE",
  "SeverityValue": 2,
  "Severity": "INFO",
  "EventTime": "2016-10-13 15:23:11",
  "Hostname": "myserver",
  "Purpose": "test",
  "Message": "This is a test log message."
}

NXLog can also generate IETF Syslog with a Structured-Data part, using the to_syslog_ietf() procedure provided by
the xm_syslog extension module.

Example 434. Generating IETF Syslog With Structured-Data

With the following configuration, NXLog will parse the input JSON from file, convert it to IETF Syslog format,
and output the result to file.

nxlog.conf
 1 <Extension _json>
 2 Module xm_json
 3 </Extension>
 4
 5 <Extension _syslog>
 6 Module xm_syslog
 7 </Extension>
 8
 9 <Input in>
10 Module im_file
11 File '/var/log/json'
12 Exec parse_json();
13 </Input>
14
15 <Output out>
16 Module om_file
17 File '/var/log/ietf'
18 Exec to_syslog_ietf();
19 </Output>
20
21 <Route r>
22 Path in => out
23 </Route>

Input Sample
{
  "EventTime": "2016-09-13 11:23:11",
  "Hostname": "myserver",
  "Purpose": "test",
  "Message": "This is a test log message."
}

Output Sample
<13>1 2016-09-13T11:23:11.000000-05:00 myserver - - - [NXLOG@14506 EventReceivedTime="2016-09-
13 11:23:12" SourceModuleName="in" SourceModuleType="im_file" Purpose="test"] This is a test log
message.↵

100.6.2. JSON over Syslog


JSON has recently become a popular way to transfer structured data. For compatibility with Syslog devices, it is
common practice to encapsulate JSON in Syslog. NXLog can generate JSON with the to_json() function provided
by the xm_json extension module.

Example 435. Generating JSON with Syslog Header

With the following configuration, NXLog will read the Windows Event Log, convert it to JSON format, add a
Syslog header, and send the logs via UDP to a Syslog agent. NXLog log messages are also included (via the
im_internal module).

nxlog.conf
 1 <Extension _json>
 2 Module xm_json
 3 </Extension>
 4
 5 <Extension _syslog>
 6 Module xm_syslog
 7 </Extension>
 8
 9 <Input internal>
10 Module im_internal
11 </Input>
12
13 <Input eventlog>
14 Module im_msvistalog
15 </Input>
16
17 <Output out>
18 Module om_udp
19 Host 192.168.1.1
20 Port 514
21 Exec $Message = to_json(); to_syslog_bsd();
22 </Output>
23
24 <Route r>
25 Path internal, eventlog => out
26 </Route>

NOTE If Syslog compatibility is not a concern, JSON can be transported without the Syslog header (omit the to_syslog_bsd() procedure).

100.6.3. Other Syslog Extensions


See also these formats, which extend BSD Syslog by using the free-form message field to contain key-value pairs.

• ArcSight Common Event Format (CEF)


• Common Event Expression (CEE)
• Log Event Extended Format (LEEF)
• Snare

Chapter 101. Sysmon
NXLog can be configured to capture and process audit logs generated by the Sysinternals Sysmon utility. Sysmon
is a Windows system service and device driver that logs system activity to the Windows EventLog. Supported
events include (but are not limited to):

• process creation and the full command line used,


• loading of system drivers,
• network connections, and
• modification of file creation timestamps.

On Windows Vista and higher, Sysmon’s events are stored in the Microsoft-Windows-Sysmon/Operational event log.
On older systems, events are written to the System event log.

101.1. Setting up Sysmon


To download Sysmon, and for full details about configuring and installing Sysmon, see the Sysmon page on
Microsoft Docs.

1. Download and extract the Sysmon ZIP archive.


2. Install the Sysmon service with the default parameters. The service will become active immediately; no
restart is required. The service will remain resident across reboots. Other command-line parameters are
available to enable or disable various types of logging.

> sysmon -accepteula -i

3. A complex configuration with filtering can be deployed by creating a custom XML configuration file for
Sysmon.

See SwiftOnSecurity Sysmon configuration, or IONStorm Sysmon configuration on GitHub. Both provide
good information for understanding what is possible with Sysmon and include many examples.

Use the -c option to update the service with a new configuration.

> sysmon -c config.xml

4. To uninstall the Sysmon service, use the -u option.

> sysmon -u

101.2. Collecting Sysmon Events


When Sysmon generates EventLog data, it encodes details of the event into the EventData tag of the EventLog
record.

Example Sysmon EventLog Entry
<EventData>
  <Data Name="UtcTime">2015.04.27. 13:23</Data>
  <Data Name="ProcessGuid">{00000000-3862-553E-0000-001051D40527}</Data>
  <Data Name="ProcessId">25848</Data>
  <Data Name="Image">c:\Program Files (x86)\nxlog\nxlog.exe</Data>
  <Data Name="CommandLine">"c:\Program Files (x86)\nxlog\nxlog.exe" -f</Data>
  <Data Name="User">WIN-OUNNPISDHIG\Administrator</Data>
  <Data Name="LogonGuid">{00000000-568E-5453-0000-0020D5ED0400}</Data>
  <Data Name="LogonId">0x4edd5</Data>
  <Data Name="TerminalSessionId">2</Data>
  <Data Name="IntegrityLevel">High</Data>
  <Data Name="HashType">SHA1</Data>
  <Data Name="Hash">1DCE4B0F24C40473CE7B2C57EB4F7E9E3E14BF94</Data>
  <Data Name="ParentProcessGuid">{00000000-3862-553E-0000-001088D30527}</Data>
  <Data Name="ParentProcessId">26544</Data>
  <Data Name="ParentImage">C:\msys\1.0\bin\sh.exe</Data>
  <Data Name="ParentCommandLine">C:\msys\1.0\bin\sh.exe</Data>
</EventData>

Sysmon audit log data can be collected with im_msvistalog (or other modules, see Windows Event Log). The Data
tags will be automatically parsed, and the values will be available as fields in the event records. The log data can
then be forwarded to a log analytics system to allow identification of malicious or anomalous activity.

Example 436. Collecting Sysmon Logs

Here, the im_msvistalog module will collect all Sysmon events from the EventLog. A sample event is shown
below.

nxlog.conf
 1 <Input in>
 2 Module im_msvistalog
 3 <QueryXML>
 4 <QueryList>
 5 <Query Id="0">
 6 <Select Path="Microsoft-Windows-Sysmon/Operational">*</Select>
 7 </Query>
 8 </QueryList>
 9 </QueryXML>
10 </Input>

Output Sample
{
  "EventTime": "2015-04-27 15:23:46",
  "Hostname": "WIN-OUNNPISDHIG",
  "Keywords": -9223372036854776000,
  "EventType": "INFO",
  "SeverityValue": 2,
  "Severity": "INFO",
  "EventID": 1,
  "SourceName": "Microsoft-Windows-Sysmon",
  "ProviderGuid": "{5770385F-C22A-43E0-BF4C-06F5698FFBD9}",
  "Version": 3,
  "Task": 1,
  "OpcodeValue": 0,
  "RecordNumber": 2335906,
  "ProcessID": 1680,
  "ThreadID": 1728,
  "Channel": "Microsoft-Windows-Sysmon/Operational",
  "Domain": "NT AUTHORITY",
  "AccountName": "SYSTEM",
  "UserID": "SYSTEM",
  "AccountType": "Well Known Group",
  "Message": "Process Create:\r\nUtcTime: 2015.04.27. 13:23\r\nProcessGuid: {00000000-3862-
553E-0000-001051D40527}\r\nProcessId: 25848\r\nImage: c:\\Program Files (x86)\\nxlog\\
nxlog.exe\r\nCommandLine: \"c:\\Program Files (x86)\\nxlog\\nxlog.exe\" -f\r\nUser: WIN-
OUNNPISDHIG\\Administrator\r\nLogonGuid: {00000000-568E-5453-0000-0020D5ED0400}\r\nLogonId:
0x4edd5\r\nTerminalSessionId: 2\r\nIntegrityLevel: High\r\nHashType: SHA1\r\nHash:
1DCE4B0F24C40473CE7B2C57EB4F7E9E3E14BF94\r\nParentProcessGuid: {00000000-3862-553E-0000-
001088D30527}\r\nParentProcessId: 26544\r\nParentImage: C:\\msys\\1.0\\bin\\sh.exe
\r\nParentCommandLine: C:\\msys\\1.0\\bin\\sh.exe",
  "Opcode": "Info",
  "UtcTime": "2015.04.27. 13:23",
  "ProcessGuid": "{00000000-3862-553E-0000-001051D40527}",
  "Image": "c:\\Program Files (x86)\\nxlog\\nxlog.exe",
  "CommandLine": "\"c:\\Program Files (x86)\\nxlog\\nxlog.exe\" -f",
  "User": "WIN-OUNNPISDHIG\\Administrator",
  "LogonGuid": "{00000000-568E-5453-0000-0020D5ED0400}",
  "LogonId": "0x4edd5",
  "TerminalSessionId": "2",
  "IntegrityLevel": "High",
  "HashType": "SHA1",
  "Hash": "1DCE4B0F24C40473CE7B2C57EB4F7E9E3E14BF94",
  "ParentProcessGuid": "{00000000-3862-553E-0000-001088D30527}",
  "ParentProcessId": "26544",
  "ParentImage": "C:\\msys\\1.0\\bin\\sh.exe",
  "ParentCommandLine": "C:\\msys\\1.0\\bin\\sh.exe",
  "EventReceivedTime": "2015-04-27 15:23:47",
  "SourceModuleName": "in",
  "SourceModuleType": "im_msvistalog"
}

101.3. Filtering Sysmon Events


Some scenarios require more advanced filtering of Sysmon logs to achieve more useful results. There are three
main ways to filter Sysmon logs.

Sysmon configuration
Sysmon supports filtering tags that can be used to avoid logging unwanted events. See Setting up Sysmon
above and the Sysmon page for details about the available tags. This method is the most efficient because it
avoids creating the unwanted log entries in the first place.
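A minimal Sysmon configuration file with filtering tags might look like the following sketch. The schema version and the example rules are assumptions; adjust them for the installed Sysmon version (the SwiftOnSecurity and IONStorm configurations mentioned in Setting up Sysmon are complete examples).

```
<Sysmon schemaversion="4.22">
  <EventFiltering>
    <!-- Log process creation only for images under C:\Users -->
    <ProcessCreate onmatch="include">
      <Image condition="begin with">C:\Users</Image>
    </ProcessCreate>
    <!-- Suppress all file creation time change events -->
    <FileCreateTime onmatch="exclude" />
  </EventFiltering>
</Sysmon>
```

Deploy the file with sysmon -c config.xml as shown in Setting up Sysmon.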

EventLog XPath query


The im_msvistalog Query or QueryXML directive can be used to limit the entries that are read via the EventLog
API. Because this method restricts the number of entries that reach NXLog, it is a fairly efficient way to filter
logs.

Example 437. Filtering Sysmon Events With an XPath Query

The following example shows a query that collects only events that have an event ID of 1 (process
creation).

nxlog.conf
 1 <Input in>
 2 Module im_msvistalog
 3 <QueryXML>
 4 <QueryList>
 5 <Query Id="0">
 6 <Select Path="Microsoft-Windows-Sysmon/Operational">
 7 *[System[(EventID='1')]]
 8 </Select>
 9 </Query>
10 </QueryList>
11 </QueryXML>
12 </Input>

NXLog language
Finally, the built-in filtering capabilities of NXLog can be used, which may be easier to write than the XML
query syntax provided by the EventLog API.

Example 438. Filtering Sysmon Events in an Exec Block

This example discards all network connection events (event ID 3) regarding HTTP network connections
to a particular server and port, and all process creation and termination events (event IDs 1 and 5) for
conhost.exe.

nxlog.conf
 1 <Input in>
 2 Module im_msvistalog
 3 <QueryXML>
 4 <QueryList>
 5 <Query Id="0">
 6 <Select Path="Microsoft-Windows-Sysmon/Operational">*</Select>
 7 </Query>
 8 </QueryList>
 9 </QueryXML>
10 <Exec>
11 if ($EventID in (1, 5) and
12 $Image == "C:\\Windows\\System32\\conhost.exe") or
13 ($EventID == 3 and
14 $DestinationPort == 80 and
15 $DestinationIp == 10.0.0.1)
16 drop();
17 </Exec>
18 </Input>

Chapter 102. Ubiquiti UniFi
Ubiquiti UniFi is an enterprise solution for managing wireless networks. The UniFi infrastructure is managed by
the UniFi Controller, which can be configured to send logs to a remote Syslog server via UDP. As the central
management point, the Controller ensures that logs from all access points, including client authentication
messages, are sent to the Syslog server.

More information about configuring the UniFi Controller can be found in the corresponding user guide.

NOTE: The steps below have been tested with UniFi Controller v4 and should also work with other versions.

1. Configure NXLog for receiving Syslog log entries via UDP (see the examples below). Then restart NXLog.
2. Make sure the NXLog agent is accessible from the server with the Controller software.
3. Log in to the Controller’s web interface.
4. Go to Settings › Site.

5. Select Enable remote syslog server and specify the IP address and UDP port that the NXLog agent is
listening on. If necessary, also select Enable debug level syslog. Then click [ Apply ].

By default, the UniFi Controller sends a lot of low-level information, which may complicate field extraction if
additional intelligence is required. The Syslog level can be adjusted individually for each access point from the
Controller server by changing the syslog.level value in the system.cfg file. The location of this file varies
depending on the host operating system. If the Controller software is running on Windows, the file can be found
under C:\Ubiquiti UniFi\data\devices\uap\<AP_MAC_ADDRESS>.
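For instance, lowering an access point's Syslog verbosity could be done with an entry like the one below in system.cfg. The value shown is an assumption; the accepted range depends on the firmware version.

```
syslog.level=3
```

After editing the file, the access point may need to be re-provisioned for the change to take effect.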

Unfortunately, once configured with a remote Syslog address, the Controller only sends log messages that
originate from access points. The Controller’s own log is located on the server where it is installed. The location
of this file depends on the host operating system; on Windows, it can be found at C:\Ubiquiti
UniFi\logs\server.log. If needed, this file can be parsed with the im_file module.
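The Controller's own server.log can be collected with the im_file module. The sketch below assumes a default Windows installation path; the instance name is arbitrary.

```
<Input unifi_controller>
    Module  im_file
    File    "C:\Ubiquiti UniFi\logs\server.log"
</Input>
```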

Example 439. Collecting UniFi Logs From the Controller

This example shows UniFi logs as received and processed by NXLog.

nxlog.conf
 1 <Extension _syslog>
 2 Module xm_syslog
 3 </Extension>
 4
 5 <Extension _json>
 6 Module xm_json
 7 </Extension>
 8
 9 <Input in_syslog_udp>
10 Module im_udp
11 Host 0.0.0.0
12 Port 514
13 Exec parse_syslog();
14 </Input>
15
16 <Output file>
17 Module om_file
18 File "/var/log/unifi.log"
19 Exec to_json();
20 </Output>

Output Sample
{
  "MessageSourceAddress": "192.168.10.147",
  "EventReceivedTime": "2017-04-27 19:38:55",
  "SourceModuleName": "in_syslog_udp",
  "SourceModuleType": "im_udp",
  "SyslogFacilityValue": 3,
  "SyslogFacility": "DAEMON",
  "SyslogSeverityValue": 6,
  "SyslogSeverity": "INFO",
  "SeverityValue": 2,
  "Severity": "INFO",
  "Hostname": "192.168.10.147",
  "EventTime": "2017-04-27 19:40:44",
  "Message": "(\"U7P,0418d6809ce2,v3.7.11.5131\") hostapd: ath4: STA 34:02:86:45:8e:e0 IEEE
802.11: disassociated"
}

Example 440. Extracting Additional Fields

Additional fields can be extracted from the Syslog messages with a configuration like the one below.

nxlog.conf
 1 <Input in_syslog_udp>
 2 Module im_udp
 3 Host 0.0.0.0
 4 Port 514
 5 <Exec>
 6 parse_syslog();
 7 if $Message =~ / ([a-z]*): (.*)$/
 8 {
 9 $UFProcess = $1;
10 $UFMessage = $2;
11 if $UFMessage =~ /^([a-z0-9]*): (.*)$/
12 {
13 $UFSubsys = $1;
14 $UFMessage = $2;
15 if $UFMessage =~ /^STA (.*) ([A-Z0-9. ]*): (.*)$/
16 {
17 $UFMac = $1;
18 $UFProto = $2;
19 $UFMessage = $3;
20 }
21 }
22 }
23 </Exec>
24 </Input>

Output Sample
{
  "MessageSourceAddress": "192.168.10.149",
  "EventReceivedTime": "2017-05-01 20:30:13",
  "SourceModuleName": "in_syslog_udp",
  "SourceModuleType": "im_udp",
  "SyslogFacilityValue": 3,
  "SyslogFacility": "DAEMON",
  "SyslogSeverityValue": 6,
  "SyslogSeverity": "INFO",
  "SeverityValue": 2,
  "Severity": "INFO",
  "Hostname": "192.168.10.149",
  "EventTime": "2017-05-01 20:32:11",
  "Message": "(\"U7P,0418d6809b78,v3.7.11.5131\") hostapd: ath2: STA 80:19:34:97:62:a6 RADIUS:
stopped accounting session 5907CFDD-00000002",
  "UFProcess": "hostapd",
  "UFSubsys": "ath2",
  "UFMac": "80:19:34:97:62:a6",
  "UFProto": "RADIUS",
  "UFMessage": "stopped accounting session 5907CFDD-00000002"
}

Chapter 103. VMware vCenter
NXLog can be used to capture and process logs from VMware vCenter. This guide explains how to do this with
vCenter 5.5 installed on Windows Server 2008 R2.

vCenter logs can be processed in two ways.

• NXLog can be installed directly on the vCenter host machine and configured to collect all logs locally. This
method provides more feedback and more detailed logs, and is the recommended method. See Local
vCenter Logging.
• Alternatively, vCenter logs can be collected remotely using the vSphere Perl SDK. This option is less flexible,
but may be the only feasible option in some environments due to security restrictions. See Remote vCenter
Logging.

103.1. Local vCenter Logging


1. Install NXLog on the vCenter host machine.
2. Log in to the vCenter client.
3. Open Administration › vCenter Server Settings, select Logging Options from the list on the left, and set
vCenter Logging to Verbose (Verbose).

4. Click [ OK ] to save your changes. vCenter will now start writing detailed logs. The location of the logs
depends on the version of vCenter you are running.
◦ vCenter Server 5.x and earlier versions on Windows XP, 2000, and 2003:
%ALLUSERSPROFILE%\Application Data\VMware\VMware VirtualCenter\Logs\

◦ vCenter Server 5.x and earlier versions on Windows Vista, 7, and 2008:
C:\ProgramData\VMware\VMware VirtualCenter\Logs\

◦ vCenter Server 5.x Linux Virtual Appliance: /var/log/vmware/vpx/

◦ vCenter Server 5.x Linux Virtual Appliance UI: /var/log/vmware/vami/

NOTE: If vCenter is running under a specific user account, the logs may be located in the
profile directory of that user instead of %ALLUSERSPROFILE%.

5. Determine which log files you want to parse and collect.

Table 64. VMware vCenter Log Files (Source)

vpxd.log
    The main vCenter Server logs, consisting of all vSphere Client and WebServices connections, internal
    tasks and events, and communication with the vCenter Server Agent (vpxa) on managed ESX/ESXi hosts.

vpxd-profiler.log, profiler.log
    Profiled metrics for operations performed in vCenter Server. Used by the VPX Operational Dashboard
    (VOD) accessible at https://VCHost/vod/index.html.

vpxd-alert.log
    Non-fatal information logged about the vpxd process.

cim-diag.log and vws.log
    Common Information Model monitoring information, including communication between vCenter Server
    and managed hosts’ CIM interface.

drmdump (directory)
    Actions proposed and taken by VMware Distributed Resource Scheduler (DRS), grouped by the
    DRS-enabled cluster managed by vCenter Server. These logs are compressed.

ls.log
    Health reports for the Licensing Services extension, connectivity logs to vCenter Server.

vimtool.log
    Dump of strings used during the installation of vCenter Server with hashed information for DNS,
    username, and output for JDBC creation.

stats.log
    Provides information about the historical performance data collection from the ESXi/ESX hosts.

sms.log
    Health reports for the Storage Monitoring Service extension, connectivity logs to vCenter Server, the
    vCenter Server database, and the xDB for vCenter Inventory Service.

eam.log
    Health reports for the ESX Agent Monitor extension, connectivity logs to vCenter Server.

catalina.date.log and localhost.date.log
    Connectivity information and status of the VMware Webmanagement Services.

jointool.log
    Health status of the VMwareVCMSDS service and individual ADAM database objects, internal tasks and
    events, and replication logs between linked-mode vCenter Servers.

NOTE: The various log files use different formats. You must examine your chosen file to determine how to
parse its entries.

The main log file, vpxd.log, contains all login and management information. This file will be used as an
example. The file has the general format of timestamp [tag-1] [optional-tag-2] message, and the
message part might contain a multi-line trace.

vpxd.log Sample
2014-06-13T22:44:46.878-07:00 [04372 info 'Default' opID=DACDA564-00000004-7c] [Auth]: User
Administrator↵
2014-06-13T23:15:07.222-07:00 [04136 error 'vpxdvpxdMain'] [Vpxd::ServerApp::Init] Init failed:
VpxdVdb::Init(VpxdVdb::GetVcVdbInstId(), false, false, NULL)↵
--> Backtrace:↵
--> backtrace[00] rip 000000018018a8ca↵
--> backtrace[01] rip 0000000180102f28↵
--> backtrace[02] rip 000000018010423e↵
--> backtrace[03] rip 000000018008e00b↵
--> backtrace[04] rip 00000000003c5c2c↵
-->↵

6. Configure and restart NXLog.

Example 441. Collecting vCenter Logs Locally

In the configuration below, the xm_multiline extension module is used with the HeaderLine directive to
parse log entries even when they span multiple lines. An Exec directive is used to drop all empty lines. A
regular expression with matching groups adds fields to the event record from each log message, and the
resulting log entries are sent to another host via TCP in JSON format.

nxlog.conf
 1 <Extension vcenter>
 2 Module xm_multiline
 3 HeaderLine /(?x)(\d+-\d+-\d+T\d+:\d+:\d+).\d+-\d+:\d+\s+\[(.*?)\]\s+ \
 4 (?:\[(.*?)\]\s+)?(.*)/
 5 Exec if $raw_event =~ /^\s+$/ drop();
 6 </Extension>
 7
 8 <Extension _json>
 9 Module xm_json
10 </Extension>
11
12 <Input in>
13 Module im_file
14 File "C:\ProgramData\VMware\VMware VirtualCenter\Logs\vpxd*.log"
15 InputType vcenter
16 <Exec>
17 if $raw_event =~ /(?x)(\d+-\d+-\d+T\d+:\d+:\d+.\d+-\d+:\d+)\s+\[(.*?)\]\s+
18 (?:\[(.*?)\]\s+)?((.*\s*)*)/
19 {
20 $EventTime = parsedate($1);
21 $Tag1 = $2;
22 $Tag2 = $3;
23 $Message = $4;
24 }
25 </Exec>
26 </Input>
27
28 <Output out>
29 Module om_tcp
30 Host 192.168.1.1
31 Port 1514
32 Exec to_json();
33 </Output>

Output Sample
{
  "EventReceivedTime": "2017-04-29 13:46:49",
  "SourceModuleName": "vcenter_in1",
  "SourceModuleType": "im_file",
  "EventTime": "2014-06-14 07:44:46",
  "Tag1": "04372 info 'Default' opID=DACDA564-00000004-7c",
  "Tag2": "",
  "Message": "[Auth]: User Administrator"
}
{
  "EventReceivedTime": "2017-04-29 13:46:49",
  "SourceModuleName": "vcenter_in1",
  "SourceModuleType": "im_file",
  "EventTime": "2014-06-14 08:15:07",
  "Tag1": "04136 error 'vpxdvpxdMain'",
  "Tag2": "Vpxd::ServerApp::Init",
  "Message": "Init failed: VpxdVdb::Init(VpxdVdb::GetVcVdbInstId(), false, false, NULL)\n-->
Backtrace:\n--> backtrace[00] rip 000000018018a8ca\n--> backtrace[01] rip 0000000180102f28\n-->
backtrace[02] rip 000000018010423e\n--> backtrace[03] rip 000000018008e00b\n--> backtrace[04]
rip 00000000003c5c2c\n-->\n"
}

103.2. Remote vCenter Logging


This method of capturing vCenter logs uses a Perl script with the vSphere SDK. The script periodically connects to
the vCenter server and retrieves logs.

1. Download and install the latest Perl runtime and the vSphere SDK for Perl. For Windows, the vSphere CLI is
recommended instead, because it includes the required Perl runtime environment and VIperl libraries.
2. The script will use a timestamp file to store the timestamp of the most recently downloaded log entry. The
timestamp ensures that even if the vCenter server is restarted, NXLog can correctly resume log collection.
The timestamp file will be created automatically. However, to specify a timestamp manually, create a file with
a timestamp in yyyy-mm-ddThh:mm format (for example, 2017-01-19T18:00). Then use the -r option to
specify the location of the timestamp file. Any logs with earlier timestamps will be skipped.
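For example, the following command seeds a timestamp file so that collection starts from 6:00 PM on January 19, 2017. The file name is arbitrary and must then be passed to the script with the -r option.

```
> echo 2017-01-19T18:00 > timestamp.txt
```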
3. To test the vcenter.pl script with the vCenter host, run the script as shown below. Substitute the correct
server IP address and credentials for the vCenter server. The -t argument is optional and can be used to
adjust the time between polls (the default of 60 seconds is the minimum recommended). The -r argument is
also optional and can be used to specify a custom location for the timestamp file. Events such as connection
or authentication errors are logged to standard output.

$ perl vcenter.pl -s=serverip -u=username -p=password -t=pollinterval \
    -r=timestampfile

NOTE: Because the script connects to vCenter remotely, we recommend setting up a dedicated vCenter user
as a security measure.

Example 442. Collecting vCenter Logs Remotely

This configuration uses the im_exec module to run the Perl script and accept logs from its standard output.
The xm_json module is used to parse the JSON event data. The $EventTime field is converted to a datetime
value.

nxlog.conf
 1 <Extension _json>
 2 Module xm_json
 3 </Extension>
 4
 5 <Input in>
 6 Module im_exec
 7 # For users who have the VMware CLI installed:
 8 Command "C:\Program Files (x86)\VMware\VMware vSphere CLI\Perl\bin\perl.exe"
 9 # For Linux and regular Perl users this would be sufficient:
10 #Command perl
11 Arg "C:\scripts\vcenter.pl"
12 Arg -u
13 Arg <username>
14 Arg -p
15 Arg <password>
16 Arg -s
17 Arg <server_ip_addr>
18 <Exec>
19 # Parse JSON into fields for later processing if required
20 parse_json();
21
22 # Parse EventTime field as timestamp
23 $EventTime = parsedate($EventTime);
24 </Exec>
25 </Input>

Event Samples
{
  "EventTime": "2014-06-20T18:00:00.163Z",
  "Message": "User Administrator@192.168.71.1 logged in as VI Perl",
  "UserName": "Administrator"
}
{
  "EventTime": "2014-06-20T10:56:23",
  "Message": "Error: Cannot complete login due to an incorrect user name or password.",
  "UserName": "Administrator"
}

vcenter.pl (truncated)
#!/usr/bin/perl -w
use Encode;
use VMware::VIRuntime;
use VMware::VILib;
use Getopt::Long;
use IO::File;
use POSIX;

my $startTime;
my $stopTime;
my $server;
my $sleepTime = 60;
my $userName;
my $passWord;
my $timeStamp;
my $timeStampFile = "timestamp.txt";
my $timeNow;
[...]

Chapter 104. Windows AppLocker
Windows AppLocker allows administrators to create rules restricting which executables, scripts, and other files
users are allowed to run. For more information, see What Is AppLocker? on Microsoft Docs.

AppLocker logs events to the Windows Event Log. There are four logs available, shown in the Event Viewer under
Applications and Services Logs > Microsoft > Windows > AppLocker:

• EXE and DLL


• MSI and Script
• Packaged app-Deployment
• Packaged app-Execution

NXLog can collect these events with the im_msvistalog module or other Windows Event Log modules.

Example 443. Collecting AppLocker Logs From the Event Log

The following configuration uses the im_msvistalog module to collect AppLocker events from the four
EventLog logs listed above. The xm_xml parse_xml() procedure is used to further parse the UserData XML
portion of the event.

nxlog.conf
 1 <Extension _xml>
 2 Module xm_xml
 3 </Extension>
 4
 5 <Input in>
 6 Module im_msvistalog
 7 <QueryXML>
 8 <QueryList>
 9 <Query Id="0">
10 <Select Path="Microsoft-Windows-AppLocker/MSI and Script">
11 *</Select>
12 <Select Path="Microsoft-Windows-AppLocker/EXE and DLL">
13 *</Select>
14 <Select Path="Microsoft-Windows-AppLocker/Packaged app-Deployment">
15 *</Select>
16 <Select Path="Microsoft-Windows-AppLocker/Packaged app-Execution">
17 *</Select>
18 </Query>
19 </QueryList>
20 </QueryXML>
21 Exec if defined($UserData) parse_xml($UserData);
22 </Input>

Output Sample
{
  "EventTime": "2019-01-09T22:34:44.164099+01:00",
  "Hostname": "Host.DOMAIN.local",
  "Keywords": "9223372036854775808",
  "EventType": "ERROR",
  "SeverityValue": 4,
  "Severity": "ERROR",
  "EventID": 8004,
  "SourceName": "Microsoft-Windows-AppLocker",
  "ProviderGuid": "{CBDA4DBF-8D5D-4F69-9578-BE14AA540D22}",
  "Version": 0,
  "TaskValue": 0,
  "OpcodeValue": 0,
  "RecordNumber": 40,
  "ExecutionProcessID": 5612,
  "ExecutionThreadID": 5220,
  "Channel": "Microsoft-Windows-AppLocker/EXE and DLL",
  "Domain": "DOMAIN",
  "AccountName": "admin",
  "UserID": "S-1-5-21-314323950-2314161084-4234690932-1002",
  "AccountType": "User",
  "Message": "%PROGRAMFILES%\\WINDOWS NT\\ACCESSORIES\\WORDPAD.EXE was prevented from
running.",
  "Opcode": "Info",
  "UserData": "<RuleAndFileData
xmlns='http://schemas.microsoft.com/schemas/event/Microsoft.Windows/1.0.0.0'><PolicyNameLength>
3</PolicyNameLength><PolicyName>EXE</PolicyName><RuleId>{4C8E638D-3DE8-4DCB-B0E4-
B0597074D06B}</RuleId><RuleNameLength>113</RuleNameLength><RuleName>WORDPAD.EXE, in MICROSOFT®
WINDOWS® OPERATING SYSTEM, from O=MICROSOFT CORPORATION, L=REDMOND, S=WASHINGTON,
C=US</RuleName><RuleSddlLength>179</RuleSddlLength><RuleSddl>D:(XD;;FX;;;S-1-1-0;((Exists
APPID://FQBN) &amp;&amp; ((APPID://FQBN) &gt;= ({\"O=MICROSOFT CORPORATION, L=REDMOND,
S=WASHINGTON, C=US\\MICROSOFT® WINDOWS® OPERATING SYSTEM\\WORDPAD.EXE
\",0}))))</RuleSddl><TargetUser>S-1-5-21-314323950-2314161084-4234690932-
1002</TargetUser><TargetProcessId>7964</TargetProcessId><FilePathLength>49</FilePathLength><Fil
ePath>%PROGRAMFILES%\\WINDOWS NT\\ACCESSORIES
\\WORDPAD.EXE</FilePath><FileHashLength>0</FileHashLength><FileHash></FileHash><FqbnLength>118<
/FqbnLength><Fqbn>O=MICROSOFT CORPORATION, L=REDMOND, S=WASHINGTON, C=US\\MICROSOFT® WINDOWS®
OPERATING SYSTEM\\WORDPAD.EXE\\6.3.9600.19060</Fqbn></RuleAndFileData>",
  "EventReceivedTime": "2019-01-09T22:34:45.773240+01:00",
  "SourceModuleName": "in",
  "SourceModuleType": "im_msvistalog",
  "RuleAndFileData.PolicyNameLength": "3",
  "RuleAndFileData.PolicyName": "EXE",
  "RuleAndFileData.RuleId": "{4C8E638D-3DE8-4DCB-B0E4-B0597074D06B}",
  "RuleAndFileData.RuleNameLength": "113",
  "RuleAndFileData.RuleName": "WORDPAD.EXE, in MICROSOFT® WINDOWS® OPERATING SYSTEM, from
O=MICROSOFT CORPORATION, L=REDMOND, S=WASHINGTON, C=US",
  "RuleAndFileData.RuleSddlLength": "179",
  "RuleAndFileData.RuleSddl": "D:(XD;;FX;;;S-1-1-0;((Exists APPID://FQBN) && ((APPID://FQBN) >=
({\"O=MICROSOFT CORPORATION, L=REDMOND, S=WASHINGTON, C=US\\MICROSOFT® WINDOWS® OPERATING
SYSTEM\\WORDPAD.EXE\",0}))))",
  "RuleAndFileData.TargetUser": "S-1-5-21-314323950-2314161084-4234690932-1002",
  "RuleAndFileData.TargetProcessId": "7964",
  "RuleAndFileData.FilePathLength": "49",
  "RuleAndFileData.FilePath": "%PROGRAMFILES%\\WINDOWS NT\\ACCESSORIES\\WORDPAD.EXE",
  "RuleAndFileData.FileHashLength": "0",
  "RuleAndFileData.FqbnLength": "118",
  "RuleAndFileData.Fqbn": "O=MICROSOFT CORPORATION, L=REDMOND, S=WASHINGTON, C=US\\MICROSOFT®
WINDOWS® OPERATING SYSTEM\\WORDPAD.EXE\\6.3.9600.19060"
}

Chapter 105. Windows Command Line Auditing
Command line auditing means monitoring the events named A new process has been created on Windows
operating systems. Each such event involves the following:

• Creator process — the process that runs the command line to create another process
• New process — the process created by the creator process
• Process command line — the path to the new process file and the parameters needed to run the new
process

This monitoring feature is available starting with Windows Server 2012 R2; see the Command line process
auditing section on the Microsoft website. For more information about security, see also the Security Monitoring
Recommendations on the Microsoft website.

NXLog can be configured to collect and parse command line auditing logs.

NOTE: Monitoring of process creation with the command line is also available via Sysmon, although the
native command line auditing solution may be preferable since it does not require installing any third-party
software.

Command line process auditing writes events to the Windows Event Log, which can be monitored by
capturing event entries with Event ID 4688.

105.1. Enabling Command Line Auditing


For various security and usability reasons, command line auditing is disabled by default. Follow the steps below
to enable it.

1. Open the Group Policy MMC snapin (gpedit.msc).

2. To enable audit process creation, go to Computer Configuration > Windows Settings > Security Settings >
Advanced Audit Policy Configuration > System Audit Policies > Detailed Tracking and open the Audit
Process Creation setting, then check the Configure the following audit events and Success checkboxes.

3. To enable command line process creation, go to Computer Configuration > Administrative Templates >
System > Audit Process Creation, click the Include command line in process creation event setting, then
select the Enabled radio button.

4. Reboot the operating system.

For more information about enabling command line auditing, see the How to Determine What Just Ran on
Windows Console article on the Microsoft website.
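On hosts where editing Group Policy is impractical, the Audit Process Creation subcategory can also be enabled from an elevated command prompt with the auditpol utility:

```
> auditpol /set /subcategory:"Process Creation" /success:enable
```

Note that the Include command line in process creation event policy (step 3 above) still has to be enabled separately.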

Example 444. Collecting Command Line Auditing Events

The configuration below demonstrates how to collect Windows Event Log entries with ID 4688 from the
Security channel to log the activity of the C:\Windows\System32\ftp.exe application. First, it drops entries
without the ftp.exe substring in the NewProcessName field. After that, the Message field is deleted from the
selected entries to make the example output shorter. Finally, the logs are converted to JSON format.

nxlog.conf
 1 <Extension json>
 2 Module xm_json
 3 </Extension>
 4
 5 <Input from_eventlog>
 6 Module im_msvistalog
 7 <QueryXML>
 8 <QueryList>
 9 <Query Id="0">
10 <Select Path="Security">
11 *[System[Level=0 and (EventID=4688)]]
12 </Select>
13 </Query>
14 </QueryList>
15 </QueryXML>
16 <Exec>
17 if not ($NewProcessName =~ /.*ftp.exe/) drop();
18 delete($Message);
19 json->to_json();
20 </Exec>
21 </Input>

Output Sample
{
  "EventTime": "2020-04-18T16:26:48.737490+03:00",
  "Hostname": "WIN-IVR26CIVSF6",
  "Keywords": "9232379236109516800",
  "EventType": "AUDIT_SUCCESS",
  "SeverityValue": 2,
  "Severity": "INFO",
  "EventID": 4688,
  "SourceName": "Microsoft-Windows-Security-Auditing",
  "ProviderGuid": "{54849625-5478-4994-A5BA-3E3B0328C30D}",
  "Version": 2,
  "TaskValue": 13312,
  "OpcodeValue": 0,
  "RecordNumber": 18112,
  "ExecutionProcessID": 4,
  "ExecutionThreadID": 3720,
  "Channel": "Security",
  "Category": "Process Creation",
  "Opcode": "Info",
  "SubjectUserSid": "S-1-5-21-2751412651-3826291291-1936999150-500",
  "SubjectUserName": "Administrator",
  "SubjectDomainName": "WIN-IVR26CIVSF6",
  "SubjectLogonId": "0x23d19",
  "NewProcessId": "0xa24",
  "NewProcessName": "C:\\Windows\\System32\\ftp.exe",
  "TokenElevationType": "%%1936",
  "ProcessId": "0x2a8",
  "CommandLine": "ftp -s:ftp.txt",
  "TargetUserSid": "S-1-0-0",
  "TargetUserName": "-",
  "TargetDomainName": "-",
  "TargetLogonId": "0x0",
  "ParentProcessName": "C:\\Windows\\System32\\cmd.exe",
  "MandatoryLabel": "S-1-16-12288",
  "EventReceivedTime": "2020-04-18T16:26:50.674636+03:00",
  "SourceModuleName": "from_eventlog",
  "SourceModuleType": "im_msvistalog"
}

Chapter 106. Windows Event Log
This section discusses the various details of Windows Event Logs.

106.1. About Windows Event Log


Windows Event Log captures the details of both system and application events. When such an event occurs,
Windows records it in the event log. The event log is then used to find details about the event and can be helpful
when troubleshooting problems. Besides their use for IT-related purposes, Windows Event Logs are also used to
satisfy compliance mandates.

Unlike other event logs, such as the UNIX Syslog, Windows Event Log is not stored as a plain text file, but in a
proprietary binary format. It is not possible to view Windows Event Log in a text editor, nor is it possible to send
it as a Syslog event while retaining its original format. However, the raw event data can be translated into XML
using the Windows Event Log API and forwarded in that format.

106.1.1. The EVTX File Format


Windows stores Windows Event Log files in the EVTX file format since the release of Windows Vista and Windows
Server 2008. Prior to that, event log files were stored in the EVT file format. Both are proprietary formats
readable by the Microsoft Management Console (MMC) snap-in eventvwr.msc.

The EVTX format includes many new features and enhancements: a number of new event properties, the use of
channels to publish events, a new Event Viewer, a rewritten Windows Event Log service, and support for the
Extensible Markup Language (XML) format. From a log processing perspective, the added support for XML is the
most important addition, as it provides the possibility to share or further process the event data in a structured
format.

For the built-in channels, Windows automatically saves the corresponding EVTX file into the
C:\Windows\System32\winevt\Logs\ directory. Events can also be saved manually from the Event Viewer MMC
snap-in, in four different formats: EVTX, XML, TXT, and CSV.

NXLog can directly read EVTX and EVT files using the im_msvistalog File directive. In addition, the
CaptureEventXML directive of the same module can be used to store and send raw XML-formatted event data in
the $EventXML field.
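When reading a saved log file, the File directive and CaptureEventXML can be combined. The following sketch is illustrative only; it assumes a copy of the Security log has been saved to C:\Logs\Security.evtx, and that CaptureEventXML accepts a boolean value:

```
<Input evtx_file>
    Module           im_msvistalog
    File             'C:\Logs\Security.evtx'
    CaptureEventXML  TRUE
</Input>
```

With CaptureEventXML enabled, the raw XML of each record is then available in the $EventXML field for further processing.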

106.1.2. Viewing the Windows Event Log


The Windows Event Log can be viewed with the Event Viewer MMC snap-in included in Windows. Events are
stored in a binary "source" (or "on-disk") format that contains only the event properties, not the full message.
When an event is rendered, the property values are inserted into a localized message template stored elsewhere
on disk.

The Event Viewer includes three views for displaying the data for a selected event. These are shown on the
preview pane or in the Event Properties window when an event is opened.

• The General view is shown by default. It includes the full message rendered from the template and the
"System" set of key/value pairs.
• The Friendly View is available on the Details tab. It shows a hierarchical view of the System properties and
additional EventData properties defined by the event provider. It does not show a rendered message.
• The XML View can be selected under the Details tab. It shows the event properties in XML format. It does
not show a rendered message.

A Windows Event Log event in XML format
<Event xmlns="http://schemas.microsoft.com/win/2004/08/events/event">
  <System>
  <Provider Name="Microsoft-Windows-Security-Auditing"
  Guid="{54849625-5478-4994-A5BA-3E3B0328C30D}" />
  <EventID>4624</EventID>
  [...]
  <Channel>Security</Channel>
  <Computer>USER-WORKSTATION</Computer>
  <Security />
  </System>
  <EventData>
  <Data Name="SubjectUserSid">S-1-5-18</Data>
  [...]
  </EventData>
</Event>

Events can be accessed through the Event Log API (see Windows Event Log Functions on Microsoft Docs). In
particular:

• EvtQuery() fetches events from a given channel or log file that match a given query—see Querying for
Events.
• EvtFormatMessage() generates a message string for an event using the event properties and the localized
message template—see Formatting Event Messages.

106.1.3. Event Channels


The EVTX format introduces event channels. A channel is a stream of events that collects events from a publisher
and writes them to an event log file.

Channels are organized into two groups:

• The Windows Logs group contains a set of exactly five channels, which are used for Windows system events.
• The Applications and Services Logs group contains channels created for individual applications or
components. These channels are further organized in a folder hierarchy.

There are two channel types indicating how the events are handled:

• Serviced channels offer relatively low volume, reliable delivery of events. Events in these channels may be
forwarded to another system, and these channels may be subscribed to.
• Direct channels are for high-performance collection of events. It is not possible to subscribe to a direct
channel. By default, these channels are disabled. To see these channels in the Event Viewer, check Show
Analytic and Debug Logs in the View menu. To enable logging for one of these channels, select the channel,
open the Action menu, click Properties, and check Enable logging on the General tab.

Each of the above is subdivided into two more channel types according to the intended audience for the
events collected by that channel:

• Administrative channels collect events for end users, administrators, and support. This is a serviced
channel type.
• Operational channels collect events used for diagnosing problems. This is a serviced channel type.
• Analytic channels are for events that describe program operation. These channels often collect a high
volume of events. This is a direct channel type.
• Debug channels are intended to be used by developers only. This is a direct channel type.

Table 65. Channel Groups and Types

Channel Group                    Channel                    Channel Type
Windows Logs                     Application                Administrative (serviced)
                                 Security                   Administrative (serviced)
                                 Setup                      Operational (serviced)
                                 System                     Administrative (serviced)
                                 Forwarded Events           Operational (serviced)
Applications and Services Logs   DHCP-Server/Admin          Administrative (serviced)
                                 DHCP-Server/AuditLogs      Analytic (direct)
                                 DHCP-Server/DebugLogs      Debug (direct)
                                 (and many more publisher-defined channels)

The im_msvistalog module can be configured to collect events from a specific channel with the Channel directive.
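As a minimal sketch, an input instance that collects every event written to the built-in Security channel could look like this:

```
<Input security>
    Module   im_msvistalog
    Channel  Security
</Input>
```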

For more information about event channels, see these two pages on Microsoft Docs: Event Logs and Event Logs
and Channels in Windows Event Log.

106.1.4. Providers
Event log providers write events to event logs. An event log provider can be a service, driver, or program that runs
on the computer and has the necessary instrumentation to write to the event log.

Event providers are categorized into four main types.

• Managed Object Format (MOF) providers (also referred to as "classic")
• Windows Software Trace Preprocessor (WPP) providers
• Manifest-based providers
• TraceLogging providers

For more information on providers, see the Providers section in the Microsoft Windows documentation.

106.2. Collecting Event Log Data


This section lists and discusses the NXLog modules that can be used to collect Windows Event Log data.

106.2.1. NXLog Modules for Windows Event Log


NXLog provides the following modules for capturing Windows Event Log data.

• The im_msvistalog module is available on Windows only, and captures event log data from Windows
2008/Vista and later. It can be configured to collect event log data from the local system or from a remote
system via MSRPC (MSRPC is supported by NXLog Enterprise Edition only). See Local Collection With
im_msvistalog and Remote Collection With im_msvistalog.
• The im_wseventing module is available on both Linux and Windows (NXLog Enterprise Edition only). With it,
event log data can be received from remote Windows systems using Windows Event Forwarding. This is the
recommended module for most cases where remote capturing is required, because it is not necessary to
specify each host that EventLog data will be captured from. See Remote Collection With im_wseventing.
• The im_mseventlog module is available on Windows only, and captures event log data locally from Windows
XP, Windows 2000, and Windows 2003. See Local Collection With im_mseventlog.

106.2.2. Local Collection With im_msvistalog
The im_msvistalog module can capture EventLog data from the local system running Windows 2008/Vista or
later.

NOTE Because the Windows Event Log subsystem does not support subscriptions to the Debug and Analytic
channels, these types of events cannot be collected with the im_msvistalog module.

Example 445. Collecting EventLog Locally From Windows 2008/Vista or Later

In this example, NXLog reads all events from the local Windows EventLog. The data is converted to JSON
format and written to a local file.

nxlog.conf
<Extension _json>
    Module  xm_json
</Extension>

<Input eventlog>
    Module  im_msvistalog
</Input>

<Output file>
    Module  om_file
    File    'C:\test\sysmon.json'
    Exec    to_json();
</Output>

For information about filtering events, particularly when using im_msvistalog, see Filtering Events.

106.2.3. Remote Collection With im_msvistalog


NXLog Enterprise Edition can be configured with the im_msvistalog module for collection of events generated on
remote Windows systems. In this mode, it is not necessary to run an NXLog agent on the Windows systems.
Instead, MSRPC is used to receive the events.

NOTE Because the Windows EventLog subsystem does not support subscriptions to the Debug and Analytic
channels, these types of events cannot be collected with the im_msvistalog module.

Example 446. Receiving EventLog Data over MSRPC

In this example configuration, the im_msvistalog module is used to get events from a remote server named
mywindowsbox using MSRPC.

To replicate this example in your environment, modify the RemoteServer, RemoteUser, RemoteDomain, and
RemotePassword directives to reflect the access credentials for the target machine.

nxlog.conf
<Input in>
    Module          im_msvistalog
    <QueryXML>
        <QueryList>
            <Query Id='1'>
                <Select Path='Application'>*</Select>
                <Select Path='Security'>*[System/Level=4]</Select>
                <Select Path='System'>*</Select>
            </Query>
        </QueryList>
    </QueryXML>
    RemoteServer    mywindowsbox
    RemoteUser      Administrator
    RemoteDomain    Workgroup
    RemotePassword  secret
</Input>

106.2.4. Local Collection With im_mseventlog


The im_mseventlog module can capture EventLog data from Windows XP, Windows 2000, and Windows 2003.

The module looks up the available EventLog sources stored under the registry key
SYSTEM\CurrentControlSet\Services\Eventlog and polls logs from each of them, or only from the sources
defined with the Sources directive in the NXLog configuration.
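For instance, collection could be limited to a subset of sources like this (a sketch; the source names must match those present under the registry key above):

```
<Input eventlog>
    Module   im_mseventlog
    Sources  Security, System, Application
</Input>
```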

Example 447. Forwarding EventLog Data from Windows to a Remote Host

This example shows the most basic configuration of the im_mseventlog module. This configuration
forwards all EventLog sources listed in the Windows registry over the network to a remote NXLog instance
at the IP address 192.168.1.1.

nxlog.conf
<Input eventlog>
    Module  im_mseventlog
</Input>

<Output tcp>
    Module  om_tcp
    Host    192.168.1.1
    Port    514
</Output>

106.2.5. Remote Collection With im_wseventing


NXLog Enterprise Edition can use the im_wseventing module to receive Windows EventLog data from remote
machines over WEF (Windows Event Forwarding). It works on both Windows and Linux hosts.

Example 448. Receiving Windows EventLog Data using the im_wseventing Module

This configuration listens on port 5985 for connections from all source computers. It also configures an HTTPS
certificate to secure the transfer of EventLog data.

nxlog.conf
<Input in>
    Module            im_wseventing
    ListenAddr        0.0.0.0
    Port              5985
    Address           https://linux.corp.domain.com:5985/wsman
    HTTPSCertFile     %CERTDIR%/server-cert.pem
    HTTPSCertKeyFile  %CERTDIR%/server-key.pem
    HTTPSCAFile       %CERTDIR%/ca.pem
    <QueryXML>
        <QueryList>
            <Query Id="0" Path="Application">
                <Select Path="Application">*</Select>
                <Select Path="Microsoft-Windows-Winsock-AFD/Operational">*</Select>
                <Select Path="Microsoft-Windows-Wired-AutoConfig/Operational">*</Select>
                <Select Path="Microsoft-Windows-Wordpad/Admin">*</Select>
                <Select Path="Windows PowerShell">*</Select>
            </Query>
        </QueryList>
    </QueryXML>
</Input>

A query for specific hosts can be set by adding an additional QueryXML block with a <Computer> tag. This
tag contains a pattern that NXLog matches against the name of the connecting Windows client. Computer
names not matching the pattern will use the default QueryXML block (containing no <Computer> tag). The
following QueryXML block, if added to the above configuration, would provide an alternate query for
computer names matching the pattern foo*.

nxlog.conf
<QueryXML>
    <QueryList>
        <Computer>foo*</Computer>
        <Query Id="0" Path="Application">
            <Select Path="Application">*</Select>
        </Query>
    </QueryList>
</QueryXML>

106.3. Filtering Events


Systems and services on Windows can generate a large volume of logs, and it is often necessary to collect only a
certain portion of those events. There are several ways to implement filtering of events from the Windows Event
Log when using the im_msvistalog module.

• A specific channel can be specified with the Channel directive to collect all the events written to a single
channel.
• An XPath query can be given with the QueryXML block (or Query directive). The specified query is then used
to subscribe to events. An XPath query can be used to subscribe to multiple channels and/or limit events by
various attributes. However, XPath queries have a maximum length, limiting the possibilities for detailed
event subscriptions. See XPath Filtering below.

• A log file can be read by setting the File directive, in which case im_msvistalog will read all events from the
file (for example, Security.evtx). This is intended primarily for forensics purposes, such as with
nxlog-processor.
• After being read from the source, events can be matched in an Exec block and selectively discarded with
the drop() procedure.

Subscribing to a restricted set of events with an XPath query can offer a performance advantage, because
events excluded by the query are never delivered to NXLog. However, XPath queries have a maximum length
and limited filtering capabilities, so in some cases it is necessary to combine an XPath query with Exec block
filtering in an im_msvistalog configuration. For examples, see Event IDs to Monitor.

106.3.1. XPath Filtering


XPath queries can be used to subscribe to events matching certain criteria, both in the Event Viewer and with the
im_msvistalog QueryXML directive. Windows Event Log supports a subset of XPath 1.0. For more information, see
Consuming Events on Microsoft Docs.

The Event Viewer offers the most practical way to write and test query strings. An XPath query can be generated
and/or tested by filtering the current log or creating a custom view.

1. In the Event Viewer, click an event channel to open it, then right-click the channel and choose Filter Current
Log from the context menu. Or, click Create Custom View in the context menu. Either way, a dialog box will
open and options for basic filtering will be shown in the Filter tab.

2. Specify the desired criteria. The corresponding XPath query on the XML tab will be updated automatically.
3. To view the query string, switch to the XML tab. This string can be copied into the im_msvistalog QueryXML
directive.
4. If required, advanced filtering can be done by selecting the Edit query manually checkbox and editing the
query. The query can then be tested to be sure it matches the correct events and finally copied to the NXLog
configuration with the QueryXML block.

Figure 5. A Custom View Querying the Application Channel for Events With ID 1008

Sometimes it is helpful to use a query that includes sources which may not be available on every system. In this
case, set the TolerateQueryErrors directive to TRUE to ensure that the module continues to collect logs.
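A sketch of such a query follows; the Sysmon channel is used here only as an example of a source that may be absent on some hosts:

```
<Input in>
    Module               im_msvistalog
    TolerateQueryErrors  TRUE
    <QueryXML>
        <QueryList>
            <Query Id="0">
                <Select Path="Application">*</Select>
                <Select Path="Microsoft-Windows-Sysmon/Operational">*</Select>
            </Query>
        </QueryList>
    </QueryXML>
</Input>
```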

Example 449. Collecting Operational Events Only

Here, NXLog queries the local Windows EventLog for operational events only.

nxlog.conf
<Input in>
    Module  im_msvistalog
    <QueryXML>
        <QueryList>
            <Query Id="0">
                <Select Path="Microsoft-Windows-Sysmon/Operational">*</Select>
            </Query>
        </QueryList>
    </QueryXML>
</Input>

Example 450. Collecting Important System Events

This query collects System channel events with levels below 4 (Critical, Error, and Warning).

nxlog.conf
<Input in>
    Module  im_msvistalog
    <QueryXML>
        <QueryList>
            <Query Id="0">
                <Select Path='System'>*[System/Level&lt;4]</Select>
            </Query>
        </QueryList>
    </QueryXML>
</Input>

106.3.2. Exec Block Filtering
NXLog’s built-in filtering capabilities can also be used to filter events, by matching events and using the drop()
procedure. Events can be matched against any of the im_msvistalog fields.

Example 451. Filtering Sysmon Events in an Exec Block

This example discards all Sysmon network connection events (event ID 3) regarding HTTP network
connections to a particular server and port, and all process creation and termination events (event IDs 1
and 5) for conhost.exe.

nxlog.conf
<Input in>
    Module  im_msvistalog
    <QueryXML>
        <QueryList>
            <Query Id="0">
                <Select Path="Microsoft-Windows-Sysmon/Operational">*</Select>
            </Query>
        </QueryList>
    </QueryXML>
    <Exec>
        if ($EventID in (1, 5) and
            $Image == "C:\\Windows\\System32\\conhost.exe") or
           ($EventID == 3 and
            $DestinationPort == 80 and
            $DestinationIp == 10.0.0.1)
            drop();
    </Exec>
</Input>

106.4. Event IDs to Monitor


When it comes to Windows log collection, one of the most challenging tasks of a system administrator is deciding
which event IDs to monitor. Due to the large number of event IDs in use, this can be daunting at first sight.
Therefore, this section aims to provide guidance about selecting event IDs to monitor, with some example
configurations.

NOTE Event IDs are unique per source but are not globally unique. The same event ID may be used by
different sources to identify unrelated occurrences.
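When the same event ID is used by several sources, a query can be narrowed to a particular provider. The following fragment is a sketch; the provider name and event ID are illustrative only:

```
<QueryXML>
    <QueryList>
        <Query Id="0">
            <Select Path="System">
                *[System[Provider[@Name='Service Control Manager'] and EventID=7036]]
            </Select>
        </Query>
    </QueryList>
</QueryXML>
```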

106.4.1. Finding the Right Event IDs


An excellent general source to start with is the Windows 10 and Windows Server 2016 security auditing and
monitoring reference, which provides detailed descriptions of the event IDs used by security audit policies. The
following additional resources can also help with finding events to monitor:

• The Microsoft Events and Errors page on Microsoft Docs provides a directory of events grouped by area.
Start by navigating through the areas listed in the Available Documentation section.
• Palantir has published a Windows Event Forwarding Guidance repository, which contains a comprehensive WEF
Event Mappings table with categorized event IDs and details.
• The NSA Spotting the Adversary with Windows Event Log Monitoring paper provides event IDs for security
monitoring. See the example configuration here.
• The JPCERT/CC Detecting Lateral Movements Tool Analysis resource provides a collection of event codes that are
observed to indicate lateral movements. See the example configuration here.

• See the NXLog User Guide on Active Directory Domain Services for a list and configuration sample of security
event IDs relevant to Active Directory.

The table below displays a small sample of important events to monitor in the Windows Server Security Log for a
local server. See the Security-focused Event IDs to Monitor section for the configuration file holding these event
IDs.

Table 66. Example List of Security-focused Event IDs to Monitor

Event ID  Description
1102      The audit log was cleared.
4719      System audit policy was changed.
4704      A user right was assigned.
4717      System security access was granted to an account.
4738      A user account was changed.
4798      A user’s local group membership was enumerated.
4705      A user right was removed.
4674      An operation was attempted on a privileged object.
4732      A member was added to a security-enabled local group.
4697      A service was installed in the system.
4625      An account failed to log on.
4648      A logon was attempted using explicit credentials.
4723      An attempt was made to change an account’s password.
4946      A change has been made to the Windows Firewall exception list. A rule was added.
4950      A Windows Firewall setting has changed.
6416      A new external device was recognized by the system.
6424      The installation of this device was allowed, after having previously been forbidden by policy.

106.4.2. Example Monitoring Configurations


Once a set of event IDs has been selected for monitoring, the im_msvistalog module can be configured.

NOTE The example configurations in this section are likely to require further modifications to suit each
individual deployment.

NOTE Due to a bug or limitation of the Windows Event Log API, 23 or more clauses in a query will result in a
failure with the following error message: ERROR failed to subscribe to msvistalog events, the Query
is invalid: This operator is unsupported by this implementation of the filter.; [error code:
15001]

NOTE Event IDs in a query apply globally to all providers selected by the XPath expression, so events matching
these IDs will be collected from any of the selected providers. Tweak your dashboard or alerting system to
ensure that event IDs are associated with the correct providers.

Example 452. Basic Configuration Example of Security-focused Event IDs to Monitor

This configuration provides a basic example of Windows Security events to monitor. Since only a small
number of IDs are presented, this configuration explicitly provides the actual event IDs to be collected.

nxlog.conf
<Input MonitorWindowsSecurityEvents>
    Module  im_msvistalog
    <QueryXML>
        <QueryList>
            <Query Id="0">
                <Select Path="Security">*[System[(Level=1 or Level=2 or Level=3 or Level=4 or Level=0) and (EventID=1102 or EventID=4719 or EventID=4704 or EventID=4717 or EventID=4738 or EventID=4798 or EventID=4705 or EventID=4674 or EventID=4697 or EventID=4648 or EventID=4723 or EventID=4946 or EventID=4950 or EventID=6416 or EventID=6424 or EventID=4732)]]</Select>
            </Query>
        </QueryList>
    </QueryXML>
</Input>

Example 453. Extended Configuration Example of Security-focused Event IDs to Monitor

This extended configuration provides a much wider scope of log collection. With this approach, the event IDs
are first defined in groups of related events. The corresponding paths are then added to the QueryXML block in
bulk, and the Exec block filters for the defined event IDs, but only within the specified paths, dropping events
whose IDs are not defined.

nxlog.conf (truncated)
# define Account Usage Events
define AccountUsage 4740, 4648, 4781, 4733, 1518, 4776, 5376, 5377, \
                    4625, 300, 4634, 4672, 4720, 4722, 4782, 4793, \
                    4731, 4735, 4766, 4765, 4624, 1511, 4726, 4725, \
                    4767, 4728, 4732, 4756, 4704

# define Application Crash Events
define AppCrashes 1000, 1002, 1001

# define Application Whitelisting Events
define AppWhitelisting 8023, 8020, 8002, 8003, 8004, 8006, 8007, 4688, \
                       4689, 8005, 865, 866, 867, 868, 882

# define Boot Events
define BootEvents 13, 12

# define Certificate Services Events
define CertServices 95, 4886, 4890, 4874, 4873, 4870, 4887, 4885, \
                    4899, 4896, 1006, 1004, 1007, 1003, 1001, 1002

# define Clearing Event Logs Events
define ClearingLogs 1100, 104, 1102

# define DNS and Directory Services Events
define DNSDirectoryServ 5137, 5141, 5136, 5139, 5138, 3008, 3020

# define External Media Detection events
[...]

Example 454. Configuration Example of Event IDs Corresponding to Lateral Movements

This configuration, similar to the extended configuration above, lists event IDs associated with the detection
of malicious lateral movements. It is based on security research published by JPCERT/CC (Japan Computer
Emergency Response Team Coordination Center) in Detecting Lateral Movement through Tracking Event Logs.

nxlog.conf (truncated)
# define Security Events
define SecurityEvents 4624, 4634, 4648, 4656, 4658, 4660, 4663, 4672, \
                      4673, 4688, 4689, 4698, 4720, 4768, 4769, 4946, \
                      5140, 5142, 5144, 5145, 5154, 5156, 5447, 8222

# define Sysmon Events
define SysmonEvents 1, 2, 5, 8, 9

# define Application Management Events
define ApplicationMgmt 104

# define Windows Remote Management Events
define WRMEvents 80, 132, 143, 166, 81

# define Task Scheduler - Operational Events
define TaskSchedEvents 106, 129, 200, 201

# define Local Session Manager - Operational Events
define LocalSessionMgrEvents 21, 24

# define BitsClient Events
define BitsClientsEvents 60

<Input LateralMovementEvents>
    Module               im_msvistalog
    TolerateQueryErrors  TRUE
    <QueryXML>
        <QueryList>
[...]

106.5. Forwarding Event Log Data


After collecting the EventLog data from a Windows system with NXLog, it may need to be sent to another host.
This section provides details and examples for configuring this.

Event descriptions in EventLog data may contain tabs and newlines, but these are not supported by some
formats like BSD Syslog. In this case, a regular expression can be used to remove them.

Example 455. Removing Tabs and Newline Sequences

This input instance is configured to modify the $Message field (the event description) by replacing all tab
characters and newline sequences with spaces.

nxlog.conf
<Input in>
    Module  im_mseventlog
    Exec    $Message =~ s/(\t|\R)/ /g;
</Input>

106.5.1. Forwarding EventLog in BSD Syslog Format
EventLog data is commonly sent in the BSD Syslog format. This can be generated with the to_syslog_bsd()
procedure provided by the xm_syslog module. For more information, see Sending Syslog to a Remote Logger via
UDP, TCP, or TLS.

Example 456. Sending EventLog in BSD Syslog Format

This example configuration removes tab characters and newline sequences from the $Message field,
converts the event record to BSD Syslog format, and forwards the event via UDP.

nxlog.conf
<Extension _syslog>
    Module  xm_syslog
</Extension>

<Input eventlog>
    Module  im_msvistalog
    Exec    $Message =~ s/(\t|\R)/ /g; to_syslog_bsd();
</Input>

<Output udp>
    Module  om_udp
    Host    10.10.1.1
    Port    514
</Output>

NOTE The to_syslog_bsd() procedure will use only a subset of the EventLog fields.

Output Sample
<14>Jan 2 10:21:16 win7host Service_Control_Manager[448]: The Computer Browser service entered
the running state.↵

106.5.2. Forwarding Windows Event Log in JSON Format


To preserve all event log fields, the logs can be formatted as JSON. The xm_json module provides a to_json()
procedure for this purpose. For more information about generating logs in JSON format, see JSON.

Example 457. Sending EventLog in JSON Format

This example configuration converts the event record to JSON format and forwards the event via TCP.

nxlog.conf
<Extension _json>
    Module  xm_json
</Extension>

<Input eventlog>
    Module  im_msvistalog
    Exec    to_json();
</Input>

<Output tcp>
    Module  om_tcp
    Host    192.168.10.1
    Port    1514
</Output>

Output Sample
{
  "EventTime": "2017-01-02 10:21:16",
  "Hostname": "win7host",
  "Keywords": -9187343239835812000,
  "EventType": "INFO",
  "SeverityValue": 2,
  "Severity": "INFO",
  "EventID": 7036,
  "SourceName": "Service Control Manager",
  "ProviderGuid": "{525908D1-A6D5-5695-8E2E-26921D2011F3}",
  "Version": 0,
  "Task": 0,
  "OpcodeValue": 0,
  "RecordNumber": 2629,
  "ProcessID": 448,
  "ThreadID": 2872,
  "Channel": "System",
  "Message": "The Computer Browser service entered the running state.",
  "param1": "Computer Browser",
  "param2": "running",
  "EventReceivedTime": "2017-01-02 10:21:17",
  "SourceModuleName": "eventlog",
  "SourceModuleType": "im_msvistalog"
}

For compatibility with logging systems that require BSD Syslog, the JSON format can be used with a BSD Syslog
header.

Example 458. Encapsulating JSON EventLog in BSD Syslog

This example configuration converts the event record to JSON, adds a BSD Syslog header, and forwards the
event via UDP.

nxlog.conf
<Extension _json>
    Module  xm_json
</Extension>

<Extension _syslog>
    Module  xm_syslog
</Extension>

<Input eventlog>
    Module  im_msvistalog
    Exec    $Message = to_json(); to_syslog_bsd();
</Input>

<Output udp>
    Module  om_udp
    Host    192.168.2.1
    Port    514
</Output>

Output Sample
<14>Jan 2 10:21:16 win7host Service_Control_Manager[448]: {"EventTime":"2017-01-02 10:21:16",
"Hostname":"win7host","Keywords":-9187343239835811840,"EventType":"INFO","SeverityValue":2,
"Severity":"INFO","EventID":7036,"SourceName":"Service Control Manager",
"ProviderGuid":"{525908D1-A6D5-5695-8E2E-26921D2011F3}","Version":0,"Task":0,"OpcodeValue":0,
"RecordNumber":2629,"ProcessID":448,"ThreadID":2872,"Channel":"System",
"Message":"The Computer Browser service entered the running state.","param1":"Computer Browser",
"param2":"running","EventReceivedTime":"2017-01-02 10:21:17","SourceModuleName":"eventlog",
"SourceModuleType":"im_msvistalog"}↵

106.5.3. Forwarding Windows Event Log in the Snare Format


The Snare format is often used for Windows EventLog data. The xm_syslog module includes a to_syslog_snare()
procedure which can generate the Snare format with a Syslog header. For more information about the Snare
format, see Snare.

Example 459. Sending EventLog in Snare Format

This example configuration removes tab characters and newline sequences from the $Message field,
converts the event record to the Snare over Syslog format, and forwards the event via UDP.

nxlog.conf
<Extension _syslog>
    Module  xm_syslog
</Extension>

<Input eventlog>
    Module  im_msvistalog
    Exec    $Message =~ s/(\t|\R)/ /g; to_syslog_snare();
</Input>

<Output snare>
    Module  om_udp
    Host    192.168.1.1
    Port    514
</Output>

Output Sample
<14>Jan 2 10:21:16 win7host MSWinEventLog ⇥ 1 ⇥ System ⇥ 193 ⇥ Mon Jan 02 10:21:16 2017 ⇥
7036 ⇥ Service Control Manager ⇥ N/A ⇥ N/A ⇥ Information ⇥ win7host ⇥ N/A ⇥ ⇥ The Computer
Browser service entered the running state. ⇥ 2773↵

Chapter 107. Windows Firewall
Windows Firewall provides local protection from network attacks that might pass through your perimeter
network or originate inside your organization. It also provides computer-to-computer connection security by
allowing you to require authentication and data protection for communications.

107.1. Traffic Logging


The Windows Firewall can be configured to log traffic information via the Advanced Security Log. These logs can
provide valuable information such as source and destination IP addresses, port numbers, and protocols for both
blocked and allowed traffic. The log file follows the standard W3C format; see the W3C Extended Log File Format
section for more information.

Log Sample
#Software: Microsoft Windows Firewall↵
#Time Format: Local↵
#Fields: date time action protocol src-ip dst-ip src-port dst-port size tcpflags tcpsyn tcpack
tcpwin icmptype icmpcode info path↵

2018-10-16 08:20:36 ALLOW UDP 127.0.0.1 127.0.0.1 54348 53 0 - - - - - - - SEND↵
2018-10-16 08:20:36 ALLOW UDP 127.0.0.1 127.0.0.1 54348 53 0 - - - - - - - RECEIVE↵
2018-10-16 08:20:36 ALLOW 250 127.0.0.1 127.0.0.1 - - 0 - - - - - - - SEND↵

Several different actions can be logged in the action field: DROP for dropping a connection, OPEN for opening a
connection, CLOSE for closing a connection, OPEN-INBOUND for an inbound session opened to the local
computer, and INFO-EVENTS-LOST for events that were processed by the Windows Firewall but not recorded in
the Security Log.

For information about configuring the Windows Firewall Security log, please refer to Configure the Windows
Defender Firewall with Advanced Security Log on Microsoft Docs.

Example 460. Collecting Events From the Advanced Security Log

This example configuration collects and parses firewall logs using the im_file and xm_w3c modules.

nxlog.conf
define EMPTY_EVENT_REGEX /(^$|^\s+$)/

<Extension w3c_parser>
    Module  xm_w3c
</Extension>

<Input pfirewall>
    Module     im_file
    File       'C:\Windows\system32\LogFiles\Firewall\pfirewall.log'
    InputType  w3c_parser
    Exec       if $raw_event =~ %EMPTY_EVENT_REGEX% drop();
</Input>

107.2. Change Auditing


Auditing the activity of Windows Firewall is part of a defense-in-depth strategy because it can be used to
generate alerts about malicious software that is attempting to modify firewall settings. Auditing can also help
administrators determine the network needs of their applications and design appropriate policies for
deployment to users.

There are several ways to enable Windows Firewall audit logging.

Enabling locally via the GUI


1. Open the Local Security Settings console.
2. In the console tree, click Local Policies, and then click Audit Policy.
3. In the details pane of the Local Security Settings console, double-click Audit policy change. Select
Success and Failure, and then click OK.
4. In the details pane of the Local Security Settings console, double-click Audit process tracking. Select
Success and Failure, and then click OK.

Using Group Policy


Alternatively, audit logging can be enabled for multiple computers in an Active Directory domain using Group
Policy. Modify the Audit Policy Change and Audit Process Tracking settings at Computer
Configuration\Windows Settings\Security Settings\Local Policies\Audit Policy for the Group Policy
objects in the appropriate domain system containers.

With auditpol.exe
Finally, the following command can be used to enable Windows Firewall audit logs.

> auditpol.exe /set /SubCategory:"MPSSVC rule-level Policy Change","Filtering Platform policy
change","IPsec Main Mode","IPsec Quick Mode","IPsec Extended Mode","IPsec Driver","Other System
Events","Filtering Platform Packet Drop","Filtering Platform Connection" /success:enable
/failure:enable

After audit logging is enabled, audit events can be viewed in the Security event log or collected with NXLog. For a
full list of Windows Security Audit events, download the Windows security audit events spreadsheet from the
Microsoft Download Center.
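Once enabled, these audit events are written to the Security channel and can be collected like any other Security log events. The following sketch selects the MPSSVC rule-level policy change events (the Event ID range 4944-4958 is an assumption; adjust it to match the subcategories enabled above):

```
<Input firewall_audit>
    Module  im_msvistalog
    <QueryXML>
        <QueryList>
            <Query Id="0" Path="Security">
                <Select Path="Security">
                    *[System[(EventID &gt;= 4944 and EventID &lt;= 4958)]]
                </Select>
            </Query>
        </QueryList>
    </QueryXML>
</Input>
```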

Example 461. Collecting Windows Firewall and Advanced Security Events from the EventLog

This example collects Windows Firewall events from the EventLog using the im_msvistalog module.

 1 <Input WinFirewallEventLog>
 2 Module im_msvistalog
 3 <QueryXML>
 4 <QueryList>
 5 <Query Id="0">
 6 <Select Path="Microsoft-Windows-Windows Firewall With Advanced
 7 Security/ConnectionSecurity">*</Select>
 8 <Select Path="Microsoft-Windows-Windows Firewall With Advanced
 9 Security/ConnectionSecurityVerbose">*</Select>
10 <Select Path="Microsoft-Windows-Windows Firewall With Advanced
11 Security/Firewall">*</Select>
12 <Select Path="Microsoft-Windows-Windows Firewall With Advanced
13 Security/FirewallVerbose">*</Select>
14 <Select Path="Network Isolation Operational">*</Select>
15 </Query>
16 </QueryList>
17 </QueryXML>
18 </Input>

107.3. Event Tracing


Event Tracing for Windows (ETW) is a logging and tracing mechanism used by developers. ETW includes event
logging and tracing capabilities provided by the operating system. Implemented in the kernel, it traces events in
user mode applications, the operating system kernel, and kernel-mode device drivers. For more information, see Event Tracing on Microsoft Docs.

Example 462. Collecting Windows Firewall and Advanced Security Traces from ETW

This configuration uses the im_etw module to collect Windows Firewall related traces from Event Tracing for
Windows.

nxlog.conf
1 <Input etw>
2 Module im_etw
3 Provider Microsoft-Windows-Firewall
4 </Input>
5
6 <Input etw2>
7 Module im_etw
8 Provider Microsoft-Windows-Windows Firewall With Advanced Security
9 </Input>

Chapter 108. Windows Group Policy
Windows Group Policy allows the centralized management and administration of user and computer accounts in
a Microsoft Active Directory environment.

There are several ways Group Policy related logs can be acquired.

• Group Policy Operational logs and Security logs from Windows Event Log
• Event Tracing for Windows (ETW)
• File-based logs found in the file system

This topic covers the methods that can be used to collect these logs with NXLog.

The Group Policy Operational logs are displayed in the Operational object under the Applications and Services
> Microsoft > Windows > GroupPolicy directory in Event Viewer.

Group Policy stores some events in the Security channel of the Windows Event Log. These events are related to
the access, deletion, modification and creation of objects.

Example 463. Collecting Group Policy Logs from Windows Event Log

The following configuration uses the im_msvistalog module to collect Group Policy logs from the Security
channel. It includes a custom query that will filter for events based on specified EventIDs.

nxlog.conf
 1 <Input in>
 2 Module im_msvistalog
 3 <QueryXML>
 4 <QueryList>
 5 <Query Id="0" Path="Security">
 6 <Select Path="Security">
 7 *[System[(EventID=4663 or EventID=5136 or \
 8 EventID=5137 or EventID=5141)]]
 9 </Select>
10 </Query>
11 </QueryList>
12 </QueryXML>
13 </Input>

The Microsoft-Windows-GroupPolicy provider supplies Group Policy related logs via an event tracing session
that can be collected with ETW. It reads from the same source as the Windows Event Log in the previous
example; however, the im_etw module can collect ETW trace data and forward it without saving the data to
disk, which improves efficiency. There are also slight differences in the level of verbosity, so it is worth
considering both options and picking the one that best suits your environment.

Example 464. Collecting Group Policy Logs via ETW

The following configuration uses the im_etw module to collect Group Policy logs from an ETW provider.

nxlog.conf
1 <Input in>
2 Module im_etw
3 Provider Microsoft-Windows-GroupPolicy
4 </Input>

Group Policy stores Group Policy Client Service (GPSVC) and Group Policy Management Console (GPMC) logs in
the %windir%\debug\usermode directory.

Example 465. Collecting Group Policy Logs from File

The following configuration uses the im_file module to collect GPMC and GPSVC logs from the above
mentioned %windir%\debug\usermode directory. Since these logs are encoded in UTF-16LE, they need to
be converted into UTF-8 using the xm_charconv extension module.

nxlog.conf (truncated)
 1 <Extension _charconv>
 2 Module xm_charconv
 3 </Extension>
 4
 5 <Extension _json>
 6 Module xm_json
 7 </Extension>
 8
 9 define GroupPolicy /(?x)\w+\((?<PID>[\w\d]{3,4}). \
10 (?<TID>[\w\d]{3,4})\)\s+ \
11 (?<time>[\d\:]+)\s+ \
12 (?<Message>.*)/
13
14 <Input in>
15 Module im_file
16 File 'C:\Windows\debug\usermode\gpsvc.log'
17 File 'C:\Windows\debug\usermode\gpmc.log'
18 <Exec>
19 #Query the current filename
20 if file_name() =~ /^.*\\(.*)$/ $FileName = $1;
21
22 # Convert character encoding from UTF-16LE to UTF-8
23 $raw_event = convert($raw_event, 'UTF-16LE', 'UTF-8');
24
25 #Parse $raw_event
26 if $raw_event =~ %GroupPolicy%
27
28 #Query year, month and day details from the current system
29 [...]

Input sample (Group Policy Management Console logs)


GPMC(1a1c.1a20) 19:04:10:376 CGPONode::~CGPONode: Destroying object 0x228cf90 \↵
with nodedeletedflag 0x0↵

Output sample (Group Policy Management Console logs)


{
  "EventReceivedTime": "2019-07-20T15:06:13.690052+02:00",
  "SourceModuleName": "in",
  "SourceModuleType": "im_file",
  "FileName": "gpmc.log",
  "Message": "CGPONode::~CGPONode: Destroying object 0x228cf90 with nodedeletedflag 0x0",
  "PID": "1a1c",
  "TID": "1a20",
  "EventTime": "2019-07-20T19:04:10.000000+02:00"
}

Chapter 109. Windows Management Instrumentation
(WMI)
The Windows Management Instrumentation (WMI) system is an implementation of the Web-Based Enterprise
Management (WBEM) and Common Information Model (CIM) standards. It provides an infrastructure for
managing remote systems and providing management data. For more information about WMI, see Windows
Management Instrumentation on Microsoft Docs.

WMI event logging uses Event Tracing for Windows (ETW). These logs can be collected via Windows EventLog or
ETW. For Windows versions prior to Windows Vista and Windows Server 2008, it is also possible to read from
WMI log files.

109.1. Reading WMI Events From the EventLog


WMI logs events to Microsoft-Windows-WMI-Activity/Operational in the Windows EventLog, including these
event IDs:

• 5857: Operation_StartedOperational
• 5858: Operation_ClientFailure
• 5859: Operation_EssStarted
• 5860: Operation_TemporaryEssStarted
• 5861: Operation_ESStoConsumerBinding

Example 466. Collecting WMI Logs With im_msvistalog

The following configuration will collect and parse these events from Microsoft-Windows-WMI-
Activity/Operational using the im_msvistalog module. The xm_xml module is used to further parse the
XML data in the $UserData field.

nxlog.conf
 1 <Extension _xml>
 2 Module xm_xml
 3 </Extension>
 4
 5 <Input in>
 6 Module im_msvistalog
 7 <QueryXML>
 8 <QueryList>
 9 <Query Id="0">
10 <Select Path="Microsoft-Windows-WMI-Activity/Operational">*</Select>
11 </Query>
12 </QueryList>
13 </QueryXML>
14 Exec if $UserData parse_xml($UserData);
15 </Input>

Output Sample
{
  "EventTime": "2019-02-24T21:19:36.603548+01:00",
  "Hostname": "Host.DOMAIN.local",
  "Keywords": "4611686018427387904",
  "EventType": "ERROR",
  "SeverityValue": 4,
  "Severity": "ERROR",
  "EventID": 5858,
  "SourceName": "Microsoft-Windows-WMI-Activity",
  "ProviderGuid": "{1418EF04-B0B4-4623-BF7E-D74AB47BBDAA}",
  "Version": 0,
  "TaskValue": 0,
  "OpcodeValue": 0,
  "RecordNumber": 7314,
  "ActivityID": "{3459A8FD-CC70-0000-47C6-593470CCD401}",
  "ExecutionProcessID": 1020,
  "ExecutionThreadID": 8840,
  "Channel": "Microsoft-Windows-WMI-Activity/Operational",
  "Domain": "NT AUTHORITY",
  "AccountName": "SYSTEM",
  "UserID": "S-1-5-18",
  "AccountType": "User",
  "Message": "Id = {3459A8FD-CC70-0000-47C6-593470CCD401}; ClientMachine = HOST; User = NT
AUTHORITY\\SYSTEM; ClientProcessId = 3640; Component = Unknown; Operation = Start
IWbemServices::ExecQuery - root\\cimv2 : Select * from Win32_Service Where Name = 'MpsSvc';
ResultCode = 0x80041032; PossibleCause = Unknown",
  "Opcode": "Info",
  "UserData": "<Operation_ClientFailure
xmlns='http://manifests.microsoft.com/win/2006/windows/WMI'><Id>{3459A8FD-CC70-0000-47C6-
593470CCD401}</Id><ClientMachine>HOST</ClientMachine><User>NT AUTHORITY
\\SYSTEM</User><ClientProcessId>3640</ClientProcessId><Component>Unknown</Component><Operation>
Start IWbemServices::ExecQuery - root\\cimv2 : Select * from Win32_Service Where Name =
'MpsSvc'</Operation><ResultCode>0x80041032</ResultCode><PossibleCause>Unknown</PossibleCause></
Operation_ClientFailure>",
  "EventReceivedTime": "2019-02-24T21:19:38.104568+01:00",
  "SourceModuleName": "in",
  "SourceModuleType": "im_msvistalog",
  "Operation_ClientFailure.Id": "{3459A8FD-CC70-0000-47C6-593470CCD401}",
  "Operation_ClientFailure.ClientMachine": "HOST",
  "Operation_ClientFailure.User": "NT AUTHORITY\\SYSTEM",
  "Operation_ClientFailure.ClientProcessId": "3640",
  "Operation_ClientFailure.Component": "Unknown",
  "Operation_ClientFailure.Operation": "Start IWbemServices::ExecQuery - root\\cimv2 : Select *
from Win32_Service Where Name = 'MpsSvc'",
  "Operation_ClientFailure.ResultCode": "0x80041032",
  "Operation_ClientFailure.PossibleCause": "Unknown"
}

109.2. Reading WMI Events via ETW


WMI events can also be collected via ETW directly. Note that WMI tracing is not enabled by default—see Tracing
WMI Activity on Microsoft Docs.

Example 467. Collecting WMI Logs With im_etw

The following configuration uses the im_etw module to collect ETW logs from the Microsoft-Windows-
WMI-Activity provider.

nxlog.conf
1 <Input etw_in>
2 Module im_etw
3 Provider Microsoft-Windows-WMI-Activity
4 </Input>

Output Sample
{
  "SourceName": "Microsoft-Windows-WMI-Activity",
  "ProviderGuid": "{1418EF04-B0B4-4623-BF7E-D74AB47BBDAA}",
  "EventId": 100,
  "Version": 0,
  "Channel": 18,
  "OpcodeValue": 0,
  "TaskValue": 0,
  "Keywords": "2305843009213693952",
  "EventTime": "2019-03-04T19:48:48.842576+01:00",
  "ExecutionProcessID": 1500,
  "ExecutionThreadID": 8104,
  "ActivityID": "{AF4CFCDC-66C1-4A9A-B7D7-13ECD1AAE01A}",
  "EventType": "INFO",
  "SeverityValue": 2,
  "Severity": "INFO",
  "Domain": "NT AUTHORITY",
  "AccountName": "SYSTEM",
  "UserID": "S-1-5-18",
  "AccountType": "User",
  "ComponentName": "MI_Client",
  "MessageDetail": "Operation Enumerate Instances: session=0000008F1C752638,
operation=0000008F1D03DCF0, internal-operation=0000008F1D63ED90, namespace=root\\Microsoft
\\Windows\\Storage\\SM, classname=MSFT_SMStorageVolume",
  "FileName": "admin\\wmi\\wmiv2\\client\\api\\operation.c:2008",
  "EventReceivedTime": "2019-03-04T19:48:49.888767+01:00",
  "SourceModuleName": "etw_in",
  "SourceModuleType": "im_etw"
}

109.3. Reading From WMI Log Files


There are three WMI provider log files available on Windows versions prior to Windows Vista and Windows
Server 2008. These files are normally located in %systemroot%\system32\wbem\logs. For more information, see
WMI Provider Log Files on Microsoft Docs.

These log files can be configured by modifying the
HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\WBEM\CIMOM\Logging registry value. Set it to 1 for error logging or 2
for verbose logging. For more details about configuring the WMI log files, see Logging WMI Activity.
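For example, verbose logging could be enabled from an elevated Command Prompt with a command along these lines (a sketch; reg add creates the value as a string by default, so verify the expected value type on your system first):

```
reg add "HKLM\SOFTWARE\Microsoft\WBEM\CIMOM" /v Logging /d 2 /f
```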

Example 468. Collecting and Parsing WMI Provider Log Files

This configuration collects and parses events from the three WMI log files.

nxlog.conf
 1 <Input in>
 2 Module im_file
 3 File 'C:\WINDOWS\system32\wbem\Logs\wmiprov.log'
 4 File 'C:\WINDOWS\system32\wbem\Logs\ntevt.log'
 5 File 'C:\WINDOWS\system32\wbem\Logs\dsprovider.log'
 6 <Exec>
 7 file_name() =~ /(?<Filename>[^\\]+)$/;
 8 if $raw_event =~ /^\((?<EventTime>.+)\.\d{7}\) : (?<Message>.+)$/
 9 $EventTime = strptime($EventTime, "%a %b %d %H:%M:%S %Y");
10 </Exec>
11 </Input>

Output Sample (wmiprov.log)


{
  "EventReceivedTime": "2019-03-12T18:32:16.296875+01:00",
  "SourceModuleName": "in",
  "SourceModuleType": "im_file",
  "Filename": "wmiprov.log",
  "EventTime": "2019-03-12T18:32:16.000000+01:00",
  "Message": "C:\\WINDOWS\\system32\\DRIVERS\\bthpan.sys[NdisMofResource]"
}

Chapter 110. Windows PowerShell
PowerShell is a command-line shell based on the .NET Framework.

110.1. Using PowerShell Scripts


PowerShell scripts can be used with NXLog for generating, processing, and forwarding logs, as well as for
generating configuration content.

By default, Windows services run under the NT AUTHORITY\SYSTEM user account. Depending on the purpose of
a PowerShell script, its operations may require additional permissions. In this case, either change the NXLog
service account (see Running Under a Custom Account on Windows) or add permissions as required to the
SYSTEM account.

110.1.1. Generating Logs


For generating logs, a PowerShell script can be used with the im_exec module. For another example, see
Collecting Audit Logs via the SharePoint API.

Example 469. Using PowerShell to Generate Logs

This configuration uses the im_exec module to execute powershell.exe with the specified arguments,
including the path to the script. The script creates an event and writes it to standard output in JSON format.
The xm_json parse_json() procedure is used to parse the JSON so all the fields are available in the event
record.

The script shows header examples for running the script under a different architecture than the NXLog
agent. Also, a simple file-based position cache is included to demonstrate how a script can resume from the
previous position when the agent or module instance is stopped and started again.

Because the end value of one poll and the start value of the next poll are equal, an actual source read
should not include exact matches for both start and end values (to prevent reading duplicate events). For
example, either the start value should be excluded ($start < $event ≤ $end) or the end value ($start ≤
$event < $end).
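In PowerShell, such a half-open interval check might look like this (a sketch; the $events collection and EventTime property are illustrative):

```powershell
# Keep events strictly after $start and up to (and including) $end,
# so an event that falls exactly on the boundary is not read twice across polls.
$batch = $events | Where-Object { $_.EventTime -gt $start -and $_.EventTime -le $end }
```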

NOTE: This example requires PowerShell 3 or later to transport structured data in JSON format. If structured
data is required with an earlier version of PowerShell, CSV format could be used instead; see the next
example.

nxlog.conf
 1 <Extension _json>
 2 Module xm_json
 3 </Extension>
 4
 5 envvar systemroot
 6 <Input powershell>
 7 Module im_exec
 8 Command "%systemroot%\System32\WindowsPowerShell\v1.0\powershell.exe"
 9 # Use "-Version" to select a specific PowerShell version.
10 #Arg "-Version"
11 #Arg "3"
12 # Bypass the system execution policy for this session only.
13 Arg "-ExecutionPolicy"
14 Arg "Bypass"
15 # Skip loading the local PowerShell profile.
16 Arg "-NoProfile"
17 # This specifies the path to the PowerShell script.
18 Arg "-File"
19 Arg "C:\ps_input.ps1"
20 # Any additional arguments are passed to the PowerShell script.
21 Arg "arg1"
22 <Exec>
23 # Parse JSON
24 parse_json();
25
26 # Convert $EventTime field to datetime
27 $EventTime = parsedate($EventTime);
28 </Exec>
29 </Input>

ps_input.ps1 (truncated)
#Requires -Version 3

# Use this if you need 64-bit PowerShell (has no effect on 32-bit systems).
#if ($env:PROCESSOR_ARCHITEW6432 -eq "AMD64") {
# Write-Debug "Running 64-bit PowerShell."
# &"$env:SYSTEMROOT\SysNative\WindowsPowerShell\v1.0\powershell.exe" `
# -NonInteractive -NoProfile -ExecutionPolicy Bypass `
# -File "$($myInvocation.InvocationName)" $args
# exit $LASTEXITCODE
#}

# Use this if you need 32-bit PowerShell.


#if ($env:PROCESSOR_ARCHITECTURE -ne "x86") {
# Write-Debug "Running 32-bit PowerShell."
# &"$env:SYSTEMROOT\SysWOW64\WindowsPowerShell\v1.0\powershell.exe" `
# -NonInteractive -NoProfile -ExecutionPolicy Bypass `
# -File "$($myInvocation.InvocationName)" $args
# exit $LASTEXITCODE
[...]

PowerShell 2 does not support JSON. Instead, events can be formatted as CSV and parsed with an xm_csv
module instance.

Example 470. Using PowerShell 2 as Input

In this example, the PowerShell script generates output strings in CSV format. The xm_csv parse_csv()
procedure is used to parse the CSV strings into fields in the event record. Note that the fields must be
provided, sorted by name, in the xm_csv Fields directive (and corresponding types should be provided via
the FieldTypes directive).

WARNING: For best results with structured data, use JSON with PowerShell 3 or later (see the previous
example).

nxlog.conf
 1 <Extension csv_parser>
 2 Module xm_csv
 3 Fields Arguments, EventTime, Message
 4 FieldTypes string, datetime, string
 5 </Extension>
 6
 7 envvar systemroot
 8 <Input powershell>
 9 Module im_exec
10 Command "%systemroot%\System32\WindowsPowerShell\v1.0\powershell.exe"
11 Arg "-Version"
12 Arg "2"
13 Arg "-ExecutionPolicy"
14 Arg "Bypass"
15 Arg "-NoProfile"
16 Arg "-File"
17 Arg "C:\ps2_input.ps1"
18 Exec csv_parser->parse_csv();
19 </Input>

ps2_input.ps1 (truncated)
#Requires -Version 2

$count = 0

while($true) {
  $count += 1
  $now = [System.DateTime]::UtcNow

  # Set event fields


  $event = @{
  Arguments = [system.String]::Join(", ", $args);
  EventTime = $now.ToString('o');
  Message = "event$count";
  }

  # Return event as CSV


  $row = New-Object PSObject
  $event.GetEnumerator() | Sort-Object -Property Name | ForEach-Object {
[...]

110.1.2. Forwarding Logs


For forwarding logs, a PowerShell script can be used with the om_exec module.

Example 471. Using PowerShell to Forward Logs

This configuration uses om_exec to execute powershell.exe with the specified arguments, including the
path to the script. The script reads events on standard input.

NOTE: This configuration requires PowerShell 3 or later for its JSON support and to correctly read lines
from standard input.

TIP: See the Using PowerShell to Generate Logs example above for more details about powershell.exe
arguments and PowerShell code for explicitly specifying a 32-bit or 64-bit environment.

nxlog.conf
 1 <Extension _json>
 2 Module xm_json
 3 </Extension>
 4
 5 envvar systemroot
 6 <Output powershell>
 7 Module om_exec
 8 Command "%systemroot%\System32\WindowsPowerShell\v1.0\powershell.exe"
 9 Arg "-ExecutionPolicy"
10 Arg "Bypass"
11 Arg "-NoProfile"
12 Arg "-File"
13 Arg "C:\ps_output.ps1"
14 Exec to_json();
15 </Output>

ps_output.ps1
#Requires -Version 3

while($line = [Console]::In.ReadLine()) {
  # Convert from JSON
  $record = $line | ConvertFrom-Json

  # Write out to file


  $record | Out-File -FilePath 'C:\out.log' -Append
}

110.1.3. Generating Configuration


A PowerShell script can be used with the include_stdout directive to generate dynamic NXLog configuration.
NXLog will execute the script during parsing of the configuration file.

Because include_stdout does not support arguments, it is simplest to use a batch/PowerShell polyglot script for
this purpose. For another example, see Automatic Retrieval of IIS Site Log Locations.

TIP: The Command Prompt may print '@' is not recognized if a Unicode byte order mark (BOM) is included in
the batch file. To fix this, use Notepad and select the ANSI encoding when saving the file.

Example 472. Using PowerShell and include_stdout

This configuration uses PowerShell code to generate the File directive for the Input instance.

nxlog.conf
1 <Input in>
2 Module im_file
3 include_stdout C:\include.cmd
4 </Input>

include.cmd (truncated)
@( Set "_= (
REM " ) <#
)
@Echo Off
SetLocal EnableExtensions DisableDelayedExpansion
set powershell=powershell.exe

REM Use this if you need 64-bit PowerShell (has no effect on 32-bit systems).
REM if defined PROCESSOR_ARCHITEW6432 (
REM set powershell=%SystemRoot%\SysNative\WindowsPowerShell\v1.0\powershell.exe
REM )

REM Use this if you need 32-bit PowerShell.


REM if NOT %PROCESSOR_ARCHITECTURE% == x86 (
REM set powershell=%SystemRoot%\SysWOW64\WindowsPowerShell\v1.0\powershell.exe
REM )

%powershell% -ExecutionPolicy Bypass -NoProfile ^


[...]

110.2. Logging PowerShell Activity


Recent versions of Windows PowerShell provide several features for logging of activity from PowerShell sessions.
NXLog can be configured to collect and parse these logs.

In addition to the sections below, see Securing PowerShell in the Enterprise, Greater Visibility Through
PowerShell Logging, and PowerShell ♥ the Blue Team. Also see the Command line process auditing article on
Microsoft Docs, as well as the Windows Command Line Auditing and Sysmon chapters, which cover generating
events for command line process creation (but not for commands executed through the PowerShell engine).

110.2.1. Module Logging


Module logging, available since PowerShell 3, logs pipeline execution events for specified PowerShell modules.
This feature writes Event ID 4103 events to the Microsoft-Windows-PowerShell/Operational channel.

Module logging can be enabled by setting the LogPipelineExecutionDetails property of a module to True.
Alternatively, this property can be enabled for selected modules through Group Policy as follows.

1. Open the Group Policy MMC snapin (gpedit.msc).

2. Go to Computer Configuration › Administrative Templates › Windows Components › Windows PowerShell and
open the Turn on Module Logging setting.

3. Select Enabled. Then click the [ Show… ] button and enter the modules for which to enable logging. Use an
asterisk (*) to enable logging for all modules.
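Enabling it for a single module from PowerShell might look like the following (a sketch; SmbShare is only an example module name):

```powershell
# Import the module and enable pipeline execution logging on the module object
$module = Import-Module -Name SmbShare -PassThru
$module.LogPipelineExecutionDetails = $true
```

Note that setting the property this way affects only the session where the code runs; Group Policy is the way to make the setting persistent machine-wide.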

Example 473. Collecting Module Logging Events

This configuration collects all events with ID 4103 from the Windows PowerShell Operational channel. First,
the \r\n and \n line breaks in the ContextInfo field are replaced so its key-value pairs can be parsed, and
each resulting field is given a ContextInfo_ prefix for clarity. The original Message and ContextInfo
fields are then removed, since their content is available elsewhere in the output. Finally, the logs are
converted to JSON.

nxlog.conf
 1 <Extension kvp>
 2 Module xm_kvp
 3 KVPDelimiter ,
 4 KVDelimiter =
 5 </Extension>
 6
 7 <Extension json>
 8 Module xm_json
 9 </Extension>
10
11 <Input in>
12 Module im_msvistalog
13 <QueryXML>
14 <QueryList>
15 <Query Id="0" Path="Microsoft-Windows-PowerShell/Operational">
16 <Select Path="Microsoft-Windows-PowerShell/Operational">
17 *[System[EventID=4103]]</Select>
18 </Query>
19 </QueryList>
20 </QueryXML>
21 <Exec>
22 if defined($ContextInfo)
23 {
24 $ContextInfo = replace($ContextInfo, "\r\n", ",");
25 $ContextInfo = replace($ContextInfo, "\n", ",");
26 $ContextInfo = replace($ContextInfo, " ", "");
27 kvp->parse_kvp($ContextInfo, "ContextInfo_");
28 delete($ContextInfo);
29 delete($Message);
30 }
31 json->to_json();
32 </Exec>
33 </Input>

Output Sample
{
  "EventTime": "2020-01-29T05: 30: 45.727799-08: 00",
  "Hostname": "NXLog-Server",
  "Keywords": "0",
  "EventType": "INFO",
  "SeverityValue": 2,
  "Severity": "INFO",
  "EventID": 4103,
  "SourceName": "Microsoft-Windows-PowerShell",
  "ProviderGuid": "{A0C1853B-5C40-4B15-8766-3CF1C58F985A}",
  "Version": 1,
  "TaskValue": 106,
  "OpcodeValue": 20,
  "RecordNumber": 170,
  "ActivityID": "{9C1FE60B-D6F2-0000-3316-209CF2D6D501}",
  "ExecutionProcessID": 3648,
  "ExecutionThreadID": 1060,
  "Channel": "Microsoft-Windows-PowerShell/Operational",
  "Domain": "NXLog-Server",
  "AccountName": "Administrator",
  "UserID": "S-1-5-21-2463765617-934790487-2583750676-500",
  "AccountType": "User",
  "Category": "Executing Pipeline",
  "Opcode": "To be used when operation is just executing a method",
  "Payload": "Update-Help has completed successfully.",
  "EventReceivedTime": "2020-01-29T05: 30: 47.161585-08: 00",
  "SourceModuleName": "in",
  "SourceModuleType": "im_msvistalog",
  "ContextInfo_Severity": "Informational",
  "ContextInfo_Host Name": "ConsoleHost",
  "ContextInfo_Host Version": "5.1.17763.592",
  "ContextInfo_Host ID": "67d049eb-f3d6-4718-8cd2-b9dae30c4c7b",
  "ContextInfo_Host Application": "C: \\Windows\\System32\\WindowsPowerShell\\v1.0
\\powershell.exe",
  "ContextInfo_Engine Version": "5.1.17763.592",
  "ContextInfo_Runspace ID": "3145a9e1-18e3-4fa1-8700-fc78c783684b",
  "ContextInfo_Pipeline ID": 6,
  "ContextInfo_Command Name": "Update-Help",
  "ContextInfo_Command Type": "Cmdlet",
  "ContextInfo_Script Name": null,
  "ContextInfo_Command Path": null,
  "ContextInfo_Sequence Number": 79,
  "ContextInfo_User": "NXLog-Server\\Administrator",
  "ContextInfo_Connected User": null,
  "ContextInfo_Shell ID": "Microsoft.PowerShell"
}

110.2.2. Script Block Logging


PowerShell 5 introduces script block logging, which records the content of all script blocks that are processed.
Events with ID 4104 are written to the Microsoft-Windows-PowerShell/Operational channel. Start and stop events
can also be enabled; these events have IDs 4105 and 4106.

Script block logging can be configured through Group Policy as follows.

1. Open the Group Policy MMC snapin (gpedit.msc).

2. Go to Computer Configuration › Administrative Templates › Windows Components › Windows
PowerShell and open the Turn on PowerShell Script Block Logging setting.

3. Select Enabled. Optionally, check the Log script block invocation start/stop events option (this will
generate a high volume of event logs).

Example 474. Collecting Script Block Logging Events

The following configuration collects events with IDs 4104, 4105, and 4106 from the Windows PowerShell
Operational channel. Verbose level events are excluded.

nxlog.conf
 1 <Input script_block_logging>
 2 Module im_msvistalog
 3 <QueryXML>
 4 <QueryList>
 5 <Query Id="0" Path="Microsoft-Windows-PowerShell/Operational">
 6 <Select Path="Microsoft-Windows-PowerShell/Operational">
 7 *[System[(Level=0 or Level=1 or Level=2 or Level=3 or Level=4)
 8 and ((EventID &gt;= 4104 and EventID &lt;= 4106))]]
 9 </Select>
10 </Query>
11 </QueryList>
12 </QueryXML>
13 </Input>

110.2.3. Transcription
PowerShell provides "over-the-shoulder" transcription of PowerShell sessions with the Start-Transcript cmdlet.
With PowerShell 5, system-wide transcription can be enabled via Group Policy; this is equivalent to calling the
Start-Transcript cmdlet on each PowerShell session. Transcriptions are written to the current user’s Documents
directory unless a system-level output directory is set in the policy settings.
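A per-session transcript can also be started manually with the same cmdlet, for example (a sketch; the output path is arbitrary):

```powershell
# Start a transcript with per-command invocation headers (PowerShell 5 or later)
Start-Transcript -Path 'C:\Transcripts\session.txt' -IncludeInvocationHeader
```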

Log Sample (With Invocation Headers Enabled)


**********************↵
Windows PowerShell transcript start↵
Start time: 20171030223248↵
Username: WIN-FT17VBNL4B2\Administrator↵
RunAs User: WIN-FT17VBNL4B2\Administrator↵
Machine: WIN-FT17VBNL4B2 (Microsoft Windows NT 10.0.14393.0)↵
Host Application: C:\Windows\system32\WindowsPowerShell\v1.0\PowerShell.exe↵
Process ID: 4268↵
PSVersion: 5.1.14393.1770↵
PSEdition: Desktop↵
PSCompatibleVersions: 1.0, 2.0, 3.0, 4.0, 5.0, 5.1.14393.1770↵
BuildVersion: 10.0.14393.1770↵
CLRVersion: 4.0.30319.42000↵
WSManStackVersion: 3.0↵
PSRemotingProtocolVersion: 2.3↵
SerializationVersion: 1.1.0.1↵
**********************↵
**********************↵
Command start time: 20171030223255↵
**********************↵
PS C:\Users\Administrator> echo test↵
test↵
**********************↵
Command start time: 20171030223256↵
**********************↵
PS C:\Users\Administrator> exit↵
**********************↵
Windows PowerShell transcript end↵
End time: 20171030223256↵
**********************↵

System-wide transcription can be enabled through Group Policy as follows.

WARNING: If system-wide transcription to a shared location is enabled, access to that directory should be
limited to prevent users from viewing the transcripts of other users or computers.

1. Open the Group Policy MMC snapin (gpedit.msc).

2. Go to Computer Configuration › Administrative Templates › Windows Components › Windows PowerShell and
open the Turn on PowerShell Transcription setting.
3. Select Enabled. Set a system-wide transcript output directory if required. Check the Include invocation
headers option (this setting generates a timestamp for each command and is recommended).

Example 475. Parsing PowerShell Transcriptions

This configuration reads and parses transcript files written to the TRANSCRIPTS_DIR directory (which
should be set appropriately). Headers, footers, and commands are parsed as separate events. $File and
$EventTime fields are set for each event (invocation headers must be enabled for command timestamps).
$Command and $Output fields are added for command events. Fields from the header entries are parsed
with xm_kvp and added to the event record. Finally, the logs are converted to JSON format and forwarded
via TCP.

NOTE: The HeaderLine below must be changed if invocation headers are not enabled. See the comment in the
configuration.

nxlog.conf (truncated)
 1 define TRANSCRIPTS_DIR C:\powershell
 2
 3 <Extension transcript_parser>
 4 Module xm_multiline
 5 # Use this if invocation headers are ON (recommended)
 6 HeaderLine /^\*{22}$/
 7 # Use this if invocation headers are OFF (not recommended)
 8 #HeaderLine /^(\*{22}$|PS[^>]*>)/
 9 <Exec>
10 $raw_event =~ s/^\xEF\xBB\xBF//;
11 if get_var('include_next_record') and $raw_event =~ /^\*{22}$/
12 {
13 set_var('include_next_record', FALSE);
14 $raw_event =~ s/^\*//;
15 }
16 else if $raw_event =~ /^Command start time: \d{14}$/
17 set_var('include_next_record', TRUE);
18 </Exec>
19 </Extension>
20
21 <Extension transcript_header_parser>
22 Module xm_kvp
23 KVPDelimiter \n
24 </Extension>
25
26 <Input transcription>
27 Module im_file
28 File '%TRANSCRIPTS_DIR%\\*PowerShell_transcript.*'
29 [...]

The following output shows the first two events of the log sample above.

Output Sample
{
  "EventReceivedTime": "2017-10-30 22:32:49",
  "SourceModuleName": "transcription",
  "SourceModuleType": "im_file",
  "File": "C:\\powershell\\\\20171030\\PowerShell_transcript.WIN-
FT17VBNL4B2.LcxuCZbr.20171030223248.txt",
  "Message": "Windows PowerShell transcript start\r\nStart time: 20171030223248\r\nUsername:
WIN-FT17VBNL4B2\\Administrator\r\nRunAs User: WIN-FT17VBNL4B2\\Administrator\r\nMachine: WIN-
FT17VBNL4B2 (Microsoft Windows NT 10.0.14393.0)\r\nHost Application: C:\\Windows\\system32
\\WindowsPowerShell\\v1.0\\PowerShell.exe\r\nProcess ID: 4268\r\nPSVersion: 5.1.14393.1770
\r\nPSEdition: Desktop\r\nPSCompatibleVersions: 1.0, 2.0, 3.0, 4.0, 5.0, 5.1.14393.1770
\r\nBuildVersion: 10.0.14393.1770\r\nCLRVersion: 4.0.30319.42000\r\nWSManStackVersion: 3.0
\r\nPSRemotingProtocolVersion: 2.3\r\nSerializationVersion: 1.1.0.1",
  "Start time": "20171030223248",
  "Username": "WIN-FT17VBNL4B2\\Administrator",
  "RunAs User": "WIN-FT17VBNL4B2\\Administrator",
  "Machine": "WIN-FT17VBNL4B2 (Microsoft Windows NT 10.0.14393.0)",
  "Host Application": "C:\\Windows\\system32\\WindowsPowerShell\\v1.0\\PowerShell.exe",
  "Process ID": "4268",
  "PSVersion": "5.1.14393.1770",
  "PSEdition": "Desktop",
  "PSCompatibleVersions": "1.0, 2.0, 3.0, 4.0, 5.0, 5.1.14393.1770",
  "BuildVersion": "10.0.14393.1770",
  "CLRVersion": "4.0.30319.42000",
  "WSManStackVersion": "3.0",
  "PSRemotingProtocolVersion": "2.3",
  "SerializationVersion": "1.1.0.1",
  "EventTime": "2017-10-30 22:32:48"
}
{
  "EventReceivedTime": "2017-10-30 22:32:56",
  "SourceModuleName": "transcription",
  "SourceModuleType": "im_file",
  "File": "C:\\powershell\\\\20171030\\PowerShell_transcript.WIN-
FT17VBNL4B2.LcxuCZbr.20171030223248.txt",
  "Command": "echo test",
  "EventTime": "2017-10-30 22:32:55",
  "Output": "test",
  "Message": "Command start time: 20171030223255\r\n**********************\r\nPS C:\\Users
\\Administrator> echo test\r\ntest"
}

Chapter 111. Microsoft Windows Update
Windows Update is a Windows system service that manages the updates for the Windows operating system.
Updates and patches are scheduled for release through Windows Update on the second Tuesday of each
month.

The event logs related to Windows Update are accessible in two ways depending on the version of your
operating system:

• Via Event Tracing for Windows (ETW), for Windows 10, Windows Server 2016 and Windows Server 2019.
• Via the file system, in earlier versions of Windows.

Log Collection via Event Tracing for Windows


The im_etw module of NXLog allows collecting Windows Update logs from Windows 10, Windows Server 2016
and Windows Server 2019.

Example 476. Collecting Windows Update Logs With ETW

The following configuration collects Windows Update logs using the im_etw module. The collected logs are
then converted to JSON using the xm_json extension module.

nxlog.conf
1 <Extension _json>
2 Module xm_json
3 </Extension>
4
5 <Input in_etw>
6 Module im_etw
7 Provider Microsoft-Windows-WindowsUpdateClient
8 Exec to_json();
9 </Input>

Output Sample
{
  "SourceName": "Microsoft-Windows-WindowsUpdateClient",
  "ProviderGuid": "{945A8954-C147-4ACD-923F-40C45405A658}",
  "EventId": 38,
  "Version": 0,
  "Channel": 16,
  "OpcodeValue": 17,
  "TaskValue": 1,
  "Keywords": "4611686018427388544",
  "EventTime": "2019-06-06T15:08:01.098200+02:00",
  "ExecutionProcessID": 820,
  "ExecutionThreadID": 2440,
  "EventType": "INFO",
  "SeverityValue": 2,
  "Severity": "INFO",
  "Domain": "NT AUTHORITY",
  "AccountName": "SYSTEM",
  "UserID": "S-1-5-18",
  "AccountType": "User",
  "EventReceivedTime": "2019-06-06T15:08:01.847001+02:00",
  "SourceModuleName": "in_etw",
  "SourceModuleType": "im_etw"
}

File-based Log Collection
Prior to the release of Windows Server 2016 and Windows 10, all Windows Update logs were stored in the
WindowsUpdate.log file under the %SystemRoot% directory.

NOTE   Although this log file is deprecated, it can still be generated as described in the Generating
WindowsUpdate.log Microsoft article.

Example 477. Collecting Windows Update Logs from Microsoft Windows Server 2008 and 2012

The following configuration collects and parses logs using the im_file module. The parser is based on
the Windows Update log files section of the Microsoft documentation.

nxlog.conf
 1 <Extension _json>
 2 Module xm_json
 3 </Extension>
 4
 5 define windowsupdate /(?x)(?<Date>([\d\-]+))\s+ \
 6 (?<Time>([\d\:]+))\s+ \
 7 (?<PID>\d{3,5})\s+ \
 8 (?<TID>([\d\w]+))\s+ \
 9 (?<Category>(\w+))\s+ \
10 (?<Message>(.*)) /
11
12 <Input windowsupdate>
13 Module im_file
14 File 'C:\Windows\WindowsUpdate.log'
15 <Exec>
16 $raw_event =~ %windowsupdate%;
17 $EventTime = ($Date + ' ' + $Time);
18 to_json();
19 </Exec>
20 </Input>

Input Sample
2019-06-06 18:22:14:390 1012 1080 DnldMgr PurgeContentForPatchUpdates removing unused
directory "b7c04a03c3650087ddea456a018dba62"

Output Sample
{
  "EventReceivedTime": "2019-06-06T18:22:14.843037+02:00",
  "SourceModuleName": "windowsupdate",
  "SourceModuleType": "im_file",
  "Category": "DnldMgr",
  "Date": "2019-06-06",
  "Message": "PurgeContentForPatchUpdates removing unused directory
\"b7c04a03c3650087ddea456a018dba62\"",
  "PID": "1012",
  "TID": "1080",
  "Time": "18:22:14:390",
  "EventTime": "2019-06-06 18:22:14:390"
}

Chapter 112. Windows USB Auditing
Portable devices provide users with easy access to company-related data in a corporate environment. As the usage
of USB devices increases, so do the risks associated with them.

This section discusses the possibilities of collecting USB related events in a Microsoft Windows environment
using NXLog.

There are four ways that USB-related activity events can be tracked:

• From the Windows Event Log
• By tracing them using ETW
• By monitoring the Windows Registry
• By examining the file system

112.1. USB Events in Windows Event Log


Microsoft Windows logs USB-related events into the Windows Event Log. They are logged under the System and
Security channels, as well as in various places under the Applications and Services Logs\Microsoft\Windows path in
Event Viewer.

Events From the System Channel


These events are only generated once, during the driver installation phase, when the external device is
connected for the first time.

NOTE The logging of these events is enabled by default.

Source                      Trigger Condition        Event ID

DriverFramework-Usermode    First connection         10000

UserPNP                     Installed or updated     20001

WPD-ClassInstaller          Successful Installation  24576
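These driver-installation events can be collected selectively with im_msvistalog. The following sketch is illustrative (the instance name and the exact QueryXML block are assumptions based on typical usage of the module; adjust to your environment):

```
<Input usb_driver_install>
    Module    im_msvistalog
    # Subscribe only to the three USB driver-installation Event IDs
    # listed in the table above.
    <QueryXML>
        <QueryList>
            <Query Id="0">
                <Select Path="System">
                    *[System[(EventID=10000 or EventID=20001 or EventID=24576)]]
                </Select>
            </Query>
        </QueryList>
    </QueryXML>
</Input>
```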

Events From the Security Channel


These events are generated when some kind of USB activity is observed by the operating system.

NOTE The logging of these events is not enabled by default.

Plug and Play Events

These events are generated every time a device is plugged in. Tracking these USB-related events is useful for
audit purposes.

Object Access Audit Events

These events can be used to monitor object manipulation, such as creation and deletion, as well as other changes.
This can be useful for monitoring possible data leaks.

These two event categories can be enabled in the Local Security Policy or with the auditpol tool, using the
command below in Windows PowerShell.

auditpol /set /subcategory:"Plug and Play Events","Removable Storage","Handle Manipulation" /success:enable /failure:enable

The following command could be used to check the status of subcategories if necessary.

auditpol /get /subcategory:"Plug and Play Events","Removable Storage","Handle Manipulation"

Source                             Trigger Condition            Event ID

Plug and Play (detailed tracking)  Device connection            6416

Object Access Audit                Handle request               4656

Object Access Audit                Attempt to access an object  4663

Event 4663 is the most useful. It records exactly what happened to the object: what was accessed, which process
accessed it, and what kind of operation was performed.

Events From Applications and Services Logs\Microsoft\Windows


There are some useful USB-related logs located under the Applications and Services Logs\Microsoft\Windows path
in Windows Event Viewer; these sources are listed below. Each source contains information about a different
aspect of the subject.

Source                                              Trigger Condition        Event ID

Partition Diagnostic                                Connection and ejection  1006

NTFS                                                Connection               142

StorSVC Diagnostic                                  Connection               1001

DriverFrameworks-UserMode (not enabled by default)  Connection               1003, 1004, 2000, 2001, 2003, 2004,
                                                                             2005, 2006, 2010, 2100, 2101, 2105, 2016

                                                    Ejection                 1006, 1008, 2100, 2101, 2102, 2105,
                                                                             2106, 2900, 2901

Kernel-PnP                                          First connection         400, 410, 430

DeviceSetupManager-Admin                            First connection         112

TIP    The events created in Microsoft-Windows-DriverFrameworks-UserMode can be correlated with each other
based on their LifetimeIds, which are the same for the corresponding events.

Enabling Microsoft-Windows-DriverFrameworks-UserMode Logging


Enabling on a local computer:

In Event Viewer (eventvwr), under Applications and Services Logs › Microsoft › Windows › DriverFrameworks-
UserMode, right-click on Operational and select Enable Log.

Enabling on multiple computers in an Active Directory Domain environment using wevtutil:

1. Enable a Remote Administration exception on the firewall of the client computers via a GPO. The following
needs to be enabled. [Computer Configuration\Administrative Templates\Network\Network
Connections\Windows Firewall\Domain Profile\Windows Firewall: Allow inbound remote
administration exception]

2. Prepare a text file for the client computer names. For example, c:\computers.txt.

3. Run the following command with Domain Administrator’s privilege.

for /F %i in (c:\computers.txt) do wevtutil sl Microsoft-Windows-DriverFrameworks-UserMode/Operational /e:true /r:%i

The following PowerShell command checks the status of logging:

Get-WinEvent -ListLog Microsoft-Windows-DriverFrameworks-UserMode/Operational | Format-List IsEnabled

Example 478. Collecting Events From Windows Event Log

This configuration uses the im_msvistalog module to collect USB events. EventIDs that are useful from the
audit perspective are listed in the configuration define lines.

nxlog.conf (truncated)
 1 <Extension _xml>
 2 Module xm_xml
 3 </Extension>
 4
 5 # StorSvc Diagnostic
 6 define ID1 1001
 7 # PnP detailed tracking
 8 define ID2 6416
 9 # Partition Diagnostic
10 define ID3 1006
11 # NTFS
12 define ID4 142
13 # DriverFw preconnection
14 define ID5 1003
15 # DriverFw connection-related
16 define ID6 2003
17 # DriverFw removal-related
18 define ID7 1008
19 # System: DriverFramework-Usermode
20 define ID8 10000
21 # System: UserPNP
22 define ID9 20001
23 #Object Access Audit
24 define ID10 4656
25
26 <Input in>
27 Module im_msvistalog
28 # For Windows 2003 and earlier, use the im_mseventlog module.
29 [...]

Output Sample
{
  "EventTime": "2019-10-19T20:41:06.700337+02:00",
  "Hostname": "Host",
  "Keywords": "9223372036854775808",
  "EventType": "INFO",
  "SeverityValue": 2,
  "Severity": "INFO",
  "EventID": 1008,
  "SourceName": "Microsoft-Windows-DriverFrameworks-UserMode",
  "ProviderGuid": "{2E35AAEB-857F-4BEB-A418-2E6C0E54D988}",
  "Version": 1,
  "TaskValue": 18,
  "OpcodeValue": 2,
  "RecordNumber": 42756,
  "ExecutionProcessID": 908,
  "ExecutionThreadID": 504,
  "Channel": "Microsoft-Windows-DriverFrameworks-UserMode/Operational",
  "Domain": "NT AUTHORITY",
  "AccountName": "SYSTEM",
  "UserID": "S-1-5-18",
  "AccountType": "User",
  "Message": "The host process ({1208e11e-4339-4c06-86bb-7430fd254ee6}) has been shutdown.",
  "Category": "Shutdown of a driver host process.",
  "Opcode": "Stop",
  "UserData": "<UMDFDriverManagerHostShutdown
xmlns='http://www.microsoft.com/DriverFrameworks/UserMode/Event'><LifetimeId>{1208e11e-4339-
4c06-86bb-
7430fd254ee6}</LifetimeId><TerminateStatus>0</TerminateStatus><ExitCode>0</ExitCode></UMDFDrive
rManagerHostShutdown>",
  "EventReceivedTime": "2019-10-19T20:41:08.115696+02:00",
  "SourceModuleName": "in",
  "SourceModuleType": "im_msvistalog",
  "UMDFDriverManagerHostShutdown.LifetimeId": "{1208e11e-4339-4c06-86bb-7430fd254ee6}",
  "UMDFDriverManagerHostShutdown.TerminateStatus": "0",
  "UMDFDriverManagerHostShutdown.ExitCode": "0"
}

112.2. USB Events Available via ETW


USB-related events can be retrieved by using Event Tracing for Windows (ETW) providers. A number of
providers can be used to gain information about USB-related activity. The most notable ones are listed below.

Providers for USB2 events:

Provider Details
Microsoft-Windows-USB-USBHUB Provides USB2 hub events

Microsoft-Windows-USB-USBPORT Provides USB2 port events

Providers for USB3 events:

Provider Details
Microsoft-Windows-USB-USBHUB3 Provides USB3 hub events

Microsoft-Windows-USB-UCX Provides USB UCX events

Microsoft-Windows-USB-USBXHCI Provides USB XHCI events

Providers for Smart Card related USB events:

Provider Details
Microsoft-Windows-USB-CCID Monitors Smart Card readers using USB to connect to the computer

Microsoft-Windows-Smartcard-Trigger Triggers a log when inserting and removing a USB smart card reader

Example 479. Collecting Events from ETW

This configuration uses the im_etw module to collect logs when a USB Smart Card reader is inserted.

nxlog.conf
1 <Input etw>
2 Module im_etw
3 Provider Microsoft-Windows-Smartcard-Trigger
4 </Input>

Output Sample
{
  "SourceName": "Microsoft-Windows-Smartcard-Trigger",
  "ProviderGuid": "{AEDD909F-41C6-401A-9E41-DFC33006AF5D}",
  "EventId": 1000,
  "Version": 0,
  "ChannelID": 0,
  "OpcodeValue": 0,
  "TaskValue": 0,
  "Keywords": "0",
  "EventTime": "2019-12-05T14:12:11.453805+01:00",
  "ExecutionProcessID": 13180,
  "ExecutionThreadID": 7608,
  "EventType": "INFO",
  "SeverityValue": 2,
  "Severity": "INFO",
  "Domain": "NT AUTHORITY",
  "AccountName": "LOCAL SERVICE",
  "UserID": "S-1-5-19",
  "AccountType": "Well Known Group",
  "Flags": "EXTENDED_INFO|IS_64_BIT_HEADER|PROCESSOR_INDEX (577)",
  "ScDeviceEnumGuid": "{5a236687-d307-44e2-9241-e1c6c27ceb28}",
  "EventReceivedTime": "2019-12-05T14:12:13.457624+01:00",
  "SourceModuleName": "etw",
  "SourceModuleType": "im_etw"
}

112.3. USB Events in Windows Registry


When a USB device is inserted into or ejected from a Windows system, the Plug and Play (PnP) manager
triggers a query for the device, then stores the related information in the Windows Registry.

This information is stored in the registry keys under the following three registry paths.

• "HKLM\SYSTEM\CurrentControlSet\Enum\USB\"

• "HKLM\SYSTEM\CurrentControlSet\Enum\USBSTOR\"

• "HKLM\SYSTEM\CurrentControlSet\Control\DeviceClasses\"

The first two store information about the plugged-in USB devices. The third one stores additional information, as
USB drives are recognized as disks and mounted as drive volumes in the system. For more information, see the
USB Device Registry Entries documentation from Microsoft.

TIP These events could be correlated based on the serial numbers of the USB devices.

This configuration uses the im_regmon module to collect USB-related events from the Windows Registry. It
scans the registry every 60 seconds.

nxlog.conf
1 <Input in>
2 Module im_regmon
3 RegValue 'HKLM\SYSTEM\CurrentControlSet\Control\DeviceClasses\*'
4 RegValue 'HKLM\SYSTEM\CurrentControlSet\Enum\USB\*'
5 RegValue 'HKLM\SYSTEM\CurrentControlSet\Enum\USBSTOR\*'
6 Recursive TRUE
7 ScanInterval 60
8 </Input>

Output Sample
{
  "EventTime": "2019-10-20T11:07:56.473658+02:00",
  "Hostname": "Host",
  "EventType": "CHANGE",
  "RegistryValueName": "HKLM\\SYSTEM\\CurrentControlSet\\Enum\\USBSTOR
\\Disk&Ven_Kingston&Prod_DataTraveler_3.0&Rev_\\60A44C413A8CF320B9110053&0\\Properties\\{83da63
26-97a6-4088-9453-a1923f573b29}\\0066\\",
  "PrevValueSize": 8,
  "ValueSize": 8,
  "DigestName": "SHA1",
  "PrevDigest": "a477f34abec7da133ad5ff2dcf67b3b7e089d2d6",
  "Digest": "e47f5d5668fa31237f198a2e4cb9bc78003f3cc8",
  "Severity": "WARNING",
  "SeverityValue": 3,
  "EventReceivedTime": "2019-10-20T11:07:56.473658+02:00",
  "SourceModuleName": "in",
  "SourceModuleType": "im_regmon"
}

112.4. USB Events Logged into a File


In Windows Vista and later editions, the Plug and Play (PnP) manager and SetupAPI log events about device
installation to the SetupAPI.dev.log file. The file contains a wealth of information about all installed devices,
including the ones that have been attached to the system via USB.

The file is located in the C:\Windows\INF directory. NXLog can read, parse and forward the logs contained in this
file.

This configuration uses the im_file module to read the events from the SetupAPI.dev.log file.

nxlog.conf
1 <Input in>
2 Module im_file
3 File 'C:\Windows\INF\SetupAPI.dev.log'
4 </Input>

Chapter 113. Zeek (formerly Bro) Network Security
Monitor
NXLog can be configured to collect events generated by Zeek, formerly known as the Bro Network Security
Monitor, a powerful open source intrusion detection system (IDS) and network traffic analysis framework. The
Zeek engine captures traffic and converts it to a series of high-level events. These events are then analyzed
according to customizable policies. Zeek supports real-time alerts, data logging for further investigation, and
automatic program execution for detected anomalies. Zeek is able to analyze different protocols, including HTTP,
FTP, SMTP, and DNS, as well as detect host and port scans, match signatures, and discover SYN floods.

113.1. About Zeek Logs


Zeek creates different log files in order to record network activities such as files transferred over the network, SSL
sessions, and HTTP requests. By default, Zeek provides 60 different log files.

Table 67. A Few of Zeek’s Default Log Files

File Description
conn.log TCP/UDP/ICMP connections

dhcp.log DHCP leases

dns.log DNS activity

files.log Summaries of files transferred over the network

ftp.log FTP activity

http.log HTTP requests and replies

smtp.log SMTP transactions

ssl.log SSL/TLS handshake information

weird.log Unexpected network-level activity

Zeek produces human-readable logs in a format similar to W3C. Each log file uses a different set of fields.

dns.log Sample
#separator \x09
#set_separator ,
#empty_field (empty)
#unset_field -
#path dns
#open 2020-05-27-22-00-01
#fields ts uid id.orig_h id.orig_p id.resp_h id.resp_p proto
trans_id rtt query qclass qclass_name qtype qtype_name rcode rcode_name
AA TC RD RA Z answers TTLs rejected
#types time string addr port addr port enum count interval string
count string count string count string bool bool bool bool count
vector[string] vector[interval] bool
1590634800.248362 C1ggH7liCnwAfLjw9 192.168.1.7 53743 192.168.1.1 53 udp
18876 - 250.255.255.239.in-addr.arpa 1 C_INTERNET 12 PTR 3
NXDOMAIN F F T F 0 - - F
1590634800.259227 C1ggH7liCnwAfLjw9 192.168.1.7 53743 192.168.1.1 53 udp
18876 - 250.255.255.239.in-addr.arpa 1 C_INTERNET 12 PTR 3
NXDOMAIN F F T F 0 - - F
1590634800.274483 CTQxOg2sSOuUO5AZy8 192.168.1.7 47182 192.168.1.1 53 udp
48442 - 7.1.168.192.in-addr.arpa 1 C_INTERNET 12 PTR 3
NXDOMAIN F F T F 0 - - F

For more information about Zeek logging, see the Zeek Manual.

113.2. Parsing Zeek Logs


NXLog Enterprise Edition can parse Zeek logs with the xm_w3c module.

NOTE The following configurations have been tested with Zeek version 3.0.6 LTS.

Example 480. Using xm_w3c to Parse Zeek Logs

This configuration reads Zeek logs from a directory, parses with xm_w3c, and writes out events in JSON
format.

nxlog.conf
 1 <Extension _json>
 2 Module xm_json
 3 </Extension>
 4
 5 <Extension w3c_parser>
 6 Module xm_w3c
 7 </Extension>
 8
 9 <Input zeek>
10 Module im_file
11 File '/opt/zeek/logs/current/*.log'
12 InputType w3c_parser
13 </Input>
14
15 <Output zeek_json>
16 Module om_file
17 File '/tmp/zeek_logs.json'
18 Exec to_json();
19 </Output>

The following output from this configuration represents a sample event logged by Zeek after being parsed
by NXLog and converted to JSON format. Spacing and line breaks have been added for readability.

Output sample
{
  "ts": "1590636144.680688",
  "uid": "C1InwK3K6fhY6YdvRe",
  "id.orig_h": "192.168.1.7",
  "id.orig_p": "45500",
  "id.resp_h": "35.222.85.5",
  "id.resp_p": "80",
  "version": "1",
  "cipher": "GET",
  "curve": "connectivity-check.ubuntu.com",
  "server_name": "/",
  "resumed": null,
  "last_alert": "1.1",
  "next_protocol": null,
  "established": null,
  "cert_chain_fuids": "0",
  "client_cert_chain_fuids": "0",
  "subject": "204",
  "issuer": "No Content",
  "client_subject": null,
  "client_issuer": null,
  "validation_status": "(empty)",
  "EventReceivedTime": "2020-05-27T22:22:26.917647-05:00",
  "SourceModuleName": "zeek",
  "SourceModuleType": "im_file"
}

The xm_w3c module is recommended because it supports reading the field list from the W3C-style log file
header. For NXLog Community Edition, the xm_csv module could be used instead to parse Zeek logs. A separate
instance of xm_csv must be configured for each log type.

Example 481. Using xm_csv to Parse Zeek Logs

This example has separate xm_csv module instances for the DNS and DHCP log types. Additional CSV
parsers could be added for the remaining Zeek log types.

nxlog.conf (truncated)
 1 <Extension csv_parser_dns>
 2 Module xm_csv
 3 Fields ts, uid, id.orig_h, id.orig_p, id.resp_h, id.resp_p, proto, \
 4 trans_id, rtt, query, qclass, qclass_name, qtype, qtype_name, \
 5 rcode, rcode_name, AA, TC, RD, RA, Z, answers, TTLs, rejected
 6 Delimiter \t
 7 </Extension>
 8
 9 <Extension csv_parser_dhcp>
10 Module xm_csv
11 Fields ts, uid, id.orig_h, id.orig_p, id.resp_h, id.resp_p, mac, \
12 assigned_ip, lease_time, trans_id
13 Delimiter \t
14 </Extension>
15
16 # xm_fileop provides the `file_basename()` function
17 <Extension _fileop>
18 Module xm_fileop
19 </Extension>
20
21 <Extension json>
22 Module xm_json
23 </Extension>
24
25 <Input zeek>
26 Module im_file
27 File '/opt/zeek/spool/zeek/*.log'
28 <Exec>
29 [...]

The following output from this configuration represents a sample event logged by Zeek after being parsed
by NXLog and converted to JSON format. Spacing and line breaks have been added for readability.

Output sample
{
  "EventReceivedTime": "2020-05-29 10:55:51",
  "SourceModuleName": "zeek",
  "SourceModuleType": "im_file",
  "ts": "1590767749.877652",
  "uid": "CAhAIX1Dl5KFfnhKbi",
  "id.orig_h": "192.168.1.7",
  "id.orig_p": "42157",
  "id.resp_h": "192.168.1.1",
  "id.resp_p": "53",
  "proto": "udp",
  "trans_id": "56765",
  "rtt": "0.051801",
  "query": "zeek.org",
  "qclass": "1",
  "qclass_name": "C_INTERNET",
  "qtype": "1",
  "qtype_name": "A",
  "rcode": "0",
  "rcode_name": "NOERROR",
  "AA": "F",
  "TC": "F",
  "RD": "T",
  "RA": "T",
  "Z": "0",
  "answers": "192.0.78.212,192.0.78.150",
  "TTLs": "60.000000,60.000000",
  "rejected": "F"
}

Troubleshooting

Chapter 114. Internal Logs
When issues arise while configuring or maintaining an NXLog instance, a stepwise troubleshooting approach
(moving from the most likely and simple cases to the more complex and rare ones) generally yields favorable
results. The first step is always to inspect the internal log which NXLog generates.

114.1. Default Settings


By default, NXLog generates log messages about its own operations. These messages are essential for
troubleshooting problems, and should be checked first if NXLog is not functioning as expected.

These internal messages are written to the file defined in the LogFile directive in nxlog.conf. On Windows that
file is C:\Program Files\nxlog\data\nxlog.log; on Linux, /opt/nxlog/var/log/nxlog/nxlog.log. If this
directive is not specified, internal logging is disabled.

114.2. Enable Internal Logging


Internal logging is enabled by default after installation, but may be disabled if edits were made to nxlog.conf, or
if the LogFile directive points to a path which is unavailable. Enable internal logging by editing nxlog.conf to
ensure the LogFile directive is set to an available path.
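As a minimal sketch, the relevant global directives might look like the following (the path shown is the Linux default mentioned above; on Windows, use the path under the data directory instead):

```
# Global section of nxlog.conf (these directives are not inside any block)
LogFile   /opt/nxlog/var/log/nxlog/nxlog.log
LogLevel  INFO
```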

Some Windows applications (WordPad, for example) cannot open the log file while the NXLog
NOTE process is running because of exclusive file locking. Use a viewer that does not lock the file, like
Notepad.

114.3. Raise the Severity Level of Logged Events


By default, internal logs are generated with a log level of INFO. To get detailed information about NXLog’s
operations, the log level can be raised to DEBUG level. This level of detail reliably produces a large amount of log
messages, and so we recommend setting this log level only for sustained troubleshooting sessions.

To raise the log level temporarily (until NXLog is restarted):

On Linux, send SIGUSR2:

# kill -SIGUSR2 $PID

On Windows, send service control command 201:

> sc control nxlog 201

To raise the log level for an extended troubleshooting session:

• On all systems, set the LogLevel directive to DEBUG, then restart NXLog.

114.4. Send Customized Log Messages to the Internal Log


It may be helpful to add extra logging statements to your configuration using the log_info() procedure.

The generated messages will be visible:

• in the file defined in the LogFile global directive
• in the input from the im_internal module
• on standard output when running NXLog in the foreground with the -f command line switch

Example 482. Writing Specific Fields and Values to the Internal Log

This configuration uses the log_info() procedure to send values to the internal log. Log messages are
accepted over UDP on port 514. If keyword is found in the unparsed message, an INFO level message will
be generated.

nxlog.conf
1 <Input in>
2 Module im_udp
3 Port 514
4 <Exec>
5 if $raw_event =~ /keyword/
6 log_info("FOUND KEYWORD IN MSG: [" + $raw_event + "]");
7 </Exec>
8 </Input>

114.5. Send All Fields to the Internal Log

Example 483. Send All Fields to the Internal Log

In this configuration, the to_json() procedure from the xm_json module is used to send all the fields to the
internal log.

nxlog.conf
 1 <Extension _syslog>
 2 Module xm_syslog
 3 </Extension>
 4
 5 <Extension _json>
 6 Module xm_json
 7 </Extension>
 8
 9 <Input in>
10 Module im_tcp
11 Host 0.0.0.0
12 Port 1514
13 <Exec>
14 parse_syslog_bsd();
15
16 # Dump $raw_event
17 log_info("raw_event is: " + $raw_event);
18
19 # Dump fields in JSON
20 log_info("Other fields are: " + to_json());
21 </Exec>
22 </Input>

Output Sample
{
  "MessageSourceAddress": "127.0.0.1",
  "EventReceivedTime": "2012-05-18 13:11:35",
  "SourceModuleName": "in",
  "SourceModuleType": "im_tcp",
  "SyslogFacilityValue": 3,
  "SyslogFacility": "DAEMON",
  "SyslogSeverityValue": 3,
  "SyslogSeverity": "ERR",
  "SeverityValue": 4,
  "Severity": "ERROR",
  "Hostname": "host",
  "EventTime": "2010-10-12 12:49:06",
  "SourceName": "app",
  "ProcessID": "12345",
  "Message": "test message"
}

114.6. Send Debug Dump to the Internal Log


A simple way to quickly get a more complete picture of NXLog’s current status is to dump debug info into the
internal log. This information can be helpful in determining, for example, why an input module is not sending to
an output module. Normally, internal events are written to the log file configured with the LogFile directive.

On Linux, send SIGUSR1 to the application:

# kill -SIGUSR1 $PID

On Windows, send the service control command "200" to the application:

> sc control nxlog 200

Dumped debug info example


2017-03-29 10:05:19 INFO event queue has 2 events;jobgroup with priority 10;job↵
of module in/im_file, events: 0;job of module out/om_null, events: 0;non-module↵
job, events: 0;jobgroup with priority 99;non-module job, events: 0;[route 1]; -↵
in: type INPUT, status: RUNNING queuesize: 0; - out: type OUTPUT, status:↵
RUNNING queuesize: 0;↵

NOTE   The status is the most important piece of information in the dumped log entries. A status of
PAUSED means the input module is not able to send because the output module queue is full. In
such a case the queuesize for the corresponding output(s) would be over 99. A status of
STOPPED means the module is fully stopped, usually due to an error.

114.7. Send Internal Log to STDOUT


Run NXLog in the foreground to print internal logs to the standard output and standard error streams, which are
both visible in the terminal.

Use nxlog -f to run NXLog in the foreground.

114.8. Send Internal Log to an Existing Route


NXLog can also write internal log data into a normal route using the im_internal module. Internal log messages
can then be forwarded like any other log source.
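A minimal route for this might be sketched as follows; the output module, instance names, and file path are illustrative assumptions:

```
<Input internal>
    Module  im_internal
</Input>

<Output out_internal>
    Module  om_file
    File    '/var/log/nxlog_internal.log'   # illustrative destination
</Output>

<Route internal_route>
    Path    internal => out_internal
</Route>
```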

TIP    Local logging is more fault-tolerant than routed logging, and is therefore recommended for
troubleshooting.

NOTE   It is not possible to use a log level higher than INFO with the im_internal module. DEBUG level
messages can only be written to the local log file.

114.9. Send Information to an External File


Sending the internal log or other information to external files can also be useful while troubleshooting.

In this example configuration, the file_write() procedure (from the xm_fileop module) is used to dump
information to an external file.

nxlog.conf
 1 <Extension _fileop>
 2 Module xm_fileop
 3 </Extension>
 4
 5 <Extension _syslog>
 6 Module xm_syslog
 7 </Extension>
 8
 9 <Input in>
10 Module im_tcp
11 Host 0.0.0.0
12 Port 1514
13 <Exec>
14 parse_syslog_bsd();
15
16 # Debug $SyslogSeverity and $Hostname fields
17 file_write("/tmp/debug.txt",
18 "Severity: " + $SyslogSeverity +
19 ", Hostname: " + $Hostname + "\n");
20 </Exec>
21 </Input>

Chapter 115. Common Issues
Common issues can often be resolved by inspecting the internal logs to identify typical symptoms, finding the
corresponding description of the symptom below, and then following the suggested remediation steps.

115.1. NXLog Fails to Start


You may receive this error message in the log file when NXLog fails to start (line break added):

nxlog failed to start: Invalid keyword: ÿþ# at \↵
C:\Program Files (x86)\nxlog\conf\nxlog.conf:1↵

This issue occurs because the NXLog configuration file has been saved in either UTF-16 text encoding, or UTF-8
text encoding with a BOM header.

Open the configuration file in a text editor and save it using ASCII encoding or plain UTF-8.

TIP On Windows, you can use Notepad to correct the text encoding of this file.

115.2. Permission Errors


115.2.1. "Permission denied"
When configured to read from a file in the /var/log directory on Linux, NXLog may log the following error:

ERROR failed to open /var/log/messages;Permission denied↵

This error occurs because NXLog does not have permission to read the file with the configured User and Group.
See the Reading Rsyslog Log Files section for more information about using NXLog to read files from the
/var/log directory.
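One possible remedy is to run NXLog under a user and group that can read the files, using the global User and Group directives. The sketch below assumes a Debian-style system where members of the adm group can read /var/log; the group name is an assumption and varies by distribution:

```
# Global section of nxlog.conf
User   nxlog
Group  adm   # assumption: members of 'adm' can read /var/log on this system
```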

115.2.2. Windows Event Log Error: "ignoring source"


When NXLog is set up to access the Windows Event Log, the permissions may not be sufficient. In this case, the
NXLog log files show errors such as:

2013-01-10 13:43:30 WARNING ignoring source as it cannot be subscribed to (error code: 5)↵

If this occurs, use the wevtutil utility to grant the new user access to the Windows Event Log. See this TechNet
article for more details about the procedure: Giving Non Administrators permission to read Event Logs Windows
2003 and Windows 2008.

115.3. Connection Errors


115.3.1. "Connection refused" with im_tcp or im_ssl
When using the im_tcp and im_ssl modules to transfer data over the network, firewalls and other network issues
can prevent successful connections. This can result in Connection refused errors.

To resolve this issue:

• Check that no firewall, gateway, or other network issue is blocking the connection
• Verify that the system can resolve the host name used in the Host directive of the configuration file

115.4. Log Format Errors
115.4.1. Log Entries are Concatenated With Logstash
If you are using Logstash and find that log entries are concatenated, make sure that you are using the
json_lines codec in your Logstash server configuration.

The default json codec in Logstash sometimes fails to parse log entries passed from NXLog. Switch to the
json_lines codec for better reliability.

115.5. Data Missing Errors


115.5.1. "Missing logdata" Error
This happens when NXLog tries to evaluate a directive, but the required log data is not available in the current
context. Any dependent operations fail, the directive terminates, and the following error is logged:

missing record, assignment possibly after drop()↵

This error commonly occurs when attempting to access a field from the Exec directive of a Schedule block. Log
data is never available to a scheduled Exec directive, because its execution is not triggered by a log message.

An attempt to access a field can occur directly with a field assignment, or indirectly by calling a function or
procedure that accesses log data.
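The pitfall and a safe alternative can be sketched as follows (the input module, port, interval, and message are illustrative):

```
<Input in>
    Module  im_udp
    Port    514
    <Schedule>
        Every  1 hour
        # A field such as $raw_event is NOT available here, because a
        # scheduled Exec is not triggered by a log message; accessing it
        # would produce the "missing record" error described above.
        Exec   log_info("hourly heartbeat");   # static text is safe
    </Schedule>
</Input>
```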

115.6. Processing Unexpectedly Paused or Stopped


115.6.1. Processing Stops if an Output Fails
NXLog can send one log stream to multiple outputs. This can be configured either by using the same input in
multiple routes or by using multiple outputs in the same route (see Routes). By default, when one of the outputs
fails, NXLog will stop sending logs to all outputs. This is caused by NXLog’s flow control mechanism, which is
designed to prevent messages from being lost. Flow control pauses an Input or Processor module when the next
module in the route is not ready to accept data.

In some cases, it is preferred for NXLog to continue sending logs to the remaining active output and discard logs
for the failed output. The simplest solution is to disable flow control. This can be done globally with the global
FlowControl directive, or for the corresponding Input (and Processor, if any) modules only, with the module
FlowControl directive.

NOTE   With flow control disabled, an Input or Processor module will continue to process logs even if
the next module’s buffers are full (and the logs will be dropped).
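Disabling flow control for a single input can be sketched like this (the module, file path, and instance name are illustrative):

```
<Input in>
    Module       im_file
    File         '/var/log/app.log'   # illustrative source
    FlowControl  FALSE   # keep reading even when an output is blocked
</Input>
```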

To retain the improved message durability provided by flow control, it is possible to instead explicitly specify
when to drop logs by using a separate route for each output that may fail. Add a pm_buffer module instance to
that route, and configure the buffer to drop logs when it reaches a certain size. The output that follows the buffer
can fail without causing any Input or Processor module before the buffer to pause processing. For examples, see
the Using Buffers section.
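A minimal sketch of this pattern, with illustrative instance names and sizes (the buffer and route configuration would need to be adapted to your setup):

```
<Processor drop_buffer>
    Module     pm_buffer
    Type       Mem
    MaxSize    2048     # buffer size in kilobytes
    WarnLimit  1024     # log a warning message when half full
</Processor>

<Route unreliable_output>
    # If tcp_out blocks, only this route's buffer fills up;
    # inputs feeding other routes keep running.
    Path    in => drop_buffer => tcp_out
</Route>
```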

115.6.2. Check Open Files and Limits


When a system nears or exceeds its open files limit, significant performance penalties are typically quick to
follow. lsof (List Open Files) is a common debugging tool found on most Linux systems and can reveal
a great deal about the running system.

On Linux, run the following command to see, for example, which files NXLog has open:

$ lsof -u nxlog

This example returns the number of open files:

$ lsof -Fn -u nxlog | sort | uniq | wc -l

To check NXLog’s system limits use the following command:

$ cat /proc/$(cat /opt/nxlog/var/run/nxlog/nxlog.pid)/limits

On systems not using /proc, check the system’s open file limit with:

$ sysctl kern.maxfiles

or with:

$ sysctl fs.file-max

NOTE There is no Windows equivalent to lsof. You can use Windows Process Explorer from Microsoft’s Windows
Sysinternals to inspect which program has files or directories open.

115.6.2.1. systemd and Open Files Limit


There are certain cases where systemd ignores system-level file limits. This can cause "too many open files"
errors such as:

2019-01-22 15:26:37 ERROR SSL error, failed to load ca cert from↵
'/opt/nxlog/var/lib/nxlog/cert/agent-ca.pem', reason: Too many open files, system lib,↵
system lib↵

This scenario requires edits to the service file or an override. To check NXLog’s system limits use the following
command:

$ cat /proc/$(cat /opt/nxlog/var/run/nxlog/nxlog.pid)/limits

On systems not using /proc, check the system’s open file limit:

$ sysctl kern.maxfiles

To adjust limits for nxlog, create /etc/systemd/system/nxlog.service.d/override.conf and add the
following definition:

1 [Service]
2 LimitNOFILE=100000

Update the service settings with:

$ systemctl daemon-reload

115.6.3. Log File is in Use by Another Application


When trying to view NXLog’s internal log file on Windows, you may receive an error message indicating that
the log file is in use by another application and cannot be accessed.

To resolve this issue, either:

• Open the log file with an application that does not use exclusive locking (such as Notepad)

or

• Stop NXLog before opening the log file

Chapter 116. Debugging NXLog
When other troubleshooting fails to identify (or resolve) an issue, inspecting the NXLog agent itself can prove
useful. Some techniques are outlined below.

116.1. Generate Core Dumps


Core dumps can act as a helpful resource for the NXLog development and support teams when debugging
issues. Contact support to find out what level of assistance is available for your installation.

116.1.1. Core Dumps on Linux


NOTE It is necessary to install the NXLog debug symbols package in order to produce useful core dump files.

1. Remove the User and Group directives from the configuration. NXLog needs to be running as root:root to
produce a core dump.
2. Use ulimit to remove the core file size limit.

# ulimit -c unlimited

3. Run NXLog manually to test that it can create a core dump.

# /opt/nxlog/bin/nxlog -f

4. Find the NXLog process and kill it with the SIGABRT signal.

# kill -ABRT `ps aux | grep [/]opt/nxlog/bin/nxlog | awk '{print $2}'`

5. Verify that a core dump file was created at /opt/nxlog/var/spool/nxlog/core.

# ls -l /opt/nxlog/var/spool/nxlog/
total 26708
-rw------- 1 root root 27348992 Oct 30 08:51 core

6. If the core dump file was created successfully, run NXLog again as root in order to catch the next crash.

# /opt/nxlog/bin/nxlog -f

116.1.2. Core Dumps on Windows


Core dumps can be generated on Windows by using ProcDump from Microsoft Sysinternals.

NOTE ProcDump runs on Windows Vista and higher, and Windows Server 2008 and higher.

For example, run the following to write a full dump of the nxlog process when its handle count exceeds 10,000:

> procdump -ma nxlog -p "\Process(nxlog)\Handle Count" 10000

116.2. Inspect Memory Leaks


If NXLog’s memory usage exceeds 200 MB, there is likely a memory leak.

116.2.1. Inspecting Memory Leaks on Linux
We recommend using Valgrind on GNU/Linux to debug memory leaks.

1. Install the debug symbols (-dbg) package (for example, nxlog-dbg_3.0.1759_amd64.deb).

NOTE The NXLog debug symbols package is currently only available for Linux. This package is not included with
NXLog by default, but can be provided on request.

2. Install Valgrind.
3. Set the NoFreeOnExit directive to TRUE in the NXLog configuration file. This directive ensures that modules
are not unloaded when NXLog is stopped, which allows Valgrind to properly resolve backtraces into modules.
4. Start NXLog under Valgrind with the following command. If User is set to nxlog in the configuration, then the
command must be executed with su, otherwise Valgrind will not be able to create the massif.out file at the
end of the sampling process.

# cd /tmp
# su -lc "valgrind --tool=massif --pages-as-heap=yes /opt/nxlog/bin/nxlog -f" nxlog

5. Let NXLog run for a while until the Valgrind process shows the memory increase, then interrupt it with
Ctrl+C. The output is written to /tmp/massif.out.xxxx.
6. Send the massif.out.xxxx file with a bug report.

7. Optionally, create a report from the massif.out.xxxx file with the ms_print command:

# ms_print massif.out.xxxx

The output of the ms_print report contains an ASCII chart at the top showing the increase in memory usage.
The chart shows the sample number with the highest memory usage, marked with (peak). This is normally
at the end of the chart (the last sample). The backtrace from this sample indicates where the most memory is
allocated.

116.2.2. Inspecting Memory Leaks on Windows


Windows Process Explorer from Microsoft Sysinternals can be used to inspect memory use of all running
programs.

Once a potential source of excessive memory use has been determined, use DebugView from Microsoft
Sysinternals to inspect the application’s debug output.

Enterprise Edition Reference Manual

Chapter 117. Man Pages
117.1. nxlog(8)
NAME
nxlog - collects, processes, converts, and forwards event logs in many different formats

SYNOPSIS
nxlog [-c conffile] [-f]

nxlog [-c conffile] -v

nxlog [-r | -s]

DESCRIPTION
NXLog can process high volumes of event logs from many different sources. Supported types of log processing
include rewriting, correlating, alerting, filtering, and pattern matching. Additional features include scheduling, log
file rotation, buffering, and prioritized processing. After processing, NXLog can store or forward event logs in any
of many supported formats. Inputs, outputs, and processing are implemented with a modular architecture and a
powerful configuration language.

While the details provided here apply to NXLog installations on Linux and other UNIX-style operating systems in
particular, a few Windows-specific notes are included.

OPTIONS
-c conffile, --conf conffile
Specify an alternate configuration file conffile. To change the configuration file used by the NXLog service on
Windows, modify the service parameters.

-f, --foreground
Run in foreground, do not daemonize.

-q, --quiet
Suppress output to STDOUT/STDERR.

-h, --help
Print help.

-r, --reload
Reload configuration of a running instance.

-s, --stop
Send stop signal to a running instance.

-v, --verify
Verify configuration file syntax.

SIGNALS
Various signals can be used to control the NXLog process. Some corresponding Windows control codes are also
available; these are shown in parentheses where applicable.

SIGHUP
This signal causes NXLog to reload the configuration and restart the modules. On Windows, "sc stop nxlog"
and "sc start nxlog" can be used instead.

SIGUSR1 (200)
This signal generates an internal log message with information about the current state of NXLog and its
configured module instances. The message will be generated with INFO log level, written to the log file (if
configured with LogFile), and available via the im_internal module.

SIGUSR2 (201)
This signal causes NXLog to switch to the DEBUG log level. This is equivalent to setting the LogLevel directive
to DEBUG but does not require NXLog to be restarted.

SIGINT/SIGQUIT/SIGTERM
NXLog will exit if it receives one of these signals. On Windows, "sc stop nxlog" can be used instead.

On Linux/UNIX, a signal can be sent with the kill command. The following, for example, sends the SIGUSR1
signal:

kill -SIGUSR1 $(cat /opt/nxlog/var/run/nxlog/nxlog.pid)

On Windows, a signal can be sent with the sc command. The following, for example, sends the 200 signal:

sc control nxlog 200

FILES
/opt/nxlog/bin/nxlog
The main NXLog executable

/opt/nxlog/bin/nxlog-stmnt-verifier
This tool can be used to check NXLog Language statements. All statements are read from standard input and
then validated. If a statement is invalid, the tool prints an error to standard error and exits non-zero.

/opt/nxlog/etc/nxlog.conf
The default configuration file

/opt/nxlog/lib/nxlog/modules
The NXLog modules are located in this directory, by default. See the ModuleDir directive.

/opt/nxlog/spool/nxlog
If PersistLogqueue is set to TRUE, module queues are stored in this directory. See also LogqueueDir and
SyncLogqueue.

/opt/nxlog/spool/nxlog/configcache.dat
This is the position cache file where positions are saved. See the NoCache directive, in addition to CacheDir,
CacheFlushInterval, and CacheSync.

/opt/nxlog/var/run/nxlog/nxlog.pid
The process ID (PID) of the currently running NXLog process is written to this file. See the PidFile directive.

ENVIRONMENT
To access environment variables in the NXLog configuration, use the envvar directive.

SEE ALSO
nxlog-processor(8)

NXLog website: https://nxlog.co

NXLog User Guide: https://nxlog.co/documentation/nxlog-user-guide

COPYRIGHT
Copyright © NXLog Ltd. 2020

The NXLog Community Edition is licensed under the NXLog Public License. The NXLog Enterprise Edition is not
free and has a commercial license.

117.2. nxlog-processor(8)
NAME
nxlog-processor - performs batch log processing

SYNOPSIS
nxlog-processor [-c conffile] [-v]

DESCRIPTION
The nxlog-processor tool is similar to the NXLog daemon and uses the same configuration file. However, it runs
in the foreground and exits after all input log data has been processed. Common input sources are files and
databases. This tool is useful for log processing tasks such as:

• loading a group of files into a database,


• converting between different formats,
• testing patterns,
• doing offline event correlation, or
• checking HMAC message integrity.

While the details provided here apply to NXLog installations on Linux and other UNIX-style operating systems in
particular, a few Windows-specific notes are included.

OPTIONS
-c conffile, --conf conffile
Specify an alternate configuration file conffile.

-h, --help
Print help.

-v, --verify
Verify configuration file syntax.

FILES

/opt/nxlog/bin/nxlog-processor
The main NXLog-processor executable

/opt/nxlog/bin/nxlog-stmnt-verifier
This tool can be used to check NXLog Language statements. All statements are read from standard input and
then validated. If a statement is invalid, the tool prints an error to standard error and exits non-zero.

/opt/nxlog/etc/nxlog.conf
The default configuration file

/opt/nxlog/spool/nxlog/configcache.dat
This is the position cache file where positions are saved. To disable position caching, as may be desirable
when using nxlog-processor, set the NoCache directive to TRUE.

ENVIRONMENT
To access environment variables in the NXLog configuration, use the envvar directive.

SEE ALSO
nxlog(8)

NXLog website: https://nxlog.co

NXLog User Guide: https://nxlog.co/documentation/nxlog-user-guide

COPYRIGHT
Copyright © NXLog Ltd. 2020

The NXLog Community Edition is licensed under the NXLog Public License. The NXLog Enterprise Edition is not
free and has a commercial license.

Chapter 118. Configuration
An NXLog configuration consists of global directives, module instances, and routes. The following sections list the
core NXLog directives provided. Additional directives are provided at the module level.

A configuration is valid without any module instances specified; however, for NXLog to process data, the
configuration should contain at least one input module instance and at least one output module instance. If no
route is specified, a route will be automatically generated; this route will connect all input module instances and
all output module instances in a single path.

A module instance name may contain letters, digits, periods (.), and underscores (_). The first character in a
module instance name must be a letter or an underscore. The corresponding regular expression is
[a-zA-Z_][a-zA-Z0-9._]*.

A route instance name may contain letters, digits, periods (.), and underscores (_). The first character in a route
instance name must be a letter, a digit, or an underscore. The corresponding regular expression is
[a-zA-Z0-9_][a-zA-Z0-9._]*.

Inserting comments within a configuration is accomplished exactly as it is in shell scripts: any text written on the
line after the hash mark (#) is ignored and treated as a comment, including any backslash (\). Multi-line
comments need a # on each line.
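For example (the directive shown is only a placeholder):

```
# This whole line is a comment, and the trailing backslash \
# does NOT continue the comment -- this line needs its own hash mark.
define BASEDIR /var/log   # an inline comment after a directive
```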

118.1. General Directives


The following directives can be used throughout the configuration file. These directives are handled by the
configuration parser, and substitutions occur before the configuration check.

define
Use this directive to configure a constant or macro to be used later. Refer to a define by surrounding the
name with percent signs (%). Enclose a group of statements with curly braces ({}).

Example 484. Using the define Directive

This configuration shows three example defines: BASEDIR is a constant, IMPORTANT is a statement, and
WARN_DROP is a group of statements.

nxlog.conf
 1 define BASEDIR /var/log
 2 define IMPORTANT if $raw_event =~ /important/ \
 3 $Message = 'IMPORTANT ' + $raw_event;
 4 define WARN_DROP { log_warning("dropping message"); drop(); }
 5
 6 <Input messages>
 7 Module im_file
 8 File '%BASEDIR%/messages'
 9 </Input>
10
11 <Input proftpd>
12 Module im_file
13 File '%BASEDIR%/proftpd.log'
14 <Exec>
15 %IMPORTANT%
16 if $raw_event =~ /dropme/ %WARN_DROP%
17 </Exec>
18 </Input>

envvar
This directive works like define, except that the value is retrieved from the environment.

Example 485. Using the envvar Directive

This example is like the previous one, but BASEDIR is fetched from the environment instead.

nxlog.conf
1 envvar BASEDIR
2
3 <Input in>
4 Module im_file
5 File '%BASEDIR%/messages'
6 </Input>

include
This directive allows a specified file or files to be included in the current NXLog configuration. Wildcarded
filenames are supported.

NOTE The SpoolDir directive only takes effect after the configuration is parsed, so relative paths specified with
the include directive must be relative to the working directory NXLog was started from.

The examples below provide various ways of using the include directive.

Example 486. Using the include Directive

This example includes a file relative to the working directory.

nxlog.conf
1 include modules/module1.conf

When multiple .conf files are to be used, they can be saved in the nxlog.d directory and will be automatically
included in the NXLog configuration along with the nxlog.conf file. Adding .conf files to the nxlog.d
directory extends the NXLog configuration without any modification to the nxlog.conf file.

TIP This solution could be useful to specify OS-specific configuration snippets (like windows2003.conf) or
application-specific snippets (such as syslog.conf).

Inclusion of subdirectories inside the configuration directory is not supported.

Example 487. Including Files With Wildcarded Names

This example includes all matching files from the nxlog.d directory and uses absolute paths on Unix-
like systems and Windows.

nxlog.conf
1 include /etc/nxlog.d/*.conf

nxlog.conf
1 include C:\Program Files\nxlog\conf\nxlog.d\*.conf

include_stdout
This directive accepts the name of an external command or script. Configuration content will be read from
the command’s standard output. Command arguments are not supported.

Example 488. Using the include_stdout Directive

This directive executes the custom script, which fetches the configuration.

nxlog.conf
1 include_stdout /opt/nxset/etc/fetch_conf.sh

118.2. Global Directives


BatchSize
Input and processor modules batch multiple records together before forwarding them to the next module in
the route. This directive specifies the maximum number of records to accumulate in a batch. If not specified, it
defaults to a maximum of 50 records per batch. The global batch size can also be overridden per module with
the module-level BatchSize directive.

BatchFlushInterval
This directive specifies the timeout, in seconds, before a record batch is forwarded to the next module in the
route, even if the batch has accumulated fewer than the maximum number of records given by the BatchSize
directive. If this directive is not specified, it defaults to 0.1 (100 milliseconds). It can also be overridden per
module with the module-level BatchFlushInterval directive.
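A short sketch combining the two directives (the values and file path are illustrative):

```
# Global defaults for all input and processor instances
BatchSize          100
BatchFlushInterval 0.5

<Input in>
    Module    im_file
    File      '/var/log/messages'
    BatchSize 10    # per-module override of the global default
</Input>
```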

CacheDir
This directive specifies a directory where the cache file (configcache.dat) should be written. This directive
has a compiled-in value which is used by default.

CacheFlushInterval
This directive specifies how often the in-memory position cache should be flushed to the cache file. A value of
0 indicates that the cache should only be flushed to the file when the agent shuts down; if the server or agent
crashes, the current position cache will be lost. A positive integer indicates the length of the interval between
flushes of the cache, in seconds. The value always specifies that the cache should be flushed to file
immediately whenever a module sets a value. If this directive is not specified, the default value of 5 seconds is
used. See also the CacheSync directive below.

CacheInvalidationTime
NXLog persists saved positions in a cache that is written to disk. To prevent the cache from growing indefinitely,
an invalidation period is used; this directive defines that period. If the last modification time of an entry exceeds
the value set with this directive, the entry is discarded when the cache is read from disk. This directive accepts a
positive integer value. If the directive is not specified, the default value of 864000 seconds (10 days) is used.

CacheSync
When the in-memory position cache is flushed to the cache file, the cache may not be immediately written to
disk due to file system buffering. When this directive is set to TRUE, the cache file is synced to disk immediately
after it is written. The default is FALSE. CacheSync has no effect if CacheFlushInterval is set to 0. Setting this to
TRUE when CacheFlushInterval is set to always greatly reduces performance, though only this combination
guarantees crash-safe operation.
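For example, to trade performance for a crash-safe position cache, the two directives can be combined like this
(an illustrative sketch):

```
# Flush the position cache on every update and sync it to disk immediately
CacheFlushInterval always
CacheSync          TRUE
```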

DateFormat
This directive can be used to change the default date format as it appears in the LogFile, in the $raw_event
generated by the modules, and when a datetime type value is converted to a string. The following values are
accepted (corresponding to the formats accepted by the NXLog strftime() function):

• YYYY-MM-DD hh:mm:ss (the default)

• YYYY-MM-DDThh:mm:ssTZ

• YYYY-MM-DDThh:mm:ss.sTZ

• YYYY-MM-DD hh:mm:ssTZ

• YYYY-MM-DD hh:mm:ss.sTZ

• YYYY-MM-DDThh:mm:ssUTC

• YYYY-MM-DDThh:mm:ss.sUTC

• YYYY-MM-DD hh:mm:ssUTC

• YYYY-MM-DD hh:mm:ss.sUTC

• A format string accepted by the strftime() function in the C library
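For example, to emit timestamps with fractional seconds in UTC, using one of the formats listed above:

```
DateFormat YYYY-MM-DDThh:mm:ss.sUTC
```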

EscapeGlobPatterns
This boolean directive specifies whether the backslash (\) character in glob patterns or wildcarded entries
should be enabled as an escape sequence. If set to TRUE, this directive implies that the backslash character (
\) needs to be escaped by another backslash character (\\). File and directory patterns on Windows do not
require escaping and are processed as non-escaped even if this directive is set to TRUE. The default is FALSE.
This directive is used in im_file, im_fim, and im_regmon modules.

FlowControl
This optional boolean directive specifies the flow control default for input and processor module instances.
Output module instances do not inherit from this directive. By default, the global FlowControl value is TRUE.
See the description of the module level FlowControl directive for more information.

FlowControlFIFO
This boolean directive, when set to TRUE (the default), enables FIFO mode for modules that have flow control
disabled. In this mode, when the log queue of a module is full, older records are dropped in order to make
room for newer ones. When set to FALSE, the old behavior is in effect: while the log queue is full, no records are
dropped, but new incoming records are discarded instead.

GenerateDateInUTC
If set to TRUE, this boolean directive specifies that UTC should be used when generating dates in the format
YYYY-MM-DD hh:mm:ss. If set to FALSE, local time will be used when generating dates in this format. The
default is FALSE. See also ParseDateInUTC.

Group
Similar to User, NXLog will set the group ID to run under. The group can be specified by name or numeric ID.
This directive has no effect when running on the Windows platform or with nxlog-processor(8).

IgnoreErrors
If set to FALSE, NXLog will stop when it encounters a problem with the configuration file (such as an invalid
module directive) or if there is any other problem which would prevent all modules functioning correctly. If
set to TRUE, NXLog will start after logging the problem. The default value is TRUE.

LogFile
NXLog will write its internal log to this file. If this directive is not specified, self-logging is disabled. Note that
the im_internal module can also be used to direct internal log messages to files or other output destinations,
but it does not support log levels below INFO. The LogFile directive is especially useful for debugging.

LogLevel
This directive has five possible values: CRITICAL, ERROR, WARNING, INFO, and DEBUG. It will set both the logging
level used for LogFile and the standard output if NXLog is started in the foreground. The default LogLevel is
INFO. This directive can also be used at the module level.
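A typical self-logging setup for debugging might look like this (the file path is illustrative):

```
LogFile  /opt/nxlog/var/log/nxlog/nxlog.log
LogLevel DEBUG
```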

LogqueueDir
This directive specifies the directory where the files of the persistent queues are stored, for Processor and
Output module instances. Even if PersistLogqueue is set to FALSE, NXLog will persist in-memory queues to
the LogqueueDir on shutdown. If not specified, the default is the value of CacheDir. This directive can also be
used at the module level to specify a log queue directory for a specific module instance.

LogqueueSize
This directive controls the size of the log queue for all Processor and Output module instances. The default is
100 record batches. See the module-level LogqueueSize directive for more information.

ModuleDir
By default the NXLog binaries have a compiled-in value for the directory to search for loadable modules. This
can be overridden with this directive. The module directory contains sub-directories for each module type
(extension, input, output, and processor), and the module binaries are located in those.

NoCache
Some modules save data to a cache file which is persisted across a shutdown/restart. Modules such as im_file
will save the file position in order to continue reading from the same position after a restart as before. This
caching mechanism can be explicitly turned off with this directive. This is mostly useful with nxlog-
processor(8) in offline mode. If this boolean directive is not specified, it defaults to FALSE (caching is enabled).
Note that many input modules, such as im_file, provide a SavePos directive that can be used to disable the
position cache for a specific module instance. SavePos has no effect if the cache is disabled globally with
NoCache TRUE.

NoFreeOnExit
This directive is for debugging. When set to TRUE, NXLog will not free module resources on exit, allowing
valgrind to show proper stack trace locations in module function calls. The default value is FALSE.

Panic
A panic condition is a critical state which usually indicates a bug. Assertions are used in NXLog code for
checking conditions where the code will not work unless the asserted condition is satisfied, and for security.
Failing assertions result in a panic and suggest a bug in the code. A typical case is checking for NULL pointers
before pointer dereference. This directive can take three different values: HARD, SOFT, or OFF. HARD will cause
an abort in case the assertion fails. This is how most C based programs work. SOFT will cause an exception to
be thrown at the place of the panic/assertion. In case of NULL pointer checks this is identical to a
NullPointerException in Java. It is possible that NXLog can recover from exceptions and can continue to
process log messages, or at least the other modules can. In case of assertion failure the location and the
condition is printed at CRITICAL log level in HARD mode and ERROR log level in SOFT mode. If Panic is set to
OFF, the failing condition is printed in the logs but the execution will continue on the normal code path. Most
of the time this will result in a segmentation fault or other undefined behavior, though in some cases turning
off a buggy assertion or panic will solve the problems caused by it in HARD/SOFT mode. The default value for
Panic is SOFT.

ParseDateInUTC
If set to TRUE, this boolean directive specifies that dates in the format YYYY-MM-DD hh:mm:ss should be
parsed as UTC. If set to FALSE, dates in this format are assumed to be in local time. The default is FALSE. See
also GenerateDateInUTC.

PersistLogqueue
This boolean directive specifies that log queues of Processor and Output module instances should be disk-
based. See the module level PersistLogqueue directive for more information.

PidFile
Under Unix operating systems, NXLog writes a PID file as other system daemons do. The default PID file can
be overridden with this directive in case multiple daemon instances need to be running. This directive has no
effect when running on the Windows platform or with nxlog-processor(8).

ReadTimeout
This directive is specific to nxlog-processor(8) and controls its exit condition. Its value is a timeout in seconds. If
nxlog-processor(8) receives no data to process during this period, it stops waiting for more data and exits. The
default value is 0.05 seconds; any non-negative value less than 0.05 is treated as 0.05. If nxlog-processor(8) is
configured to read data from the network, it is recommended to set this to a higher value.

RootDir
NXLog will set its root directory to the value specified with this directive. If SpoolDir is also set, this will be
relative to the value of RootDir (chroot() is called first). This directive has no effect when running on the
Windows platform or with the nxlog-processor(8).

SpoolDir
NXLog will change its working directory to the value specified with this directive. This is useful with files
created through relative filenames (for example, with om_file) and in case of core dumps. This directive has
no effect with the nxlog-processor(8).

StringLimit
To protect against memory exhaustion (and possibly a denial-of-service) caused by over-sized strings, there is
a limit on the length of each string (in bytes). The default value is 5242880 bytes (strings will be truncated at 5
MiB).

SuppressRepeatingLogs
Under some circumstances it is possible for NXLog to generate an extreme amount of internal logs consisting
of the same message due to an incorrect configuration or a software bug. In this case, the LogFile can quickly
consume the available disk space. With this directive, NXLog will write at most 2 lines per second if the same
message is generated successively, by logging "last message repeated n times" messages. If this boolean
directive is not specified, it defaults to TRUE (suppression of repeating messages is enabled).

SyncLogqueue
When this directive is set to TRUE and PersistLogqueue is enabled, the disk-based queues of Processor and
Output module instances will be immediately synced after each new entry is added to the queue. This greatly
reduces performance but makes the queue more reliable and crash-safe. This directive can also be used at
the module level.

Threads
This directive specifies the number of worker threads to use. The number of the worker threads is calculated
and set to an optimal value if this directive is not defined. Do not set this unless you know what you are
doing.

User
NXLog will drop to the user specified with this directive. This is useful if NXLog needs privileged access to
some system resources (such as kernel messages or to bind a port below 1024). On Linux systems NXLog will
use capabilities to access these resources. In this case NXLog must be started as root. The user can be
specified by name or numeric ID. This directive has no effect when running on the Windows platform or with
nxlog-processor(8).

118.3. Common Module Directives


The following directives are common to all modules. The Module directive is mandatory.

Module
This mandatory directive specifies which binary should be loaded. The module binary has a .so extension on
Unix and a .dll on Windows platforms and resides under the ModuleDir location. Each module binary name
is prefixed with im_, pm_, om_, or xm_ (for input, processor, output, and extension, respectively). It is possible for
multiple instances to use the same loadable binary. In this case the binary is only loaded once but
instantiated multiple times. Different module instances may have different configurations.

BatchSize
For input and processor modules, specifies how many records will be collected by the module, and forwarded
together as a batch to the next module in the route. See the description of the global BatchSize directive for
more information.

BatchFlushInterval
For input and processor modules, specifies the timeout before a record-batch is forwarded to the next
module in the route. See the description of the global BatchFlushInterval directive for more information.

BufferSize
This directive specifies the size of the read or write buffer (in bytes) used by the Input or Output module,
respectively. The BufferSize directive is only valid for Input and Output module instances. The default buffer
size is 65000 bytes.

FlowControl
This optional boolean directive specifies whether the module instance should use flow control. FlowControl
is only valid for input, processor, and output modules. For input and processor modules, the FlowControl
default is inherited from the global FlowControl directive (which defaults to TRUE). To maintain backward
compatibility, the FlowControl default for output modules is TRUE regardless of the global FlowControl value.

Under normal conditions, when all module instances are operating normally and buffers are not full, flow
control has no effect. However, if a module becomes blocked and is unable to process events, flow control
will automatically suspend module instances as necessary to prevent events from being lost. For example,
consider a route with im_file and om_tcp. If a network error blocks the output, NXLog will stop reading events
from file until the network error is resolved. If flow control is disabled, NXLog will continue reading from file
and processing events; the events will be discarded when passed to the output module.

In most cases, flow control should be left enabled, but it may be necessary to disable it in certain scenarios. It
is recommended to leave flow control enabled globally and only specify the FlowControl directive in two
cases. First, set FlowControl FALSE on any input module instance that should never be suspended. Second,
if a route contains multiple output instances, it may be desirable to continue sending events to other outputs
even if one output becomes blocked—set FlowControl FALSE on the output(s) where events can be
discarded to prevent the route from being blocked.
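
The second case can be sketched as follows: one route with two outputs, where flow control is disabled on the output whose events may be discarded. The hosts and instance names are illustrative, and an input instance named from_file is assumed to exist elsewhere in the configuration:

```
<Output to_siem>
    Module      om_tcp
    Host        siem.example.com
    Port        1514
</Output>

<Output to_backup>
    Module      om_tcp
    Host        backup.example.com
    Port        1514
    # If this output blocks, discard its events rather than
    # suspending the whole route
    FlowControl FALSE
</Output>

<Route r>
    Path from_file => to_siem, to_backup
</Route>
```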

Internally, flow control takes effect when the log queue of the next module instance in the route becomes full.
If flow control is enabled, the instance suspends. If flow control is disabled, events are discarded. If the next
module in the route is an output module, both FlowControl values are consulted—flow control is enabled
only if both are TRUE.

WARNING Suspending an im_linuxaudit instance could cause an Audit backlog, blocking processes
that generate Audit messages. Suspending an im_udp instance is ineffective, because
UDP provides no receipt acknowledgement. Suspending an im_uds instance when
collecting local Syslog messages from the /dev/log Unix domain socket will cause the
syslog() system call to block in any programs trying to write to the system log. It is
generally recommended to disable flow control in these cases.

InputType
This directive specifies the name of the registered input reader function to be used for parsing raw events
from input data. Names are treated case insensitively. This directive is only available for stream oriented
input modules: im_file, im_exec, im_ssl, im_tcp, im_udp, im_uds, and im_pipe. These modules work by filling
an input buffer with data read from the source. If the read operation was successful (there was data coming
from the source), the module calls the specified callback function. If this is not explicitly specified, the module
default will be used. Note that im_udp may only work properly if log messages do not span multiple packets
and are within the UDP message size limit. Otherwise the loss of a packet may lead to parsing errors.

Modules may provide custom input reader functions. Once these are registered into the NXLog core, the
modules listed above will be capable of using these. This makes it easier to implement custom protocols
because these can be developed without concern for the transport layer.

The following input reader functions are provided by the NXLog core:

Binary
The input is parsed in the NXLog binary format, which preserves the parsed fields of the event records.
The LineBased reader will automatically detect event records in the binary NXLog format, so it is only
recommended to configure InputType to Binary if compatibility with other logging software is not
required.

Dgram
Once the buffer is filled with data, it is considered to be one event record. This is the default for the
im_udp input module, since UDP Syslog messages arrive in separate packets.

LineBased
The input is assumed to contain event records separated by newlines. It can handle both CRLF (Windows)
and LF (Unix) line-breaks. Thus if an LF (\n) or CRLF (\r\n) is found, the function assumes that it has
reached the end of the event record. If the input begins with the UTF-8 byte order mark (BOM) sequence
(0xEF,0xBB,0xBF), that sequence is automatically skipped.

Example 489. TCP Input Assuming NXLog Format

This configuration explicitly specifies the Binary InputType.

nxlog.conf
1 <Input tcp>
2 Module im_tcp
3 Port 2345
4 InputType Binary
5 </Input>

With the im_file module, this directive also supports one or several stream processors to process input data
before reading.

The input log data is processed from the left-most to the right-most processor like stream-name-one → …
→ stream-name-n. The example syntax of the InputType directive with stream processors is shown below.

1 InputType module-name.stream-name-one, module-name.stream-name-two, InputReaderFunction

The format of declaration can be either in module-name.stream-name or module-name format, where:

• The module-name is the name of an extension module instance which implements stream processing.
Currently, this is supported by the xm_crypto and xm_zlib modules.
• The stream-name is the name of a stream processor which is implemented by the module-name. If not
specified, the first available stream will be used by the module-name instance.
• The InputType directive can contain several instances of the same extension module.

For more details, see the documentation of the xm_crypto and xm_zlib modules.

Example 490. Decompression and Decryption of Data

This configuration contains one instance of the xm_zlib module to decompress the input data and one
instance of the xm_crypto module to decrypt it.

The result is read by the im_file module using the LineBased function.

nxlog.conf
 1 <Extension crypto>
 2 Module xm_crypto
 3 UseSalt TRUE
 4 PasswordFile /tmp/passwordfile
 5 </Extension>
 6
 7 <Extension zlib>
 8 Module xm_zlib
 9 Format gzip
10 CompressionLevel 9
11 CompBufSize 16384
12 DecompBufsize 16384
13 </Extension>
14
15 <Input from_file>
16 Module im_file
17 File '/tmp/input'
18 InputType crypto.aes_decrypt, zlib.decompress, LineBased
19 </Input>

LogLevel
This directive can be used to override the value of the global LogLevel. This can be useful for debugging
purposes when DEBUG is set at the module level, or a noisy module can be silenced when set to CRITICAL or
ERROR.

Example 491. Using LogLevel at Module Level

1 <Input fim>
2 Module im_fim
3 LogLevel Debug
4 ...
5 </Input>

LogqueueDir
This directive specifies the directory where the files of the persistent queue are stored. LogqueueDir is only
valid for Processor and Output module instances. See the description of the global LogqueueDir for more
information.

LogqueueSize
Every Processor and Output instance has an input log queue for events waiting to be processed by that
module. The size of the queue is measured in batches of event records, and can be set with this
directive—the default is 100 batches. When the log queue of a module instance is full and FlowControl is
enabled for the preceding module, the preceding module will be suspended. If flow control is not enabled
for the preceding module, events will be discarded. This directive is only valid for Processor and Output
module instances. This directive can be used at the global level to affect all modules.

OutputType
This directive specifies the name of the registered output writer function to be used for formatting raw events
when storing or forwarding output. Names are treated case insensitively. This directive is only available for
stream oriented output modules: om_exec, om_file, om_pipe, om_redis, om_ssl, om_tcp, om_udp,
om_udpspoof, om_uds, and om_zmq. These modules work by filling the output buffer with data to be written
to the destination. The specified callback function is called before the write operation. If this is not explicitly
specified, the module default will be used.

Modules may provide custom output formatter functions. Once these are registered into the NXLog core, the
modules listed above will be capable of using these. This makes it easier to implement custom protocols
because these can be developed without concern for the transport layer.

The following output writer functions are provided by the NXLog core:

Binary
The output is written in the NXLog binary format which preserves parsed fields of the event records.

Dgram
Once the buffer is filled with data, it is considered to be one event record. This is the default for the
om_udp, om_udpspoof, om_redis, and om_zmq output modules.

LineBased
The output will contain event records separated by newlines. The record terminator is CRLF (\r\n) on
Windows and LF (\n) on Unix.

LineBased_CRLF
The output will contain event records separated by Windows style newlines where the record terminator is
CRLF (\r\n).

LineBased_LF
The output will contain event records separated by Unix style newlines where the record terminator is LF
(\n).
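
For example, to force Unix-style record terminators even when running on Windows, the writer function can be set explicitly. This is a sketch; the file path is illustrative:

```
<Output to_file>
    Module      om_file
    File        'C:\logs\output.log'
    # Terminate each record with LF regardless of platform
    OutputType  LineBased_LF
</Output>
```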

Example 492. TCP Output Sending Messages in NXLog Format

This configuration explicitly specifies the Binary OutputType.

nxlog.conf
1 <Output tcp>
2 Module om_tcp
3 Port 2345
4 Host localhost
5 OutputType Binary
6 </Output>

With the om_file module, this directive also supports one or several stream processors to process output data
after writing.

The output log data is processed from the left-most to the right-most processor like stream-name-one → …
→ stream-name-n. The example syntax of the OutputType directive with stream processors is displayed
below.

1 OutputType OutputWriterFunction, module-name.stream-name-one, module-name.stream-name-two

The format of declaration can be either in module-name.stream-name or module-name format, where:

• The module-name is the name of an extension module instance which implements stream processing.
Currently, this is supported by the xm_crypto and xm_zlib modules.
• The stream-name is the name of a stream processor which is implemented by the module-name. If not
specified, the first available stream will be used by the module-name instance.
• The OutputType directive can contain several instances of the same extension module.

NOTE Rotation of files is done automatically when encrypting log data with the xm_crypto module.

For more details, see the documentation of the xm_crypto and xm_zlib modules.

Example 493. Compression and Encryption of Data

This configuration writes the data using the LineBased function of the om_file module. It also contains
one instance of the xm_zlib module to compress the output data and one instance of the xm_crypto
module to encrypt them.

nxlog.conf
 1 <Extension zlib>
 2 Module xm_zlib
 3 Format gzip
 4 CompressionLevel 9
 5 CompBufSize 16384
 6 DecompBufsize 16384
 7 </Extension>
 8
 9 <Extension crypto>
10 Module xm_crypto
11 UseSalt TRUE
12 PasswordFile /tmp/passwordfile
13 </Extension>
14
15 <Input from_tcp>
16 Module im_tcp
17 Host 192.168.31.11
18 Port 10500
19 </Input>
20
21 <Output to_file>
22 Module om_file
23 File '/tmp/output'
24 OutputType LineBased, zlib.compress, crypto.aes_encrypt
25 </Output>

PersistLogqueue
When a module passes an event to the next module along the route, it puts it into the next module’s queue.
This queue can be either a memory-based or disk-based (persistent) queue. When this directive is set to
TRUE, the module will use a persistent (disk-based) queue. With the default value of FALSE, the module’s
incoming log queue will not be persistent (will be memory-based); however, in-memory log queues will still be
persisted to disk on shutdown. This directive is only valid for Processor and Output module instances. This
directive can also be used at the global level.

SyncLogqueue
When this directive is set to TRUE and PersistLogqueue is enabled, the disk-based queue will be immediately
synced after each new entry is added to the queue. This greatly reduces performance but makes the queue
more reliable and crash-safe. This directive is only valid for Processor and Output module instances. This
directive can be used at the global level to affect all modules.
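
The queue-related directives above might be combined on an output instance as in the following sketch; the host, port, and queue directory are illustrative assumptions:

```
<Output to_tcp>
    Module          om_tcp
    Host            192.168.1.1
    Port            1514
    # Keep the incoming log queue on disk so queued events
    # survive a crash
    PersistLogqueue TRUE
    # Sync the disk-based queue after every entry; slower,
    # but crash-safe
    SyncLogqueue    TRUE
    # Store the queue files in a dedicated directory
    LogqueueDir     '/var/spool/nxlog'
    # Queue up to 200 batches before flow control takes effect
    LogqueueSize    200
</Output>
```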

118.3.1. Exec
The Exec directive/block contains statements in the NXLog language which are executed when a module receives
a log message. This directive is available in all input, processor, and output modules. It is not available in most
extension modules because these do not handle log messages directly (the xm_multiline and xm_rewrite
modules do provide Exec directives).

Example 494. Simple Exec Statement

This statement assigns a value to the $Hostname field in the event record.

nxlog.conf
1 Exec $Hostname = 'myhost';

Each directive must be on one line unless it contains a trailing backslash (\) character.

Example 495. Exec Statement Spanning Multiple Lines

This if statement uses line continuation to span multiple lines.

nxlog.conf
1 Exec if $Message =~ /something interesting/ \
2 log_info("found something interesting"); \
3 else \
4 log_debug("found nothing interesting");

More than one Exec directive or block may be specified. They are executed in the order of appearance. Each
Exec directive must contain a full statement. Therefore it is not possible to split the lines in the previous example
into multiple Exec directives. It is only possible to split the Exec directive if it contains multiple statements.

Example 496. Equivalent Use of Statements in Exec

This example shows two equivalent uses of the Exec directive.

nxlog.conf
1 Exec log_info("first"); \
2 log_info("second");

This produces identical behavior:

nxlog.conf
1 Exec log_info("first");
2 Exec log_info("second");

The Exec directive can also be used as a block. To use multiple statements spanning more than one line, it is
recommended to use the <Exec> block instead. When using a block, it is not necessary to use the backslash (\)
character for line continuation.

Example 497. Using the Exec Block

This example shows two equivalent uses of Exec, first as a directive, then as a block.

nxlog.conf
1 Exec log_info("first"); \
2 log_info("second");

The following Exec block is equivalent. Notice the backslash (\) is omitted.

nxlog.conf
1 <Exec>
2 log_info("first");
3 log_info("second");
4 </Exec>

118.3.2. Schedule
The Schedule block can be used to execute periodic jobs, such as log rotation or any other task. Scheduled jobs
have the same priority as the module. The Schedule block has the following directives:

Every
This directive schedules execution at periodic intervals, which is a simple alternative for schedules that
the crontab format cannot express (for example, running a job every five days).
It takes a positive integer value with an optional unit. The unit can be one of the following: sec, min, hour,
day, or week. If the unit is not specified, the value is assumed to be in seconds. The Every directive cannot be
used in combination with RunCount 1.

Exec
The mandatory Exec directive takes one or more NXLog statements. This is the code which is actually being
scheduled. Multiple Exec directives can be specified within one Schedule block. See the module-level Exec
directive, this behaves the same. Note that it is not possible to use fields in statements here because
execution is not triggered by log messages.

First
This directive sets the first execution time. If the value is in the past, the next execution time is calculated as if
NXLog had been running since then; jobs will not be run to make up for executions missed in the past. The
directive takes a datetime literal value.

RunCount
This optional directive can be used to specify a maximum number of times that the corresponding Exec
statement(s) should be executed. For example, with RunCount 1 the statement(s) will only be executed once.

When
This directive takes a value similar to a crontab entry: five space-separated definitions for minute, hour, day,
month, and weekday. See the crontab(5) manual for the field definitions. It supports lists as comma
separated values and/or ranges. Step values are also supported with a slash. Names for months and week
days are not supported; these must be specified with numeric values. The following extensions are also supported:

@startup Run once when NXLog starts.
@reboot (Same as @startup)
@yearly Run once a year, "0 0 1 1 *".
@annually (Same as @yearly)
@monthly Run once a month, "0 0 1 * *".
@weekly Run once a week, "0 0 * * 0".
@daily Run once a day, "0 0 * * *".
@midnight (Same as @daily)
@hourly Run once an hour, "0 * * * *".
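
As a sketch, an @ extension can stand in for a full crontab expression. This Schedule block, placed inside a module instance, logs an illustrative message once a day at midnight:

```
<Schedule>
    # Equivalent to "When 0 0 * * *"
    When @daily
    Exec log_info("daily run at " + now());
</Schedule>
```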

Example 498. Scheduled Exec Statements

This example shows two scheduled Exec statements in an im_tcp module instance. The first is executed
every second, while the second uses a crontab(5) style value.

nxlog.conf
 1 <Input in>
 2 Module im_tcp
 3 Port 2345
 4
 5 <Schedule>
 6 Every 1 sec
 7 First 2010-12-17 00:19:06
 8 Exec log_info("scheduled execution at " + now());
 9 </Schedule>
10
11 <Schedule>
12 When 1 */2 2-4 * *
13 Exec log_info("scheduled execution at " + now());
14 </Schedule>
15 </Input>

118.4. Route Directives


The following directives can be used in Route blocks. The Path directive is mandatory.

Path
The data flow is defined by the Path directive. First the instance names of Input modules are specified. If
more than one Input reads log messages which feed data into the route, then these must be separated by
commas. The list of Input modules is followed by an arrow (=>). Either processor modules or output modules
follow. Processor modules must be separated by arrows, not commas, because they operate in series, unlike
Input and Output modules which work in parallel. Output modules are separated by commas. The Path must
specify at least an Input and an Output. The syntax is illustrated by the following:

Path INPUT1[, INPUT2...] => [PROCESSOR1 [=> PROCESSOR2...] =>] OUTPUT1[, OUTPUT2...]

Example 499. Specifying Routes

The following configuration shows modules being used in two routes.

nxlog.conf
 1 <Input in1>
 2 Module im_null
 3 </Input>
 4
 5 <Input in2>
 6 Module im_null
 7 </Input>
 8
 9 <Processor p1>
10 Module pm_null
11 </Processor>
12
13 <Processor p2>
14 Module pm_null
15 </Processor>
16
17 <Output out1>
18 Module om_null
19 </Output>
20
21 <Output out2>
22 Module om_null
23 </Output>
24
25 <Route 1>
26 # Basic route
27 Path in1 => out1
28 </Route>
29
30 <Route 2>
31 # Complex route with multiple input/output/processor modules
32 Path in1, in2 => p1 => p2 => out1, out2
33 </Route>

Priority
This directive takes an integer value in the range of 1-100 as a parameter; the default is 10. Log messages
in routes with a lower Priority value will be processed before others. Internally, this value is assigned to each
module that is part of the route, and the NXLog engine processes the events of the modules in priority order:
modules of a route with a lower Priority value (higher priority) will process log messages first.

Example 500. Prioritized Processing

This configuration prioritizes the UDP route over the TCP route in order to minimize loss of UDP Syslog
messages when the system is busy.

nxlog.conf
 1 <Input tcpin>
 2 Module im_tcp
 3 Host localhost
 4 Port 514
 5 </Input>
 6
 7 <Input udpin>
 8 Module im_udp
 9 Host localhost
10 Port 514
11 </Input>
12
13 <Output tcpfile>
14 Module om_file
15 File "/var/log/tcp.log"
16 </Output>
17
18 <Output udpfile>
19 Module om_file
20 File "/var/log/udp.log"
21 </Output>
22
23 <Route udp>
24 Priority 1
25 Path udpin => udpfile
26 </Route>
27
28 <Route tcp>
29 Priority 2
30 Path tcpin => tcpfile
31 </Route>

Chapter 119. Language
119.1. Types
The following types are provided by the NXLog language.

Unknown
This is a special type for values where the type cannot be determined at compile time and for uninitialized
values. The undef literal and fields without a value also have an unknown type. The unknown type can also be
thought of as "any" in case of function and procedure API declarations.

Boolean
A boolean value is TRUE, FALSE or undefined. Note that an undefined value is not the same as a FALSE value.

Integer
An integer can hold a signed 64 bit value in addition to the undefined value. Floating point values are not
supported.

String
A string is an array of characters in any character set. The binary type should be used for values where the
NUL byte can also occur. An undefined string is not the same as an empty string. Strings have a limited length
to prevent resource exhaustion problems; this is a compile-time value currently set to 1M.

Datetime
A datetime holds a microsecond value of time elapsed since the Epoch. It is always stored in UTC/GMT.

IP Address
The ipaddr type can store IP addresses in an internal format. This type is used to store both dotted-quad IPv4
addresses (for example, 192.168.1.1) and IPv6 addresses (for example,
2001:0db8:85a3:0000:0000:8a2e:0370:7334).

Regular expression
A regular expression type can only be used with the =~ or !~ operators.

Binary
This type can hold an array of bytes.

Variadic arguments
This is a special type only used in function and procedure API declarations to indicate variadic arguments.

119.2. Expressions
119.2.1. Literals
Undef
The undef literal has an unknown type. It can be also used in an assignment to unset the value of a field.

Example 501. Un-Setting the Value of a Field

This statement unsets the $ProcessID field.

1 $ProcessID = undef;

Boolean
A boolean literal is either TRUE or FALSE. It is case-insensitive, so True, False, true, and false are also valid.

Integer
An integer starts with a minus (-) sign if it is negative. A "0X" or "0x" prepended modifier indicates a
hexadecimal notation. The "K", "M" and "G" modifiers are also supported; these mean Kilo (1024), Mega
(1024^2), or Giga (1024^3) respectively when appended.

Example 502. Setting an Integer Value

This statement uses a modifier to set the $Limit field to 44040192 (42×1024^2).

1 $Limit = 42M;

String
String literals are quoted characters using either single or double quotes. String literals specified with double
quotes can contain the following escape sequences.

\\
The backslash (\) character.

\"
The double quote (") character.

\n
Line feed (LF).

\r
Carriage return (CR).

\t
Horizontal tab.

\b
Backspace.

\xXX
A single byte in the form of a two digit hexadecimal number. For example the line-feed character can also
be expressed as \x0A.

NOTE String literals in single quotes do not process the escape sequences: "\n" is a single
character (LF) while '\n' is two characters. The following comparison is FALSE for this
reason: "\n" == '\n'.

NOTE Extra care should be taken with the backslash when using double quoted string literals to
specify file paths on Windows. For more information about the possible complications,
see this note for the im_file File directive.

Example 503. Setting a String Value

This statement sets the $Message field to the specified string.

1 $Message = "Test message";

Datetime
A datetime literal is an unquoted representation of a time value expressing local time in the format of YYYY-
MM-DD hh:mm:ss.

Example 504. Setting a Datetime Value

This statement sets the $EventTime field to the specified datetime value.

1 $EventTime = 2000-01-02 03:04:05;

IP Address
An IP address literal can be expressed in the form of a dotted quad notation for IPv4 (192.168.1.1) or by
using 8 colon-separated (:) groups of 16-bit hexadecimal values for IPv6
(2001:0db8:85a3:0000:0000:8a2e:0370:7334).
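
For instance, an IP address literal can be compared directly against a field holding an ipaddr value; the use of the $MessageSourceAddress field here is illustrative:

```
if $MessageSourceAddress == 127.0.0.1 drop();
```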

119.2.2. Regular Expressions


The PCRE engine is used to execute regular expressions in NXLog. For more information about the PCRE syntax,
see the pcre2syntax(3) and pcre2pattern(3) man pages.

Regular expressions must be used with one of the =~ or !~ operators, and must be quoted with slashes (/) as in
Perl. Captured sub-strings are accessible through numeric reference, and the full subject string is placed into $0.

Example 505. A Regular Expression Match Operation

If the regular expression matches the $Message field, the log_info() procedure is executed. The captured
sub-string is used in the logged string ($1).

1 if $Message =~ /^Test (\S+)/ log_info("captured: " + $1);

It is also possible to use named capturing such that the resulting field name is defined in the regular expression.

Example 506. Regular Expression Match Using Named Capturing

This statement causes the same behavior as the previous example, but it uses named capturing instead.

1 if $Message =~ /^Test: (?<test>\S+)/ log_info("captured: " + $test);

Substitution is supported with the s/// operator. Variables and captured sub-string references cannot be used
inside the regular expression or the substitution operator (they will be treated literally).

119.2.2.1. Regular Expression Modifiers


The following regular expression modifiers are supported:

g
The /g modifier can be used for global replacement.

Example 507. Replace Whitespace Occurrences

If any whitespace is found in the $SourceName field, it is replaced with underscores (_) and a log
message is generated.

1 if $SourceName =~ s/\s/_/g log_info("removed all whitespace in SourceName");

s
The dot (.) normally matches any character except newline. The /s modifier causes the dot to match all
characters including line terminator characters (LF and CRLF).

Example 508. Dot Matches All Characters

The regular expression in this statement will match a message that begins and ends with the given
keywords, even if the message spans multiple lines.

1 if $Message =~ /^Backtrace.*END$/s drop();

m
The /m modifier can be used to treat the string as multiple lines (^ and $ match newlines within data).

i
The /i modifier does case insensitive matching.

119.2.3. Fields
See Fields for a list of fields provided by the NXLog core. Additional fields are available through input modules.

Fields are referenced in the NXLog language by prepending a dollar sign ($) to the field name.

Normally, a field name may contain letters, digits, the period (.), and the underscore (_). Additionally, field names
must begin with a letter or an underscore. The corresponding regular expression is:

[a-zA-Z_][a-zA-Z0-9._]*

However, those restrictions are relaxed if the field name is specified with curly braces ({}). In this case, the field
name may also contain hyphens (-), parentheses (()), and spaces. The field name may also begin with any one
of the allowed characters. The regular expression in this case is:

[a-zA-Z0-9._() -]+

Example 509. Referencing a Field

This statement generates an internal log message indicating the time when the message was received by
NXLog.

1 log_debug('Message received at ' + $EventReceivedTime);

This statement uses curly braces ({}) to refer to a field with a hyphenated name.

1 log_info('The file size is ' + ${file-size});

A field which does not exist has an unknown type.

119.2.4. Operations

119.2.4.1. Unary Operations


The following unary operations are available. It is possible to use brackets around the operand to make it look
like a function call as in the "defined" example below.

not
The not operator expects a boolean value. It will evaluate to undef if the value is undefined. If it receives an
unknown value which evaluates to a non-boolean, it will result in a run-time execution error.

Example 510. Using the "not" Operator

If the $Success field has a value of false, an error is logged.

1 if not $Success log_error("Job failed");

defined
The defined operator will evaluate to TRUE if the operand is defined, otherwise FALSE.

Example 511. Using the Unary "defined" Operation

This statement is a no-op, it does nothing.

1 if defined undef log_info("never printed");

If the $EventTime field has not been set (due perhaps to failed parsing), it will be set to the current time.

1 if not defined($EventTime) $EventTime = now();

119.2.4.2. Binary Operations


The following binary operations are available.

The operations are described with the following syntax:

LEFT_OPERAND_TYPE OPERATION RIGHT_OPERAND_TYPE = EVALUATED_VALUE_TYPE

=~
This is the regular expression match operation as in Perl. This operation takes a string and a regular
expression operand and evaluates to a boolean value which will be TRUE if the regular expression matches
the subject string. Captured sub-strings are accessible through numeric reference (such as $1) and the full
subject string is placed into $0. Regular expression based string substitution is supported with the s///
operator. For more details, see Regular Expressions.

• string =~ regexp = boolean

• regexp =~ string = boolean

Example 512. Regular Expression Based String Matching

A log message will be generated if the $Message field matches the regular expression.

1 if $Message =~ /^Test message/ log_info("matched");

!~
This is the opposite of =~: the expression will evaluate to TRUE if the regular expression does not match on
the subject string. It can be also written as not LEFT_OPERAND =~ RIGHT_OPERAND. The s/// substitution
operator is supported.

• string !~ regexp = boolean

• regexp !~ string = boolean

Example 513. Regular Expression Based Negative String Matching

A log message will be generated if the $Message field does not match the regular expression.

1 if $Message !~ /^Test message/ log_info("didn't match");

==
This operator compares two values for equality. Comparing a defined value with an undefined results in
undef.

• undef == undef = TRUE

• string == string = boolean

• integer == integer = boolean

• boolean == boolean = boolean

• datetime == datetime = boolean

• ipaddr == ipaddr = boolean

• ipaddr == string = boolean

• string == ipaddr = boolean

Example 514. Equality

A log message will be generated if $SeverityValue is 1.

1 if $SeverityValue == 1 log_info("severity is one");

!=
This operator compares two values for inequality. Comparing a defined value with an undefined results in
undef.

• undef != undef = FALSE

• string != string = boolean

• integer != integer = boolean

• boolean != boolean = boolean

• datetime != datetime = boolean

• ipaddr != ipaddr = boolean

• ipaddr != string = boolean

• string != ipaddr = boolean

Example 515. Inequality

A log message will be generated if $SeverityValue is not 1.

1 if $SeverityValue != 1 log_info("severity is not one");

<
This operation will evaluate to TRUE if the left operand is less than the right operand, and FALSE otherwise.
Comparing a defined value with an undefined results in undef.

• integer < integer = boolean

• datetime < datetime = boolean

Example 516. Less

A log message will be generated if $SeverityValue is less than 1.

1 if $SeverityValue < 1 log_info("severity is less than one");

<=
This operation will evaluate to TRUE if the left operand is less than or equal to the right operand, and FALSE
otherwise. Comparing a defined value with an undefined results in undef.

• integer <= integer = boolean

• datetime <= datetime = boolean

Example 517. Less or Equal

A log message will be generated if $SeverityValue is less than or equal to 1.

1 if $SeverityValue <= 1 log_info("severity is less than or equal to one");

>
This operation will evaluate to TRUE if the left operand is greater than the right operand, and FALSE
otherwise. Comparing a defined value with an undefined results in undef.

• integer > integer = boolean

• datetime > datetime = boolean

Example 518. Greater

A log message will be generated if $SeverityValue is greater than 1.

1 if $SeverityValue > 1 log_info("severity is greater than one");

>=
This operation will evaluate to TRUE if the left operand is greater than or equal to the right operand, and
FALSE otherwise. Comparing a defined value with an undefined results in undef.

• integer >= integer = boolean

• datetime >= datetime = boolean

Example 519. Greater or Equal

A log message will be generated if $SeverityValue is greater than or equal to 1.

1 if $SeverityValue >= 1 log_info("severity is greater than or equal to one");

and
This operation evaluates to TRUE if and only if both operands are TRUE. The operation will evaluate to undef
if either operand is undefined.

boolean and boolean = boolean

Example 520. And Operation

A log message will be generated only if both $SeverityValue equals 1 and $FacilityValue equals 2.

1 if $SeverityValue == 1 and $FacilityValue == 2 log_info("1 and 2");

or
This operation evaluates to TRUE if either operand is TRUE. The operation will evaluate to undef if both
operands are undefined.

boolean or boolean = boolean

Example 521. Or Operation

A log message will be generated if $SeverityValue is equal to either 1 or 2.

1 if $SeverityValue == 1 or $SeverityValue == 2 log_info("1 or 2");

+
This operation will result in an integer if both operands are integers. If either operand is a string, the result
will be a string where non-string typed values are converted to strings. In this case it acts as a concatenation
operator, like the dot (.) operator in Perl. Adding an undefined value to a non-string will result in undef.

• integer + integer = integer

• string + undef = string

• undef + string = string

• undef + undef = undef

• string + string = string (Concatenate two strings.)

• datetime + integer = datetime (Add the number of seconds in the right value to the datetime stored
in the left value.)
• integer + datetime = datetime (Add the number of seconds in the left value to the datetime stored
in the right value.)

Example 522. Concatenation

This statement will always cause a log message to be generated.

if 1 + "a" == "1a" log_info("this will be printed");

-
Subtraction. The result will be undef if either operand is undefined.

• integer - integer = integer (Subtract two integers.)

• datetime - datetime = integer (Subtract two datetime types. The result is the difference between the
two expressed in microseconds.)
• datetime - integer = datetime (Subtract the number of seconds from the datetime stored in the left
value.)

Example 523. Subtraction

This statement will always cause a log message to be generated.

if 4 - 1 == 3 log_info("four minus one is three");

*
Multiply an integer by another. The result will be undef if either operand is undefined.

integer * integer = integer

Example 524. Multiplication

This statement will always cause a log message to be generated.

if 4 * 2 == 8 log_info("four times two is eight");

/
Divide an integer by another. The result will be undef if either operand is undefined. Since the result is an
integer, any fractional part is lost.

integer / integer = integer

Example 525. Division

This statement will always cause a log message to be generated.

if 9 / 4 == 2 log_info("9 divided by 4 is 2");

%
The modulo operation divides an integer with another and returns the remainder. The result will be undef if
either operand is undefined.

integer % integer = integer

Example 526. Modulo

This statement will always cause a log message to be generated.

if 3 % 2 == 1 log_info("three mod two is one");

IN
This operation will evaluate to TRUE if the left operand is equal to any of the expressions in the list on the
right, and FALSE otherwise. Comparing an undefined value results in undef.

unknown IN unknown, unknown … = boolean

Example 527. IN

A log message will be generated if $EventID is equal to any one of the values in the list.

if $EventID IN (1000, 1001, 1004, 4001) log_info("EventID found");

NOT IN
This operation is equivalent to NOT expr IN expr_list.

unknown NOT IN unknown, unknown … = boolean

Example 528. NOT IN

A log message will be generated if $EventID is not equal to any of the values in the list.

if $EventID NOT IN (1000, 1001, 1004, 4001) log_info("EventID not in list");

119.2.4.3. Ternary Operation


The ternary operator expr1 ? expr2 : expr3 evaluates to expr2 if expr1 is TRUE, otherwise to expr3. The
parentheses, as shown in the example below, are optional.

Example 529. Using the Ternary Operator

The $Important field is set to TRUE if $SeverityValue is greater than 2, or FALSE otherwise.

$Important = ( $SeverityValue > 2 ? TRUE : FALSE );

119.2.5. Functions
See Functions for a list of functions provided by the NXLog core. Additional functions are available through
modules.

Example 530. A Function Call

This statement uses the now() function to set the field to the current time.

$EventTime = now();

It is also possible to call a function of a specific module instance.

Example 531. Calling a Function of a Specific Module Instance

This statement calls the file_name() and file_size() functions of a defined om_file instance named out in
order to log the name and size of its currently open output file.

log_info('Size of output file ' + out->file_name() + ' is ' + out->file_size());

119.3. Statements
The following elements can be used in statements. There is no loop operation (for or while) in the NXLog
language.

119.3.1. Assignment
The assignment operation is declared with an equal sign (=). It loads the value from the expression evaluated on
the right into a field on the left.

Example 532. Field Assignment

This statement sets the $EventReceivedTime field to the value returned by the now() function.

$EventReceivedTime = now();

119.3.2. Block
A block consists of one or more statements within curly braces ({}). This is typically used with conditional
statements as in the example below.

Example 533. Conditional Statement Block

If the expression matches, both log messages will be generated.

if now() > 2000-01-01 00:00:00
{
    log_info("we are in the");
    log_info("21st century");
}

119.3.3. Procedures
See Procedures for a list of procedures provided by the NXLog core. Additional procedures are available through
modules.

Example 534. A Procedure Call

The log_info() procedure generates an internal log message.

log_info("No log source activity detected.");

It is also possible to call a procedure of a specific module instance.

Example 535. Calling a Procedure of a Specific Module Instance

This statement calls the parse_csv() procedure of a defined xm_csv module instance named csv_parser.

csv_parser->parse_csv();

119.3.4. If-Else
A conditional statement starts with the if keyword followed by a boolean expression and a statement. The else
keyword, followed by another statement, is optional. Brackets around the expression are also optional.

Example 536. Conditional Statements

A log message will be generated if the expression matches.

if now() > 2000-01-01 00:00:00 log_info("we are in the 21st century");

This statement is the same as the previous, but uses brackets.

if ( now() > 2000-01-01 00:00:00 ) log_info("we are in the 21st century");

This is a conditional statement block.

if now() > 2000-01-01 00:00:00
{
    log_info("we are in the 21st century");
}

This conditional statement block includes an else branch.

if now() > 2000-01-01 00:00:00
{
    log_info("we are in the 21st century");
}
else log_info("we are not yet in the 21st century");

Like Perl, the NXLog language does not have a switch statement. Instead, this can be accomplished by using
conditional if-else statements.

Example 537. Emulating "switch" With "if-else"

The generated log message varies based on the value of the $value field.

if ( $value == 1 )
    log_info("1");
else if ( $value == 2 )
    log_info("2");
else if ( $value == 3 )
    log_info("3");
else
    log_info("default");

NOTE The Perl elsif and unless keywords are not supported.

119.4. Variables
A module variable can only be accessed from the same module instance where it was created. A variable is
referenced by a string value and can store a value of any type.

See the create_var(), delete_var(), set_var(), and get_var() procedures.
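As a sketch of how these procedures fit together, the following Exec statements maintain a per-instance event counter (the variable name msgcount and the logging threshold are arbitrary choices for illustration):

```
# Create the counter on first use, then increment it for every event.
if not defined get_var('msgcount')
{
    create_var('msgcount');
    set_var('msgcount', 0);
}
set_var('msgcount', get_var('msgcount') + 1);
if get_var('msgcount') % 1000 == 0
    log_info('processed ' + get_var('msgcount') + ' events');
```

Because the variable belongs to the module instance, two input instances running the same statements each keep their own independent count.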

119.5. Statistical Counters


The following types are available for statistical counters:

COUNT
Added values are aggregated. If only positive integers are added, the value of the counter increases
monotonically until the counter is destroyed, or indefinitely if the counter has no expiry.

COUNTMIN
This calculates the minimum value of the counter.

COUNTMAX
This calculates the maximum value of the counter.

AVG
This algorithm calculates the average over the specified interval.

AVGMIN
This algorithm calculates the average over the specified interval, and the value of the counter is always the
lowest average that was calculated during the lifetime of the counter.

AVGMAX
Like AVGMIN, but this returns the highest value calculated during the lifetime of the counter.

RATE
This calculates the rate of added values over the specified interval. It can be used to calculate events per
second (EPS) values.

RATEMIN
This calculates the rate over the specified interval, and returns the lowest rate calculated during the lifetime
of the counter.

RATEMAX
Like RATEMIN, but this returns the highest rate calculated during the lifetime of the counter.

GRAD
This calculates the change of the rate of the counter over the specified interval, which is the gradient.

GRADMIN
This calculates the gradient and returns the lowest gradient calculated during the lifetime of the counter.

GRADMAX
Like GRADMIN, but this returns the highest gradient calculated during the lifetime of the counter.
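These types are used with the create_stat(), add_stat(), and get_stat() procedures described later in this chapter. As a sketch, the following statements use a RATE counter to watch the event rate (the counter name eps, the 10-second interval, and the threshold are arbitrary):

```
# Create an EPS counter over a 10-second interval (no-op if it already
# exists), count the current event, and warn when the rate gets high.
create_stat('eps', 'RATE', 10);
add_stat('eps', 1);
if defined get_stat('eps') and get_stat('eps') > 1000
    log_warning('high event rate: ' + get_stat('eps'));
```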

119.6. Fields
The following fields are used by the NXLog core.

$raw_event (type: string)

The data received from stream modules (im_file, im_tcp, etc.).

$EventReceivedTime (type: datetime)


The time when the event is received. The value is not modified if the field already exists.

$SourceModuleName (type: string)


The name of the module instance, for input modules. The value is not modified if the field already exists.

$SourceModuleType (type: string)


The type of module instance (such as im_file), for input modules. The value is not modified if the field
already exists.

119.7. Functions
The following functions are exported by the NXLog core.

binary base64decode(string base64str)


Return the decoded binary value of base64str.

string base64encode(unknown arg)


Return the BASE64 encoded string of arg, which can be either string or binary.

string bin2str(binary arg)


Return the raw string from the binary value of arg. The zero bytes in arg will be converted to periods (.) in the
returned string value. This function is intended for debugging purposes.

datetime datetime(integer arg)


Convert the integer argument, expressing the number of microseconds since epoch, to datetime.

integer day(datetime datetime)


Return the day part of the time value.

integer dayofweek(datetime datetime)


Return the number of days since Sunday in the range of 0-6.

integer dayofyear(datetime datetime)


Return the day number of the year in the range of 1-366.

boolean dropped()
Return TRUE if the currently processed event has already been dropped.

string escape_html(string html)


Return the HTML escaped html string.

string escape_json(string jsonstr)


Escape and return jsonstr according to the JSON specification.

string escape_url(string url)


Return the URL encoded string for url.

string escape_xml(string xmlstr)


Return the XML escaped xmlstr string.

datetime fix_year(datetime datetime)
Return a corrected datetime value for a datetime which was parsed with a missing year, such as BSD Syslog or
Cisco timestamps. The current year is used unless the resulting timestamp would be more than 30 days in the
future, in which case the previous year is used instead. If using the current year results in a timestamp that
is no more than 30 days in the future, it is assumed that the source device’s clock is slightly ahead, and the
returned datetime value may be up to 30 days in the future.

integer get_rand()
Return a random integer value.

integer get_rand(integer max)


Return a random integer value between 0 and max.

unknown get_registryvalue(string mainkey, string subkeys, string valuename, boolean


64bit_view)
Return a value from the Windows Registry. mainkey must be one of the following predefined registry keys:
HKCC, HKU, HKCU, HKCR, or HKLM. subkeys must be a series of backslash-separated valid Registry keys to open
from mainkey. valuename must be a valid name of a value in the last key of the subkeys. If 64bit_view is FALSE,
it indicates that 64-bit Windows should operate on the 32-bit Registry view; otherwise 64-bit Windows should
operate on the 64-bit Registry view. Returns the value belonging to valuename. Returns undef if valuename or
any of the subkeys cannot be accessed in the Registry.
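As an illustrative sketch only (the key path, value name, and field name below are assumptions, not taken from this guide), a lookup might be written as:

```
# Read a value from HKLM using the 64-bit Registry view; the function
# returns undef if the key or value cannot be accessed.
$ProductName = get_registryvalue('HKLM',
    'SOFTWARE\Microsoft\Windows NT\CurrentVersion', 'ProductName', TRUE);
if not defined $ProductName log_warning('Registry value could not be read');
```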

integer get_sequence(string name)


Return a number for the specified sequence that is incremented after each call to this function.

integer get_stat(string statname)


Return the value of the statistical counter or undef if it does not exist.

integer get_stat(string statname, datetime time)


Return the value of the statistical counter or undef if it does not exist. The time argument specifies the current
time.

string get_uuid()
Return a UUID string.

unknown get_var(string varname)


Return the value of the variable or undef if it does not exist.

ipaddr host_ip()
Return the first non-loopback IP address the hostname resolves to.

ipaddr host_ip(integer nth)


Return the nth non-loopback IP address the hostname resolves to. The nth argument starts from 1.

string hostname()
Return the hostname (short form).

string hostname_fqdn()
Return the FQDN hostname. This function will return the short form if the FQDN hostname cannot be
determined.

integer hour(datetime datetime)


Return the hour part of the time value.

integer integer(unknown arg)
Parse and convert the string argument to an integer. For datetime type it returns the number of
microseconds since epoch.

ipaddr ipaddr(integer arg)


Convert the integer argument to an ipaddr type.

ipaddr ipaddr(integer arg, boolean ntoa)


Convert the integer argument to an ipaddr type. If ntoa is set to TRUE, the integer is assumed to be in
network byte order: instead of 1.2.3.4, the result will be 4.3.2.1.

string lc(string arg)


Convert the string to lower case.

string md5sum(unknown arg)


Return the MD5 hash of arg as a hexadecimal string. arg can be either string or binary.

unknown md5sum(unknown arg, boolean isbinary)


Return the MD5 hash of arg as a binary value or a hexadecimal string. When isbinary is TRUE, the return type
will be binary. arg can be either string or binary.

integer microsecond(datetime datetime)


Return the microsecond part of the time value.

integer minute(datetime datetime)


Return the minute part of the time value.

integer month(datetime datetime)


Return the month part of the datetime value.

datetime now()
Return the current time.

string nxlog_version()
Return the NXLog version string.

datetime parsedate(string arg)


Parse a string containing a timestamp. Dates without timezone information are treated as local time. The
current year is used for formats that do not include the year. An undefined datetime type is returned if the
argument cannot be parsed, so that the user can handle the error (for example, $EventTime =
parsedate($somestring); if not defined($EventTime) $EventTime = now();). Supported timestamp
formats are listed below.

RFC 3164 (legacy Syslog) and variations

Nov  6 08:49:37
Nov 6 08:49:37
Nov 06 08:49:37
Nov  3 14:50:30.403
Nov 3 14:50:30.403
Nov 03 14:50:30.403
Nov  3 2005 14:50:30
Nov 3 2005 14:50:30
Nov 03 2005 14:50:30
Nov  3 2005 14:50:30.403
Nov 3 2005 14:50:30.403
Nov 03 2005 14:50:30.403
Nov  3 14:50:30 2005
Nov 3 14:50:30 2005
Nov 03 14:50:30 2005

RFC 1123
RFC 1123 compliant dates are also supported, including a few similar formats such as those defined in
RFC 822, RFC 850, and RFC 1036.

Sun, 06 Nov 1994 08:49:37 GMT ; RFC 822, updated by RFC 1123
Sunday, 06-Nov-94 08:49:37 GMT ; RFC 850, obsoleted by RFC 1036
Sun Nov 6 08:49:37 1994 ; ANSI C's asctime() format
Sun, 6 Nov 1994 08:49:37 GMT ; RFC 822, updated by RFC 1123
Sun, 06 Nov 94 08:49:37 GMT ; RFC 822
Sun, 6 Nov 94 08:49:37 GMT ; RFC 822
Sun, 6 Nov 94 08:49:37 GMT ; RFC 822
Sun, 06 Nov 94 08:49 GMT ; Unknown
Sun, 6 Nov 94 08:49 GMT ; Unknown
Sun, 06 Nov 94 8:49:37 GMT ; Unknown [Elm 70.85]
Sun, 6 Nov 94 8:49:37 GMT ; Unknown [Elm 70.85]
Mon, 7 Jan 2002 07:21:22 GMT ; Unknown [Postfix]
Sun, 06-Nov-1994 08:49:37 GMT ; RFC 850 with four digit years

The above formats are also recognized when the leading day of week and/or the timezone are omitted.

Apache/NCSA date
This format can be found in Apache access logs and other sources.

24/Aug/2009:16:08:57 +0200

ISO 8601 and RFC 3339


NXLog can parse the ISO format with or without sub-second resolution, and with or without timezone
information. It accepts either a comma (,) or a dot (.) in case there is sub-second resolution.

1977-09-06 01:02:03
1977-09-06 01:02:03.004
1977-09-06T01:02:03.004Z
1977-09-06T01:02:03.004+02:00
2011-5-29 0:3:21
2011-5-29 0:3:21+02:00
2011-5-29 0:3:21.004
2011-5-29 0:3:21.004+02:00

Windows timestamps
20100426151354.537875
20100426151354.537875-000
20100426151354.537875000
3/13/2017 8:42:07 AM ; Microsoft DNS Server

Integer timestamp
This format is XXXXXXXXXX.USEC. The value is expressed as an integer showing the number of seconds
elapsed since the UTC epoch. The fractional microsecond part is optional.

1258531221.650359
1258531221

BIND9 timestamps
23-Mar-2017 06:38:30.143
23-Mar-2017 06:38:30
2017-Mar-23 06:38:30.143
2017-Mar-23 06:38:30

datetime parsedate(string arg, boolean utc)


Dates without timezone information are treated as UTC when utc is TRUE. If utc is FALSE, input strings are
parsed in local time—the same behavior as parsedate(arg).
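Tying this together with the error handling suggested above, a common pattern is to fall back to the current time when parsing fails (the $Message field here is an assumed source field):

```
# Parse a timestamp from the message; use the current time if parsing fails.
$EventTime = parsedate($Message);
if not defined($EventTime) $EventTime = now();
```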

string replace(string subject, string src, string dst)


Replace all occurrences of src with dst in the subject string.

string replace(string subject, string src, string dst, integer count)


Replace count number occurrences of src with dst in the subject string.

integer second(datetime datetime)


Return the second part of the time value.

string sha1sum(unknown arg)


Return the SHA1 hash of arg as a hexadecimal string. arg can be either string or binary.

unknown sha1sum(unknown arg, boolean isbinary)


Return the SHA1 hash of arg as a binary value or a hexadecimal string. When isbinary is TRUE, the return type
will be binary. arg can be either string or binary.

string sha512sum(unknown arg)


Return the SHA512 hash of arg as a hexadecimal string. arg can be either string or binary.

unknown sha512sum(unknown arg, boolean isbinary)


Return the SHA512 hash of arg as a binary value or a hexadecimal string. When isbinary is TRUE, the return
type will be binary. arg can be either string or binary.

integer size(string str)


Return the size of the string str in bytes.

string strftime(datetime datetime, string fmt)


Convert a datetime value to a string with the given format. The format must be one of:

• YYYY-MM-DD hh:mm:ss,

• YYYY-MM-DDThh:mm:ssTZ,

• YYYY-MM-DDThh:mm:ss.sTZ,

• YYYY-MM-DD hh:mm:ssTZ,

• YYYY-MM-DD hh:mm:ss.sTZ,

• YYYY-MM-DDThh:mm:ssUTC,

• YYYY-MM-DDThh:mm:ss.sUTC,

• YYYY-MM-DD hh:mm:ssUTC,

• YYYY-MM-DD hh:mm:ss.sUTC, or

• a format string accepted by the C strftime() function (see the strftime(3) manual or the Windows strftime
reference for the format specification).
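For example, both a predefined format from the list above and a C strftime()-style format string can be used (the destination field names are illustrative):

```
# Format $EventTime with a predefined format and with a C-style format string.
$ISOTime  = strftime($EventTime, 'YYYY-MM-DDThh:mm:ssUTC');
$DateOnly = strftime($EventTime, '%Y-%m-%d');
```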

string string(unknown arg)


Convert the argument to a string.

datetime strptime(string input, string fmt)


Convert the string to a datetime with the given format. See the manual of strptime(3) for the format
specification.

string substr(string src, integer from)


Return the string starting at the byte offset specified in from.

string substr(string src, integer from, integer to)


Return a sub-string specified with the starting and ending positions as byte offsets from the beginning of the
string.
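As a sketch, since the offsets are byte-based, substr() can be combined with size() to trim long messages (the 100-byte limit and the $ShortMsg field are arbitrary):

```
# Keep only the first 100 bytes of a long message.
if size($raw_event) > 100 $ShortMsg = substr($raw_event, 0, 100);
```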

string type(unknown arg)


Return the type of the variable, which can be boolean, integer, string, datetime, ipaddr, regexp, or
binary. For values with the unknown type, it returns undef.

string uc(string arg)


Convert the string to upper case.

string unescape_html(string html)


Return the HTML unescaped html string.

string unescape_json(string jsonstr)


Unescape and return jsonstr according to the JSON specification.

string unescape_url(string url)


Return the URL decoded string for url.

string unescape_xml(string xmlstr)


Return the XML unescaped xmlstr string.

integer year(datetime datetime)


Return the year part of the datetime value.

119.8. Procedures
The following procedures are exported by the NXLog core.

add_stat(string statname, integer value);


Add value to the statistical counter using the current time.

add_stat(string statname, integer value, datetime time);


Add value to the statistical counter using the time specified in the argument named time.

add_to_route(string routename);
Copy the currently processed event data to the route specified. This procedure makes a copy of the data. The
original will be processed normally. Note that flow control is explicitly disabled when moving data with
add_to_route() and the data will not be added if the queue of the target module(s) is full.

create_stat(string statname, string type);


Create a module statistical counter with the specified name using the current time. The statistical counter will
be created with an infinite lifetime. The type argument must be one of the following to select the required
algorithm for calculating the value of the statistical counter: COUNT, COUNTMIN, COUNTMAX, AVG, AVGMIN,
AVGMAX, RATE, RATEMIN, RATEMAX, GRAD, GRADMIN, or GRADMAX (see Statistical Counters).

This procedure with two parameters can only be used with COUNT, otherwise the interval parameter must be
specified (see below). This procedure will do nothing if a counter with the specified name already exists.

create_stat(string statname, string type, integer interval);


Create a module statistical counter with the specified name to be calculated over interval seconds and using
the current time. The statistical counter will be created with an infinite lifetime.

create_stat(string statname, string type, integer interval, datetime time);


Create a module statistical counter with the specified name to be calculated over interval seconds and the
time value specified in the time argument. The statistical counter will be created with an infinite lifetime.

create_stat(string statname, string type, integer interval, datetime time, integer lifetime);
Create a module statistical counter with the specified name to be calculated over interval seconds and the
time value specified in the time argument. The statistical counter will expire after lifetime seconds.

create_stat(string statname, string type, integer interval, datetime time, datetime expiry);
Create a module statistical counter with the specified name to be calculated over interval seconds and the
time value specified in the time argument. The statistical counter will expire at expiry.

create_var(string varname);
Create a module variable with the specified name. The variable will be created with an infinite lifetime.

create_var(string varname, integer lifetime);


Create a module variable with the specified name and the lifetime given in seconds. When the lifetime expires,
the variable will be deleted automatically and get_var(name) will return undef.

create_var(string varname, datetime expiry);


Create a module variable with the specified name. The expiry specifies when the variable should be deleted
automatically.

debug(unknown arg, varargs args);


Print the argument(s) at DEBUG log level. Same as log_debug().

delete(unknown arg);
Delete the field from the event. For example, delete($field). Note that $field = undef is not the same,
though after both operations the field will be undefined.

delete(string arg);
Delete the field from the event. For example, delete("field").

delete_all();
Delete all of the fields from the event except the $raw_event field.

delete_stat(string statname);
Delete a module statistical counter with the specified name. This procedure will do nothing if a counter with
the specified name does not exist (e.g. was already deleted).

delete_var(string varname);
Delete the module variable with the specified name if it exists.

drop();
Drop the event record that is currently being processed. Any further action on the event record will result in a
"missing record" error.

duplicate_guard();
Guard against event duplication.

log_debug(unknown arg, varargs args);


Print the argument(s) at DEBUG log level. Same as debug().

log_error(unknown arg, varargs args);


Print the argument(s) at ERROR log level.

log_info(unknown arg, varargs args);


Print the argument(s) at INFO log level.

log_warning(unknown arg, varargs args);


Print the argument(s) at WARNING log level.

module_restart();
Issue a module_stop and then a module_start event for the calling module.

module_start();
Issue a module_start event for the calling module.

module_stop();
Issue a module_stop event for the calling module.

rename_field(unknown old, unknown new);


Rename a field. For example, rename_field($old, $new).

rename_field(string old, string new);


Rename a field. For example, rename_field("old", "new").

reroute(string routename);
Move the currently processed event data to the route specified. The event data will enter the route as if it was
received by an input module there. Note that flow control is explicitly disabled when moving data with
reroute() and the data will be dropped if the queue of the target module(s) is full.
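For example, high-severity events might be diverted to a dedicated route (the route name errors is hypothetical and must exist in the configuration):

```
# Move error-level and higher events to a separate route. Flow control is
# disabled here, so the event is dropped if the target route's queue is full.
if $SeverityValue >= 4 reroute('errors');
```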

set_var(string varname, unknown value);


Set the value of a module variable. If the variable does not exist, it will be created with an infinite lifetime.

sleep(integer interval);
Sleep for the specified number of microseconds. This procedure is provided primarily for testing purposes. It
can be used as a poor man’s rate-limiting tool, though this use is not recommended.

Chapter 120. Extension Modules
Extension modules do not process log messages directly, and for this reason their instances cannot be part of a
route. These modules enhance the features of NXLog in various ways, such as exporting new functions and
procedures or registering additional I/O reader and writer functions (to be used with modules supporting the
InputType and OutputType directives). There are many ways to hook an extension module into the NXLog
engine, as the following modules illustrate.

120.1. Remote Management (xm_admin)


This module provides secure remote administration capabilities for the NXLog engine using either JSON or SOAP
over HTTP/HTTPS (also known as web services). Both the SOAP protocol and the JSON format are widespread and
can be used from many different programming languages; consequently, it is easy to implement administration
scripts or create plugins for system monitoring tools such as Nagios, Munin, or Cacti. Using the xm_admin
module, NXLog can accept and initiate connections over TCP, SSL, and Unix domain sockets, depending on its
configuration.

Note that though the module can both initiate and accept connections, the direction of the HTTP requests is
always the same: requests are sent to the module and it returns HTTP responses.

See the list of installer packages that provide the xm_admin module in the Available Modules chapter of the
NXLog User Guide.
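A minimal configuration sketch follows (the instance name is arbitrary, and the address and port simply restate the defaults; the SSL-related directives documented below would be added for production use):

```
<Extension admin>
    Module    xm_admin
    Listen    127.0.0.1
    Port      8080
</Extension>
```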

120.1.1. Configuration
The xm_admin module accepts the following directives in addition to the common module directives.

Connect
This directive instructs the module to connect to a remote socket. The argument must be an IP address when
SocketType is set to TCP or SSL. Otherwise, it must be the name of a socket for UDS. Connect cannot be used
together with the Listen directive. Multiple xm_admin instances may be configured if multiple administration
ports are required.

Listen
This directive instructs the module to accept connections. The argument must be an IP address when
SocketType is TCP or SSL. Otherwise it must be the name of a socket for UDS. Listen cannot be used together
with the Connect directive. Multiple xm_admin instances may be configured if multiple administration ports
are required. If neither Listen nor Connect are specified, the module will listen with SocketType TCP on
127.0.0.1.

Port
This specifies the port number used with the Listen or Connect modes. The default port is 8080.

ACL
This block defines directories which can be used with the GetFile and PutFile web service requests. The name
of the ACL is used in these requests together with the filename. The filename can contain only the characters
[a-zA-Z0-9-._], so these file operations will only work within the directory. An example of its usage is in the
Examples section.

AllowRead
If set to TRUE, GetFile requests are allowed.

AllowWrite
If set to TRUE, PutFile requests are allowed.

Directory
The name of the directory where the files are saved to or loaded from.

AllowIP
This optional directive can be used to specify an IP address or a network that is allowed to connect. The
directive can be specified more than once to add different IPs or networks to the whitelist. The following
formats can be used:

• 0.0.0.0 (IPv4 address)

• 0.0.0.0/32 (IPv4 network with subnet bits)

• 0.0.0.0/0.0.0.0 (IPv4 network with subnet address)

• aa::1 (IPv6 address)

• aa::12/64 (IPv6 network with subnet bits)

AllowUntrusted
This boolean directive specifies that the remote connection should be allowed without certificate verification.
If set to TRUE the remote will be able to connect with unknown and self-signed certificates. The default value
is FALSE: all connections must present a trusted certificate. This directive is only valid if SocketType is set to
SSL.

CADir
This specifies the path to a directory containing certificate authority (CA) certificates, which will be used to
check the certificate of the remote socket. The certificate filenames in this directory must be in the OpenSSL
hashed format. This directive is only valid if SocketType is set to SSL. A remote’s self-signed certificate (which
is not signed by a CA) can also be trusted by including a copy of the certificate in this directory.

CAFile
This specifies the path of the certificate authority (CA) certificate, which will be used to check the certificate of
the remote socket. This directive is only valid if SocketType is set to SSL. To trust a self-signed certificate
presented by the remote (which is not signed by a CA), provide that certificate instead.

CAThumbprint
This optional directive specifies the certificate thumbprint of the certificate authority (CA), which is used to
look up the CA certificate from the Windows certificate store. The hexadecimal fingerprint string can be
copied straight from Windows Certificate Manager (certmgr.msc); whitespace is automatically removed.
This directive and the CADir and CAFile directives are mutually exclusive.

CertFile
This specifies the path of the certificate file to be used for SSL connections. This directive is only valid if
SocketType is set to SSL.

CertKeyFile
This specifies the path of the certificate key file to be used for SSL connections. This directive is only valid if
SocketType is set to SSL.

CertThumbprint
This optional directive specifies the certificate thumbprint to be used for the SSL handshake. The hexadecimal
fingerprint string can be copied straight from Windows Certificate Manager (certmgr.msc); whitespace is
automatically removed. This directive is only supported on Windows. This directive and the CertFile and
CertKeyFile directives are mutually exclusive.

CRLDir
This specifies the path to a directory containing certificate revocation lists (CRLs), which will be consulted
when checking the certificate of the remote socket. The certificate filenames in this directory must be in the
OpenSSL hashed format. This directive is only valid if SocketType is set to SSL.

CRLFile
This specifies the path of the certificate revocation list (CRL) which will be consulted when checking the
certificate of the remote socket. This directive is only valid if SocketType is set to SSL.

KeyPass
With this directive, a password can be supplied for the certificate key file defined in CertKeyFile. This directive
is only valid if SocketType is set to SSL. This directive is not needed for password-less private keys.

Labels
This directive allows custom key value pairs to be defined with static or dynamic values. The directive is very
useful for providing supplementary details about agents (hostname, operating system, local contact
information, etc.). Labels are returned as part of the ServerInfo response from agents.

Label values can be set in a few ways: statically (with a string in the <labels> block, a defined string, or an
environment variable); dynamically at start-up, with a script run via the include_stdout directive; or at
run-time, before each response is sent. Setting labels is demonstrated in the Examples section.

Reconnect
This directive has been deprecated as of version 2.4. The module will try to reconnect automatically at
increasing intervals on all errors.

RequireCert
This boolean directive specifies that the remote must present a certificate. If set to TRUE and there is no
certificate presented during the connection handshake, the connection will be refused. The default value is
TRUE: each connection must use a certificate. This directive is only valid if SocketType is set to SSL.

SocketType
This directive sets the connection type. It can be one of the following:

SSL
TLS/SSL for secure network connections

TCP
TCP, the default if SocketType is not explicitly specified

UDS
Unix domain socket

SSLCipher
This optional directive can be used to set the permitted SSL cipher list, overriding the default. Use the format
described in the ciphers(1ssl) man page.

SSLCiphersuites
This optional directive can be used to define the permitted SSL cipher list in case the SSLProtocol directive is
set to TLSv1.3. Use the same format as in the SSLCipher directive.

SSLCompression
This boolean directive allows you to enable data compression when sending data over the network. The
compression mechanism is based on the zlib compression library. If the directive is not specified, it defaults
to FALSE (compression is disabled).

NOTE: Some Linux packages (for example, Debian) use an OpenSSL library that may not support the zlib
compression mechanism. The module will emit a warning on startup if the compression support is missing.
The generic DEB/RPM packages are bundled with a zlib-enabled libssl library.

SSLProtocol
This directive can be used to set the allowed SSL/TLS protocol(s). It takes a comma-separated list of values
which can be any of the following: SSLv2, SSLv3, TLSv1, TLSv1.1, TLSv1.2, and TLSv1.3. By default, the
TLSv1.2 and TLSv1.3 protocols are allowed. Note that the OpenSSL library shipped by Linux distributions
may not support SSLv2 and SSLv3, and these will not work even if enabled with this directive.
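As an illustrative sketch (the listen address, port, certificate paths, and cipher string are placeholders, not recommendations), the TLS-related directives above can be combined in a single xm_admin instance like this:

```
<Extension admin_ssl>
    Module       xm_admin
    Listen       0.0.0.0
    Port         4041
    SocketType   SSL
    CAFile       %CERTDIR%/ca.pem
    CertFile     %CERTDIR%/server-cert.pem
    CertKeyFile  %CERTDIR%/server-key.pem
    SSLProtocol  TLSv1.2, TLSv1.3
    SSLCipher    HIGH:!aNULL:!MD5
</Extension>
```

Restricting SSLProtocol to TLSv1.2 and TLSv1.3 as shown matches the documented default and avoids the legacy protocols that distribution OpenSSL builds may reject anyway.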

120.1.2. Exported SOAP Methods and JSON Objects


The xm_admin module exports the following SOAP methods (web services) which can be called remotely. There is
a WSDL file which can be used by different developer tools to easily hook into the exported WS API to reduce
development time.

GetFile
Download a file from the NXLog agent. This will only work if the specified ACL exists.

GetLog
Download the log file from the NXLog agent.

ModuleInfo
Request information about a module instance.

ModuleRestart
Restart a module instance.

ModuleStart
Start a module instance.

ModuleStop
Stop a module instance.

PutFile
Upload a file to the NXLog agent. This will only work if the specified ACL exists. A file can be a configuration
file, certificate or certificate key, pattern database, correlation rule file, etc. Using this method enables NXLog
to be reconfigured from a remote host.

ServerInfo
Request information about the server. This will also return info about all module instances.

ServerRestart
Restart the server.

ServerStart
Start all modules of the server, the opposite of ServerStop.

ServerStop
Stop all modules of the server. Note that the NXLog process will not exit, otherwise it would be impossible to
make it come back online remotely. Extension modules are not stopped for the same reason.

The same SOAP methods were used to create an equivalent JSON schema, so JSON objects can be used instead
of SOAP methods. This is illustrated in the following examples.

120.1.3. Request - Response Examples
This is a typical SOAP ServerInfo request and its response.

<SOAP-ENV:Envelope xmlns:SOAP-ENV="http://schemas.xmlsoap.org/soap/envelope/">
  <SOAP-ENV:Header/>
  <SOAP-ENV:Body>
  <adm:serverInfo xmlns:adm="http://log4ensics.com/2010/AdminInterface"/>
  </SOAP-ENV:Body>
</SOAP-ENV:Envelope>

<SOAP-ENV:Envelope xmlns:SOAP-ENV='http://schemas.xmlsoap.org/soap/envelope/'>
  <SOAP-ENV:Header/>
  <SOAP-ENV:Body>
  <adm:serverInfoReply xmlns:adm='http://log4ensics.com/2010/AdminInterface'>
  <started>1508401312424622</started>
  <load>0.2000</load>
  <pid>15519</pid>
  <mem>12709888</mem>
  <version>3.99.2802</version>
  <os>Linux</os>
  <systeminfo>OS: Linux, Hostname: voyager, Release: 4.4.0-96-generic, Version: #119-Ubuntu SMP
Tue Sep 12 14:59:54 UTC 2017, Arch: x86_64, 4 CPU(s), 15.7Gb memory</systeminfo>
  <hostname>voyager</hostname>
  <servertime>1508405764586118</servertime>
  </adm:serverInfoReply>
  </SOAP-ENV:Body>
</SOAP-ENV:Envelope>

The equivalent request and response using JSON.

{
  "msg": {
  "command": "serverInfo"
  }
}

{
  "response": "serverInfoReply",
  "status": "success",
  "data": {
  "server-info": {
  "started": 1508401312424622,
  "load": 0.05999999865889549,
  "pid": 15519,
  "mem": 12742656,
  "os": "Linux",
  "version": "3.99.2802",
  "systeminfo": "OS: Linux, Hostname: voyager, Release: 4.4.0-96-generic, Version: #119-Ubuntu
SMP Tue Sep 12 14:59:54 UTC 2017, Arch: x86_64, 4 CPU(s), 15.7Gb memory",
  "hostname": "voyager",
  "servertime": 1508406647673758,
  "modules": {}
  }
  }
}

An example of a SOAP PutFile request and its response.

<SOAP-ENV:Envelope xmlns:SOAP-ENV="http://schemas.xmlsoap.org/soap/envelope/">
  <SOAP-ENV:Header/>
  <SOAP-ENV:Body>
  <adm:putFile xmlns:adm="http://log4ensics.com/2010/AdminInterface">
  <filetype>tmp</filetype>
  <filename>test.tmp</filename>
  <file>File Content
A newline
  </file>
  </adm:putFile>
  </SOAP-ENV:Body>
</SOAP-ENV:Envelope>

<SOAP-ENV:Envelope xmlns:SOAP-ENV='http://schemas.xmlsoap.org/soap/envelope/'>
  <SOAP-ENV:Header/>
  <SOAP-ENV:Body>
  <adm:putFileReply xmlns:adm='http://log4ensics.com/2010/AdminInterface'/>
  </SOAP-ENV:Body>
</SOAP-ENV:Envelope>

The equivalent request and response using JSON.

{
  "msg": {
  "command": "putFile",
  "params": {
  "filetype": "tmp",
  "filename": "test.tmp",
  "file": "File content\nA newline\n"
  }
  }
}

{
  "response": "putFileReply",
  "status": "success",
  "data": {}
}

120.1.4. Examples
Example 538. ACL Block Allowing Read and Write on Files in the Directory

This ACL is named "conf" and allows both GetFile and PutFile requests on the specified directory.

nxlog.conf
1 <ACL conf>
2 Directory /var/run/nxlog/configs
3 AllowRead TRUE
4 AllowWrite TRUE
5 </ACL>

Example 539. Setting Values for Labels

This example provides static and dynamic configuration of labels.

Static values are set with the BASE define, the NXLOG_OS environment variable (declared with envvar), and
literal key-value pairs inside the <labels> block.

Dynamic values are provided by the start-up script referenced with the include_stdout directive and by the
run-time function set for the host label.

nxlog.conf
 1 define BASE /opt/nxlog_new
 2 envvar NXLOG_OS
 3
 4 <Extension admin>
 5 Module xm_admin
 6 ...
 7 <labels>
 8 os_name "Debian"
 9 agent_base %BASE%
10 os %NXLOG_OS%
11 include_stdout /path/to/labels.sh
12 host hostname_fqdn()
13 </labels>
14 </Extension>

Example 540. Configuration for Multiple Admin Ports

This configuration specifies two additional administration ports on localhost.

nxlog.conf (truncated)
 1 <Extension ssl_connect>
 2 Module xm_admin
 3 Connect 192.168.1.1
 4 Port 4041
 5 SocketType SSL
 6 CAFile %CERTDIR%/ca.pem
 7 CertFile %CERTDIR%/client-cert.pem
 8 CertKeyFile %CERTDIR%/client-key.pem
 9 KeyPass secret
10 AllowUntrusted FALSE
11 RequireCert TRUE
12 Reconnect 60
13 <ACL conf>
14 Directory %CONFDIR%
15 AllowRead TRUE
16 AllowWrite TRUE
17 </ACL>
18 <ACL cert>
19 Directory %CERTDIR%
20 AllowRead TRUE
21 AllowWrite TRUE
22 </ACL>
23 </Extension>
24
25 <Extension tcp_listen>
26 Module xm_admin
27 Listen localhost
28 Port 8080
29 [...]

120.2. AIX Auditing (xm_aixaudit)


This module parses events in the AIX Audit format. This module is normally used in combination with the im_file
module to read events from a log file. An InputType is registered using the name of the module instance. See
also im_aixaudit, which reads audit events directly from the kernel and is recommended instead in cases where
NXLog is running as a local agent on the system.

See the list of installer packages that provide the xm_aixaudit module in the Available Modules chapter of the
NXLog User Guide.

120.2.1. Configuration
The xm_aixaudit module accepts the following directive in addition to the common module directives.

EventsConfigFile
This optional directive contains the path to the file with a list of audit events. This file should contain events in
AuditEvent = FormatCommand format. The AuditEvent is a reference to the audit object which is defined
under the /etc/security/audit/objects path. The FormatCommand defines the auditpr output for the
object. For more information, see The Audit Subsystem in AIX on the IBM website.
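The referenced file follows the same AuditEvent = FormatCommand convention as the AIX audit configuration. As a minimal illustrative sketch (the event names and format strings below are examples only, not a complete or authoritative list):

```
PROC_Create = printf "forked child process %d"
PROC_Delete = printf "exited child process %d"
FILE_Open = printf "flags: %d mode: %o fd: %d filename %s"
```

Each line maps an audit event name to the auditpr-style output format used when rendering that event.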

120.2.2. Fields
The following fields are used by xm_aixaudit.

$raw_event (type: string)


A list of event fields in key-value pairs.

$Command (type: string)


The command executed.

$EventTime (type: datetime)


The timestamp of the event.

$EventType (type: string)


The type of event (for example, login).

$Login (type: string)


Login name

$LoginUID (type: integer)


Login UID

$ParentPID (type: integer)


The parent process ID (PID).

$PID (type: integer)


The process ID (PID).

$Real (type: string)


Real user name

$RealUID (type: integer)


Real user ID

$Status (type: integer)


The status ID of the event.

$Thread (type: integer)


The kernel thread ID, local to the process.

$Verbose (type: string)


The audit record verbose description

$WPARkey (type: string)


Workload Partition key

$WPARname (type: string)


Workload Partition name

120.2.3. Examples

Example 541. Parsing AIX Audit Events

This configuration reads AIX audit logs from file and parses them with the InputType registered by
xm_aixaudit.

nxlog.conf
 1 <Extension aixaudit>
 2 Module xm_aixaudit
 3 EventsConfigFile modules/extension/aixaudit/events
 4 </Extension>
 5
 6 <Input in>
 7 Module im_file
 8 File "/audit/audit3.bin"
 9 InputType aixaudit
10 </Input>

120.3. Apple System Logs (xm_asl)


This module provides support for parsing Apple System Log (ASL) files. It registers an InputType using the name
of the module instance. This module can be used with the im_file module.

See the list of installer packages that provide the xm_asl module in the Available Modules chapter of the NXLog
User Guide.

120.3.1. Configuration
The xm_asl module accepts only the common module directives.

120.3.2. Fields
The following fields are used by xm_asl.

$raw_event (type: string)


The raw log message.

$EventTime (type: datetime)


A timestamp for when the event was created by the ASL daemon.

$Facility (type: string)


The sender’s facility.

$GroupAccess (type: integer)


The GID of the group that has permission to read the message (-1 for "all groups").

$Level (type: string)


The ASL record level string. See $Severity.

$LevelValue (type: integer)


The ASL record level value corresponding to the $Level.

$RecordId (type: integer)


A numeric ID for this record.

$Sender (type: string)
The name of the process that sent the message.

$SenderGid (type: integer)


The group ID (GID) of the process that generated the event (-1 or -2 may indicate the nobody or nogroup
groups; see /etc/group on the source system).

$SenderHost (type: string)


The host that the sender belongs to (usually the name of the device).

$SenderPid (type: integer)


The ID of the process that generated the event.

$SenderUid (type: integer)


The user ID (UID) of the process that generated the event (-2 may indicate the nobody group; see /etc/group
on the source system).

$Severity (type: string)


The normalized severity of the event, mapped as follows.

ASL Level      Normalized Severity
0/EMERGENCY    5/CRITICAL
1/ALERT        5/CRITICAL
2/CRITICAL     5/CRITICAL
3/ERROR        4/ERROR
4/WARNING      3/WARNING
5/NOTICE       2/INFO
6/INFO         2/INFO
7/DEBUG        1/DEBUG

$SeverityValue (type: integer)


The normalized severity number of the event. See $Severity.

$UserAccess (type: integer)


The UID of the user that has permission to read the message (-1 for "all users").

120.3.3. Examples

Example 542. Parsing Apple System Logs With xm_asl

This example uses an im_file module instance to read an ASL log file and the InputType provided by xm_asl
to parse the events. The various Fields are added to the event record.

nxlog.conf
1 <Extension asl_parser>
2 Module xm_asl
3 </Extension>
4
5 <Input in>
6 Module im_file
7 File "tmp/input.asl"
8 InputType asl_parser
9 </Input>

120.4. Basic Security Module Auditing (xm_bsm)


This module provides support for parsing events logged to file using Sun’s Basic Security Module (BSM) Auditing
API. This module is normally used in combination with the im_file module to read events from a log file. An
InputType is registered using the name of the module instance. See also im_bsm, which reads audit events
directly from the kernel; it is recommended instead in cases where NXLog is running as a local agent on the
system and the device file is available for reading.

On Solaris, the device file is not available and the BSM log files must be read and parsed with im_file and
xm_bsm as shown in the example below.

WARNING: To properly read BSM audit logs from a device file, such as /dev/auditpipe, the im_bsm module
must be used. Do not use the xm_bsm module in combination with im_file to read BSM logs from a device file.

See the list of installer packages that provide the xm_bsm module in the Available Modules chapter of the NXLog
User Guide.

120.4.1. Setup
For information about setting up BSM Auditing, see the corresponding documentation:

• For FreeBSD, see Audit Configuration in the FreeBSD Handbook.


• For Solaris 10, see Enabling and Using BSM Auditing in the Logical Domains 1.2 Administration Guide.
• For Solaris 11, see Managing the BSM Service (Tasks) in the System Administration Guide.

120.4.2. Configuration
The xm_bsm module accepts the following directives in addition to the common module directives.

EventFile
This optional directive can be used to specify the path to the audit event database containing a mapping
between event names and numeric identifiers. The default location, /etc/security/audit_event, is used
when the directive is not specified.
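As a sketch, an xm_bsm instance with an explicit event database path (the path shown is the documented default and only needs to be given when the database lives elsewhere):

```
<Extension bsm_parser>
    Module     xm_bsm
    EventFile  /etc/security/audit_event
</Extension>
```

The instance name registered here ("bsm_parser") is then used as the InputType of an im_file input, as in the example below.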

120.4.3. Fields
The following fields are used by xm_bsm.

$Arbitrary (type: string)
Arbitrary data token associated with the event, if any

$Arg00.Description (type: string)


The description of argument 0 (there may be additional arguments; for example, Arg01)

$Arg00.Value (type: string)


The value of argument 0

$AttributeDevID (type: string)


The device ID the file might represent

$AttributeFsID (type: string)


The file system ID

$AttributeGID (type: string)


The file owner group ID (GID)

$AttributeMode (type: string)


The file access mode and type

$AttributeNodeID (type: string)


The file inode ID

$AttributeUID (type: string)


The file owner user ID (UID)

$CertHash (type: string)


The certificate hash string set

$Cmd (type: string)


The command, with arguments and environment, executed within the zone

$EventHost (type: string)


The host name of the machine corresponding to the event

$EventModifier (type: string)


The ID modifier that identifies special characteristics of the event

$EventName (type: string)


The name of audit event that the record represents

$EventTime (type: datetime)


The time at which the event occurred

$EventType (type: string)


The type of audit event that the record represents

$ExecArgs (type: string)


The list of arguments to an exec() system call

$ExecEnv (type: string)


The list of the current environment variables to an exec() system call

$ExitErrno (type: string)
The exit status as passed to the exit() system call

$ExitRetval (type: string)


The exit return value that describes the exit status

$FileModificationTime (type: datetime)


The last modification time of the file corresponding to the event (if applicable)

$FileName (type: string)


The name of the file corresponding to the event (if applicable)

$Hostname (type: string)


The IP address or hostname where the event originated

$Identity.CDHash (type: string)


Apple Identity CDHash hex

$Identity.SignerId (type: string)


Apple Identity signer ID

$Identity.SignerIdTruncated (type: string)


Apple Identity signer ID truncated flag

$Identity.SignerType (type: string)


Apple Identity signer type

$Identity.TeamId (type: string)


Apple Identity Team ID

$Identity.TeamIdTruncated (type: string)


Apple Identity Team ID truncated flag

$IPAddress (type: string)


The IP address as part of the IP token

$IPC (type: string)


The IPC handle that is used by the caller to identify a particular IPC object

$IPChecksum (type: string)


The checksum of the IP header

$IPCPermCreatorGID (type: string)


The IPC creator group ID (GID)

$IPCPermCreatorUID (type: string)


The IPC creator user ID (UID)

$IPCPermGID (type: string)


The IPC owner group ID (GID)

$IPCPermKey (type: string)


The IPC permission key

$IPCPermMode (type: string)
The IPC access mode

$IPCPermSeqID (type: string)


The IPC slot sequence

$IPCPermUID (type: string)


The IPC owner user ID (UID)

$IPDestAddr (type: string)


The destination address in the IP header

$IPFragmentOffset (type: string)


The fragment offset field of the IP header

$IPHeaderLen (type: string)


The total length of the IP header

$IPIdent (type: string)


The ID of the IP header

$IPProto (type: string)


The IP protocol

$IPServiceType (type: string)


The IP type of service (TOS)

$IPSrcAddr (type: string)


The source address in the IP header

$IPTTL (type: string)


The time-to-live (TTL) of the IP header

$IPVer (type: string)


The version for the Internet Protocol

$KRB5Principal (type: string)


KRB5Principal strings set

$Opaque (type: string)


The opaque field (unformatted, hexadecimal)

$Path (type: string)


Access path information for an object

$Privilege (type: string)


The privilege token

$ProcessAuditID (type: string)


The audit ID in the Process section

$ProcessGID (type: string)


The effective group ID (GID) in the Process section

$ProcessPID (type: string)
The process ID (PID) in the Process section

$ProcessRealGID (type: string)


The real group ID (GID) in the Process section

$ProcessRealUID (type: string)


The real user ID (UID) in the Process section

$ProcessSID (type: string)


The session ID (SID) in the Process section

$ProcessTerminal.Host (type: string)


The terminal IP address in the Process section

$ProcessTerminal.Port (type: string)


The terminal port in the Process section

$ProcessUID (type: string)


The effective user ID (UID) in the Process section

$ReturnErrno (type: string)


The error status of the system call in the Return section

$ReturnRetval (type: string)


The return value of the system call in the Return section

$Sequence (type: string)


The sequence number

$SocketAddress (type: string)


The remote socket address

$SocketPort (type: string)


The remote socket port

$SocketType (type: string)


The socket type field that indicates the type of socket referenced (TCP/UDP/UNIX)

$SubjectAuditID (type: string)


The invariant audit ID in the Subject section

$SubjectGID (type: string)


The effective group ID (GID) in the Subject section

$SubjectPID (type: string)


The process ID (PID) in the Subject section

$SubjectRealGID (type: string)


The real group ID (GID) in the Subject section

$SubjectRealUID (type: string)


The real user ID (UID) in the Subject section

$SubjectSID (type: string)
The session ID (SID) in the Subject section

$SubjectTerminal.Host (type: string)


The terminal IP address in the Subject section

$SubjectTerminal.Port (type: string)


The terminal port in the Subject section

$SubjectUID (type: string)


The effective user ID (UID) in the Subject section

$TerminalAddress (type: string)


The terminal address as found in a Subject and/or Process token

$TerminalLocalPort (type: string)


The terminal local port as found in a Subject and/or Process token

$TerminalRemotePort (type: string)


The terminal remote port as found in a Subject and/or Process token

$Text (type: string)


A text string associated with the event

$TokenVersion (type: string)


A number that identifies the version of the record structure

$Zone (type: string)


The zone name to which the audit event pertains

120.4.4. Examples
Example 543. Parsing BSM Events With xm_bsm

This configuration reads BSM audit logs from file and parses them with the InputType registered by
xm_bsm.

nxlog.conf
1 <Extension bsm_parser>
2 Module xm_bsm
3 </Extension>
4
5 <Input in>
6 Module im_file
7 File '/var/audit/*'
8 InputType bsm_parser
9 </Input>

120.5. Common Event Format (xm_cef)


This module provides functions for generating and parsing data in the ArcSight Common Event Format (CEF). For
more information about the format, see Implementing ArcSight Common Event Format (CEF).

NOTE: CEF uses Syslog as a transport. For this reason, the xm_syslog module must be used in conjunction
with xm_cef in order to parse or generate the additional Syslog header, unless the CEF data is used without
Syslog. See the examples for both cases below.

See the list of installer packages that provide the xm_cef module in the Available Modules chapter of the NXLog
User Guide.

120.5.1. Configuration
The xm_cef module accepts the following directive in addition to the common module directives.

IncludeHiddenFields
This boolean directive specifies whether the to_cef() function and procedure should include fields having
a leading dot (.) or underscore (_) in their names. The default is TRUE. If IncludeHiddenFields is set to TRUE,
the generated CEF text will contain these otherwise excluded fields as extension fields.

120.5.2. Functions
The following functions are exported by xm_cef.

string to_cef()
Convert the specified fields to a single CEF formatted string.

Note that the IncludeHiddenFields directive affects the extension fields in the output.

120.5.3. Procedures
The following procedures are exported by xm_cef.

parse_cef();
Parse the $raw_event field as CEF input.

parse_cef(string source);
Parse the given string as CEF format.

to_cef();
Format the specified fields as CEF and put this into the $raw_event field. The CEF header fields can be
overridden by values contained in the following NXLog fields: $CEFVersion, $CEFDeviceVendor,
$CEFDeviceProduct, $CEFDeviceVersion, $CEFSignatureID, $CEFName, and $CEFSeverity.

Note that the IncludeHiddenFields directive affects the extension fields in the output.
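As a sketch of overriding the CEF header fields before calling to_cef() (the vendor, product, and file path values are placeholders):

```
<Extension cef>
    Module  xm_cef
</Extension>

<Output cef_out>
    Module  om_file
    File    "/tmp/output.cef"
    <Exec>
        $CEFDeviceVendor  = 'Example Corp';
        $CEFDeviceProduct = 'Example App';
        $CEFSeverity      = 5;
        to_cef();
    </Exec>
</Output>
```

Any header field not explicitly set this way keeps the value that to_cef() would otherwise generate.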

120.5.4. Fields
The following fields are used by xm_cef.

In addition to the fields listed below, the parse_cef() procedure will create a field for every key-value pair
contained in the Extension CEF field, such as $act, $cnt, $dhost, etc.

$CEFDeviceProduct (type: string)


The name of the software or appliance that sent the CEF-formatted event log. This field takes the value of the
Device Product CEF header field.

$CEFDeviceVendor (type: string)

The vendor or manufacturer of the device that sent the CEF-formatted event log. This field takes the value of
the Device Vendor CEF header field.

$CEFDeviceVersion (type: string)


The version of the software or appliance that sent the CEF-formatted event log. This field takes the value of
the Device Version CEF header field.

$CEFName (type: string)


A human-readable description of the event. This field takes the value of the Name CEF header field.

$CEFSeverity (type: integer)


A numeric value between 1 and 10 that indicates the severity of the event, where:

• 1 is the lowest event severity,


• 10 is the highest event severity.

This field takes the value of the Severity CEF header field.

$CEFSignatureID (type: string)


A unique identifier (unique per event type) used to determine the type of the reported event. This field takes
the value of the Signature ID CEF header field.

$CEFVersion (type: integer)


The version of the CEF format. This field takes the value of the Version CEF header field.

120.5.5. Examples

Example 544. Sending Windows EventLog as CEF over UDP

This configuration collects both Windows EventLog and NXLog internal messages, converts to CEF with
Syslog headers, and forwards via UDP.

nxlog.conf
 1 <Extension cef>
 2 Module xm_cef
 3 </Extension>
 4
 5 <Extension syslog>
 6 Module xm_syslog
 7 </Extension>
 8
 9 <Input internal>
10 Module im_internal
11 </Input>
12
13 <Input eventlog>
14 Module im_msvistalog
15 </Input>
16
17 <Output udp>
18 Module om_udp
19 Host 192.168.168.2
20 Port 1514
21 Exec $Message = to_cef(); to_syslog_bsd();
22 </Output>
23
24 <Route arcsight>
25 Path internal, eventlog => udp
26 </Route>

Example 545. Parsing CEF

The following configuration receives CEF over UDP and converts the parsed data into JSON.

nxlog.conf
 1 <Extension cef>
 2 Module xm_cef
 3 </Extension>
 4
 5 <Extension syslog>
 6 Module xm_syslog
 7 </Extension>
 8
 9 <Extension json>
10 Module xm_json
11 </Extension>
12
13 <Input udp>
14 Module im_udp
15 Host 0.0.0.0
16 Exec parse_syslog(); parse_cef($Message);
17 </Input>
18
19 <Output file>
20 Module om_file
21 File "cef2json.log"
22 Exec to_json();
23 </Output>
24
25 <Route cef2json>
26 Path udp => file
27 </Route>

120.6. Character Set Conversion (xm_charconv)


This module provides tools for converting strings between different character sets (codepages). All the encodings
available to iconv are supported. See iconv -l for a list of encoding names.

See the list of installer packages that provide the xm_charconv module in the Available Modules chapter of the
NXLog User Guide.

120.6.1. Configuration
The xm_charconv module accepts the following directives in addition to the common module directives.

AutodetectCharsets
This optional directive accepts a comma-separated list of character set names. When auto is specified as the
source encoding for convert() or convert_fields(), these character sets will be tried for conversion. This
directive is not related to the LineReader directive or the corresponding InputType registered by the module.

BigEndian
This optional boolean directive specifies the endianness to use during the encoding conversion. If this
directive is not specified, it defaults to the host’s endianness. This directive only affects the registered
InputType and is only applicable if the LineReader directive is set to a non-Unicode encoding and CharBytes is
set to 2 or 4.

CharBytes
This optional integer directive specifies the byte-width of the encoding to use during conversion. Acceptable
values are 1 (the default), 2, and 4. Most variable width encodings will work with the default value. This
directive only affects the registered InputType and is only applicable if the LineReader directive is set to a
non-Unicode encoding.

LineReader
If this optional directive is specified with an encoding, an InputType will be registered using the name of the
xm_charconv module instance. The following Unicode encodings are supported: UTF-8, UCS-2, UCS-2BE, UCS-
2LE, UCS-4, UCS-4BE, UCS-4LE, UTF-16, UTF-16BE, UTF-16LE, UTF-32, UTF-32BE, UTF-32LE, and UTF-7. For
other encodings, it may be necessary to also set BigEndian and/or CharBytes.

120.6.2. Functions
The following functions are exported by xm_charconv.

string convert(string source, string srcencoding, string dstencoding)


Convert the source string to the encoding specified in dstencoding from srcencoding. The srcencoding argument
can be set to auto to request auto detection.
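A minimal usage sketch of convert() inside an Exec directive (the file path is illustrative; auto-detection assumes an AutodetectCharsets list suitable for the input):

```
<Extension charconv>
    Module              xm_charconv
    AutodetectCharsets  utf-8, iso8859-2
</Extension>

<Input filein>
    Module  im_file
    File    "/tmp/input.log"
    Exec    $raw_event = convert($raw_event, "auto", "utf-8");
</Input>
```

This converts only $raw_event; to convert every string field of the record in one call, use the convert_fields() procedure shown below.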

120.6.3. Procedures
The following procedures are exported by xm_charconv.

convert_fields(string srcencoding, string dstencoding);


Convert all string type fields of a log message from srcencoding to dstencoding. The srcencoding argument can
be set to auto to request auto detection.

120.6.4. Examples
Example 546. Character set auto-detection of various input encodings

This configuration shows an example of character set auto-detection. The input file can contain differently
encoded lines, and the module normalizes output to UTF-8.

nxlog.conf
 1 <Extension charconv>
 2 Module xm_charconv
 3 AutodetectCharsets utf-8, euc-jp, utf-16, utf-32, iso8859-2
 4 </Extension>
 5
 6 <Input filein>
 7 Module im_file
 8 File "tmp/input"
 9 Exec convert_fields("auto", "utf-8");
10 </Input>
11
12 <Output fileout>
13 Module om_file
14 File "tmp/output"
15 </Output>
16
17 <Route r>
18 Path filein => fileout
19 </Route>

Example 547. Registering and Using an InputType

This configuration uses the InputType registered via the LineReader directive to read a file with the ISO-
8859-2 encoding.

nxlog.conf
 1 <Extension charconv>
 2 Module xm_charconv
 3 LineReader ISO-8859-2
 4 </Extension>
 5
 6 <Input in>
 7 Module im_file
 8 File 'modules/extension/charconv/iso-8859-2.in'
 9 InputType charconv
10 </Input>

120.7. Delimiter-Separated Values (xm_csv)


This module provides functions and procedures for working with data formatted as comma-separated values
(CSV). CSV input can be parsed into fields and CSV output can be generated. Delimiters other than the comma
can also be used.

The pm_transformer module provides a simple interface to parse and generate CSV format, but the xm_csv
module exports an API that can be used to solve more complex tasks involving CSV formatted data.

NOTE: It is possible to use more than one xm_csv module instance with different options in order to support
different CSV formats at the same time. For this reason, the functions and procedures exported by the
module are public and must be referenced by the module instance name.

See the list of installer packages that provide the xm_csv module in the Available Modules chapter of the NXLog
User Guide.

120.7.1. Configuration
The xm_csv module accepts the following directives in addition to the common module directives. The Fields
directive is required.

Fields
This mandatory directive accepts a comma-separated list of fields which will be filled from the parsed input.
Field names with or without the dollar sign ($) are accepted. The fields will be stored as strings unless their
types are explicitly specified with the FieldTypes directive.

Delimiter
This optional directive takes a single character (see below) as argument to specify the delimiter character
used to separate fields. The default delimiter character is the comma (,). Note that there is no delimiter after
the last field.

EscapeChar
This optional directive takes a single character (see below) as argument to specify the escape character used
to escape special characters. The escape character is used to prefix the following characters: the escape
character itself, the quote character, and the delimiter character. If EscapeControl is TRUE, the newline (\n),
carriage return (\r), tab (\t), and backspace (\b) control characters are also escaped. The default escape
character is the backslash character (\).

EscapeControl
If this optional boolean directive is set to TRUE, control characters are also escaped. See the EscapeChar
directive for details. The default is TRUE: control characters are escaped. Note that this is necessary to allow
single line CSV field lists which contain line-breaks.

FieldTypes
This optional directive specifies the list of types corresponding to the field names defined in Fields. If
specified, the number of types must match the number of field names specified with Fields. If this directive is
omitted, all fields will be stored as strings. This directive has no effect on the fields-to-CSV conversion.

QuoteChar
This optional directive takes a single character (see below) as argument to specify the quote character used to
enclose fields. If QuoteOptional is TRUE, then only string type fields are quoted. The default is the double-
quote character (").

QuoteMethod
This optional directive can take the following values:

All
All fields will be quoted.

None
Nothing will be quoted. This can be problematic if a field value (typically text that can contain any
character) contains the delimiter character. Make sure that this is escaped or replaced with something
else.

String
Only string type fields will be quoted. This has the same effect as QuoteOptional set to TRUE and is the
default behavior if the QuoteMethod directive is not specified.

Note that this directive only affects CSV generation when using to_csv(). The CSV parser can automatically
detect the quotation.
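For instance, the following extension instance forces every field to be quoted when generating CSV with to_csv() (a minimal sketch; the instance name and field list are illustrative, not taken from this guide):

```
<Extension csv_quoted>
    Module      xm_csv
    Fields      $EventTime, $Severity, $Message
    QuoteMethod All
</Extension>
```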

QuoteOptional
This directive has been deprecated in favor of QuoteMethod, which should be used instead.

StrictMode
If this optional boolean directive is set to TRUE, the CSV parser will fail to parse CSV lines that do not contain
the required number of fields. When this is set to FALSE and the input contains fewer fields than specified in
Fields, the remaining fields will be unset. The default value is FALSE.

UndefValue
This optional directive specifies a string which will be treated as an undefined value. This is particularly useful
when parsing the W3C format where the dash (-) marks an omitted field.
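For example, when parsing W3C-style space-delimited lines where omitted values appear as a dash, a configuration along these lines could be used (a sketch; the field names shown are illustrative):

```
<Extension w3c_csv>
    Module     xm_csv
    Fields     $date, $time, $HTTPMethod, $HTTPURL
    Delimiter  ' '
    UndefValue -
</Extension>
```

With this, any field whose raw value is a single dash is left unset rather than stored as the literal string "-".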

120.7.1.1. Specifying Quote, Escape, and Delimiter Characters


The QuoteChar, EscapeChar, and Delimiter directives can be specified in several ways.

Unquoted single character


Any printable character can be specified as an unquoted character, except for the backslash (\):

Delimiter ;

Control characters
The following non-printable characters can be specified with escape sequences:

\a
audible alert (bell)

\b
backspace

\t
horizontal tab

\n
newline

\v
vertical tab

\f
formfeed

\r
carriage return

For example, to use TAB delimiting:

Delimiter \t

A character in single quotes


The configuration parser strips whitespace, so it is not possible to define a space as the delimiter unless it is
enclosed within quotes:

Delimiter ' '

Printable characters can also be enclosed:

Delimiter ';'

The backslash can be specified when enclosed within quotes:

Delimiter '\'

A character in double quotes


Double quotes can be used like single quotes:

Delimiter " "

The backslash can be specified when enclosed within double quotes:

Delimiter "\"

A hexadecimal ASCII code


Hexadecimal ASCII character codes can also be used by prepending 0x. For example, the space can be
specified as:

Delimiter 0x20

This is equivalent to:

Delimiter " "

120.7.2. Functions
The following functions are exported by xm_csv.

string to_csv()
Convert the specified fields to a single CSV formatted string.

120.7.3. Procedures
The following procedures are exported by xm_csv.

parse_csv();
Parse the $raw_event field as CSV input.

parse_csv(string source);
Parse the given string as CSV format.

to_csv();
Format the specified fields as CSV and put this into the $raw_event field.

120.7.4. Examples

Example 548. Complex CSV Format Conversion

This example shows that the xm_csv module can not only parse and create CSV formatted input and output,
but with multiple xm_csv modules it is also possible to reorder, add, remove, or modify fields before
outputting to a different CSV format.

nxlog.conf
 1 <Extension csv1>
 2 Module xm_csv
 3 Fields $id, $name, $number
 4 FieldTypes integer, string, integer
 5 Delimiter ,
 6 </Extension>
 7
 8 <Extension csv2>
 9 Module xm_csv
10 Fields $id, $number, $name, $date
11 Delimiter ;
12 </Extension>
13
14 <Input in>
15 Module im_file
16 File "tmp/input"
17 <Exec>
18 csv1->parse_csv();
19 $date = now();
20 if not defined $number $number = 0;
21 csv2->to_csv();
22 </Exec>
23 </Input>
24
25 <Output out>
26 Module om_file
27 File "tmp/output"
28 </Output>

Input Sample
1, "John K.", 42
2, "Joe F.", 43

Output Sample
1;42;"John K.";2011-01-15 23:45:20
2;43;"Joe F.";2011-01-15 23:45:20

120.8. Encryption (xm_crypto)


This module provides encryption and decryption of logs by using stream processors which implement the AES
symmetric encryption algorithm. Stream processors are applied with the im_file module to process data at the
input and with the om_file module before writing to the output. The functionality of this module can be
combined with other stream modules like the xm_zlib.

120.8.1. Configuration
The xm_crypto module accepts the following directives in addition to the common module directives.

Password
This optional directive defines the password which can be used by the aes_encrypt and aes_decrypt stream
processors while processing data. This directive is mutually exclusive with the PasswordFile directive.

PasswordFile
This optional directive specifies the file to read the password from. This directive is mutually exclusive with
the Password directive.

UseSalt
If this optional boolean directive is set to TRUE, a randomly generated salt is used during encryption and
decryption. The default value is TRUE.

Iter
This optional directive enhances security by enabling the PBKDF2 algorithm and setting the iteration count
during the key generation. For more details, see the EVP_BytesToKey section on the OpenSSL website.

The default value is 0.
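As an illustration, key derivation can be hardened by setting a non-zero iteration count (a hedged sketch; the iteration value shown is arbitrary, and a matching iteration count would then be needed when decrypting the data outside NXLog, for example with the -iter option of the openssl enc command):

```
<Extension crypto>
    Module   xm_crypto
    Password ThePassword123!!
    Iter     10000
</Extension>
```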

120.8.2. Stream Processors


The xm_crypto module implements the following stream processors for encrypting and decrypting data with the
im_file and om_file modules.

aes_encrypt
This stream processor implements encryption of the log data. It is specified in the OutputType directive after
the specification of the output writer function. The encryption result is similar to running the following
OpenSSL command:

openssl enc -aes256 -md sha256 -pass pass:password -in input_filename -out output_encrypted_filename

NOTE: Rotation of files is done automatically when encrypting log data with the aes_encrypt processor.
The rotation pattern is original_file -> original_file.1 -> original_file.2 -> ... ->
original_file.n. There is no built-in removal or cleanup.

aes_decrypt
This stream processor implements decryption of the log data. It is specified in the InputType directive before
the specification of the input reader function. The decryption result is similar to running the following
OpenSSL command:

openssl enc -aes256 -d -md sha256 -pass pass:password -in encrypted_filename -out output_decrypted_filename

120.8.3. Examples
The examples below describe various ways for processing logs with the xm_crypto module.

Example 549. Encryption of Logs

The following configuration uses the im_file to read log messages. The aes_encrypt stream processor
encrypts data at the output. The result is saved to a file.

nxlog.conf
 1 <Extension crypto>
 2 Module xm_crypto
 3 Password ThePassword123!!
 4 </Extension>
 5
 6 <Input in>
 7 Module im_file
 8 File 'tmp/input'
 9 </Input>
10
11 <Output out>
12 Module om_file
13 File 'tmp/output'
14 OutputType LineBased, crypto.aes_encrypt
15 </Output>

Example 550. Decryption of Logs

The following configuration uses the aes_decrypt stream processor to decrypt log messages at the input.
The result is saved to a file.

nxlog.conf
 1 <Extension crypto>
 2 Module xm_crypto
 3 UseSalt TRUE
 4 PasswordFile /tmp/passwordfile
 5 </Extension>
 6
 7 <Input in>
 8 Module im_file
 9 File '/tmp/input'
10 InputType crypto.aes_decrypt, LineBased
11 </Input>
12
13 <Output out>
14 Module om_file
15 File '/tmp/output'
16 </Output>

Example 551. Decryption and Encryption of Logs

The configuration below uses the aes_decrypt stream processor to decrypt input data. The Exec directive
runs the to_syslog_ietf() procedure to convert messages to the IETF Syslog format. At the output,
the result is encrypted with the aes_encrypt processor and saved to a file.

<Extension crypto>
  Module xm_crypto
  UseSalt TRUE
  PasswordFile /tmp/passwordfile
</Extension>

<Extension syslog>
  Module xm_syslog
</Extension>

<Input in>
  Module im_file
  File 'tmp/input'
  InputType crypto.aes_decrypt, LineBased
  Exec to_syslog_ietf();
</Input>

<Output out>
  Module om_file
  File 'tmp/output'
  OutputType LineBased, crypto.aes_encrypt
</Output>

The InputType and OutputType directives provide sequential usage of multiple stream processors to create
workflows. For example, the xm_zlib module functionality can be combined with the xm_crypto module to
provide compression and encryption of logs at the same time.

While configuring stream processors, compression should always precede encryption. In the opposite process,
decryption should occur before decompression.

Example 552. Processing Data With Various Stream Processors

The configuration below utilizes the aes_decrypt processor of the xm_crypto module to decrypt log
messages and the decompress stream processor of the xm_zlib module to decompress the data. Using the
Exec directive, messages with the info string in their body are selected. Then the selected messages are
compressed and encrypted using the compress and aes_encrypt stream processors. The result is saved to a
file.

<Extension zlib>
  Module xm_zlib
  Format zlib
  CompressionLevel 9
  CompBufSize 16384
  DecompBufSize 16384
</Extension>

<Extension crypto>
  Module xm_crypto
  UseSalt TRUE
  Password ThePassword123!!
</Extension>

<Input in>
  Module im_file
  File '/tmp/input'
  InputType crypto.aes_decrypt, zlib.decompress, LineBased
  Exec if not ($raw_event =~ /info/) drop();
</Input>

<Output out>
  Module om_file
  File 'tmp/output'
  OutputType LineBased, zlib.compress, crypto.aes_encrypt
</Output>

120.9. External Programs (xm_exec)


This module provides two procedures which make it possible to execute external scripts or programs. These two
procedures are provided through this extension module in order to keep the NXLog core small. Also, without this
module loaded an administrator is not able to execute arbitrary scripts.

NOTE: The im_exec and om_exec modules also provide support for running external programs, though
the purpose of these is to pipe data to and read data from programs. The procedures provided by the
xm_exec module do not pipe log message data, but are intended for multiple invocations (though data
can still be passed to the executed script/program as command line arguments).

See the list of installer packages that provide the xm_exec module in the Available Modules chapter of the NXLog
User Guide.

120.9.1. Configuration
The xm_exec module accepts only the common module directives.

120.9.2. Functions
The following functions are exported by xm_exec.

string exec(string command, varargs args)
Execute command, passing it the supplied arguments, and wait for it to terminate. The command is executed
in the caller module’s context. Returns a string from stdout. Note that the module calling this function will
block for at most 10 seconds or until the process terminates. Use the exec_async() procedure to avoid this
problem. All output written to standard error by the spawned process is discarded.
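For example, the output of a short-running command can be captured into a field (a minimal sketch; it assumes /bin/hostname exists on the system, and note that the returned string may include a trailing newline):

```
<Extension exec>
    Module xm_exec
</Extension>

<Input in>
    Module im_file
    File   '/tmp/input'
    # Runs once per event; keep the command fast to avoid blocking
    Exec   $ReportingHost = exec('/bin/hostname');
</Input>
```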

120.9.3. Procedures
The following procedures are exported by xm_exec.

exec(string command, varargs args);


Execute command, passing it the supplied arguments, and wait for it to terminate. The command is executed
in the caller module’s context. Note that the module calling this procedure will block until the process
terminates. Use the exec_async() procedure to avoid this problem. All output written to standard output and
standard error by the spawned process is discarded.

exec_async(string command, varargs args);


This procedure executes the command passing it the supplied arguments and does not wait for it to
terminate.

120.9.4. Examples
Example 553. NXLog Acting as a Cron Daemon

This xm_exec module instance will run the command every second without waiting for it to terminate.

nxlog.conf
1 <Extension exec>
2 Module xm_exec
3 <Schedule>
4 Every 1 sec
5 Exec exec_async("/bin/true");
6 </Schedule>
7 </Extension>

Example 554. Sending Email Alerts

If the $raw_event field matches the regular expression, an email will be sent.

nxlog.conf
 1 <Extension exec>
 2 Module xm_exec
 3 </Extension>
 4
 5 <Input tcp>
 6 Module im_tcp
 7 Host 0.0.0.0
 8 Port 1514
 9 <Exec>
10 if $raw_event =~ /alertcondition/
11 {
12 exec_async("/bin/sh", "-c", 'echo "' + $Hostname +
13 '\n\nRawEvent:\n' + $raw_event +
14 '"|/usr/bin/mail -a "Content-Type: text/plain; charset=UTF-8" -s "ALERT" ' +
15 'user@domain.com');
16 }
17 </Exec>
18 </Input>
19
20 <Output file>
21 Module om_file
22 File "/var/log/messages"
23 </Output>
24
25 <Route tcp_to_file>
26 Path tcp => file
27 </Route>

For another example, see File Rotation Based on Size.

120.10. File Lists (xm_filelist)


The xm_filelist module can be used to implement file-based blacklisting or whitelisting. This extension module
provides two functions, contains() and matches(), that can be invoked to check whether the string argument is
present in the file. This can be a username, IP address, or similar. Each referenced file is cached in memory and
any modifications are automatically loaded without the need to reload NXLog.

See the list of installer packages that provide the xm_filelist module in the Available Modules chapter of the
NXLog User Guide.

120.10.1. Configuration
The xm_filelist module accepts the following directives in addition to the common module directives. The File
directive is required.

File
The mandatory File directive specifies the name of the file that will be read into memory. This directive may
be specified more than once if multiple files need to be operated on.

CheckInterval
This optional directive specifies the frequency with which the files are checked for modifications, in seconds.
The default value is 5 seconds. File checks are disabled if CheckInterval is set to 0.

120.10.2. Functions
The following functions are exported by xm_filelist.

boolean contains(string str)


Check if a line in the file(s) contains the string str.

boolean contains(string str, boolean caseinsensitive)


Check if a line in the file(s) contains the string str. The check is case-insensitive if caseinsensitive is TRUE.

boolean matches(string str)


Check if a line in the file(s) matches the string str.

boolean matches(string str, boolean caseinsensitive)


Check if a line in the file(s) matches the string str. The check is case-insensitive if caseinsensitive is TRUE.
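As an illustration, these functions can be used to drop events originating from blacklisted addresses (a hedged sketch; the file path and the $SourceIPAddress field are assumptions about the deployment, not part of this module):

```
<Extension blacklist>
    Module xm_filelist
    File   '/etc/nxlog/ip_blacklist.txt'
</Extension>

<Input in>
    Module im_file
    File   '/var/log/app.log'
    # Discard events whose source address appears in the blacklist file
    Exec   if blacklist->contains($SourceIPAddress) drop();
</Input>
```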

120.11. File Operations (xm_fileop)


This module provides functions and procedures to manipulate files. Coupled with a Schedule block, this module
allows various log rotation and retention policies to be implemented, including:

• log file retention based on file size,


• log file retention based on file age, and
• cyclic log file rotation and retention.

NOTE: Rotating, renaming, or removing the file written by om_file is also supported with the help of the
om_file reopen() procedure.

See the list of installer packages that provide the xm_fileop module in the Available Modules chapter of the NXLog
User Guide.

120.11.1. Configuration
The xm_fileop module accepts only the common module directives.

120.11.2. Functions
The following functions are exported by xm_fileop.

boolean dir_exists(string path)


Return TRUE if path exists and is a directory. On error undef is returned and an error is logged.

string dir_temp_get()
Return the name of a directory suitable as a temporary storage location.

string file_basename(string file)


Strip the directory name from the full file path. For example, file_basename('/var/log/app.log') will
return app.log.

datetime file_ctime(string file)
Return the creation or inode-changed time of file. On error undef is returned and an error is logged.

string file_dirname(string file)


Return the directory name of the full file path. For example, file_dirname('/var/log/app.log') will return
/var/log. Returns an empty string if file does not contain any directory separators.

boolean file_exists(string file)


Return TRUE if file exists and is a regular file.

binary file_hash(string file, string digest)


Return the calculated hash of file using digest algorithm. Available digest values are blake2b512,
blake2s256, gost, md4, md5, rmd160, sha1, sha224, sha256, sha384, sha512 (see openssl dgst
command in openssl’s manual). On error undef is returned and an error is logged.

integer file_inode(string file)


Return the inode number of file. On error undef is returned and an error is logged.

datetime file_mtime(string file)


Return the last modification time of file. On error undef is returned and an error is logged.

string file_read(string file)


Return the contents of file as a string value. On error undef is returned and an error is logged.

integer file_size(string file)


Return the size of file, in bytes. On error undef is returned and an error is logged.

string file_type(string file)


Return the type of file. The following string values can be returned: FILE, DIR, CHAR, BLOCK, PIPE, LINK,
SOCKET, and UNKNOWN. On error undef is returned and an error is logged.

120.11.3. Procedures
The following procedures are exported by xm_fileop.

dir_make(string path);
Create a directory recursively (like mkdir -p). It succeeds if the directory already exists. An error is logged if
the operation fails.

dir_remove(string file);
Remove the directory from the filesystem.

file_append(string src, string dst);


Append the contents of the file src to dst. The dst file will be created if it does not exist. An error is logged if
the operation fails.

file_chmod(string file, integer mode);


Change the permissions of file. This function is only implemented on POSIX systems where chmod() is
available in the underlying operating system. An error is logged if the operation fails.

file_chown(string file, integer uid, integer gid);


Change the ownership of file. This function is only implemented on POSIX systems where chown() is available
in the underlying operating system. An error is logged if the operation fails.

file_chown(string file, string user, string group);
Change the ownership of file. This function is only implemented on POSIX systems where chown() is available
in the underlying operating system. An error is logged if the operation fails.

file_copy(string src, string dst);


Copy the file src to dst. If file dst already exists, its contents will be overwritten. An error is logged if the
operation fails.

file_cycle(string file);
Do a cyclic rotation on file. The file will be moved to "file.1". If "file.1" already exists it will be moved to "file.2",
and so on. Wildcards are supported in the file path and filename. The backslash (\) must be escaped if used
as the directory separator with wildcards (for example, C:\\test\\*.log). This procedure will reopen the
LogFile if it is cycled. An error is logged if the operation fails.

file_cycle(string file, integer max);


Do a cyclic rotation on file as described above. The max argument specifies the maximum number of files to
keep. For example, if max is 5, "file.6" will be deleted.

file_link(string src, string dst);


Create a hardlink from src to dst. An error is logged if the operation fails.

file_remove(string file);
Remove file. It is possible to specify a wildcard in the filename (but not in the path). The backslash (\) must be
escaped if used as the directory separator with wildcards (for example, C:\\test\\*.log). This procedure
will reopen the LogFile if it is removed. An error is logged if the operation fails.

file_remove(string file, datetime older);


Remove file if its creation time is older than the value specified in older. It is possible to specify a wildcard in
the filename (but not in the path). The backslash (\) must be escaped if used as the directory separator with
wildcards (for example, C:\\test\\*.log). This procedure will reopen the LogFile if it is removed. An error is
logged if the operation fails.

file_rename(string old, string new);


Rename the file old to new. If the file new exists, it will be overwritten. Moving files or directories across
devices may not be possible. This procedure will reopen the LogFile if it is renamed. An error is logged if the
operation fails.

file_touch(string file);
Update the last modification time of file or create the file if it does not exist. An error is logged if the operation
fails.

file_truncate(string file);
Truncate file to zero length. If the file does not exist, it will be created. An error is logged if the operation fails.

file_truncate(string file, integer offset);


Truncate file to the size specified in offset. If the file does not exist, it will be created. An error is logged if the
operation fails.

file_write(string file, string value);


Write value into file. The file will be created if it does not exist. An error is logged if the operation fails.

120.11.4. Examples
Example 555. Rotation of the Internal LogFile

In this example, the internal log file is rotated based on time and size.

nxlog.conf
 1 #define LOGFILE C:\Program Files (x86)\nxlog\data\nxlog.log
 2 define LOGFILE /var/log/nxlog/nxlog.log
 3
 4 <Extension fileop>
 5 Module xm_fileop
 6
 7 # Check the log file size every hour and rotate if larger than 1 MB
 8 <Schedule>
 9 Every 1 hour
10 Exec if (file_size('%LOGFILE%') >= 1M) \
11 file_cycle('%LOGFILE%', 2);
12 </Schedule>
13
14 # Rotate log file every week on Sunday at midnight
15 <Schedule>
16 When @weekly
17 Exec file_cycle('%LOGFILE%', 2);
18 </Schedule>
19 </Extension>

120.12. GELF (xm_gelf)


This module provides reader and writer functions which can be used for processing log data in the Graylog
Extended Log Format (GELF) for Graylog2 or GELF compliant tools.

Unlike Syslog format (with Snare Agent, for example), the GELF format contains structured data in JSON so that
the fields are available for analysis. This is especially convenient with sources such as the Windows EventLog
which already generate logs in a structured format.

The xm_gelf module provides the following reader and writer functions.

InputType GELF_TCP
This input reader parses GELF input received over TCP (use with the im_tcp input module).

InputType GELF_UDP
This input reader parses GELF input received over UDP (use with the im_udp input module).

InputType GELF
This type is equivalent to the GELF_UDP reader.

OutputType GELF_TCP
This output writer generates GELF for use with TCP (use with the om_tcp output module).

OutputType GELF_UDP
This output writer generates GELF for use with UDP (use with the om_udp output module).

OutputType GELF
This type is equivalent to the GELF_UDP writer.

Configuring NXLog to process GELF input or output requires loading the xm_gelf extension module and then
setting the corresponding InputType or OutputType in the Input or Output module instance. See the examples
below.

The GELF output generated by this module includes all fields, except for the $raw_event field and any field
having a leading dot (.) or underscore (_).

See the list of installer packages that provide the xm_gelf module in the Available Modules chapter of the NXLog
User Guide.

120.12.1. Configuration
The xm_gelf module accepts the following directives in addition to the common module directives.

IncludeHiddenFields
This boolean directive specifies whether the GELF output should include fields having a leading dot (.) or
underscore (_) in their names. The default is FALSE. If IncludeHiddenFields is set to TRUE, then the generated
GELF JSON will contain these otherwise excluded fields. In this case the field name _fld1 will become __fld1
and .fld2 will become _.fld2 in the GELF JSON.

ShortMessageLength
This optional directive can be used to specify the length of the short_message field for the GELF output writers.
This defaults to 64 if the directive is not explicitly specified. If the field short_message or ShortMessage is
present, it will not be truncated.

UseNullDelimiter
If this optional boolean directive is TRUE, the GELF_TCP output writer will use the NUL delimiter. If this
directive is FALSE, it will use the newline delimiter. The default is TRUE.
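For example, a GELF output framed with newlines instead of the NUL delimiter might be configured like this (a sketch; the host and port are placeholders):

```
<Extension gelf>
    Module           xm_gelf
    UseNullDelimiter FALSE
</Extension>

<Output graylog>
    Module     om_tcp
    Host       192.168.1.1
    Port       12201
    OutputType GELF_TCP
</Output>
```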

120.12.2. Fields
The following fields are used by xm_gelf.

In addition to the fields listed below, if the GELF input contains custom user fields (those prefixed with the
underscore (_) character), those fields will be available without the prefix. For example, the GELF record
{"_foo": "bar"} will generate the field $foo containing the value "bar".

$EventTime (type: datetime)


The time when the GELF message was created. This is called timestamp in the GELF specification.

$FullMessage (type: string)


A long message that might contain a backtrace, a listing of environment variables, and so on.

$Hostname (type: string)


The name of the host that sent the GELF message. This is called host in the GELF specification.

$SeverityValue (type: integer)


The standard syslog severity level. This is called level in the GELF specification.

$ShortMessage (type: string)


A short message with a brief description of the event. If the "short_message" JSON field is not present in the
incoming GELF message, the module uses the truncated value of $Message or $raw_event.

$SourceLine (type: integer)
The line in a file that caused the event. This is called line in the GELF specification.

$SyslogFacility (type: string)


The syslog facility that created the event. This is called facility in the GELF specification.

$version (type: string)


The GELF specification version as present in the input, e.g. 1.1.

120.12.3. Examples
Example 556. Parsing GELF Logs Collected via TCP

This configuration uses the im_tcp module to collect logs over TCP port 12201 and the xm_gelf module to
parse them.

nxlog.conf
 1 <Extension gelf>
 2 Module xm_gelf
 3 </Extension>
 4
 5 <Input tcpin>
 6 Module im_tcp
 7 Host 0.0.0.0
 8 Port 12201
 9 InputType GELF_TCP
10 </Input>

Example 557. Sending Windows EventLog to Graylog2 in GELF

The following configuration reads the Windows EventLog and sends it to a Graylog2 server in GELF format.

nxlog.conf
 1 <Extension gelf>
 2 Module xm_gelf
 3 </Extension>
 4
 5 <Input eventlog>
 6 # Use 'im_mseventlog' for Windows XP, 2000 and 2003
 7 Module im_msvistalog
 8 # Uncomment the following to collect specific event logs only
 9 # but make sure not to leave any `#` as only <!-- --> style comments
10 # are supported inside the XML.
11 #Query <QueryList>\
12 # <Query Id="0">\
13 # <Select Path="Application">*</Select>\
14 # <Select Path="System">*</Select>\
15 # <Select Path="Security">*</Select>\
16 # </Query>\
17 # </QueryList>
18 </Input>
19
20 <Output udp>
21 Module om_udp
22 Host 192.168.1.1
23 Port 12201
24 OutputType GELF_UDP
25 </Output>
26
27 <Route eventlog_to_udp>
28 Path eventlog => udp
29 </Route>

Example 558. Forwarding Custom Log Files to Graylog2 in GELF

In this example, custom application logs are collected and sent out in GELF, with custom fields set to make
the data more useful for the receiver.

nxlog.conf (truncated)
 1 <Extension gelf>
 2 Module xm_gelf
 3 </Extension>
 4
 5 <Input file>
 6 Module im_file
 7 File "/var/log/app*.log"
 8
 9 <Exec>
10 # Set the $EventTime field usually found in the logs by
11 # extracting it with a regexp. If this is not set, the current
12 # system time will be used which might be a little off.
13 if $raw_event =~ /(\d\d\d\d\-\d\d-\d\d \d\d:\d\d:\d\d)/
14 $EventTime = parsedate($1);
15
16 # Explicitly set the Hostname. This defaults to the system's
17 # hostname if unset.
18 $Hostname = 'myhost';
19
20 # Now set the severity level to something custom. This defaults
21 # to 'INFO' if unset. We can use the following numeric values
22 # here which are the standard Syslog values: ALERT: 1, CRITICAL:
23 # 2, ERROR: 3, WARNING: 4, NOTICE: 5, INFO: 6, DEBUG: 7
24 if $raw_event =~ /ERROR/ $SyslogSeverityValue = 3;
25 else $SyslogSeverityValue = 6;
26
27 # Set a field to contain the name of the source file
28 $FileName = file_name();
29 [...]

Example 559. Parsing a CSV File and Sending it to Graylog2 in GELF

With this configuration, NXLog will read a CSV file containing three fields and forward the data in GELF so
that the fields will be available on the server.

nxlog.conf
 1 <Extension gelf>
 2 Module xm_gelf
 3 </Extension>
 4
 5 <Extension csv>
 6 Module xm_csv
 7 Fields $name, $number, $location
 8 FieldTypes string, integer, string
 9 Delimiter ,
10 </Extension>
11
12 <Input file>
13 Module im_file
14 File "/var/log/app/csv.log"
15 Exec csv->parse_csv();
16 </Input>
17
18 <Output udp>
19 Module om_udp
20 Host 192.168.1.1
21 Port 12201
22 OutputType GELF_UDP
23 </Output>
24
25 <Route csv_to_gelf>
26 Path file => udp
27 </Route>

120.13. Go (xm_go)
This module provides support for processing NXLog log data with methods written in the Go language. The file
specified by the ImportLib directive should contain one or more methods which can be called from the Exec
directive of any module. See also the im_go and om_go modules.

NOTE: For the system requirements, installation details, and environment configuration of Go, see the
Getting Started section in the Go documentation. The Go environment is only needed for compiling the
Go file; NXLog does not need the Go environment for its operation.

The Go script imports the NXLog module, and will have access to the following classes and functions.

class nxModule
This class is instantiated by NXLog and can be accessed via the nxLogdata.module attribute. This can be used
to set or access variables associated with the module (see the example below).

nxmodule.NxLogdataNew(*nxLogdata)
This function creates a new log data record.

nxmodule.Post(ld *nxLogdata)
This function puts log data struct for further processing.

nxmodule.AddEvent()
This function adds a READ event to NXLog so that it can be invoked later.

nxmodule.AddEventDelayed(mSec C.int)
This function adds a delayed READ event to NXLog so that it can be invoked later, after mSec milliseconds.

class nxLogdata
This class represents an event. It is instantiated by NXLog and passed to the method specified by the go_call()
function.

nxlogdata.Get(field string) (interface{}, bool)


This function returns the value/exists pair for the logdata field.

nxlogdata.GetString(field string) (string, bool)


This function returns the value/exists pair for the string representation of the logdata field.

nxlogdata.Set(field string, val interface{})


This function sets the logdata field value.

nxlogdata.Delete(field string)
This function removes the field from logdata.

nxlogdata.Fields() []string
This function returns an array of fields names in the logdata record.

module
This attribute is set to the module object associated with the event.

See the list of installer packages that provide the xm_go module in the Available Modules chapter of the NXLog
User Guide.

120.13.1. Installing the gonxlog.go File


For the Go environment to work with NXLog, the gonxlog.go file has to be installed.

NOTE: This applies to Linux only.

1. Copy the gonxlog.go file from the /opt/nxlog/lib/nxlog/modules/extension/go/gopkg/nxlog.co/gonxlog/
directory to the $GOPATH/src/nxlog.co/gonxlog/ directory.

2. Change directory to $GOPATH/src/nxlog.co/gonxlog/.

3. Execute the go install gonxlog.go command to install the file.

120.13.2. Compiling the Go File


In order to be able to call Go functions, the Go file must be compiled into a shared object file that has the .so
extension. The syntax for compiling the Go file is the following.

go build -o /path/to/yoursofile.so -buildmode=c-shared /path/to/yourgofile.go

120.13.3. Configuration
The xm_go module accepts the following directives in addition to the common module directives.

ImportLib
This mandatory directive specifies the file containing the Go code compiled into a shared library .so file.

Exec
This mandatory directive invokes go_call(function); the Go function must accept an nxLogData object as its
only argument. Any number of go_call(function) invocations may be defined, as shown below.

Exec go_call("process", "arg1", "arg2", ..., "argN")

120.13.4. Configuration Templates

In this Go file template, a simple function is called via the go_call("process"); argument using the Exec
directive.

xm_go Simple Template


//export process
func process(ctx unsafe.Pointer) {
  // get logdata from the context
  if ld, ok := gonxlog.GetLogdata(ctx); ok {
  // place your code here
  }
}

In this Go file template, a multi-argument function is called via the go_call("process", "arg1",
"arg2", …, "argN") argument using the Exec directive.

xm_go Multi-argument Template


//export process
func process(ptr unsafe.Pointer) {
    // get logdata from the context
    if data, ok := gonxlog.GetLogdata(ptr); ok {
        // place your code here
        // get additional arguments
        for i := 0; i < gonxlog.GetArgc(ptr); i++ {
            // iterate through additional args: "arg1", "arg2", ..., "argN"
            if arg, ok := gonxlog.GetArg(ptr, i); ok {
                // place your additional argument handling here
            }
        }
    }
}

120.13.5. Examples

Example 560. Using xm_go for Log Processing

This configuration calls the process function in the compiled external Go shared object file to
mask the IPv4 addresses in the input file, so that each of them appears as x.x.x.x in the output file.

nxlog.conf
 1 <Extension ext>
 2 Module xm_go
 3 ImportLib "file/process.so"
 4 </Extension>
 5
 6 <Input in1>
 7 Module im_file
 8 File "file/input.txt"
 9 </Input>
10
11 <Output out>
12 Module om_file
13 File "file/output.txt"
14 Exec go_call("process");
15 </Output>

xm_go file Sample


//export process
func process(ctx unsafe.Pointer) {
    if ld, ok := gonxlog.GetLogdata(ctx); ok {
        if rawEvent, ok := ld.GetString("raw_event"); ok {
            ld.Set("raw_event", re.ReplaceAllStringFunc(rawEvent, func(word string) string {
                if wordIsIpv4Address(word) {
                    return "x.x.x.x"
                }
                return word
            }))
        }
    }
}
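The sample above relies on a compiled regular expression re and a helper wordIsIpv4Address() that are not shown. The following self-contained Go sketch shows how such a helper and masking function might look; the names digitRunRe, wordIsIpv4Address, and maskIPv4 are assumptions for illustration, not part of the NXLog API.

```go
package main

import (
	"fmt"
	"net"
	"regexp"
)

// digitRunRe matches runs of digits and dots so that each candidate
// token can be tested individually.
var digitRunRe = regexp.MustCompile(`[0-9.]+`)

// wordIsIpv4Address reports whether the token parses as an IPv4 address.
func wordIsIpv4Address(word string) bool {
	ip := net.ParseIP(word)
	return ip != nil && ip.To4() != nil
}

// maskIPv4 replaces every IPv4 address in s with "x.x.x.x", mirroring
// the transformation performed by the sample above.
func maskIPv4(s string) string {
	return digitRunRe.ReplaceAllStringFunc(s, func(word string) string {
		if wordIsIpv4Address(word) {
			return "x.x.x.x"
		}
		return word
	})
}

func main() {
	fmt.Println(maskIPv4("Setting vmnet-dhcp IP address: 192.168.169.254"))
	// prints: Setting vmnet-dhcp IP address: x.x.x.x
}
```

Tokens that parse as complete IPv4 addresses are replaced; other numeric runs, such as timestamps and PIDs, are left intact.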

Input Sample
Sep 30 14:20:24 mephisto vmnet-dhcpd: Configured subnet: 192.168.169.0↵
Sep 30 14:20:24 mephisto vmnet-dhcpd: Setting vmnet-dhcp IP address: 192.168.169.254↵
Sep 30 14:20:24 mephisto vmnet-dhcpd: Recving on VNet/vmnet1/192.168.169.0↵
Sep 30 14:20:24 mephisto kernel: /dev/vmnet: open called by PID 3243 (vmnet-dhcpd)↵
Sep 30 14:20:24 mephisto vmnet-dhcpd: Sending on VNet/vmnet1/192.168.169.0↵

Output Sample
Sep 30 14:20:24 mephisto vmnet-dhcpd: Configured subnet: x.x.x.x↵
Sep 30 14:20:24 mephisto vmnet-dhcpd: Setting vmnet-dhcp IP address: x.x.x.x↵
Sep 30 14:20:24 mephisto vmnet-dhcpd: Recving on VNet/vmnet1/x.x.x.x↵
Sep 30 14:20:24 mephisto kernel: /dev/vmnet: open called by PID 3243 (vmnet-dhcpd)↵
Sep 30 14:20:24 mephisto vmnet-dhcpd: Sending on VNet/vmnet1/x.x.x.x↵

120.14. Grok (xm_grok)


This module supports parsing events with Grok patterns. A field is added to the event record for each pattern
semantic. For more information about Grok, see the Logstash Grok filter plugin documentation.

See the list of installer packages that provide the xm_grok module in the Available Modules chapter of the NXLog
User Guide.

120.14.1. Configuration
The xm_grok module accepts the following directives in addition to the common module directives.

Pattern
This mandatory directive specifies a directory or file containing Grok patterns. Wildcards may be used to
specify multiple directories or files. This directive may be used more than once.

120.14.2. Functions
The following functions are exported by xm_grok.

boolean match_grok(string pattern)


Execute the match_grok() procedure with the specified pattern on the $raw_event field. If the event is
successfully matched, return TRUE, otherwise FALSE.

boolean match_grok(string field, string pattern)


Execute the match_grok() procedure with the specified pattern on the specified field. If the event is
successfully matched, return TRUE, otherwise FALSE.

120.14.3. Procedures
The following procedures are exported by xm_grok.

match_grok(string pattern);
Attempt to match and parse the $raw_event field of the current event with the specified pattern.

match_grok(string field, string pattern);


Attempt to match and parse the field of the current event with the specified pattern.

120.14.4. Examples

Example 561. Using Grok Patterns for Parsing

This configuration reads Syslog events from file and parses them with the parse_syslog() procedure (this
sets the $Message field). Then the match_grok() function is used to attempt a series of matches on the
$Message field until one is successful. If no patterns match, an internal message is logged.

nxlog.conf
 1 <Extension _syslog>
 2 Module xm_syslog
 3 </Extension>
 4
 5 <Extension grok>
 6 Module xm_grok
 7 Pattern modules/extension/grok/patterns2.txt
 8 </Extension>
 9
10 <Input in>
11 Module im_file
12 File 'test2.log'
13 <Exec>
14 parse_syslog();
15 if match_grok($Message, "%{SSH_AUTHFAIL_WRONGUSER}") {}
16 else if match_grok($Message, "%{SSH_AUTHFAIL_WRONGCREDS}") {}
17 else if match_grok($Message, "%{SSH_AUTH_SUCCESS}") {}
18 else if match_grok($Message, "%{SSH_DISCONNECT}") {}
19 else
20 {
21 log_info('Event did not match any pattern');
22 }
23 </Exec>
24 </Input>

patterns2.txt
USERNAME [a-zA-Z0-9_-]+
INT (?:[+-]?(?:[0-9]+))
BASE10NUM (?<![0-9.+-])(?>[+-]?(?:(?:[0-9]+(?:\.[0-9]+)?)|(?:\.[0-9]+)))
NUMBER (?:%{BASE10NUM})
WORD \b\w+\b
GREEDYDATA .*
IP (?<![0-9])(?:(?:25[0-5]|2[0-4][0-9]|[0-1]?[0-9]{1,2})[.](?:25[0-5]|2[0-4][0-9]|[0-1]?[0-9]{1,2})[.](?:25[0-5]|2[0-4][0-9]|[0-1]?[0-9]{1,2})[.](?:25[0-5]|2[0-4][0-9]|[0-1]?[0-9]{1,2}))(?![0-9])
SSH_AUTHFAIL_WRONGUSER Failed %{WORD:ssh_authmethod} for invalid user %{USERNAME:ssh_user} from %{IP:ssh_client_ip} port %{NUMBER:ssh_client_port} (?<ssh_protocol>\w+\d+)
SSH_AUTHFAIL_WRONGCREDS Failed %{WORD:ssh_authmethod} for %{USERNAME:ssh_user} from %{IP:ssh_client_ip} port %{NUMBER:ssh_client_port} (?<ssh_protocol>\w+\d+)
SSH_AUTH_SUCCESS Accepted %{WORD:ssh_authmethod} for %{USERNAME:ssh_user} from %{IP:ssh_client_ip} port %{NUMBER:ssh_client_port} (?<ssh_protocol>\w+\d+)(?:: %{WORD:ssh_pubkey_type} %{GREEDYDATA:ssh_pubkey_fingerprint})?
SSH_DISCONNECT Received disconnect from %{IP:ssh_client_ip} port %{INT:ssh_client_port}.*?:\s+%{GREEDYDATA:ssh_disconnect_reason}
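Conceptually, each %{PATTERN:field} reference in a pattern file like the one above is expanded into a regular expression with a named capture group, and every named group that matches becomes a field in the event record. The following Go sketch illustrates that expansion step; it is illustrative only, not NXLog's implementation, and since Go's RE2 engine lacks the lookbehind used by BASE10NUM and IP, it sticks to the simpler patterns.

```go
package main

import (
	"fmt"
	"regexp"
)

// grokRefRe matches %{NAME} and %{NAME:field} pattern references.
var grokRefRe = regexp.MustCompile(`%\{(\w+)(?::(\w+))?\}`)

// expandGrok replaces each pattern reference with its underlying regex,
// turning %{NAME:field} into a named capture group. Only one level of
// references is resolved in this sketch.
func expandGrok(pattern string, defs map[string]string) string {
	return grokRefRe.ReplaceAllStringFunc(pattern, func(m string) string {
		parts := grokRefRe.FindStringSubmatch(m)
		base := defs[parts[1]]
		if parts[2] == "" {
			return "(?:" + base + ")"
		}
		return "(?P<" + parts[2] + ">" + base + ")"
	})
}

func main() {
	defs := map[string]string{
		"WORD": `\b\w+\b`,
		"INT":  `(?:[+-]?(?:[0-9]+))`,
	}
	re := regexp.MustCompile(expandGrok(`port %{INT:port} via %{WORD:method}`, defs))
	m := re.FindStringSubmatch("port 22 via publickey")
	for i, name := range re.SubexpNames() {
		if name != "" {
			fmt.Printf("%s=%s\n", name, m[i])
		}
	}
	// prints:
	// port=22
	// method=publickey
}
```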

120.15. Java (xm_java)


This module provides support for processing NXLog log data with methods written in the Java language. The Java
classes specified via the ClassPath directives may define one or more class methods which can be called from the
Exec directives of NXLog modules via the functions provided by the xm_java module. Such methods must be
declared with the public and static modifiers in the Java code to be accessible from NXLog, and the first
parameter must be of NXLog.Logdata type. See also the im_java and om_java modules.

NOTE For the system requirements, installation details, and environmental configuration requirements of Java, see the Installing Java section in the Java documentation.

The NXLog Java class provides access to the NXLog functionality in the Java code. This class contains nested
classes Logdata and Module with log processing methods, as well as methods for sending messages to the
internal logger.

class NXLog.Logdata
This Java class provides the methods to interact with an NXLog event record object:

getField(name)
This method returns the value of the field name in the event.

setField(name, value)
This method sets the value of field name to value.

deleteField(name)
This method removes the field name from the event record.

getFieldnames()
This method returns an array with the names of all the fields currently in the event record.

getFieldtype(name)
This method returns the type of the field name in the event record.

class NXLog.Module
The methods below allow setting and accessing variables associated with the module instance.

saveCtx(key,value)
This method saves user data in the module data storage under the specified key.

loadCtx(key)
This method retrieves from the module data storage the data previously saved under the specified key.

Below is the list of methods for sending messages to the internal logger.

NXLog.logInfo(msg)
This method sends the message msg to the internal logger at INFO log level. It does the same as the core
log_info() procedure.

NXLog.logDebug(msg)
This method sends the message msg to the internal logger at DEBUG log level. It does the same as the core
log_debug() procedure.

NXLog.logWarning(msg)
This method sends the message msg to the internal logger at WARNING log level. It does the same as the
core log_warning() procedure.

NXLog.logError(msg)
This method sends the message msg to the internal logger at ERROR log level. It does the same as the core
log_error() procedure.

120.15.1. Configuration
The NXLog process maintains only one JVM instance for all running xm_java, im_java, and om_java instances. This
means all Java classes loaded by the ClassPath directive will be available to all running instances.

The xm_java module accepts the following directives in addition to the common module directives.

ClassPath
This mandatory directive defines the path to the .class files or a .jar file. This directive should be defined at
least once within a module block.

VMOption
This optional directive defines a single Java Virtual Machine (JVM) option.

VMOptions
This optional block directive serves the same purpose as the VMOption directive, but allows specifying
multiple Java Virtual Machine (JVM) options, one per line.

JavaHome
This optional directive defines the path to the Java Runtime Environment (JRE). The path is used to search for
the libjvm shared library. If this directive is not defined, the Java home directory will be set to the build-time
value. Only one JRE can be defined for one or multiple NXLog Java instances. Defining multiple JRE instances
causes an error.

120.15.2. Procedures
The following procedures are exported by xm_java.

call(string method, varargs args);


Call the given Java static method.

java_call(string method, varargs args);


Call the given Java static method.

120.15.3. Example of Usage


Example 562. Using the xm_java Module for Processing Logs

Below is an example of module usage. The process1 and process2 methods of the Extension Java class
split log data into key-value pairs and add an additional field to each entry. The results are then converted
to JSON format.

nxlog.conf
 1 <Extension ext>
 2 Module xm_java
 3 # Path to the compiled Java class
 4 Classpath /tmp/Extension.jar
 5 </Extension>
 6
 7 <Output fileout>
 8 Module om_file
 9 File '/tmp/output'
10 # Calling the first method to split data into key-value pairs
11 Exec java_call("Extension.process1");
12 # Calling the second method and passing the additional parameter
13 Exec ext->call("Extension.process2", "test");
14 Exec to_json();
15 </Output>

Below is the Java class with comments.

Extension.java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Paths;
import java.nio.file.StandardOpenOption;

public class Extension {

  // The first method used by the NXLog module


  // The NXLog.Logdata ld is a mandatory parameter
  // This method should be public and static
  public static void process1(NXLog.Logdata ld) {
  // This method splits logdata into key-value pairs
  String rawEvent = (String) ld.getField("raw_event");
  String[] pairs = rawEvent.split(" ");

  for (String v : pairs) {


  if (v.isEmpty()) continue;
  String[] kv = v.split("=");
  // Adds new fields to the logdata
  ld.setField(kv[0], kv[1]);
  }
  }

  // The second method used by the NXLog module


  // The NXLog.Logdata ld is a mandatory parameter
  // This method should be public and static
  public static void process2(NXLog.Logdata ld, String stage) {
  String type = (String) ld.getField("type");
  // Deletes fields
  ld.deleteField("EventReceivedTime");
  ld.deleteField("SourceModuleName");
  ld.deleteField("SourceModuleType");
  // Creates the additional "Stage" field with a value
  ld.setField("Stage",stage);
  if (type == null) {
  return;
  }

  if (type.equals("CWD")) {
  try {
  NXLog.logInfo(String.format("type: %s", type));
  Files.write(
  Paths.get("tmp/processed"),
  ((String) ld.getField("raw_event") + "\n").getBytes(),
  StandardOpenOption.APPEND,
  StandardOpenOption.CREATE
  );
  } catch (IOException e) {
  e.printStackTrace();
  }
  }
  }
}

Below are the log samples before and after processing.

Input sample
type=CWD msg=audit(1489999368.711:35724): cwd="/root/nxlog"↵
type=PATH msg=audit(1489999368.711:35724): item=0 name="/root/test" inode=528869 dev=08:01
mode=040755 ouid=0 ogid=0 rdev=00:00↵
type=SYSCALL msg=audit(1489999368.711:35725): arch=c000003e syscall=2 success=yes exit=3
a0=12dcc40 a1=90800 a2=0 a3=0 items=1 ppid=15391 pid=12309 auid=0 uid=0 gid=0 euid=0 suid=0
fsuid=0 egid=0 sgid=0 fsgid=0 tty=pts4 ses=583 comm="ls" exe="/bin/ls" key=(null)↵

Output Sample
{
  "type":"CWD",
  "msg":"audit(1489999368.711:35724):",
  "cwd":"\"/root/nxlog\"",
  "Stage":"test"
}
{
  "type":"PATH",
  "msg":"audit(1489999368.711:35724):",
  "item":"0",
  "name":"\"/root/test\"",
  "inode":"528869",
  "dev":"08:01",
  "mode":"040755",
  "ouid":"0",
  "ogid":"0",
  "rdev":"00:00",
  "Stage":"test"
}
{
  "type":"SYSCALL",
  "msg":"audit(1489999368.711:35725):",
  "arch":"c000003e",
  "syscall":"2",
  "success":"yes",
  "exit":"3",
  "a0":"12dcc40",
  "a1":"90800",
  "a2":"0",
  "a3":"0",
  "items":"1",
  "ppid":"15391",
  "pid":"12309",
  "auid":"0",
  "uid":"0",
  "gid":"0",
  "euid":"0",
  "suid":"0",
  "fsuid":"0",
  "egid":"0",
  "sgid":"0",
  "fsgid":"0",
  "tty":"pts4",
  "ses":"583",
  "comm":"\"ls\"",
  "exe":"\"/bin/ls\"",
  "key":"(null)",
  "Stage":"test"
}

120.16. JSON (xm_json)
This module provides functions and procedures for processing data formatted as JSON. JSON can be generated
from log data, or JSON can be parsed into fields. Unfortunately, the JSON specification does not define a type for
datetime values so these are represented as JSON strings. The JSON parser in xm_json can automatically detect
datetime values, so it is not necessary to explicitly use parsedate().

See the list of installer packages that provide the xm_json module in the Available Modules chapter of the NXLog
User Guide.

120.16.1. Configuration
The xm_json module accepts the following directives in addition to the common module directives.

DateFormat
This optional directive can be used to set the format of the datetime strings in the generated JSON. This
directive is similar to the global DateFormat, but is independent of it: this directive is defined separately and
has its own default. If this directive is not specified, the default is YYYY-MM-DDThh:mm:ss.sTZ.

DetectNestedJSON
This optional directive can be used to disable the autodetection of nested JSON strings when calling the
to_json() function or the to_json() procedure. For example, consider a field $key which contains the string
value of {"subkey":42}. If DetectNestedJSON is set to FALSE, to_json() will produce
{"key":"{\"subkey\":42}"}. If DetectNestedJSON is set to TRUE (the default), the result is
{"key":{"subkey":42}}—a valid nested JSON record.

Flatten
This optional boolean directive specifies that the parse_json() procedure should flatten nested JSON, creating
field names with dot notation. The default is FALSE. If Flatten is set to TRUE, the following JSON will populate
the fields $event.time and $event.severity:

{"event":{"time":"2015-01-01T00:00:00.000Z","severity":"ERROR"}}

ForceUTF8
This optional boolean directive specifies whether the generated JSON should be valid UTF-8. The JSON
specification requires JSON records to be UTF-8 encoded, and some tools fail to parse JSON if it is not valid
UTF-8. If ForceUTF8 is set to TRUE, the generated JSON will be validated and any invalid character will be
replaced with a question mark (?). The default is FALSE.

IncludeHiddenFields
This boolean directive specifies whether the to_json() function and the to_json() procedure should include fields
having a leading dot (.) or underscore (_) in their names. The default is TRUE: the generated JSON will contain
these otherwise excluded fields.

ParseDate
If this boolean directive is set to TRUE, xm_json will attempt to parse as a timestamp any string that appears
to begin with a 4-digit year (as a regular expression, ^[12][0-9]{3}-). If this directive is set to FALSE, xm_json
will not attempt to parse these strings. The default is TRUE.

PrettyPrint
If set to TRUE, this optional boolean directive specifies that the generated JSON should be pretty-printed,
where each key-value is printed on a new indented line. Note that this adds line-breaks to the JSON records,
which can cause parser errors in some tools that expect single-line JSON. If this directive is not specified, the
default is FALSE.

UnFlatten
This optional boolean directive specifies that the to_json() procedure should generate nested JSON when field
names contain the dot (.). For example, if UnFlatten is set to TRUE, the two fields $event.time and
$event.severity will be converted to JSON as follows:

{"event":{"time":"2015-01-01T00:00:00.000Z","severity":"ERROR"}}

When UnFlatten is set to FALSE (the default if not specified), the following JSON would result:

{"event.time":"2015-01-01T00:00:00.000Z","event.severity":"ERROR"}

120.16.2. Functions
The following functions are exported by xm_json.

string to_json()
Convert the fields to JSON and return this as a string value. The $raw_event field and any field having a
leading dot (.) or underscore (_) will be automatically excluded unless IncludeHiddenFields directive is set to
TRUE.
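The field-selection rule can be illustrated with a short Go sketch (not NXLog code). Whether $raw_event is also emitted when IncludeHiddenFields is TRUE is not stated above, so this sketch assumes it is always dropped:

```go
package main

import (
	"fmt"
	"sort"
	"strings"
)

// visibleFields selects the fields that would be serialized: raw_event is
// always dropped, and names starting with a dot or underscore are dropped
// unless includeHidden is true (the IncludeHiddenFields directive).
func visibleFields(fields map[string]string, includeHidden bool) []string {
	var out []string
	for name := range fields {
		if name == "raw_event" {
			continue
		}
		hidden := strings.HasPrefix(name, ".") || strings.HasPrefix(name, "_")
		if hidden && !includeHidden {
			continue
		}
		out = append(out, name)
	}
	sort.Strings(out)
	return out
}

func main() {
	fields := map[string]string{
		"raw_event": "...", ".internal": "1", "_tmp": "2", "Message": "ok",
	}
	fmt.Println(visibleFields(fields, false))
	// prints: [Message]
}
```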

120.16.3. Procedures
The following procedures are exported by xm_json.

parse_json();
Parse the $raw_event field as JSON input.

parse_json(string source);
Parse the given string as JSON format.

to_json();
Convert the fields to JSON and put this into the $raw_event field. The $raw_event field and any field having a
leading dot (.) or underscore (_) will be automatically excluded unless IncludeHiddenFields directive is set to
TRUE.

120.16.4. Examples

Example 563. Syslog to JSON Format Conversion

The following configuration accepts Syslog (both BSD and IETF) via TCP and converts it to JSON.

nxlog.conf
 1 <Extension syslog>
 2 Module xm_syslog
 3 </Extension>
 4
 5 <Extension json>
 6 Module xm_json
 7 </Extension>
 8
 9 <Input tcp>
10 Module im_tcp
11 Port 1514
12 Host 0.0.0.0
13 Exec parse_syslog(); to_json();
14 </Input>
15
16 <Output file>
17 Module om_file
18 File "/var/log/json.txt"
19 </Output>
20
21 <Route tcp_to_file>
22 Path tcp => file
23 </Route>

Input Sample
<30>Sep 30 15:45:43 host44.localdomain.hu acpid: 1 client rule loaded↵

Output Sample
{
  "MessageSourceAddress":"127.0.0.1",
  "EventReceivedTime":"2011-03-08 14:22:41",
  "SyslogFacilityValue":1,
  "SyslogFacility":"DAEMON",
  "SyslogSeverityValue":5,
  "SyslogSeverity":"INFO",
  "SeverityValue":2,
  "Severity":"INFO",
  "Hostname":"host44.localdomain.hu",
  "EventTime":"2011-09-30 14:45:43",
  "SourceName":"acpid",
  "Message":"1 client rule loaded "
}

Example 564. Converting Windows EventLog to Syslog-Encapsulated JSON

The following configuration reads the Windows EventLog and converts it to the BSD Syslog format, with the
message part containing the fields in JSON.

nxlog.conf
 1 <Extension syslog>
 2 Module xm_syslog
 3 </Extension>
 4
 5 <Extension json>
 6 Module xm_json
 7 </Extension>
 8
 9 <Input eventlog>
10 Module im_msvistalog
11 Exec $Message = to_json(); to_syslog_bsd();
12 </Input>
13
14 <Output tcp>
15 Module om_tcp
16 Host 192.168.1.1
17 Port 1514
18 </Output>
19
20 <Route eventlog_json_tcp>
21 Path eventlog => tcp
22 </Route>

Output Sample
<14>Mar 8 14:40:11 WIN-OUNNPISDHIG Service_Control_Manager: {"EventTime":"2012-03-08
14:40:11","EventTimeWritten":"2012-03-08 14:40:11","Hostname":"WIN-
OUNNPISDHIG","EventType":"INFO","SeverityValue":2,"Severity":"INFO","SourceName":"Service
Control
Manager","FileName":"System","EventID":7036,"CategoryNumber":0,"RecordNumber":6788,"Message":"T
he nxlog service entered the running state. ","EventReceivedTime":"2012-03-08 14:40:12"}↵

120.17. Key-Value Pairs (xm_kvp)


This module provides functions and procedures for processing data formatted as key-value pairs (KVPs), also
commonly called "name-value pairs". The module can both parse and generate key-value formatted data.

It is quite common to have a different set of keys in each log line when accepting key-value formatted input
messages. Extracting values from such logs using regular expressions can be quite cumbersome. The xm_kvp
extension module automates this process.

Log messages containing key-value pairs typically look like one of the following:

• key1: value1, key2: value2, key42: value42

• key1="value 1"; key2="value 2"

• Application=smtp, Event='Protocol Conversation', status='Client Request', ClientRequest='HELO 1.2.3.4'

Keys are usually separated from the value using an equal sign (=) or a colon (:); and the key-value pairs are
delimited with a comma (,), a semicolon (;), or a space. In addition, values and keys may be quoted and may
contain escaping. The module will try to guess the format, or the format can be explicitly specified using the
configuration directives below.

NOTE It is possible to use more than one xm_kvp module instance with different options in order to support different KVP formats at the same time. For this reason, functions and procedures exported by the module are public and must be referenced by the module instance name.

See the list of installer packages that provide the xm_kvp module in the Available Modules chapter of the NXLog
User Guide.

120.17.1. Configuration
The xm_kvp module accepts the following directives in addition to the common module directives.

DetectNumericValues
If this optional boolean directive is set to TRUE, the parse_kvp() procedure will try to parse numeric values as
integers first. The default is TRUE (numeric values will be parsed as integers and unquoted in the output).
Note that floating-point numbers will not be handled.

EscapeChar
This optional directive takes a single character (see below) as argument. It specifies the character used for
escaping special characters. The escape character is used to prefix the following characters: the EscapeChar
itself, the KeyQuoteChar, and the ValueQuoteChar. If EscapeControl is TRUE, the newline (\n), carriage return
(\r), tab (\t), and backspace (\b) control characters are also escaped. The default escape character is the
backslash (\).

EscapeControl
If this optional boolean directive is set to TRUE, control characters are also escaped. See the EscapeChar
directive for details. The default is TRUE (control characters are escaped). Note that this is necessary in order
to support single-line KVP field lists containing line-breaks.

IncludeHiddenFields
This boolean directive specifies whether the to_kvp() function and the to_kvp() procedure should include fields
having a leading dot (.) or underscore (_) in their names. The default is TRUE: the generated text will contain
these otherwise excluded fields.

KeyQuoteChar
This optional directive takes a single character (see below) as argument. It specifies the quote character for
enclosing key names. If this directive is not specified, the module will accept single-quoted keys, double-
quoted keys, and unquoted keys.

KVDelimiter
This optional directive takes a single character (see below) as argument. It specifies the delimiter character
used to separate the key from the value. If this directive is not set and the parse_kvp() procedure is used, the
module will try to guess the delimiter from the following: the colon (:) or the equal-sign (=).

KVPDelimiter
This optional directive takes a single character (see below) as argument. It specifies the delimiter character
used to separate the key-value pairs. If this directive is not set and the parse_kvp() procedure is used, the
module will try to guess the delimiter from the following: the comma (,), the semicolon (;), or the space.

QuoteMethod
This directive can be used to specify the quote method used for the values by to_kvp().

All
The values will be always quoted. This is the default.

Delimiter
The value will be only enclosed in quotes if it contains the delimiter character.

None
The values will not be quoted.

ValueQuoteChar
This optional directive takes a single character (see below) as argument. It specifies the quote character for
enclosing key values. If this directive is not specified, the module will accept single-quoted values, double-
quoted values, and unquoted values. Normally, quotation is used when the value contains a space or the
KVDelimiter character.

120.17.1.1. Specifying Quote, Escape, and Delimiter Characters


The KeyQuoteChar, ValueQuoteChar, EscapeChar, KVDelimiter, and KVPDelimiter directives can be specified in
several ways.

Unquoted single character


Any printable character can be specified as an unquoted character, except for the backslash (\):

Delimiter ;

Control characters
The following non-printable characters can be specified with escape sequences:

\a
audible alert (bell)

\b
backspace

\t
horizontal tab

\n
newline

\v
vertical tab

\f
formfeed

\r
carriage return

For example, to use TAB delimiting:

Delimiter \t

A character in single quotes


The configuration parser strips whitespace, so it is not possible to define a space as the delimiter unless it is
enclosed within quotes:

Delimiter ' '

Printable characters can also be enclosed:

Delimiter ';'

The backslash can be specified when enclosed within quotes:

Delimiter '\'

A character in double quotes


Double quotes can be used like single quotes:

Delimiter " "

The backslash can be specified when enclosed within double quotes:

Delimiter "\"

A hexadecimal ASCII code


Hexadecimal ASCII character codes can also be used by prepending 0x. For example, the space can be
specified as:

Delimiter 0x20

This is equivalent to:

Delimiter " "

120.17.2. Functions
The following functions are exported by xm_kvp.

string to_kvp()
Convert the internal fields to a single key-value pair formatted string.

120.17.3. Procedures
The following procedures are exported by xm_kvp.

parse_kvp();
Parse the $raw_event field as key-value pairs and populate the internal fields using the key names.

parse_kvp(string source);
Parse the given string key-value pairs and populate the internal fields using the key names.

parse_kvp(string source, string prefix);


Parse the given string key-value pairs and populate the internal fields using the key names prefixed with the
value of the second parameter.

reset_kvp();
Reset the KVP parser so that the autodetected KeyQuoteChar, ValueQuoteChar, KVDelimiter, and
KVPDelimiter characters can be detected again.

to_kvp();
Format the internal fields as key-value pairs and put this into the $raw_event field.

Note that the IncludeHiddenFields directive affects which fields are included in the output.
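As a rough illustration of what parse_kvp() does with the directives above, the following Go sketch (not NXLog code) splits a line on a configurable pair delimiter and key-value delimiter while honoring double-quoted values; escape-character handling and delimiter guessing are omitted for brevity:

```go
package main

import (
	"fmt"
	"strings"
)

// parseKVP is a simplified sketch of parse_kvp(): it splits a line on the
// pair delimiter (only outside double quotes), then splits each pair on
// the key-value delimiter and strips quotes from the value.
func parseKVP(line string, kvpDelim, kvDelim rune) map[string]string {
	fields := map[string]string{}
	var pairs []string
	var cur strings.Builder
	inQuote := false
	for _, c := range line {
		if c == '"' {
			inQuote = !inQuote
		} else if c == kvpDelim && !inQuote {
			pairs = append(pairs, cur.String())
			cur.Reset()
			continue
		}
		cur.WriteRune(c)
	}
	pairs = append(pairs, cur.String())
	for _, p := range pairs {
		kv := strings.SplitN(strings.TrimSpace(p), string(kvDelim), 2)
		if len(kv) == 2 {
			fields[kv[0]] = strings.Trim(kv[1], `"`)
		}
	}
	return fields
}

func main() {
	fields := parseKVP(`Name=John, Age=42, City="New York, NY"`, ',', '=')
	fmt.Println(fields["City"])
	// prints: New York, NY
}
```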

120.17.4. Examples
The following examples illustrate various scenarios for parsing KVPs, whether embedded, encapsulated (in
Syslog, for example), or alone. In each case, the logs are converted from KVP input files to JSON output files,
though obviously there are many other possibilities.

Example 565. Simple KVP Parsing

The following two lines of input are in a simple KVP format where each line consists of various keys with
values assigned to them.

Input Sample
Name=John, Age=42, Weight=84, Height=142
Name=Mike, Weight=64, Age=24, Pet=dog, Height=172

This input can be parsed with the following configuration. The parsed fields can be used in NXLog
expressions: a new field named $Overweight is added and set to TRUE if the conditions are met. Finally a
few automatically added fields are removed, and the log is then converted to JSON.

nxlog.conf (truncated)
 1 <Extension kvp>
 2 Module xm_kvp
 3 KVPDelimiter ,
 4 KVDelimiter =
 5 EscapeChar \\
 6 </Extension>
 7
 8 <Extension json>
 9 Module xm_json
10 </Extension>
11
12 <Input filein>
13 Module im_file
14 File "modules/extension/kvp/xm_kvp5.in"
15 <Exec>
16 if $raw_event =~ /^#/ drop();
17 else
18 {
19 kvp->parse_kvp();
20 delete($EventReceivedTime);
21 delete($SourceModuleName);
22 delete($SourceModuleType);
23 if ( integer($Weight) > integer($Height) - 100 ) $Overweight = TRUE;
24 to_json();
25 }
26 </Exec>
27 </Input>
28 [...]

Output Sample
{"Name":"John","Age":42,"Weight":84,"Height":142,"Overweight":true}
{"Name":"Mike","Weight":64,"Age":24,"Pet":"dog","Height":172}

Example 566. Parsing KVPs in Cisco ACS Syslog

The following lines are from a Cisco ACS source.

Input Sample
<38>2010-10-12 21:01:29 10.0.1.1 CisACS_02_FailedAuth 1k1fg93nk 1 0 Message-Type=Authen
failed,User-Name=John,NAS-IP-Address=10.0.1.2,AAA Server=acs01↵
<38>2010-10-12 21:01:31 10.0.1.1 CisACS_02_FailedAuth 2k1fg63nk 1 0 Message-Type=Authen
failed,User-Name=Foo,NAS-IP-Address=10.0.1.2,AAA Server=acs01↵

These logs are in Syslog format with a set of values present in each record and an additional set of KVPs.
The following configuration can be used to process this and convert it to JSON.

nxlog.conf (truncated)
 1 <Extension json>
 2 Module xm_json
 3 </Extension>
 4
 5 <Extension syslog>
 6 Module xm_syslog
 7 </Extension>
 8
 9 <Extension kvp>
10 Module xm_kvp
11 KVDelimiter =
12 KVPDelimiter ,
13 </Extension>
14
15 <Input cisco>
16 Module im_file
17 File "modules/extension/kvp/cisco_acs.in"
18 <Exec>
19 parse_syslog_bsd();
20 if ( $Message =~ /^CisACS_(\d\d)_(\S+) (\S+) (\d+) (\d+) (.*)$/ )
21 {
22 $ACSCategoryNumber = $1;
23 $ACSCategoryName = $2;
24 $ACSMessageId = $3;
25 $ACSTotalSegments = $4;
26 $ACSSegmentNumber = $5;
27 $Message = $6;
28 kvp->parse_kvp($Message);
29 [...]

Output Sample
{"SourceModuleName":"cisco","SourceModuleType":"im_file","SyslogFacilityValue":4,"SyslogFacilit
y":"AUTH","SyslogSeverityValue":6,"SyslogSeverity":"INFO","SeverityValue":2,"Severity":"INFO","
Hostname":"10.0.1.1","EventTime":"2010-10-12 21:01:29","Message":"Message-Type=Authen
failed,User-Name=John,NAS-IP-Address=10.0.1.2,AAA Server=acs01","ACSCategoryNumber":"02"
,"ACSCategoryName":"FailedAuth","ACSMessageId":"1k1fg93nk","ACSTotalSegments":"1","ACSSegmentNu
mber":"0","Message-Type":"Authen failed","User-Name":"John","NAS-IP-Address":"10.0.1.2","AAA
Server":"acs01"}
{"SourceModuleName":"cisco","SourceModuleType":"im_file","SyslogFacilityValue":4,"SyslogFacilit
y":"AUTH","SyslogSeverityValue":6,"SyslogSeverity":"INFO","SeverityValue":2,"Severity":"INFO","
Hostname":"10.0.1.1","EventTime":"2010-10-12 21:01:31","Message":"Message-Type=Authen
failed,User-Name=Foo,NAS-IP-Address=10.0.1.2,AAA Server=acs01","ACSCategoryNumber":"02"
,"ACSCategoryName":"FailedAuth","ACSMessageId":"2k1fg63nk","ACSTotalSegments":"1","ACSSegmentNu
mber":"0","Message-Type":"Authen failed","User-Name":"Foo","NAS-IP-Address":"10.0.1.2","AAA
Server":"acs01"}

Example 567. Parsing KVPs in Sidewinder Logs

The following line is from a Sidewinder log source.

Input Sample
date="May 5 14:34:40 2009
MDT",fac=f_mail_filter,area=a_kmvfilter,type=t_mimevirus_reject,pri=p_major,pid=10174,ruid=0,eu
id=0,pgid=10174,logid=0,cmd=kmvfilter,domain=MMF1,edomain=MMF1,message_id=(null),srcip=66.74.18
4.9,mail_sender=<habuzeid6@…>,virus_name=W32/Netsky.c@MM!zip,reason="Message scan detected a
Virus in msg Unknown, message being Discarded, and not quarantined"↵

This can be parsed and converted to JSON with the following configuration.

nxlog.conf
 1 <Extension kvp>
 2 Module xm_kvp
 3 KVPDelimiter ,
 4 KVDelimiter =
 5 EscapeChar \\
 6 ValueQuoteChar "
 7 </Extension>
 8
 9 <Extension json>
10 Module xm_json
11 </Extension>
12
13 <Input sidewinder>
14 Module im_file
15 File "modules/extension/kvp/sidewinder.in"
16 Exec kvp->parse_kvp(); delete($EventReceivedTime); to_json();
17 </Input>
18
19 <Output file>
20 Module om_file
21 File 'tmp/output'
22 </Output>
23
24 <Route sidewinder_to_file>
25 Path sidewinder => file
26 </Route>

Output Sample
{"SourceModuleName":"sidewinder","SourceModuleType":"im_file","date":"May 5 14:34:40 2009 MDT"
,"fac":"f_mail_filter","area":"a_kmvfilter","type":"t_mimevirus_reject","pri":"p_major","pid":1
0174,"ruid":0,"euid":0,"pgid":10174,"logid":0,"cmd":"kmvfilter","domain":"MMF1","edomain":"MMF1
","message_id":"(null)","srcip":"66.74.184.9","mail_sender":"<habuzeid6@…>","virus_name":"W32/N
etsky.c@MM!zip","reason":"Message scan detected a Virus in msg Unknown, message being
Discarded, and not quarantined"}

Example 568. Parsing URL Request Parameters in Apache Access Logs

URLs in HTTP requests frequently contain URL parameters, a special kind of key-value pair delimited by the ampersand (&). Here is an example of two HTTP requests logged by the Apache web server in the Combined Log Format.

Input Sample
192.168.1.1 - foo [11/Jun/2013:15:44:34 +0200] "GET /do?action=view&obj_id=2 HTTP/1.1" 200 1514
"https://localhost" "Mozilla/5.0 (X11; Linux x86_64; rv:17.0) Gecko/17.0 Firefox/17.0"↵
192.168.1.1 - - [11/Jun/2013:15:44:44 +0200] "GET /do?action=delete&obj_id=42 HTTP/1.1" 401 788
"https://localhost" "Mozilla/5.0 (X11; Linux x86_64; rv:17.0) Gecko/17.0 Firefox/17.0"↵

The following configuration file parses the access log and extracts all the fields. The request parameters are
extracted into the $HTTPParams field using a regular expression, and then this field is further parsed using
the KVP parser. At the end of the processing all fields are converted to KVP format using the to_kvp()
procedure of the kvp2 instance.

nxlog.conf (truncated)
 1 <Extension kvp>
 2 Module xm_kvp
 3 KVPDelimiter &
 4 KVDelimiter =
 5 </Extension>
 6
 7 <Extension kvp2>
 8 Module xm_kvp
 9 KVPDelimiter ;
10 KVDelimiter =
11 #QuoteMethod None
12 </Extension>
13
14 <Input apache>
15 Module im_file
16 File "modules/extension/kvp/apache_url.in"
17 <Exec>
18 if $raw_event =~ /(?x)^(\S+)\ (\S+)\ (\S+)\ \[([^\]]+)\]\ \"(\S+)\ (.+)
19 \ HTTP.\d\.\d\"\ (\d+)\ (\d+)\ \"([^\"]+)\"\ \"([^\"]+)\"/
20 {
21 $Hostname = $1;
22 if $3 != '-' $AccountName = $3;
23 $EventTime = parsedate($4);
24 $HTTPMethod = $5;
25 $HTTPURL = $6;
26 $HTTPResponseStatus = $7;
27 $FileSize = $8;
28 $HTTPReferer = $9;
29 [...]

The two request parameters action and obj_id then appear at the end of the KVP formatted lines.

Output Sample
SourceModuleName=apache;SourceModuleType=im_file;Hostname=192.168.1.1;AccountName=foo;EventTime
=2013-06-11
15:44:34;HTTPMethod=GET;HTTPURL=/do?action=view&obj_id=2;HTTPResponseStatus=200;FileSize=1514;H
TTPReferer=https://localhost;HTTPUserAgent='Mozilla/5.0 (X11; Linux x86_64; rv:17.0) Gecko/17.0
Firefox/17.0';HTTPParams=action=view&obj_id=2;action=view;obj_id=2;↵
SourceModuleName=apache;SourceModuleType=im_file;Hostname=192.168.1.1;EventTime=2013-06-11
15:44:44;HTTPMethod=GET;HTTPURL=/do?action=delete&obj_id=42;HTTPResponseStatus=401;FileSize=788
;HTTPReferer=https://localhost;HTTPUserAgent='Mozilla/5.0 (X11; Linux x86_64; rv:17.0)
Gecko/17.0 Firefox/17.0';HTTPParams=action=delete&obj_id=42;action=delete;obj_id=42;↵

NOTE URL escaping is not handled.

120.18. LEEF (xm_leef)
This module provides functions and procedures to generate and parse data in the Log Event Extended Format (LEEF), which
is used by IBM Security QRadar products. For more information about the format see the Log Event Extended
Format (LEEF) Version 2 specification.

See the list of installer packages that provide the xm_leef module in the Available Modules chapter of the NXLog
User Guide.

120.18.1. Configuration
The xm_leef module accepts the following directives in addition to the common module directives.

AddSyslogHeader
This optional boolean directive specifies whether an RFC 3164 (BSD-style) Syslog header should be prepended
to the output. This defaults to TRUE (a Syslog header will be added by the to_leef() procedure).
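For example, to emit bare LEEF records without the BSD Syslog header, the directive can be disabled in the extension instance (a minimal sketch):

```
<Extension leef>
    Module           xm_leef
    # Suppress the RFC 3164 Syslog header that to_leef() would otherwise prepend
    AddSyslogHeader  FALSE
</Extension>
```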

IncludeHiddenFields
This boolean directive specifies whether the to_leef() function or the to_leef() procedure should include fields
having a leading dot (.) or underscore (_) in their names. The default is TRUE. If IncludeHiddenFields is set to
TRUE, the generated LEEF text will contain these otherwise excluded fields.

LEEFHeader
This optional directive takes a string type expression and only has an effect on how to_leef() formats the
result. It should evaluate to the following format:

LEEF:1.0|Microsoft|MSExchange|2013 SP1|15345|

It should typically be used as follows:

LEEFHeader 'LEEF:1.0|Microsoft|MSExchange|2013 SP1|' + $EventID + '|'

When this directive is not specified, the LEEF header is constructed using the $Vendor, $SourceName (or
$SourceModuleName), $Version, and $EventID fields.
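As an illustration, a complete extension instance using a dynamic LEEF header might look like the following sketch (the vendor, product, and version values are hypothetical):

```
<Extension leef>
    Module      xm_leef
    # Vendor/product/version are static here; $EventID is taken from each event record
    LEEFHeader  'LEEF:1.0|Microsoft|MSExchange|2013 SP1|' + $EventID + '|'
</Extension>
```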

120.18.2. Functions
The following functions are exported by xm_leef.

string to_leef()
Convert the internal fields to a single LEEF formatted string.

Note that the IncludeHiddenFields directive affects which fields are included in the output.

120.18.3. Procedures
The following procedures are exported by xm_leef.

parse_leef();
Parse the $raw_event field as key-value pairs and create the following NXLog fields (if possible): $Category,
$AccountName, $AccountType, $Domain, $EventTime, $Hostname, $MessageSourceAddress, $SeverityValue
(mapped from the sev attribute), $SourceName, $devTimeFormat, $LEEFVersion, $Vendor, $Version,
$EventID, $DelimiterCharacter.

parse_leef(string source);
Parse the given string as key-value pairs and create the following NXLog fields (if possible): $Category,
$AccountName, $AccountType, $Domain, $EventTime, $Hostname, $MessageSourceAddress, $SeverityValue
(mapped from the sev attribute), $SourceName, $devTimeFormat, $LEEFVersion, $Vendor, $Version,
$EventID, $DelimiterCharacter.

to_leef();
Format the internal fields as LEEF and put this into the $raw_event field. to_leef() will automatically map the
following fields to event attributes, if available:

NXLog field LEEF attribute

$AccountName accountName

$AccountType role

$Category cat

$Domain domain

$EventTime devTime

$Hostname identHostName

$MessageSourceAddress src

$SeverityValue (mapped) sev

$SourceName vSrcName

120.18.4. Fields
The following fields are used by xm_leef.

In addition to the fields listed below, the parse_leef() procedure will create a field for every LEEF attribute
contained in the source LEEF message such as $srcPort, $cat, $identHostName, etc.

$AccountName (type: string)


The name of the user account that created the event.

$AccountType (type: string)


The type of the user account (e.g. Administrator, User, Domain Admin) that created the event. This field
takes the value of the role LEEF attribute.

$Category (type: string)


A text string that extends the LEEF EventID field with more specific information about the LEEF event. This
field takes the value of the cat LEEF attribute.

$DelimiterCharacter (type: string)


The character specified as a delimiter in the LEEF header.

$devTimeFormat (type: string)


A string that defines the date format of the LEEF event, contained in the devTimeFormat LEEF attribute, for
example, "yyyy-MM-dd HH:mm:ss".

$Domain (type: string)


The name of the domain the user account belongs to.

$EventID (type: string)


The ID of the event. This field takes the value of the EventID LEEF header.

$EventTime (type: datetime)
The time when the event occurred. This field takes the value of the devTime LEEF attribute.

$Hostname (type: string)


The name of the host that created the event. This field takes the value of the identHostname LEEF attribute.

$LEEFVersion (type: string)


The LEEF format version contained in the LEEF header, for example, LEEF:1.0.

$MessageSourceAddress (type: ipaddr)


The IP address of the device that created the event. This field takes the value of the src LEEF attribute.

$SeverityValue (type: string)


A numeric value between 1 and 5 that indicates the severity of the event. This value is mapped to or from the
value of the sev LEEF attribute:

LEEF sev attribute    $SeverityValue
≤2 1

3 1

4 2

5 2

6 3

7 3

8 4

9 4

≥10 5

$SourceName (type: string)


The name of the subsystem or application that generated the event. This field takes the value of the Product
LEEF header field.

$Vendor (type: string)


A text string that identifies the vendor or manufacturer of the device sending the syslog event in the LEEF
format. This field takes the value of the Vendor LEEF header field.

$Version (type: string)


A string that identifies the version of the software or appliance that sent the event log. This field takes the
value of the Product version LEEF header field.

120.18.5. Examples

Example 569. Sending Windows EventLog as LEEF over UDP

This configuration will collect Windows EventLog and NXLog internal messages, convert them to LEEF, and
forward via UDP.

nxlog.conf
 1 <Extension leef>
 2 Module xm_leef
 3 </Extension>
 4
 5 <Input internal>
 6 Module im_internal
 7 </Input>
 8
 9 <Input eventlog>
10 Module im_msvistalog
11 </Input>
12
13 <Output udp>
14 Module om_udp
15 Host 192.168.168.2
16 Port 1514
17 Exec to_leef();
18 </Output>
19
20 <Route qradar>
21 Path internal, eventlog => udp
22 </Route>
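The reverse direction, receiving LEEF over the network and parsing it into event fields, can be sketched as follows. The listening port, output path, and instance names are illustrative only:

```
<Extension leef>
    Module  xm_leef
</Extension>

<Input tcpin>
    Module  im_tcp
    Host    0.0.0.0
    Port    1514
    # Populate event fields from the LEEF payload in $raw_event
    Exec    leef->parse_leef();
</Input>

<Output fileout>
    Module  om_file
    File    '/var/log/leef_parsed.log'
</Output>

<Route leef_in>
    Path    tcpin => fileout
</Route>
```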

120.19. Microsoft DNS Server (xm_msdns)


This module provides support for parsing Windows DNS Server logs. An InputType is registered using the name
of the extension module instance. For special cases, the parse_msdns() procedure can be used instead for
parsing individual events or strings.

WARNING The xm_msdns module does not support the detailed format enabled via the Details option in the
DNS Server Debug Logging configuration. NXLog could be configured to parse this format with the
xm_multiline module.

See the list of installer packages that provide the xm_msdns module in the Available Modules chapter of the
NXLog User Guide.

120.19.1. Configuration
The xm_msdns module accepts the following directives in addition to the common module directives.

DateFormat
This optional directive allows you to define the format of the date field when parsing DNS Server logs. The
directive’s argument must be a format string compatible with the C strptime(3) function. This directive works
similarly to the global DateFormat directive, and if not specified, the default format [D|DD]/[M|MM]/YYYY
[H|HH]:MM:SS [AM|PM] is used.
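For instance, if the DNS Server writes ISO-style timestamps, the parser could be configured with a strptime(3)-style format like the following (the format string is illustrative):

```
<Extension dns_parser>
    Module      xm_msdns
    # Matches timestamps such as 2014-06-30 13:15:55
    DateFormat  %Y-%m-%d %H:%M:%S
</Extension>
```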

EventLine
This boolean directive specifies that EVENT lines in the input should be parsed. If set to FALSE, EVENT lines will be
discarded. The default is TRUE.

NoteLine
This boolean directive specifies that Note: lines in the input should be parsed. If set to FALSE, Note: lines will
be discarded. The default is TRUE.

PacketLine
This boolean directive specifies that PACKET lines in the input should be parsed. If set to FALSE, PACKET lines
will be discarded. The default is TRUE.

120.19.2. Procedures
The following procedures are exported by xm_msdns.

parse_msdns();
Parse the $raw_event field and populate the DNS log fields.

parse_msdns(string source);
Parse the given string and populate the DNS log fields.

120.19.3. Fields
The following fields are used by xm_msdns.

$raw_event (type: string)


The raw string from the event.

$AuthoritativeAnswer (type: boolean)


For PACKET events, set to TRUE if the "Authoritative Answer" flag is set.

$Context (type: string)


The event type, one of PACKET, EVENT, or Note.

$EventDescription (type: string)


The description for EVENT type events.

$EventTime (type: datetime)


The timestamp of the event.

$FlagsHex (type: string)


The flags in hexadecimal, for PACKET events only.

$InternalPacketIdentifier (type: string)


For PACKET events, an internal ID corresponding with the event.

$Message (type: string)


The event message in certain PACKET events that include a free-form message contrary to the normal Debug
Logging format. In particular, this is for PACKET events that have a message such as Response packet
000001D1B80209E0 does not match any outstanding query.

$Note (type: string)


For "Note" type events, this field contains the note.

$Opcode (type: string)


One of Standard Query, Notify, Update, and Unknown; for PACKET events.

$ParseFailure (type: string)
The remaining unparsed portion of a log message which does not match an expected format.

$Protocol (type: string)


The protocol being used; one of TCP or UDP. This field is added for the PACKET type only.

$QueryResponseIndicator (type: string)


This field indicates whether a PACKET event corresponds with a query or a response, and is set to either
Query or Response.

$QuestionName (type: string)


The lookup value for PACKET events; for example, example.com.

$QuestionType (type: string)


The lookup type for PACKET events; for example, A or AAAA.

$RecursionAvailable (type: boolean)


For PACKET events, set to TRUE if the "Recursion Available" flag is set.

$RecursionDesired (type: boolean)


For PACKET events, set to TRUE if the "Recursion Desired" flag is set.

$RemoteIP (type: string)


The IP address of the requesting client, for PACKET events only.

$ResponseCode (type: string)


For PACKET events, the DNS Server response code.

$SendReceiveIndicator (type: string)


This field indicates the direction for a PACKET event, and is set to either Snd or Rcv.

$ThreadId (type: string)


The ID of the thread that produced the event.

$TruncatedResponse (type: boolean)


For PACKET events, set to TRUE if the "Truncated Response" flag is set.

$Xid (type: string)


For PACKET events, the hexadecimal XID.

120.19.4. Examples

Example 570. Parsing DNS Logs With InputType

In this configuration, the DNS log file at C:\dns.log is parsed using the InputType provided by the
xm_msdns module. Any Note: lines in the input are discarded (the NoteLine directive is set to FALSE).

nxlog.conf
 1 <Extension dns_parser>
 2 Module xm_msdns
 3 EventLine TRUE
 4 PacketLine TRUE
 5 NoteLine FALSE
 6 </Extension>
 7
 8 <Input in>
 9 Module im_file
10 File 'modules/extension/msdns/xm_msdns1.in'
11 InputType dns_parser
12 </Input>

Example 571. Parsing DNS Logs With parse_msdns()

For cases where parsing via InputType is not possible, individual events can be parsed with the
parse_msdns() procedure.

nxlog.conf
1 <Extension dns_parser>
2 Module xm_msdns
3 </Extension>
4
5 <Input in>
6 Module im_file
7 File 'modules/extension/msdns/xm_msdns1.out'
8 Exec dns_parser->parse_msdns();
9 </Input>

120.20. Multi-Line Parser (xm_multiline)


This module can be used for parsing log messages that span multiple lines. All lines in an event are joined to
form a single NXLog event record, which can be further processed as required. Each multi-line event is detected
through some combination of header lines, footer lines, and fixed line counts, as configured. To use the parser,
specify the name of the xm_multiline module instance as the value of the input module’s InputType directive.

The module maintains a separate context for each input source, allowing multi-line messages to be processed
correctly even when coming from multiple sources (specifically, multiple files or multiple network connections).

WARNING UDP is treated as a single source and all logs are processed under the same context. It is
therefore not recommended to use this module with im_udp if messages will be received from
multiple UDP senders (such as Syslog).

See the list of installer packages that provide the xm_multiline module in the Available Modules chapter of the
NXLog User Guide.

120.20.1. Configuration
The xm_multiline module accepts the following directives in addition to the common module directives. One of
FixedLineCount and HeaderLine must be specified.

FixedLineCount
This directive takes a positive integer number defining the number of lines to concatenate. This is useful
when receiving log messages spanning a fixed number of lines. When this number is defined, the module
knows where the event message ends and will not hold a message in the buffers until the next message
arrives.

HeaderLine
This directive takes a string or a regular expression literal. This will be matched against each line. When the
match is successful, the successive lines are appended until the next header line is read. This directive is
mandatory unless FixedLineCount is used.

NOTE Until a new message arrives with its associated header, the previous message is stored in the buffers
because the module does not know where the message ends. The im_file module will forcibly flush this
buffer after the configured PollInterval timeout. If this behavior is unacceptable, disable AutoFlush, use
an end marker with EndLine, or switch to an encapsulation method (such as JSON).

NOTE The /s and /m regular expression modifiers may be used here, but they have no effect, because
HeaderLine is only checked against one input line at a time.

AutoFlush
If set to TRUE, this boolean directive specifies that the corresponding im_file module should forcibly flush the
buffer after its configured PollInterval timeout. The default is TRUE. If EndLine is used, AutoFlush is
automatically set to FALSE to disable this behavior. AutoFlush has no effect if xm_multiline is used with an
input module other than im_file.

EndLine
This is similar to the HeaderLine directive. This optional directive also takes a string or a regular expression
literal to be matched against each line. When the match is successful the message is considered complete.

Exec
This directive is almost identical to the behavior of the Exec directive used by the other modules with the
following differences:

• each line is passed in $raw_event as it is read, and the line terminator is included; and

• other fields cannot be used, and captured strings cannot be stored as separate fields.

This is mostly useful for rewriting lines or filtering out certain lines with the drop() procedure.
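A minimal sketch of such per-line filtering, dropping blank lines before they are joined into the multi-line event:

```
<Extension multiline>
    Module          xm_multiline
    FixedLineCount  4
    # Each input line passes through this Exec before being appended to the event
    Exec            if $raw_event =~ /^\s*$/ drop();
</Extension>
```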

120.20.2. Examples
Example 572. Parsing multi-line XML logs and converting to JSON

XML is commonly formatted as indented multi-line to make it more readable. In the following configuration
file the HeaderLine and EndLine directives are used to parse the events. The events are then converted to
JSON after some timestamp normalization.

nxlog.conf (truncated)
 1 <Extension multiline>
 2 Module xm_multiline
 3 HeaderLine /^<event>/
 4 EndLine /^<\/event>/
 5 </Extension>
 6
 7 <Extension xmlparser>
 8 Module xm_xml
 9 </Extension>
10
11 <Extension json>
12 Module xm_json
13 </Extension>
14
15 <Input filein>
16 Module im_file
17 File "modules/extension/multiline/xm_multiline5.in"
18 InputType multiline
19 <Exec>
20 # Discard everything that doesn't seem to be an xml event
21 if $raw_event !~ /^<event>/ drop();
22
23 # Parse the xml event
24 parse_xml();
25
26 # Rewrite some fields
27 $EventTime = parsedate($timestamp);
28 delete($timestamp);
29 [...]

Input Sample
<?xml version="1.0" encoding="UTF-8">
<event>
  <timestamp>2012-11-23 23:00:00</timestamp>
  <severity>ERROR</severity>
  <message>
  Something bad happened.
  Please check the system.
  </message>
</event>
<event>
  <timestamp>2012-11-23 23:00:12</timestamp>
  <severity>INFO</severity>
  <message>
  System state is now back to normal.
  </message>
</event>

Output Sample
{"SourceModuleName":"filein","SourceModuleType":"im_file","severity":"ERROR","message":"\n
Something bad happened.\n Please check the system.\n ","EventTime":"2012-11-23 23:00:00"}
{"SourceModuleName":"filein","SourceModuleType":"im_file","severity":"INFO","message":"\n
System state is now back to normal.\n ","EventTime":"2012-11-23 23:00:12"}

Example 573. Parsing DICOM Logs

Each log message has a header (TIMESTAMP INTEGER SEVERITY) which is used as the message boundary. A
regular expression is defined for this with the HeaderLine directive. Each log message is prepended with an
additional line containing dashes and is written to a file.

nxlog.conf
 1 <Extension dicom_multi>
 2 Module xm_multiline
 3 HeaderLine /^\d\d\d\d-\d\d-\d\d\d\d:\d\d:\d\d\.\d+\s+\d+\s+\S+\s+/
 4 </Extension>
 5
 6 <Input filein>
 7 Module im_file
 8 File "modules/extension/multiline/xm_multiline4.in"
 9 InputType dicom_multi
10 </Input>
11
12 <Output fileout>
13 Module om_file
14 File 'tmp/output'
15 Exec $raw_event = "--------------------------------------\n" + $raw_event;
16 </Output>
17
18 <Route parse_dicom>
19 Path filein => fileout
20 </Route>

Input Sample
2011-12-1512:22:51.000000 4296 INFO Association Request Parameters:↵
Our Implementation Class UID: 2.16.124.113543.6021.2↵
Our Implementation Version Name: RZDCX_2_0_1_8↵
Their Implementation Class UID:↵
Their Implementation Version Name:↵
Application Context Name: 1.2.840.10008.3.1.1.1↵
Requested Extended Negotiation: none↵
Accepted Extended Negotiation: none↵
2011-12-1512:22:51.000000 4296 DEBUG Constructing Associate RQ PDU↵
2011-12-1512:22:51.000000 4296 DEBUG WriteToConnection, length: 310, bytes written: 310,
loop no: 1↵
2011-12-1512:22:51.015000 4296 DEBUG PDU Type: Associate Accept, PDU Length: 216 + 6 bytes
PDU header↵
  02 00 00 00 00 d8 00 01 00 00 50 41 43 53 20 20↵
  20 20 20 20 20 20 20 20 20 20 52 5a 44 43 58 20↵
  20 20 20 20 20 20 20 20 20 20 00 00 00 00 00 00↵
2011-12-1512:22:51.031000 4296 DEBUG DIMSE sendDcmDataset: sending 146 bytes↵

Output Sample
--------------------------------------↵
2011-12-1512:22:51.000000 4296 INFO Association Request Parameters:↵
Our Implementation Class UID: 2.16.124.113543.6021.2↵
Our Implementation Version Name: RZDCX_2_0_1_8↵
Their Implementation Class UID:↵
Their Implementation Version Name:↵
Application Context Name: 1.2.840.10008.3.1.1.1↵
Requested Extended Negotiation: none↵
Accepted Extended Negotiation: none↵
--------------------------------------↵
2011-12-1512:22:51.000000 4296 DEBUG Constructing Associate RQ PDU↵
--------------------------------------↵
2011-12-1512:22:51.000000 4296 DEBUG WriteToConnection, length: 310, bytes written: 310,
loop no: 1↵
--------------------------------------↵
2011-12-1512:22:51.015000 4296 DEBUG PDU Type: Associate Accept, PDU Length: 216 + 6 bytes
PDU header↵
  02 00 00 00 00 d8 00 01 00 00 50 41 43 53 20 20↵
  20 20 20 20 20 20 20 20 20 20 52 5a 44 43 58 20↵
  20 20 20 20 20 20 20 20 20 20 00 00 00 00 00 00↵
--------------------------------------↵
2011-12-1512:22:51.031000 4296 DEBUG DIMSE sendDcmDataset: sending 146 bytes↵

Example 574. Multi-line messages with a fixed string header

The following configuration will process messages having a fixed string header containing dashes. Each
event is then prepended with a hash mark (#) and written to a file.

nxlog.conf
 1 <Extension multiline>
 2 Module xm_multiline
 3 HeaderLine "---------------"
 4 </Extension>
 5
 6 <Input filein>
 7 Module im_file
 8 File "modules/extension/multiline/xm_multiline1.in"
 9 InputType multiline
10 Exec $raw_event = "#" + $raw_event;
11 </Input>
12
13 <Output fileout>
14 Module om_file
15 File 'tmp/output'
16 </Output>
17
18 <Route parse_multiline>
19 Path filein => fileout
20 </Route>

Input Sample
---------------↵
1↵
---------------↵
1↵
2↵
---------------↵
aaaaaaaaaaaaaaaaaaaaaaaaaaaaaa↵
bbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbb↵
ccccccccccccccccccccccccccccccccccccc↵
dddd↵
---------------↵

Output Sample
#---------------↵
1↵
#---------------↵
1↵
2↵
#---------------↵
aaaaaaaaaaaaaaaaaaaaaaaaaaaaaa↵
bbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbb↵
ccccccccccccccccccccccccccccccccccccc↵
dddd↵
#---------------↵

Example 575. Multi-line messages with fixed line count

The following configuration will process messages having a fixed line count of four. Lines containing only
whitespace are ignored and removed. Each event is then prepended with a hash mark (#) and written to a
file.

nxlog.conf
 1 <Extension multiline>
 2 Module xm_multiline
 3 FixedLineCount 4
 4 Exec if $raw_event =~ /^\s*$/ drop();
 5 </Extension>
 6
 7 <Input filein>
 8 Module im_file
 9 File "modules/extension/multiline/xm_multiline2.in"
10 InputType multiline
11 </Input>
12
13 <Output fileout>
14 Module om_file
15 File 'tmp/output'
16 Exec $raw_event = "#" + $raw_event;
17 </Output>
18
19 <Route parse_multiline>
20 Path filein => fileout
21 </Route>

Input Sample
1↵
2↵
3↵
4↵
1asd↵

2asdassad↵
3ewrwerew↵
4xcbccvbc↵

1dsfsdfsd↵
2sfsdfsdrewrwe↵

3sdfsdfsew↵
4werwerwrwe↵

Output Sample
#1↵
2↵
3↵
4↵
#1asd↵
2asdassad↵
3ewrwerew↵
4xcbccvbc↵
#1dsfsdfsd↵
2sfsdfsdrewrwe↵
3sdfsdfsew↵
4werwerwrwe↵

Example 576. Multi-line messages with a Syslog header

Often, multi-line messages are logged over Syslog and each line is processed as an event, with its own
Syslog header. It is commonly necessary to merge these back into a single event message.

Input Sample
Nov 21 11:40:27 hostname app[26459]: Iface MTU Met RX-OK RX-ERR RX-DRP RX-OVR TX-OK TX-
ERR TX-DRP TX-OVR Flg↵
Nov 21 11:40:27 hostname app[26459]: eth2 1500 0 16936814 0 0 0 30486067
0 8 0 BMRU↵
Nov 21 11:40:27 hostname app[26459]: lo 16436 0 277217234 0 0 0
277217234 0 0 0 LRU↵
Nov 21 11:40:27 hostname app[26459]: tun0 1500 0 316943 0 0 0 368642
0 0 0 MOPRU↵
Nov 21 11:40:28 hostname app[26459]: Iface MTU Met RX-OK RX-ERR RX-DRP RX-OVR TX-OK TX-
ERR TX-DRP TX-OVR Flg↵
Nov 21 11:40:28 hostname app[26459]: eth2 1500 0 16945117 0 0 0 30493583
0 8 0 BMRU↵
Nov 21 11:40:28 hostname app[26459]: lo 16436 0 277217234 0 0 0
277217234 0 0 0 LRU↵
Nov 21 11:40:28 hostname app[26459]: tun0 1500 0 316943 0 0 0 368642
0 0 0 MOPRU↵
Nov 21 11:40:29 hostname app[26459]: Iface MTU Met RX-OK RX-ERR RX-DRP RX-OVR TX-OK TX-
ERR TX-DRP TX-OVR Flg↵
Nov 21 11:40:29 hostname app[26459]: eth2 1500 0 16945270 0 0 0 30493735
0 8 0 BMRU↵
Nov 21 11:40:29 hostname app[26459]: lo 16436 0 277217234 0 0 0
277217234 0 0 0 LRU↵
Nov 21 11:40:29 hostname app[26459]: tun0 1500 0 316943 0 0 0 368642
0 0 0 MOPRU↵

The following configuration strips the Syslog header from the netstat output stored in the traditional Syslog
formatted file, and each message is then printed again with a line of dashes used as a separator.

nxlog.conf
 1 <Extension syslog>
 2 Module xm_syslog
 3 </Extension>
 4
 5 <Extension netstat>
 6 Module xm_multiline
 7 FixedLineCount 4
 8 <Exec>
 9 parse_syslog_bsd();
10 $raw_event = $Message + "\n";
11 </Exec>
12 </Extension>
13
14 <Input filein>
15 Module im_file
16 File "modules/extension/multiline/xm_multiline3.in"
17 InputType netstat
18 </Input>
19
20 <Output fileout>
21 Module om_file
22 File 'tmp/output'
23 <Exec>
24 $raw_event = "-------------------------------------------------------" +
25 "-----------------------------\n" + $raw_event;
26 </Exec>
27 </Output>
28
29 <Route parse_multiline>
30 Path filein => fileout
31 </Route>

Output Sample
------------------------------------------------------------------------------------↵
Iface MTU Met RX-OK RX-ERR RX-DRP RX-OVR TX-OK TX-ERR TX-DRP TX-OVR Flg↵
eth2 1500 0 16936814 0 0 0 30486067 0 8 0 BMRU↵
lo 16436 0 277217234 0 0 0 277217234 0 0 0 LRU↵
tun0 1500 0 316943 0 0 0 368642 0 0 0 MOPRU↵
------------------------------------------------------------------------------------↵
Iface MTU Met RX-OK RX-ERR RX-DRP RX-OVR TX-OK TX-ERR TX-DRP TX-OVR Flg↵
eth2 1500 0 16945117 0 0 0 30493583 0 8 0 BMRU↵
lo 16436 0 277217234 0 0 0 277217234 0 0 0 LRU↵
tun0 1500 0 316943 0 0 0 368642 0 0 0 MOPRU↵
------------------------------------------------------------------------------------↵
Iface MTU Met RX-OK RX-ERR RX-DRP RX-OVR TX-OK TX-ERR TX-DRP TX-OVR Flg↵
eth2 1500 0 16945270 0 0 0 30493735 0 8 0 BMRU↵
lo 16436 0 277217234 0 0 0 277217234 0 0 0 LRU↵
tun0 1500 0 316943 0 0 0 368642 0 0 0 MOPRU↵

120.21. NetFlow (xm_netflow)


This module provides a parser for NetFlow payload collected over UDP using im_udp. It supports the following
NetFlow protocol versions: v1, v5, v7, v9, and IPFIX.

NOTE This module only supports parsing NetFlow data received as UDP datagrams and does not support TCP.

NOTE xm_netflow uses the IP address of the exporter device to distinguish between different devices so that
templates with the same name do not conflict.

The module exports an input parser which can be referenced in the UDP input instance with the InputType
directive:

InputType netflow
This input reader function parses the payload and extracts NetFlow specific fields.

See the list of installer packages that provide the xm_netflow module in the Available Modules chapter of the
NXLog User Guide.

120.21.1. Configuration
The xm_netflow module accepts only the common module directives.

120.21.2. Fields
The fields generated by xm_netflow are provided separately. Please refer to the documentation available online or
in the NXLog package.

120.21.3. Examples
Example 577. Parsing UDP NetFlow Data

The following configuration receives NetFlow data over UDP and converts the parsed data into JSON.

nxlog.conf
 1 <Extension netflow>
 2 Module xm_netflow
 3 </Extension>
 4
 5 <Extension json>
 6 Module xm_json
 7 </Extension>
 8
 9 <Input udpin>
10 Module im_udp
11 Host 0.0.0.0
12 Port 2162
13 InputType netflow
14 </Input>
15
16 <Output out>
17 Module om_file
18 File "netflow.log"
19 Exec to_json();
20 </Output>
21
22 <Route nf>
23 Path udpin => out
24 </Route>

120.22. Radius NPS (xm_nps)
This module provides functions and procedures for processing data in NPS Database Format stored in files by
Microsoft RADIUS services. Internet Authentication Service (IAS), the Microsoft implementation of a RADIUS
server and proxy, was renamed to Network Policy Server (NPS) starting with Windows Server 2008. This
module is capable of parsing both IAS and NPS formatted data.

NPS formatted data typically looks like the following:

"RasBox","RAS",10/22/2006,09:13:09,1,"DOMAIN\user","DOMAIN\user",,,,,,"192.168.132.45",12,,"192.168.
132.45",,,,0,"CONNECT 24000",1,2,4,,0,"311 1 192.168.132.45 07/31/2006 21:35:14
749",,,,,,,,,,,,,,,,,,,,,,,,,,,,"MSRASV5.00",311,,,,
"RasBox","RAS",10/22/2006,09:13:09,3,,"DOMAIN\user",,,,,,,,,,,,,,,,,4,,36,"311 1 192.168.132.45
07/31/2006 21:35:14 749",,,,,,,,,,,,,,,,,,,,,,,,,,,,,,"0x00453D36393120523D3020563D33",,,
"RasBox","RAS",10/22/2006,09:13:13,1,"DOMAIN\user","DOMAIN\user",,,,,,"192.168.132.45",12,,"192.168.
132.45",,,,0,"CONNECT 24000",1,2,4,,0,"311 1 192.168.132.45 07/31/2006 21:35:14
750",,,,,,,,,,,,,,,,,,,,,,,,,,,,"MSRASV5.00",311,,,,

For more information on the NPS format see the Interpret NPS Database Format Log Files article on Microsoft
TechNet.

See the list of installer packages that provide the xm_nps module in the Available Modules chapter of the NXLog
User Guide.

120.22.1. Configuration
The xm_nps module accepts only the common module directives.

120.22.2. Procedures
The following procedures are exported by xm_nps.

parse_nps();
Parse the $raw_event field as NPS input.

parse_nps(string source);
Parse the given string as NPS format.

120.22.3. Examples

Example 578. Parsing NPS Data

The following configuration reads NPS formatted files and converts the parsed data into JSON.

nxlog.conf
 1 <Extension nps>
 2 Module xm_nps
 3 </Extension>
 4
 5 <Extension json>
 6 Module xm_json
 7 </Extension>
 8
 9 <Input filein>
10 Module im_file
11 File 'C:\logs\IAS.log'
12 Exec parse_nps();
13 </Input>
14
15 <Output fileout>
16 Module om_file
17 File 'C:\out.json'
18 Exec to_json();
19 </Output>
20
21 <Route nps_to_json>
22 Path filein => fileout
23 </Route>

120.23. Pattern Matcher (xm_pattern)


This module makes it possible to execute pattern matching with a pattern database file in XML format. Using
xm_pattern is more efficient than having NXLog regular expression rules listed in Exec directives, because it was
designed in such a way that patterns do not need to be matched linearly. Regular expression sub-capturing can
be used to set additional fields in the event record and arbitrary fields can be added under the scope of a pattern
match for message classification. In addition, the module does an automatic on-the-fly pattern reordering
internally for further speed improvements.

There are other techniques such as the radix tree which solve the linearity problem; the drawback is that usually
these require the user to learn a special syntax for specifying patterns. If the log message is already parsed and
is not treated as a single string, then it is possible to process only a subset of the patterns, which partially
solves the linearity problem. With other performance improvements employed within the xm_pattern module, its
speed can compare to the other techniques. Yet the xm_pattern module uses regular expressions which are
familiar to users and can easily be migrated from other tools.

Traditionally, pattern matching on log messages has employed a technique where the log message was one
string and the pattern (regular expression or radix tree based pattern) was executed against it. To match patterns
against logs which contain structured data (such as the Windows EventLog), this structured data (the fields of the
log) must be converted to a single string. This is a simple but inefficient method used by many tools.

The NXLog patterns defined in the XML pattern database file can contain more than one field. This allows multi-
dimensional pattern matching. Thus with NXLog’s xm_pattern module there is no need to convert all fields into a
single string as it can work with multiple fields.

Patterns can be grouped together under pattern groups. Pattern groups serve an optimization purpose. The
group can have an optional matchfield block which can check a condition. If the condition (such as $SourceName
matches sshd) is satisfied, the xm_pattern module will descend into the group and check each pattern against the
log. If the pattern group’s condition did not match ($SourceName was not sshd), the module can skip all patterns
in the group without having to check each pattern individually.

When the xm_pattern module finds a matching pattern, the $PatternID and $PatternName fields are set on the
log message. These can be used later in conditional processing and correlation rules of the pm_evcorr module,
for example.
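
The fields set by a successful match can then drive conditional processing directly in an Exec block. The
following is a minimal sketch; the pattern name and the severity assignment are illustrative, not taken from
any particular pattern database:

```
<Input in>
    Module im_file
    File   '/var/log/messages'
    <Exec>
        parse_syslog();
        match_pattern();
        # Escalate severity for events classified as SSH authentication failures
        if defined $PatternName and $PatternName == 'ssh auth failure'
            $SeverityValue = 4;
    </Exec>
</Input>
```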

NOTE: The xm_pattern module does not process all patterns: it exits after the first matching pattern is
found, so at most one pattern can match a log message. Avoid multiple patterns that can match the
same subset of logs. For example, with two regular expression patterns ^\d+ and ^\d\d, only one will
be matched, and not consistently, because the internal order of patterns and pattern groups is changed
dynamically by xm_pattern (patterns with the highest match count are placed and tried first). For a
strictly linearly executing pattern matcher, see the Exec directive.

See the list of installer packages that provide the xm_pattern module in the Available Modules chapter of the
NXLog User Guide.

120.23.1. Configuration
The xm_pattern module accepts the following directives in addition to the common module directives.

PatternFile
This mandatory directive specifies the name of the pattern database file.

120.23.2. Functions
The following functions are exported by xm_pattern.

boolean match_pattern()
Execute the same pattern matching as the match_pattern() procedure. Return TRUE if the event is
successfully matched, otherwise FALSE.

120.23.3. Procedures
The following procedures are exported by xm_pattern.

match_pattern();
Attempt to match the current event according to the PatternFile. Execute statements and add fields as
specified.

120.23.4. Fields
The following fields are used by xm_pattern.

$PatternID (type: integer)


The ID of the pattern that matched the event.

$PatternName (type: string)


The name of the pattern that matched the event.

120.23.5. Examples
Example 579. Using the match_pattern() Procedure

This configuration reads Syslog messages from file and parses them with parse_syslog(). The events are
then further processed with a pattern file and the corresponding match_pattern() procedure to add
additional fields to SSH authentication success or failure events. The matching is done against the
$SourceName and $Message fields, so the Syslog parsing must be performed before the pattern matching
will work.

nxlog.conf
 1 <Extension _syslog>
 2 Module xm_syslog
 3 </Extension>
 4
 5 <Extension pattern>
 6 Module xm_pattern
 7 PatternFile 'modules/extension/pattern/patterndb2-3.xml'
 8 </Extension>
 9
10 <Input in>
11 Module im_file
12 File 'test2.log'
13 <Exec>
14 parse_syslog();
15 match_pattern();
16 </Exec>
17 </Input>

The following pattern database contains two patterns to match SSH authentication messages. The patterns
are under a group named ssh which checks whether the $SourceName field is sshd and only tries to match
the patterns if the logs are indeed from sshd. The patterns both extract $AuthMethod, $AccountName, and
$SourceIP4Address fields from the log message when the pattern matches the log. Additionally
$TaxonomyStatus and $TaxonomyAction are set. The second pattern shows an Exec block example, which
is evaluated when the pattern matches.

patterndb2-3.xml
<?xml version='1.0' encoding='UTF-8'?>
<patterndb>
  <created>2018-01-01 01:02:03</created>
  <version>4</version>

  <group>
  <name>ssh</name>
  <id>1</id>
  <matchfield>
  <name>SourceName</name>
  <type>exact</type>
  <value>sshd</value>
  </matchfield>

  <pattern>
  <id>1</id>
  <name>ssh auth success</name>

  <matchfield>
  <name>Message</name>
  <type>regexp</type>
  <value>^Accepted (\S+) for (\S+) from (\S+) port \d+ ssh2</value>
  <capturedfield>
  <name>AuthMethod</name>
  <type>string</type>
  </capturedfield>
  <capturedfield>
  <name>AccountName</name>
  <type>string</type>
  </capturedfield>
  <capturedfield>
  <name>SourceIP4Address</name>
  <type>ipaddr</type>
  </capturedfield>
  </matchfield>

  <set>
  <field>
  <name>TaxonomyStatus</name>
  <value>success</value>
  <type>string</type>
  </field>
  <field>
  <name>TaxonomyAction</name>
  <value>authenticate</value>
  <type>string</type>
  </field>
  </set>
  </pattern>

  <pattern>
  <id>2</id>
  <name>ssh auth failure</name>

  <matchfield>
  <name>Message</name>
  <type>regexp</type>
  <value>^Failed (\S+) for invalid user (\S+) from (\S+) port \d+ ssh2</value>

  <capturedfield>
  <name>AuthMethod</name>
  <type>string</type>
  </capturedfield>
  <capturedfield>
  <name>AccountName</name>
  <type>string</type>
  </capturedfield>
  <capturedfield>
  <name>SourceIP4Address</name>
  <type>ipaddr</type>
  </capturedfield>
  </matchfield>

  <set>
  <field>
  <name>TaxonomyStatus</name>
  <value>failure</value>
  <type>string</type>
  </field>
  <field>
  <name>TaxonomyAction</name>
  <value>authenticate</value>
  <type>string</type>
  </field>
  </set>

  <exec>
  $TestField = 'test';
  $TestField = $TestField + 'value';
  </exec>
  </pattern>

  </group>

</patterndb>

Example 580. Using the match_pattern() Function

This example is the same as the previous one, and uses the same pattern file, but it uses the
match_pattern() function to discard any event that is not matched by the pattern file.

nxlog.conf
 1 <Extension _syslog>
 2 Module xm_syslog
 3 </Extension>
 4
 5 <Extension pattern>
 6 Module xm_pattern
 7 PatternFile modules/extension/pattern/patterndb2-3.xml
 8 </Extension>
 9
10 <Input in>
11 Module im_file
12 File 'test2.log'
13 <Exec>
14 parse_syslog();
15 if not match_pattern() drop();
16 </Exec>
17 </Input>

120.24. Perl (xm_perl)


The Perl programming language is widely used for log processing and comes with a broad set of modules
bundled or available from CPAN. Code can be written more quickly in Perl than in C, and code execution is safer
because exceptions (croak/die) are handled properly and will only result in an unfinished attempt at log
processing rather than taking down the whole NXLog process.

While the NXLog language is already a powerful framework, it is not intended to be a fully featured programming
language and does not provide lists, arrays, hashes, and other features available in many high-level languages.
With this module, Perl can be used to process event data via a built-in Perl interpreter. See also the im_perl and
om_perl modules.

The Perl interpreter is only loaded if the module is declared in the configuration. The module will parse the file
specified in the PerlCode directive when NXLog starts the module. This file should contain one or more methods
which can be called from the Exec directive of any module that will use Perl for log processing. See the example
below.

WARNING: Perl code defined via this module must not be called from the im_perl and om_perl
modules, as that would involve two Perl interpreters and would likely result in a crash.

NOTE: To use the xm_perl module on Windows, a separate Perl environment must be installed, such as
Strawberry Perl. Currently, the xm_perl module on Windows requires Strawberry Perl 5.28.0.1.

To access event data, the Log::Nxlog module must be included, which provides the following methods.

log_debug(msg)
Send the message msg to the internal logger on DEBUG log level. This method does the same as the
log_debug() procedure in NXLog.

log_info(msg)
Send the message msg to the internal logger on INFO log level. This method does the same as the log_info()
procedure in NXLog.

log_warning(msg)
Send the message msg to the internal logger on WARNING log level. This method does the same as the
log_warning() procedure in NXLog.

log_error(msg)
Send the message msg to the internal logger on ERROR log level. This method does the same as the
log_error() procedure in NXLog.

delete_field(event, key)
Delete the value associated with the field named key.

field_names(event)
Return a list of the field names contained in the event data. This method can be used to iterate over all of the
fields.

field_type(event, key)
Return a string representing the type of the value associated with the field named key.

get_field(event, key)
Retrieve the value associated with the field named key. This method returns a scalar value if the key exists
and the value is defined, otherwise it returns undef.

set_field_boolean(event, key, value)


Set the boolean value in the field named key.

set_field_integer(event, key, value)


Set the integer value in the field named key.

set_field_string(event, key, value)


Set the string value in the field named key.

For the full NXLog Perl API, see the POD documentation in Nxlog.pm. The documentation can be read with
perldoc Log::Nxlog.

See the list of installer packages that provide the xm_perl module in the Available Modules chapter of the NXLog
User Guide.

120.24.1. Configuration
The xm_perl module accepts the following directives in addition to the common module directives.

PerlCode
This mandatory directive expects a file containing valid Perl code. This file is read and parsed by the Perl
interpreter. Methods defined in this file can be called with the call() procedure.

NOTE: On Windows, the Perl script invoked by the PerlCode directive must define the Perl library
paths at the beginning of the script to provide access to the Perl modules.

nxlog-windows.pl
use lib 'c:\Strawberry\perl\lib';
use lib 'c:\Strawberry\perl\vendor\lib';
use lib 'c:\Strawberry\perl\site\lib';
use lib 'c:\Program Files\nxlog\data';

Config
This optional directive allows you to pass configuration strings to the script file defined by the PerlCode
directive. This is a block directive and any text enclosed within <Config></Config> is submitted as a single
string literal to the Perl code.

NOTE: If you pass several values using this directive (for example, separated by the \n delimiter), be
sure to parse the string accordingly inside the Perl code.

120.24.2. Procedures
The following procedures are exported by xm_perl.

call(string subroutine);
Call the given Perl subroutine.

perl_call(string subroutine, varargs args);


Call the given Perl subroutine.

120.24.3. Examples

Example 581. Using the built-in Perl interpreter

In this example, logs are parsed as Syslog and then are passed to a Perl method which does a GeoIP lookup
on the source address of the incoming message.

nxlog.conf
 1 <Extension syslog>
 2 Module xm_syslog
 3 </Extension>
 4
 5 <Extension perl>
 6 Module xm_perl
 7 PerlCode modules/extension/perl/processlogs.pl
 8 </Extension>
 9
10 <Output fileout>
11 Module om_file
12 File 'tmp/output'
13
14 # First we parse the input natively from nxlog
15 Exec parse_syslog_bsd();
16
17 # Now call the 'process' subroutine defined in 'processlogs.pl'
18 Exec perl_call("process");
19
20 # You can also invoke this public procedure 'call' in case
21 # of multiple xm_perl instances like this:
22 # Exec perl->call("process");
23 </Output>

processlogs.pl (truncated)
use lib "$FindBin::Bin/../../../../src/modules/extension/perl";

use strict;
use warnings;

# Without Log::Nxlog you cannot access (read or modify) the event data
use Log::Nxlog;

use Geo::IP;

my $geoip;

BEGIN
{
  # This will be called once when nxlog starts so you can use this to
  # initialize stuff here
  #$geoip = Geo::IP->new(GEOIP_MEMORY_CACHE);
  $geoip = Geo::IP->open('modules/extension/perl/GeoIP.dat', GEOIP_MEMORY_CACHE);
[...]

120.25. Python (xm_python)


This module provides support for processing NXLog log data with methods written in the Python language. The
file specified by the PythonCode directive should contain one or more methods which can be called from the
Exec directive of any module. See also the im_python and om_python modules.

The Python script should import the nxlog module, and will have access to the following classes and functions.

nxlog.log_debug(msg)
Send the message msg to the internal logger at DEBUG log level. This function does the same as the core
log_debug() procedure.

nxlog.log_info(msg)
Send the message msg to the internal logger at INFO log level. This function does the same as the core
log_info() procedure.

nxlog.log_warning(msg)
Send the message msg to the internal logger at WARNING log level. This function does the same as the core
log_warning() procedure.

nxlog.log_error(msg)
Send the message msg to the internal logger at ERROR log level. This function does the same as the core
log_error() procedure.

class nxlog.Module
This class is instantiated by NXLog and can be accessed via the LogData.module attribute. This can be used to
set or access variables associated with the module (see the example below).

class nxlog.LogData
This class represents an event. It is instantiated by NXLog and passed to the method specified by the
python_call() procedure.

delete_field(name)
This method removes the field name from the event record.

field_names()
This method returns a list with the names of all the fields currently in the event record.

get_field(name)
This method returns the value of the field name in the event.

set_field(name, value)
This method sets the value of field name to value.

module
This attribute is set to the Module object associated with the event.

See the list of installer packages that provide the xm_python module in the Available Modules chapter of the
NXLog User Guide.

120.25.1. Configuration
The xm_python module accepts the following directives in addition to the common module directives.

PythonCode
This mandatory directive specifies a file containing Python code. The python_call() procedure can be used to
call a Python function defined in the file. The function must accept an nxlog.LogData object as its argument.

120.25.2. Procedures
The following procedures are exported by xm_python.

call(string subroutine);
Call the given Python subroutine.

python_call(string function);
Call the specified function, which must accept an nxlog.LogData() object as its only argument.

120.25.3. Examples
Example 582. Using Python for Log Processing

This configuration calls two Python functions to modify each event record. The add_checksum() function
uses Python’s hashlib module to add a $ChecksumSHA1 field to the event; the add_counter() function adds
a $Counter field for non-DEBUG events.

NOTE: The pm_hmac module offers a more complete implementation for checksumming. See
Statistical Counters for a native way to add counters.

nxlog.conf (truncated)
 1 </Input>
 2
 3 <Extension _json>
 4 Module xm_json
 5 DateFormat YYYY-MM-DD hh:mm:ss
 6 </Extension>
 7
 8 <Extension _syslog>
 9 Module xm_syslog
10 </Extension>
11
12 <Extension python>
13 Module xm_python
14 PythonCode modules/extension/python/py/processlogs2.py
15 </Extension>
16
17 <Output out>
18 Module om_file
19 File 'tmp/output'
20 <Exec>
21 # The $SeverityValue field is added by this procedure.
22 # Most other parsers also add a normalized severity value.
23 parse_syslog();
24
25 # Add a counter for each event with log level above DEBUG.
26 python_call('add_counter');
27
28 # Calculate a checksum (after the counter field is added).
29 [...]

processlogs2.py (truncated)
import hashlib

import nxlog

def add_checksum(event):
  # Convert field list to dictionary
  all = {}
  for field in event.field_names():
  all.update({field: event.get_field(field)})

  # Calculate checksum and add to event record (repr() output is encoded
  # to bytes, as required by hashlib in Python 3)
  checksum = hashlib.sha1(repr(sorted(all.items())).encode('utf-8')).hexdigest()
  event.set_field('ChecksumSHA1', checksum)
  nxlog.log_debug('Added checksum field')

def add_counter(event):
  # Get module object and initialize counter
  module = event.module
[...]

120.26. Resolver (xm_resolver)


This module provides functions for resolving (converting between) IP addresses and names, and between
group/user IDs and names. The module uses an internal cache to minimize the number of DNS lookup queries.

See the list of installer packages that provide the xm_resolver module in the Available Modules chapter of the
NXLog User Guide.

120.26.1. Configuration
The xm_resolver module accepts the following directives in addition to the common module directives.

CacheExpiry
Specifies the time in seconds after which entries in the cache are considered invalid and are refreshed by
issuing a DNS lookup. The default expiry is 3600 seconds.

CacheLimit
This directive can be used to specify an upper limit on the number of entries in the cache, in order to prevent
the cache from becoming arbitrarily large and potentially exhausting memory. When the number of entries in
the cache reaches this value, no more items will be inserted into the cache. The default is 100,000 entries.

120.26.2. Functions
The following functions are exported by xm_resolver.

string ad_guid_to_name(string guid)


This function is available on Windows only. Return the object name corresponding to the Active Directory
object’s GUID. This function takes a guid string in the format %{xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx}
(where x is a hexadecimal digit). If guid cannot be looked up, undef is returned.

string gid_to_name(integer gid)


Return the group name assigned to the group ID. If gid cannot be looked up, undef is returned.

string gid_to_name(string gid)
Return the group name assigned to the string gid on Unix. If gid cannot be looked up, undef is returned.

integer group_get_gid(string groupname)


Return the group ID assigned to the group name.

string ipaddr_to_name(unknown ipaddr)


Resolve and return the DNS name assigned to the IP address. The ipaddr argument can be either a string or
an ipaddr type.

ipaddr name_to_ipaddr(string name)


Resolve and return the first IPv4 address assigned to name.

string uid_to_name(integer uid)


Return the username corresponding to the user ID. If uid cannot be looked up, undef is returned.

string uid_to_name(string uid)


Return the username corresponding to the user ID or SID. This function takes a string which is normally a SID
on Windows or an integer UID on Unix. On Windows this function will convert the SID to a string in the format
of DOMAIN\USER. If uid cannot be looked up, undef is returned.

integer user_get_gid(string username)


Return the user’s group ID (the group ID assigned to username).

integer user_get_uid(string username)


Return the user ID assigned to username.
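
These functions are typically invoked from an Exec block. The following is an illustrative sketch (the
$UserID field, the resulting $Username field, and the file path are hypothetical, and assume earlier parsing
has populated $UserID with a numeric user ID):

```
<Extension resolver>
    Module xm_resolver
</Extension>

<Input in>
    Module im_file
    File   '/var/log/app.log'
    # Resolve a numeric UID to a username; uid_to_name() returns undef
    # if the lookup fails, leaving $Username undefined
    Exec   if defined $UserID $Username = uid_to_name($UserID);
</Input>
```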

120.26.3. Examples

Example 583. Using Functions Provided by xm_resolver

It is common for devices to send Syslog messages containing the IP address of the device instead of a real
hostname. In this example, Syslog messages are parsed and the hostname field of each Syslog header is
converted to a hostname if it looks like an IP address.

nxlog.conf (truncated)
 1 <Extension syslog>
 2 Module xm_syslog
 3 </Extension>
 4
 5 <Extension _resolver>
 6 Module xm_resolver
 7 </Extension>
 8
 9 <Input tcp>
10 Module im_tcp
11 Host 0.0.0.0
12 Port 1514
13 <Exec>
14 parse_syslog();
15 if $Hostname =~ /^\d+\.\d+\.\d+\.\d+/
16 {
17 $HostIP = $Hostname;
18 $Hostname = ipaddr_to_name($HostIP);
19 if not defined $Hostname $Hostname = $HostIP;
20 #WIN
21 if ($Hostname == ipaddr_to_name("127.0.0.1"))
22 {
23 $Hostname = "localhost";
24 }
25 #END
26 }
27 </Exec>
28 </Input>
29 [...]

Input Sample
<38>2014-11-11 11:40:27 127.0.0.1 sshd[3436]: Failed none for invalid user asdf from 127.0.0.1
port 51824 ssh2↵
<38>2014-11-12 12:42:37 127.0.0.1 sshd[3436]: Failed password for invalid user fdsa from
127.0.0.1 port 51824 ssh2↵

Output Sample
<38>Nov 11 11:40:27 localhost sshd[3436]: Failed none for invalid user asdf from 127.0.0.1 port
51824 ssh2↵
<38>Nov 12 12:42:37 localhost sshd[3436]: Failed password for invalid user fdsa from 127.0.0.1
port 51824 ssh2↵

120.27. Rewrite (xm_rewrite)


This module can be used to transform event records by:

• renaming fields,
• deleting specified fields (blacklist),
• keeping only a list of specified fields (whitelist), and
• evaluating additional statements.

The xm_rewrite module provides Delete, Keep, and Rename directives for modifying event records. With the Exec
directive of this module, it is possible to invoke functions and procedures from other modules. This allows all
data transformation to be configured in a single module instance in order to simplify the configuration. Then the
transformation can be referenced from another module by adding:

Exec rewrite->process();

This same statement can be used by more than one module instance if necessary, rather than duplicating
configuration.

See the list of installer packages that provide the xm_rewrite module in the Available Modules chapter of the
NXLog User Guide.

120.27.1. Configuration
The xm_rewrite module accepts the following directives in addition to the common module directives.

The order of the action directives is significant as the module executes them in the order of appearance. It is
possible to configure an xm_rewrite instance with no directives (other than the Module directive). In this case, the
corresponding process() procedure will do nothing.

Delete
This directive takes a field name or a list of fields. The fields specified will be removed from the event record.
This can be used to blacklist specific fields that are not wanted in the event record. This is equivalent to using
delete() in Exec.

Exec
This directive works the same way as the Exec directive in other modules: the statement(s) provided in the
argument/block will be evaluated in the context of the module that called process() (i.e., as though the
statement(s) from this Exec directive/block were inserted into the caller’s Exec directive/block, at the location
of the process() call).

Keep
This directive takes a field name or a list of fields. The fields specified will be kept and all other fields not
appearing in the list will be removed from the event record. This can be used to whitelist specific fields.

To retain only the $raw_event field, use Keep raw_event (it is not possible to delete the $raw_event field).
This can be helpful for discarding extra event fields after $raw_event has been set (with to_json(), for
example) and before an output module that operates on all fields in the event record (such as
om_batchcompress).
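
A sketch of this pattern (instance names illustrative), relying on the fact that the directives execute in
order of appearance:

```
<Extension json>
    Module xm_json
</Extension>

<Extension keeponly>
    Module xm_rewrite
    # First serialize all fields into $raw_event as JSON...
    Exec   to_json();
    # ...then drop every field except $raw_event
    Keep   raw_event
</Extension>
```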

Rename
This directive takes two fields. The field in the first argument will be renamed to the name in the second. This
is equivalent to using rename_field() in Exec.

120.27.2. Procedures
The following procedures are exported by xm_rewrite.

process();
This procedure invokes the data processing as specified in the configuration of the xm_rewrite module
instance.

120.27.3. Examples
Example 584. Using xm_rewrite to Transform Syslog Data Read from File

The following configuration parses Syslog data from a file, invokes the process() procedure of the xm_rewrite
instance to keep and rename whitelisted fields, then writes JSON-formatted output to a file.

nxlog.conf (truncated)
 1 <Extension syslog>
 2 Module xm_syslog
 3 </Extension>
 4
 5 <Extension rewrite>
 6 Module xm_rewrite
 7 Keep EventTime, Severity, Hostname, SourceName, Message
 8 Rename EventTime, timestamp
 9 Rename Hostname, host
10 Rename SourceName, src
11 Rename Message, msg
12 Rename Severity, sev
13 Exec if $msg =~ /error/ $sev = 'ERROR';
14 </Extension>
15
16 <Extension json>
17 Module xm_json
18 </Extension>
19
20 <Input syslogfile>
21 Module im_file
22 File "modules/extension/rewrite/xm_rewrite.in"
23 Exec parse_syslog();
24 Exec rewrite->process();
25 </Input>
26
27 <Output fileout>
28 Module om_file
29 [...]

Input Sample
<0>2010-10-12 12:49:06 mybox app[12345]: kernel message↵
<30>2010-10-12 12:49:06 mybox app[12345]: daemon - info↵
<27>2010-10-12 12:49:06 mybox app[12345]: daemon - error↵
<30>2010-10-12 13:19:11 mybox app[12345]: There was an error↵

Output Sample
{"sev":"CRITICAL","host":"mybox","timestamp":"2010-10-12 12:49:06","src":"app","msg":"kernel
message"}
{"sev":"INFO","host":"mybox","timestamp":"2010-10-12 12:49:06","src":"app","msg":"daemon -
info"}
{"sev":"ERROR","host":"mybox","timestamp":"2010-10-12 12:49:06","src":"app","msg":"daemon -
error"}
{"sev":"ERROR","host":"mybox","timestamp":"2010-10-12 13:19:11","src":"app","msg":"There was an
error"}

Example 585. Performing Additional Parsing in an xm_rewrite Module Instance

The following configuration does the exact same processing. In this case, however, the Syslog parsing is
moved into the xm_rewrite module instance so the input module only needs to invoke the process()
procedure.

nxlog.conf (truncated)
 1 <Extension syslog>
 2 Module xm_syslog
 3 </Extension>
 4
 5 <Extension rewrite>
 6 Module xm_rewrite
 7 Exec parse_syslog();
 8 Keep EventTime, Severity, Hostname, SourceName, Message
 9 Rename EventTime, timestamp
10 Rename Hostname, host
11 Rename SourceName, src
12 Rename Message, msg
13 Rename Severity, sev
14 Exec if $msg =~ /error/ $sev = 'ERROR';
15 </Extension>
16
17 <Extension json>
18 Module xm_json
19 </Extension>
20
21 <Input syslogfile>
22 Module im_file
23 File "modules/extension/rewrite/xm_rewrite.in"
24 Exec rewrite->process();
25 </Input>
26
27 <Output fileout>
28 Module om_file
29 [...]

120.28. Ruby (xm_ruby)


This module provides support for processing NXLog log data with methods written in the Ruby language. Ruby
methods can be defined in a script and then called from the Exec directive of any module that will use Ruby for
log processing. See the example below. See also the im_ruby and om_ruby modules.

The Nxlog module provides the following classes and methods.

Nxlog.log_debug(msg)
Send the message msg to the internal logger at DEBUG log level. This method does the same as the core
log_debug() procedure.

Nxlog.log_info(msg)
Send the message msg to the internal logger at INFO log level. This method does the same as the core
log_info() procedure.

Nxlog.log_warning(msg)
Send the message msg to the internal logger at WARNING log level. This method does the same as the core
log_warning() procedure.

Nxlog.log_error(msg)
Send the message msg to the internal logger at ERROR log level. This method does the same as the core
log_error() procedure.

class Nxlog.LogData
This class represents an event.

field_names()
This method returns an array with the names of all the fields currently in the event record.

get_field(name)
This method returns the value of the field name in the event.

set_field(name, value)
This method sets the value of field name to value.

See the list of installer packages that provide the xm_ruby module in the Available Modules chapter of the NXLog
User Guide.

120.28.1. Configuration
The xm_ruby module accepts the following directives in addition to the common module directives.

RubyCode
This mandatory directive expects a file containing valid Ruby code. Methods defined in this file can be called
with the ruby_call() procedure.

120.28.2. Procedures
The following procedures are exported by xm_ruby.

call(string subroutine);
Calls the Ruby method provided in the first argument.

ruby_call(string subroutine);
Calls the Ruby method provided in the first argument.

120.28.3. Examples

Example 586. Processing Logs With Ruby

In this example logs are parsed as Syslog, then the data is passed to a Ruby method which adds an
incrementing $AlertCounter field for any event with a normalized $SeverityValue of at least 4.

nxlog.conf
 1 <Extension _syslog>
 2 Module xm_syslog
 3 </Extension>
 4
 5 <Extension ruby>
 6 Module xm_ruby
 7 RubyCode ./modules/extension/ruby/processlogs2.rb
 8 </Extension>
 9
10 <Input in>
11 Module im_file
12 File 'test2.log'
13 <Exec>
14 parse_syslog();
15 ruby->call('add_alert_counter');
16 </Exec>
17 </Input>

processlogs2.rb
$counter = 0

def add_alert_counter(event)
  if event.get_field('SeverityValue') >= 4
  Nxlog.log_debug('Adding AlertCounter field')
  $counter += 1
  event.set_field('AlertCounter', $counter)
  end
end

120.29. SNMP Traps (xm_snmp)


This module provides support for parsing SNMP v1, v2c, and v3 trap messages. For SNMP v3, the user-based
security model (USM) is supported, providing both authentication and encryption functionality. Instead of
parsing log files or piping in input from snmptrapd, this module makes trap variables directly accessible as
NXLog fields. There is no need to manually parse the output of external tools, making this a convenient and
efficient all-in-one solution for SNMP trap reception.

Like the xm_syslog module, the xm_snmp module does not provide support for the network transport layer. Since
traps are sent primarily over UDP (typically to port 162), the im_udp module should be used together with this
module. This module registers an input reader function under the name "snmp" which can be used in the
InputType directive to parse UDP message payloads.
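
A minimal sketch of this wiring (instance names are illustrative; 162 is the conventional SNMP trap port):

```
<Extension snmp>
    Module xm_snmp
</Extension>

<Input snmpudp>
    Module    im_udp
    Host      0.0.0.0
    Port      162
    # Parse each UDP payload as an SNMP trap
    InputType snmp
</Input>
```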

The module supports MIB definitions in order to resolve OID numbers to names. In addition to the standard
xm_snmp module fields and the im_udp module fields, each variable in the trap message will be available as an
internal NXLog field. If the OID cannot be resolved to a name, the string OID. will be prepended to the dotted
OID number representation in the field name. For example, if a trap contains a string variable with OID
1.3.6.1.4.1.311.1.13.1.9999.3.0, this field can be accessed as the NXLog field
$OID.1.3.6.1.4.1.311.1.13.1.9999.3.0. If the object identifier can be resolved to a name called FIELDNAME,
the value will be available in the NXLog field $SNMP.FIELDNAME. The SNMP trap variables are also put in the
$raw_event field and are listed as name="value" pairs there in the order they appear in the trap. The following
is an example of the contents of the $raw_event field (line breaks added):

2011-12-15 18:10:35 192.1.1.114 INFO \
OID.1.3.6.1.4.1.311.1.13.1.9999.1.0="test msg" \
OID.1.3.6.1.4.1.311.1.13.1.9999.2.0="Administrator" \
OID.1.3.6.1.4.1.311.1.13.1.9999.3.0="WIN-OUNNPISDHIG" \
OID.1.3.6.1.4.1.311.1.13.1.9999.4.0="1" \
OID.1.3.6.1.4.1.311.1.13.1.9999.5.0="0" \
OID.1.3.6.1.4.1.311.1.13.1.9999.6.0="test msg"

NOTE: To convert the output to Syslog format, consider using one of the to_syslog() procedures
provided by the xm_syslog module. However, note that the resulting format will not be in accordance
with RFC 5675.

Microsoft Windows can convert and forward EventLog messages as SNMPv1 traps. The evntwin utility can be
used to configure which events are sent as traps. See How to Generate SNMP traps from Windows Events for
more information about setting up this feature.

The Net-SNMP toolkit (available for Unix/Linux and Windows) provides the snmptrap command line utility which
can be used for sending test SNMP traps. Create the following MIB definition file and put it in a directory
specified by the MIBDir directive:

MIB Definition File


TRAP-TEST-MIB DEFINITIONS ::= BEGIN
IMPORTS ucdExperimental FROM UCD-SNMP-MIB;

demotraps OBJECT IDENTIFIER ::= { ucdExperimental 990 }

demo-trap TRAP-TYPE
STATUS current
ENTERPRISE demotraps
VARIABLES { sysLocation }
DESCRIPTION "This is just a demo"
::= 17

END

Here is an example for invoking the snmptrap utility (line break added):

snmptrap -v 1 -c public localhost TRAP-TEST-MIB::demotraps \
  "" 6 17 "" sysLocation s "Test message"

The received trap should look like this in the $raw_event field:

2011-12-15 18:21:46 192.168.168.2 INFO SNMP.sysLocation="Test message"

If the MIB definition can not be loaded or parsed, the unresolved OID number will be seen in the message:

2011-12-15 19:43:54 192.168.168.2 INFO OID.1.3.6.1.2.1.1.6="Test message"

See the list of installer packages that provide the xm_snmp module in the Available Modules chapter of the NXLog
User Guide.

120.29.1. Configuration
The xm_snmp module accepts the following directives in addition to the common module directives.

AllowAuthenticatedOnly
This boolean directive specifies whether only authenticated SNMP v3 traps should be accepted. If set to TRUE,
the User block must also be defined, and unauthenticated SNMP traps are not accepted. The default is FALSE:
all SNMP traps are accepted.

MIBDir
This optional directive can be used to define a directory which contains MIB definition files. Multiple MIBDir
directives can be specified.

User
This directive is specified as a block (see Parsing Authenticated and Encrypted SNMP Traps) and provides the
authentication details for an SNMP v3 user. The block must be named with the corresponding user. This
block can be specified more than once to provide authentication details for multiple users.

AuthPasswd
This required directive specifies the authentication password.

AuthProto
This optional directive specifies the authentication protocol to use. Supported values are md5 and sha1. If
this directive is not specified, the default is md5.

EncryptPasswd
This directive specifies the encryption password to use for encrypted traps.

EncryptProto
This optional directive specifies the encryption protocol to use. Supported values are des and aes. The
default, if encryption is in use and this directive is not specified, is des.

120.29.2. Fields
The following fields are used by xm_snmp.

$raw_event (type: string)


For SNMP v1, a string containing the $EventTime, $SNMP.MessageSourceAddress, $Severity, and
$SNMP.TrapNameGeneric fields and a list of key-value pairs. For SNMP v2c and v3, a string containing the
$EventTime and $Severity fields and a list of key-value pairs.

$EventTime (type: datetime)


The reception time of the trap.

$Severity (type: string)


The severity name: INFO (there is no severity in SNMP traps).

$SeverityValue (type: integer)


The INFO severity level value: 2 (because there is no severity in SNMP traps).

$SNMP.CommunityString (type: string)


The community string within the SNMP message.

$SNMP.MessageSourceAddress (type: string)


The IP address of the sender as provided in the trap message. Note that there is a $MessageSourceAddress
field set by the im_udp module. Available in SNMP v1 only.

$SNMP.RequestID (type: integer)


An integer associating the SNMP response with a particular SNMP request.

$SNMP.TrapCodeGeneric (type: integer)


Indicates one of a number of generic trap types. Available in SNMP v1 only.

$SNMP.TrapCodeSpecific (type: integer)
A code value indicating an implementation-specific trap type. Available in SNMP v1 only.

$SNMP.TrapName (type: string)


The resolved name of the object identifier in SNMP.TrapOID. The field will be unset if the OID cannot be
resolved. Available in SNMP v1 only.

$SNMP.TrapNameGeneric (type: string)


The textual representation of SNMP.TrapCodeGeneric, one of: coldStart(0), warmStart(1), linkDown(2),
linkUp(3), authenticationFailure(4), egpNeighborLoss(5), or enterpriseSpecific(6). Available in
SNMP v1 only.

$SNMP.TrapOID (type: string)


The object identifier of the TRAP message. Available in SNMP v1 only.

$sysUptime (type: integer)


The amount of time that has elapsed between the last network reinitialization and generation of the trap.
This name is chosen in accordance with RFC 5424. Available in SNMP v1 only.

120.29.3. Examples
Example 587. Using MIB Definitions to Parse SNMP Traps

The InputType snmp directive in the im_udp module block is required to parse the SNMP payload in the
UDP message.

nxlog.conf
 1 <Extension snmp>
 2 Module xm_snmp
 3 MIBDir /usr/share/mibs/iana
 4 MIBDir /usr/share/mibs/ietf
 5 MIBDir /usr/share/mibs/site
 6 </Extension>
 7
 8 <Input udp>
 9 Module im_udp
10 Host 0.0.0.0
11 Port 162
12 InputType snmp
13 </Input>

Example 588. Parsing Authenticated and Encrypted SNMP Traps

This configuration parses SNMP v3 traps. Only authenticated traps are parsed; a warning is printed for each
non-authenticated source that sends a trap. The User block provides authentication and encryption
settings for the switch1 user.

nxlog.conf
 1 <Extension snmp>
 2 Module xm_snmp
 3 MIBDir /usr/share/mibs/iana
 4 MIBDir /usr/share/mibs/ietf
 5 AllowAuthenticatedOnly TRUE
 6 <User switch1>
 7 AuthPasswd secret
 8 AuthProto sha1
 9 EncryptPasswd secret
10 EncryptProto aes
11 </User>
12 </Extension>
13
14 <Input udp>
15 Module im_udp
16 Host 0.0.0.0
17 Port 162
18 InputType snmp
19 </Input>

120.30. Remote Management (xm_soapadmin)


This module has been superseded by the xm_admin module, which should remain API compatible with the old
xm_soapadmin implementation. For compatibility reasons, the xm_soapadmin module is linked to xm_admin, so
old configuration files remain functional.

NOTE: This module will be completely removed in a future release; please update your configuration files.

120.31. Syslog (xm_syslog)


This module provides support for the legacy BSD Syslog protocol as defined in RFC 3164 and the current IETF
standard defined by RFCs 5424-5426. This is achieved by exporting functions and procedures usable from the
NXLog language. The transport is handled by the respective input and output modules (such as im_udp), this
module only provides a parser and helper functions to create Syslog messages and handle facility and severity
values.

The older but still widespread BSD Syslog standard defines both the format and the transport protocol in RFC
3164. The transport protocol is UDP, but to provide reliability and security, this line-based format is also
commonly transferred over TCP and SSL. There is a newer standard defined in RFC 5424, also known as the IETF
Syslog format, which obsoletes the BSD Syslog format. This format overcomes most of the limitations of BSD
Syslog and allows multi-line messages and proper timestamps. The transport method is defined in RFC 5426 for
UDP and RFC 5425 for TLS/SSL.

Because the IETF Syslog format supports multi-line messages, RFC 5425 defines a special format to encapsulate
these by prepending the payload size in ASCII to the IETF Syslog message. Messages transferred in UDP packets
are self-contained and do not need this additional framing. The following input reader and output writer
functions are provided by the xm_syslog module to support this TLS transport defined in RFC 5425. While RFC
5425 explicitly defines that the TLS network transport protocol is to be used, pure TCP may be used if security is

not a requirement. Syslog messages can also be written to file with this framing format using these functions.

InputType Syslog_TLS
This input reader function parses the payload size and then reads the message according to this value. It is
required to support Syslog TLS transport defined in RFC 5425.

OutputType Syslog_TLS
This output writer function prepends the payload size to the message. It is required to support Syslog TLS
transport defined in RFC 5425.

NOTE: The Syslog_TLS InputType/OutputType can work with any input/output such as im_tcp or im_file and
does not depend on SSL transport at all. The name Syslog_TLS was chosen to refer to the octet-framing
method described in RFC 5425 used for TLS transport.

NOTE: The pm_transformer module can also parse and create BSD and IETF Syslog messages, but the functions
and procedures provided by this module make it possible to solve more complex tasks which pm_transformer
is not capable of on its own.

Structured data in IETF Syslog messages is parsed and put into NXLog fields. The SD-ID will be prepended to the
field name with a dot unless it is NXLOG@XXXX. Consider the following Syslog message:

<30>1 2011-12-04T21:16:10.000000+02:00 host app procid msgid [exampleSDID@32473 eventSource="Application" eventID="1011"] Message part↵

After this IETF-formatted Syslog message is parsed with parse_syslog_ietf(), there will be two additional fields:
$exampleSDID.eventID and $exampleSDID.eventSource. When SD-ID is NXLOG, the field name will be the
same as the SD-PARAM name. The two additional fields extracted from the structured data part of the following
IETF Syslog message are $eventID and $eventSource:

<30>1 2011-12-04T21:16:10.000000+02:00 host app procid msgid [NXLOG@32473 eventSource="Application" eventID="1011"] Message part↵

All fields in the structured data part are parsed as strings.
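Because these fields are parsed as strings, a value such as eventID="1011" must be converted explicitly if numeric handling is needed. As a sketch, assuming the eventID field from the NXLOG@32473 example above and the core integer() conversion function:

Exec parse_syslog_ietf(); $eventID = integer($eventID);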

See the list of installer packages that provide the xm_syslog module in the Available Modules chapter of the
NXLog User Guide.

120.31.1. Configuration
The xm_syslog module accepts the following directives in addition to the common module directives.

IETFTimestampInGMT
This is an alias for the UTCTimestamp directive below.

ReplaceLineBreaks
This optional directive specifies a character with which to replace line breaks in the Syslog message when
generating Syslog events with to_syslog_bsd(), to_syslog_ietf(), and to_syslog_snare(). The default is a space. To
retain line breaks in Syslog messages, set this to \n.
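For example, a hypothetical extension instance that retains line breaks in the generated Syslog messages could be configured as follows (the instance name is illustrative):

<Extension syslog>
    Module            xm_syslog
    ReplaceLineBreaks \n
</Extension>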

SnareDelimiter
This optional directive takes a single character (see below) as argument. This character is used by the
to_syslog_snare() procedure to separate fields. If this directive is not specified, the default delimiter character
is the tab (\t). In later versions of Snare 4 this has changed to the hash mark (#); this directive can be used to
specify the alternative delimiter. Note that there is no delimiter after the last field.
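For example, to produce output for a Snare version that expects the hash mark delimiter, a configuration sketch might look like this (the instance name is illustrative):

<Extension snare>
    Module         xm_syslog
    SnareDelimiter #
</Extension>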

SnareReplacement
This optional directive takes a single character (see below) as argument. This character is used by the

to_syslog_snare() procedure to replace occurrences of the delimiter character inside the $Message field. If this
directive is not specified, the default replacement character is the space.

UTCTimestamp
This optional boolean directive can be used to format the timestamps produced by to_syslog_ietf() in
UTC/GMT instead of local time. The default is FALSE: local time is used with a timezone indicator.

120.31.1.1. Specifying Quote, Escape, and Delimiter Characters


The SnareDelimiter and SnareReplacement directives can be specified in several ways.

Unquoted single character


Any printable character can be specified as an unquoted character, except for the backslash (\):

Delimiter ;

Control characters
The following non-printable characters can be specified with escape sequences:

\a
audible alert (bell)

\b
backspace

\t
horizontal tab

\n
newline

\v
vertical tab

\f
formfeed

\r
carriage return

For example, to use TAB delimiting:

Delimiter \t

A character in single quotes


The configuration parser strips whitespace, so it is not possible to define a space as the delimiter unless it is
enclosed within quotes:

Delimiter ' '

Printable characters can also be enclosed:

Delimiter ';'

The backslash can be specified when enclosed within quotes:

Delimiter '\'

A character in double quotes
Double quotes can be used like single quotes:

Delimiter " "

The backslash can be specified when enclosed within double quotes:

Delimiter "\"

A hexadecimal ASCII code


Hexadecimal ASCII character codes can also be used by prepending 0x. For example, the space can be
specified as:

Delimiter 0x20

This is equivalent to:

Delimiter " "

120.31.2. Functions
The following functions are exported by xm_syslog.

string syslog_facility_string(integer arg)


Convert a Syslog facility value to a string.

integer syslog_facility_value(string arg)


Convert a Syslog facility string to an integer.

string syslog_severity_string(integer arg)


Convert a Syslog severity value to a string.

integer syslog_severity_value(string arg)


Convert a Syslog severity string to an integer.
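These functions are typically called from an Exec directive. The following sketch normalizes a textual severity into its numeric value; the severity string is illustrative:

Exec $SyslogSeverityValue = syslog_severity_value("err");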

120.31.3. Procedures
The following procedures are exported by xm_syslog.

parse_syslog();
Parse the $raw_event field as either BSD Syslog (RFC 3164) or IETF Syslog (RFC 5424) format.

parse_syslog(string source);
Parse the given string as either BSD Syslog (RFC 3164) or IETF Syslog (RFC 5424) format.

parse_syslog_bsd();
Parse the $raw_event field as BSD Syslog (RFC 3164) format.

parse_syslog_bsd(string source);
Parse the given string as BSD Syslog (RFC 3164) format.

parse_syslog_ietf();
Parse the $raw_event field as IETF Syslog (RFC 5424) format.

parse_syslog_ietf(string source);
Parse the given string as IETF Syslog (RFC 5424) format.

to_syslog_bsd();
Create a BSD Syslog formatted log message in $raw_event from the fields of the event. The following fields
are used to construct the $raw_event field: $EventTime; $Hostname; $SourceName; $ProcessID or
$ExecutionProcessID; $Message or $raw_event; $SyslogSeverity, $SyslogSeverityValue, $Severity, or
$SeverityValue; and $SyslogFacility or $SyslogFacilityValue. If the fields are not present, a sensible default is
used.

to_syslog_ietf();
Create an IETF Syslog (RFC 5424) formatted log message in $raw_event from the fields of the event. The
following fields are used to construct the $raw_event field: $EventTime; $Hostname; $SourceName;
$ProcessID or $ExecutionProcessID; $Message or $raw_event; $SyslogSeverity, $SyslogSeverityValue,
$Severity, or $SeverityValue; and $SyslogFacility or $SyslogFacilityValue. If the fields are not present, a
sensible default is used.

to_syslog_snare();
Create a SNARE Syslog formatted log message in $raw_event. The following fields are used to construct the
$raw_event field: $EventTime, $Hostname, $SeverityValue, $FileName, $Channel, $SourceName,
$AccountName, $AccountType, $EventType, $Category, $RecordNumber, and $Message.

120.31.4. Fields
The following fields are used by xm_syslog.

In addition to the fields listed below, the parse_syslog() and parse_syslog_ietf() procedures will create fields from
the Structured Data part of an IETF Syslog message. If the SD-ID in this case is not "NXLOG", these fields will be
prefixed by the SD-ID (for example, $mySDID.CustomField).

$raw_event (type: string)


A Syslog formatted string, set after to_syslog_bsd(), to_syslog_ietf(), or to_syslog_snare() is invoked.

$EventTime (type: datetime)


The timestamp found in the Syslog message, set after parse_syslog(), parse_syslog_bsd(), or
parse_syslog_ietf() is called. If the year value is missing, it is set as described in the core fix_year() function.

$Hostname (type: string)


The hostname part of the Syslog line, set after parse_syslog(), parse_syslog_bsd(), or parse_syslog_ietf() is
called.

$Message (type: string)


The message part of the Syslog line, set after parse_syslog(), parse_syslog_bsd(), or parse_syslog_ietf() is
called.

$MessageID (type: string)


The MSGID part of the syslog message, set after parse_syslog_ietf() is called.

$ProcessID (type: string)


The process ID in the Syslog line, set after parse_syslog(), parse_syslog_bsd(), or parse_syslog_ietf() is called.

$Severity (type: string)


The normalized severity name of the event. See $SeverityValue.

$SeverityValue (type: integer)
The normalized severity number of the event, mapped as follows.

Syslog Severity  Normalized Severity
0/emerg          5/critical
1/alert          5/critical
2/crit           5/critical
3/err            4/error
4/warning        3/warning
5/notice         2/info
6/info           2/info
7/debug          1/debug

$SourceName (type: string)


The application/program part of the Syslog line, set after parse_syslog(), parse_syslog_bsd(), or
parse_syslog_ietf() is called.

$SyslogFacility (type: string)


The facility name of the Syslog line, set after parse_syslog(), parse_syslog_bsd(), or parse_syslog_ietf() is called.
The default facility is user.

$SyslogFacilityValue (type: integer)


The facility code of the Syslog line, set after parse_syslog(), parse_syslog_bsd(), or parse_syslog_ietf() is called.
The default facility is 1 (user).

$SyslogSeverity (type: string)


The severity name of the Syslog line, set after parse_syslog(), parse_syslog_bsd(), or parse_syslog_ietf() is
called. The default severity is notice. See $SeverityValue.

$SyslogSeverityValue (type: integer)


The severity code of the Syslog line, set after parse_syslog(), parse_syslog_bsd(), or parse_syslog_ietf() is
called. The default severity is 5 (notice). See $SeverityValue.

120.31.5. Examples

Example 589. Sending a File as BSD Syslog over UDP

In this example, logs are collected from files, converted to BSD Syslog format with the to_syslog_bsd()
procedure, and sent over UDP with the om_udp module.

nxlog.conf (truncated)
 1 <Extension syslog>
 2 Module xm_syslog
 3 </Extension>
 4
 5 <Input file>
 6 Module im_file
 7
 8 # We monitor all files matching the wildcard.
 9 # Every line is read into the $raw_event field.
10 File "/var/log/app*.log"
11
12 <Exec>
13 # Set the $EventTime field usually found in the logs by
14 # extracting it with a regexp. If this is not set, the current
15 # system time will be used which might be a little off.
16 if $raw_event =~ /(\d\d\d\d\-\d\d-\d\d \d\d:\d\d:\d\d)/
17 {
18 $EventTime = parsedate($1);
19 }
20
21 # Now set the severity to something custom. This defaults to
22 # 'INFO' if unset.
23 if $raw_event =~ /ERROR/ $Severity = 'ERROR';
24 else $Severity = 'INFO';
25
26 # The facility can be also set, otherwise the default value is
27 # 'USER'.
28 $SyslogFacility = 'AUDIT';
29 [...]

Example 590. Collecting BSD Style Syslog Messages over UDP

To collect BSD Syslog messages over UDP, use the parse_syslog_bsd() procedure coupled with the im_udp
module as in the following example.

nxlog.conf
 1 <Extension syslog>
 2 Module xm_syslog
 3 </Extension>
 4
 5 <Input udp>
 6 Module im_udp
 7 Host 0.0.0.0
 8 Port 514
 9 Exec parse_syslog_bsd();
10 </Input>
11
12 <Output file>
13 Module om_file
14 File "/var/log/logmsg.txt"
15 </Output>
16
17 <Route syslog_to_file>
18 Path udp => file
19 </Route>

Example 591. Collecting IETF Style Syslog Messages over UDP

To collect IETF Syslog messages over UDP as defined by RFC 5424 and RFC 5426, use the parse_syslog_ietf()
procedure coupled with the im_udp module as in the following example. Note that, as for BSD Syslog, the
default port is 514 (as defined by RFC 5426).

nxlog.conf
 1 <Extension syslog>
 2 Module xm_syslog
 3 </Extension>
 4
 5 <Input ietf>
 6 Module im_udp
 7 Host 0.0.0.0
 8 Port 514
 9 Exec parse_syslog_ietf();
10 </Input>
11
12 <Output file>
13 Module om_file
14 File "/var/log/logmsg.txt"
15 </Output>
16
17 <Route ietf_to_file>
18 Path ietf => file
19 </Route>

Example 592. Collecting Both IETF and BSD Syslog Messages over the Same UDP Port

To collect both IETF and BSD Syslog messages over UDP, use the parse_syslog() procedure coupled with the
im_udp module as in the following example. This procedure is capable of detecting and parsing both Syslog
formats. Since 514 is the default UDP port number for both BSD and IETF Syslog, this port can be useful to
collect both formats simultaneously. To accept both formats on different ports, the appropriate parsers can
be used as in the previous two examples.

nxlog.conf
 1 <Extension syslog>
 2 Module xm_syslog
 3 </Extension>
 4
 5 <Input udp>
 6 Module im_udp
 7 Host 0.0.0.0
 8 Port 514
 9 Exec parse_syslog();
10 </Input>
11
12 <Output file>
13 Module om_file
14 File "/var/log/logmsg.txt"
15 </Output>
16
17 <Route syslog_to_file>
18 Path udp => file
19 </Route>

Example 593. Collecting IETF Syslog Messages over TLS/SSL

To collect IETF Syslog messages over TLS/SSL as defined by RFC 5424 and RFC 5425, use the
parse_syslog_ietf() procedure coupled with the im_ssl module as in this example. Note that the default port
is 6514 in this case (as defined by RFC 5425). The payload format parser is handled by the Syslog_TLS input
reader.

nxlog.conf
 1 <Extension syslog>
 2 Module xm_syslog
 3 </Extension>
 4
 5 <Input ssl>
 6 Module im_ssl
 7 Host localhost
 8 Port 6514
 9 CAFile %CERTDIR%/ca.pem
10 CertFile %CERTDIR%/client-cert.pem
11 CertKeyFile %CERTDIR%/client-key.pem
12 KeyPass secret
13 InputType Syslog_TLS
14 Exec parse_syslog_ietf();
15 </Input>
16
17 <Output file>
18 Module om_file
19 File "/var/log/logmsg.txt"
20 </Output>
21
22 <Route ssl_to_file>
23 Path ssl => file
24 </Route>

Example 594. Forwarding IETF Syslog over TCP

The following configuration uses the to_syslog_ietf() procedure to convert input to IETF Syslog and forward
it over TCP.

nxlog.conf
 1 <Extension syslog>
 2 Module xm_syslog
 3 </Extension>
 4
 5 <Input file>
 6 Module im_file
 7 File "/var/log/input.txt"
 8 Exec $TestField = "test value"; $Message = $raw_event;
 9 </Input>
10
11 <Output tcp>
12 Module om_tcp
13 Host 127.0.0.1
14 Port 1514
15 Exec to_syslog_ietf();
16 OutputType Syslog_TLS
17 </Output>
18
19 <Route file_to_syslog>
20 Path file => tcp
21 </Route>

Because of the Syslog_TLS framing, the raw data sent over TCP will look like the following.

Output Sample
130 <13>1 2012-01-01T16:15:52.873750Z - - - [NXLOG@14506 EventReceivedTime="2012-01-01 17:15:52" TestField="test value"] test message↵

This example shows that all fields—except those which are filled by the Syslog parser—are added to the
structured data part.

Example 595. Conditional Rewrite of the Syslog Severity—Version 1

If the message part of the Syslog event matches the regular expression, the $SeverityValue field will be
set to the "error" Syslog severity integer value (which is provided by the syslog_severity_value() function).

nxlog.conf
 1 <Extension syslog>
 2 Module xm_syslog
 3 </Extension>
 4
 5 <Input udp>
 6 Module im_udp
 7 Port 514
 8 Host 0.0.0.0
 9 Exec parse_syslog_bsd();
10 </Input>
11
12 <Output file>
13 Module om_file
14 File "/var/log/logmsg.txt"
15 Exec if $Message =~ /error/ $SeverityValue = syslog_severity_value("error");
16 Exec to_syslog_bsd();
17 </Output>
18
19 <Route syslog_to_file>
20 Path udp => file
21 </Route>

Example 596. Conditional Rewrite of the Syslog Severity—Version 2

The following example does almost the same thing as the previous example, except that the Syslog parsing
and rewrite is moved to a processor module and the rewrite only occurs if the severity was modified. This
can make processing faster on multi-core systems because the processor module runs in a separate
thread. This method can also minimize UDP packet loss because the input module does not need to parse
Syslog messages and therefore can process UDP packets faster.

nxlog.conf
 1 <Extension syslog>
 2 Module xm_syslog
 3 </Extension>
 4
 5 <Input udp>
 6 Module im_udp
 7 Host 0.0.0.0
 8 Port 514
 9 </Input>
10
11 <Processor rewrite>
12 Module pm_null
13 <Exec>
14 parse_syslog_bsd();
15 if $Message =~ /error/
16 {
17 $SeverityValue = syslog_severity_value("error");
18 to_syslog_bsd();
19 }
20 </Exec>
21 </Processor>
22
23 <Output file>
24 Module om_file
25 File "/var/log/logmsg.txt"
26 </Output>
27
28 <Route syslog_to_file>
29 Path udp => rewrite => file
30 </Route>

120.32. W3C (xm_w3c)


This module provides a parser that can process data in the W3C Extended Log File Format. It also understands
the BRO format and Microsoft Exchange Message Tracking logs. While the xm_csv module can be used to parse
these formats, xm_w3c has the advantage of automatically extracting information from the headers. This makes
it much easier to parse such log files without the need to explicitly define the fields that appear in the input.

A common W3C log source is Microsoft IIS, which produces output like the following:

#Software: Microsoft Internet Information Services 7.0↵
#Version: 1.0↵
#Date: 2010-02-13 07:08:22↵
#Fields: date time s-sitename s-computername s-ip cs-method cs-uri-stem cs-uri-query s-port cs-username c-ip cs-version cs(User-Agent) cs(Cookie) cs(Referer) cs-host sc-status sc-substatus sc-win32-status sc-bytes cs-bytes time-taken↵
2010-02-13 07:08:21 W3SVC76 DNP1WEB1 174.120.30.2 GET / - 80 - 61.135.169.37 HTTP/1.1 Mozilla/5.0+(Windows;+U;+Windows+NT+5.1;+zh-CN;+rv:1.9.0.1)+Gecko/2008070208+Firefox/3.0.1 - http://www.baidu.com/s?wd=QQ www.domain.com 200 0 0 29554 273 1452↵
2010-02-13 07:25:00 W3SVC76 DNP1WEB1 174.120.30.2 GET /index.htm - 80 - 119.63.198.110 HTTP/1.1 Baiduspider+(+http://www.baidu.jp/spider/) - - www.itcsoftware.com 200 0 0 17791 210 551↵

The format generated by BRO is similar, as it too defines the field names in the header. The field types and
separator characters are also specified in the header. This allows the parser to automatically process the data.
Below is a sample from BRO:

#separator \x09↵
#set_separator ,↵
#empty_field (empty)↵
#unset_field -↵
#path dns↵
#open 2013-04-09-21-01-43↵
#fields ts uid id.orig_h id.orig_p id.resp_h id.resp_p proto trans_id query qclass qclass_name qtype qtype_name rcode rcode_name AA TC RD RA Z answers TTLs↵
#types time string addr port addr port enum count string count string count string count string bool bool bool bool count vector[string] vector[interval]↵
1210953058.350065 m2EJRWK7sCg 192.168.2.16 1920 192.168.2.1 53 udp 16995 ipv6.google.com 1 C_INTERNET 28 AAAA 0 NOERROR F F T T 0 ipv6.l.google.com,2001:4860:0:2001::68 8655.000000,300.000000↵
1210953058.350065 m2EJRWK7sCg 192.168.2.16 1920 192.168.2.1 53 udp 16995 ipv6.google.com 1 C_INTERNET 28 AAAA 0 NOERROR F F T T 0 ipv6.l.google.com,2001:4860:0:2001::68 8655.000000,300.000000↵

To use the parser in an input module, the InputType directive must reference the instance name of the xm_w3c
module. See the example below.

See the list of installer packages that provide the xm_w3c module in the Available Modules chapter of the NXLog
User Guide.

120.32.1. Configuration
The xm_w3c module accepts the following directives in addition to the common module directives.

Delimiter
This optional directive takes a single character (see below) as argument to specify the delimiter character
used to separate fields. If this directive is not specified, the default delimiter character is either the space or
tab character, as detected. For Microsoft Exchange Message Tracking logs the comma must be set as the
delimiter:

Delimiter ,

Note that there is no delimiter after the last field in W3C, but Microsoft Exchange Message Tracking logs can
contain a trailing comma.

FieldType
This optional directive can be used to specify a field type for a particular field. For example, to parse a
ByteSent field as an integer, use FieldType ByteSent integer. This directive can be used more than once
to provide types for multiple fields.
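As a sketch, the byte counter fields from the IIS sample above could be typed as integers like this (the field names are taken from that sample's #Fields header; adjust them to the actual input):

<Extension w3c_parser>
    Module    xm_w3c
    FieldType sc-bytes integer
    FieldType cs-bytes integer
</Extension>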

120.32.1.1. Specifying Quote, Escape, and Delimiter Characters


The Delimiter directive can be specified in several ways.

Unquoted single character


Any printable character can be specified as an unquoted character, except for the backslash (\):

Delimiter ;

Control characters
The following non-printable characters can be specified with escape sequences:

\a
audible alert (bell)

\b
backspace

\t
horizontal tab

\n
newline

\v
vertical tab

\f
formfeed

\r
carriage return

For example, to use TAB delimiting:

Delimiter \t

A character in single quotes


The configuration parser strips whitespace, so it is not possible to define a space as the delimiter unless it is
enclosed within quotes:

Delimiter ' '

Printable characters can also be enclosed:

Delimiter ';'

The backslash can be specified when enclosed within quotes:

Delimiter '\'

A character in double quotes
Double quotes can be used like single quotes:

Delimiter " "

The backslash can be specified when enclosed within double quotes:

Delimiter "\"

A hexadecimal ASCII code


Hexadecimal ASCII character codes can also be used by prepending 0x. For example, the space can be
specified as:

Delimiter 0x20

This is equivalent to:

Delimiter " "

120.32.2. Fields
The following fields are used by xm_w3c.

$EventTime (type: datetime)


Constructed from the date and time fields in the input, or from a date-time field.

$SourceName (type: string)


The string in the Software header, such as Microsoft Internet Information Services 7.0.

120.32.3. Examples

Example 597. Parsing Advanced IIS Logs

The following configuration parses logs from the IIS Advanced Logging Module using the pipe delimiter. The
logs are converted to JSON.

nxlog.conf
 1 <Extension json>
 2 Module xm_json
 3 </Extension>
 4
 5 <Extension w3cinput>
 6 Module xm_w3c
 7 Delimiter |
 8 </Extension>
 9
10 <Input w3c>
11 Module im_file
12 File 'C:\inetpub\logs\LogFiles\W3SVC\ex*.log'
13 InputType w3cinput
14 </Input>
15
16 <Output file>
17 Module om_file
18 File 'C:\test\IIS.json'
19 Exec to_json();
20 </Output>
21
22 <Route w3c_to_json>
23 Path w3c => file
24 </Route>

120.33. WTMP (xm_wtmp)


This module provides a parser function to process binary wtmp files. The module registers a parser function
using the name of the extension module instance. This parser can be used as a parameter for the InputType
directive in input modules such as im_file.

See the list of installer packages that provide the xm_wtmp module in the Available Modules chapter of the
NXLog User Guide.

120.33.1. Configuration
The xm_wtmp module accepts only the common module directives.

120.33.2. Examples

Example 598. WTMP to JSON Format Conversion

The following configuration accepts WTMP and converts it to JSON.

nxlog.conf
 1 <Extension wtmp>
 2 Module xm_wtmp
 3 </Extension>
 4
 5 <Extension json>
 6 Module xm_json
 7 </Extension>
 8
 9 <Input in>
10 Module im_file
11 File '/var/log/wtmp'
12 InputType wtmp
13 Exec to_json();
14 </Input>
15
16 <Output out>
17 Module om_file
18 File '/var/log/wtmp.txt'
19 </Output>
20
21 <Route processwtmp>
22 Path in => out
23 </Route>

Output Sample
{
  "EventTime":"2013-10-01 09:39:59",
  "AccountName":"root",
  "Device":"pts/1",
  "LoginType":"login",
  "EventReceivedTime":"2013-10-10 15:40:20",
  "SourceModuleName":"input",
  "SourceModuleType":"im_file"
}
{
  "EventTime":"2013-10-01 23:23:38",
  "AccountName":"shutdown",
  "Device":"no device",
  "LoginType":"shutdown",
  "EventReceivedTime":"2013-10-11 10:58:00",
  "SourceModuleName":"input",
  "SourceModuleType":"im_file"
}

120.34. XML (xm_xml)


This module provides functions and procedures for working with data formatted as Extensible Markup Language
(XML). It can convert log messages to XML format and can parse XML into fields.

See the list of installer packages that provide the xm_xml module in the Available Modules chapter of the NXLog
User Guide.

120.34.1. Configuration
The xm_xml module accepts the following directives in addition to the common module directives.

IgnoreRootTag
This optional boolean directive causes parse_xml() to omit the root tag when setting field names. For
example, when this is set to TRUE and the RootTag is set to Event, a field might be named
$Event.timestamp. With this directive set to FALSE, that field name would be $timestamp. The default value
is TRUE.

IncludeHiddenFields
This boolean directive specifies whether the to_xml() function and the to_xml() procedure should include
fields having a leading underscore (_) in their names. The default is TRUE. If IncludeHiddenFields is set to
TRUE, the generated XML will contain these otherwise excluded fields.

Note that a leading dot (.) is not allowed in XML attribute names; thus, field names with a leading dot will
always be excluded from XML output.
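For example, to exclude fields with a leading underscore from the generated XML, the directive can be set to FALSE (a sketch; the instance name is illustrative):

<Extension xml>
    Module              xm_xml
    IncludeHiddenFields FALSE
</Extension>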

ParseAttributes
When this optional boolean directive is set to TRUE, parse_xml() will also parse XML attributes. The default is
FALSE (attributes are not parsed). For example, if ParseAttributes is set to TRUE, the following would be
parsed into $Msg.time, $Msg.type, and $Msg:

<Msg time='2014-06-27T00:27:38' type='ERROR'>foo</Msg>

RootTag
This optional directive can be used to specify the name of the root tag that will be used by to_xml() to
generate XML. The default RootTag is Event.

PrefixWinEvent
When this optional boolean directive is set to TRUE, parse_windows_eventlog_xml() will create fields
prefixed with EventData. from the <EventData> section of the event XML and fields prefixed with UserData.
from the <UserData> section. The default is FALSE.
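The directives above can be combined in a configuration like the following sketch (the extension and input instance names and the input file path are hypothetical). With attribute parsing enabled, the sample <Msg> element shown earlier would be parsed into $Msg.time, $Msg.type, and $Msg:

```
<Extension xml>
    Module          xm_xml
    ParseAttributes TRUE
</Extension>

<Input in>
    Module  im_file
    File    '/var/log/app.xml'
    Exec    parse_xml();
</Input>
```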

120.34.2. Functions
The following functions are exported by xm_xml.

string to_xml()
Converts the fields to XML and returns this as a string value. The $raw_event field and any field having a
leading dot (.) or underscore (_) will be automatically excluded.

Note that the IncludeHiddenFields directive affects which fields are included in the output.

120.34.3. Procedures
The following procedures are exported by xm_xml.

parse_windows_eventlog_xml();
Parse the $raw_event field as Windows EventLog XML input.

parse_windows_eventlog_xml(string source);
Parse the given string as Windows EventLog XML.

parse_xml();
Parse the $raw_event field as XML input.

parse_xml(string source);
Parse the given string as XML format.

to_xml();
Convert the fields to XML and put this into the $raw_event field. The $raw_event field and any field having a
leading dot (.) or underscore (_) will be automatically excluded.

Note that the IncludeHiddenFields directive affects which fields are included in the output.

120.34.4. Examples

Example 599. Syslog to XML Format Conversion

The following configuration accepts Syslog (both BSD and IETF) and converts it to XML.

nxlog.conf
 1 <Extension syslog>
 2 Module xm_syslog
 3 </Extension>
 4
 5 <Extension xml>
 6 Module xm_xml
 7 </Extension>
 8
 9 <Input tcp>
10 Module im_tcp
11 Port 1514
12 Host 0.0.0.0
13 Exec parse_syslog(); to_xml();
14 </Input>
15
16 <Output file>
17 Module om_file
18 File "/var/log/log.xml"
19 </Output>
20
21 <Route tcp_to_file>
22 Path tcp => file
23 </Route>

Input Sample
<30>Sep 30 15:45:43 host44.localdomain.hu acpid: 1 client rule loaded↵

Output Sample
<Event>
  <MessageSourceAddress>127.0.0.1</MessageSourceAddress>
  <EventReceivedTime>2012-03-08 15:05:39</EventReceivedTime>
  <SyslogFacilityValue>3</SyslogFacilityValue>
  <SyslogFacility>DAEMON</SyslogFacility>
  <SyslogSeverityValue>6</SyslogSeverityValue>
  <SyslogSeverity>INFO</SyslogSeverity>
  <SeverityValue>2</SeverityValue>
  <Severity>INFO</Severity>
  <Hostname>host44.localdomain.hu</Hostname>
  <EventTime>2012-09-30 15:45:43</EventTime>
  <SourceName>acpid</SourceName>
  <Message>1 client rule loaded</Message>
</Event>

Example 600. Converting Windows EventLog to Syslog-Encapsulated XML

The following configuration reads the Windows EventLog and converts it to the BSD Syslog format where
the message part contains the fields in XML.

nxlog.conf
 1 <Extension syslog>
 2 Module xm_syslog
 3 </Extension>
 4
 5 <Extension xml>
 6 Module xm_xml
 7 </Extension>
 8
 9 <Input eventlog>
10 Module im_msvistalog
11 Exec $Message = to_xml(); to_syslog_bsd();
12 </Input>
13
14 <Output tcp>
15 Module om_tcp
16 Host 192.168.1.1
17 Port 1514
18 </Output>
19
20 <Route eventlog_to_tcp>
21 Path eventlog => tcp
22 </Route>

Output Sample
<14>Mar 8 15:12:12 WIN-OUNNPISDHIG Service_Control_Manager: <Event><EventTime>2012-03-08
15:12:12</EventTime><EventTimeWritten>2012-03-08 15:12:12</EventTimeWritten><Hostname>WIN-
OUNNPISDHIG</Hostname><EventType>INFO</EventType><SeverityValue>2</SeverityValue><Severity>INFO
</Severity><SourceName>Service Control
Manager</SourceName><FileName>System</FileName><EventID>7036</EventID><CategoryNumber>0</Catego
ryNumber><RecordNumber>6791</RecordNumber><Message>The nxlog service entered the running state.
</Message><EventReceivedTime>2012-03-08 15:12:14</EventReceivedTime></Event>↵

120.35. Compression (xm_zlib)


This module provides stream processors for compressing and decompressing log data using the gzip data
format defined in RFC 1952 and the zlib format defined in RFC 1950. To decompress input data, stream
processors are defined within the im_file module instances. To compress output data, they are specified within
the om_file module instances. Functionality of the xm_zlib module can be combined with other stream
processors such as xm_crypto.

120.35.1. Configuration
The xm_zlib module accepts the following directives in addition to the common module directives.

Format
This optional directive defines the algorithm for compressing and decompressing log data. The available
values are gzip and zlib; the default value is gzip.

CompressionLevel
This optional directive defines the level of compression and ranges between 0 and 9. A value of 0 provides
the least compression but the best performance, while 9 provides the highest level of compression but the
lowest performance. If this directive is not specified, the compression level defaults to the default of the
zlib library, which is usually 6.

CompBufSize
This optional directive defines the size of the compression memory buffer, in bytes. The minimum value is
8192 bytes. The default value is 16384.

DecompBufSize
This optional directive defines the size of the decompression memory buffer, in bytes. The minimum value is
16384 bytes. The default value is 32768.

DataType
This optional directive defines the data type used by the compress stream processor. Specifying data type
improves compression results. The available values are unknown, text, and binary; the default value is
unknown.

MemoryLevel
This optional directive defines the available amount of compression memory and accepts values between 1
and 9. The default value is 8.

120.35.2. Stream Processors


The xm_zlib module implements the following stream processors for compression operations over log data.

compress
This stream processor compresses log data and is specified in the OutputType directive after the output
writer function. The result is similar to running the following command:

printf "\x1f\x8b\x08\x00\x00\x00\x00\x00" | cat - input_file | gzip -c > compressed_file

decompress
This stream processor decompresses log data and is specified in the InputType directive before the input
reader function. The result is similar to running the following command:

printf "\x1f\x8b\x08\x00\x00\x00\x00\x00" | cat - compressed_file | gzip -dc
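Outside of NXLog, the gzip round trip that these stream processors rely on can be sketched with standard tools (the file names below are illustrative, not part of NXLog):

```shell
# Write a sample log file, compress it in the gzip format,
# then decompress it and verify the original bytes are restored.
tmpdir=$(mktemp -d)
printf 'first log line\nsecond log line\n' > "$tmpdir/input_file"
gzip -c "$tmpdir/input_file" > "$tmpdir/compressed_file"
gzip -dc "$tmpdir/compressed_file" > "$tmpdir/restored_file"
cmp "$tmpdir/input_file" "$tmpdir/restored_file" && echo 'round trip OK'
```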

120.35.3. Examples
The examples below describe various ways for processing logs with the xm_zlib module.

Example 601. Compression of Logs

The configuration below utilizes the im_systemd module to read Systemd messages and convert them to
JSON using the to_json() procedure of the xm_json module. The JSON-formatted messages are then
compressed using the compress stream processor. The result is saved to a file.

nxlog.conf
 1 <Extension zlib>
 2 Module xm_zlib
 3 Format gzip
 4 CompressionLevel 9
 5 CompBufSize 16384
 6 DecompBufSize 16384
 7 </Extension>
 8
 9 <Extension _json>
10 Module xm_json
11 </Extension>
12
13 <Input in>
14 Module im_systemd
15 Exec to_json();
16 </Input>
17
18 <Output out>
19 Module om_file
20 OutputType LineBased, zlib.compress
21 File '/tmp/output'
22 </Output>

Example 602. Decompression of Logs

The following configuration uses the decompress stream processor to process gzip-compressed messages
at the input. The result is saved to a file.

nxlog.conf
 1 <Extension zlib>
 2 Module xm_zlib
 3 Format gzip
 4 CompressionLevel 9
 5 CompBufSize 16384
 6 DecompBufSize 16384
 7 </Extension>
 8
 9 <Input in>
10 Module im_file
11 File '/tmp/input'
12 InputType zlib.decompress, LineBased
13 </Input>
14
15 <Output out>
16 Module om_file
17 File '/tmp/output'
18 </Output>

The xm_zlib module can process data via a single or multiple instances.

Multiple instances provide flexibility because each instance can be customized for a specific scenario, while
using a single instance makes the configuration shorter.

Example 603. Processing Data With Multiple Module Instances

The configuration below uses the zlib1 instance of the xm_zlib module to decompress gzip-compressed data
at the input. After that, messages are converted to JSON using the xm_json module. The JSON data is then
compressed to a zlib format using the zlib2 instance of the xm_zlib module. The result is saved to a file.

nxlog.conf
 1 <Extension zlib1>
 2 Module xm_zlib
 3 Format gzip
 4 CompressionLevel 9
 5 CompBufSize 16384
 6 DecompBufSize 16384
 7 </Extension>
 8
 9 <Extension zlib2>
10 Module xm_zlib
11 Format zlib
12 CompressionLevel 3
13 CompBufSize 64000
14 DecompBufSize 64000
15 </Extension>
16
17 <Extension _json>
18 Module xm_json
19 </Extension>
20
21 <Input in>
22 Module im_file
23 File '/tmp/input'
24 InputType zlib1.decompress, LineBased
25 Exec to_json();
26 </Input>
27
28 <Output out>
29 Module om_file
30 File '/tmp/output'
31 OutputType LineBased, zlib2.compress
32 </Output>

Example 604. Processing Data With a Single Module Instance

The configuration below uses a single zlib1 module instance to decompress gzip-compressed messages via
the decompress stream processor and convert them to IETF Syslog format via the to_syslog_ietf() procedure
in the Exec directive. It then compresses logs using the compress processor. The result is saved to a file.

nxlog.conf
 1 <Extension zlib1>
 2 Module xm_zlib
 3 </Extension>
 4
 5 <Extension _syslog>
 6 Module xm_syslog
 7 </Extension>
 8
 9 <Input in>
10 Module im_file
11 File '/tmp/input'
12 InputType zlib1.decompress, LineBased
13 Exec to_syslog_ietf();
14 </Input>
15
16 <Output out>
17 Module om_file
18 File '/tmp/output'
19 OutputType LineBased, zlib1.compress
20 </Output>

The InputType and OutputType directives provide sequential usage of multiple stream processors to create
workflows. For example, the xm_zlib module functionality can be combined with the xm_crypto module to
provide compression and encryption operations of logs.

While configuring stream processors, compression should always precede encryption. In the opposite process,
decryption should always occur before decompression.

Example 605. Processing Data With Various Stream Processors

The configuration below uses the aes_decrypt stream processor of the xm_crypto module to decrypt, and
the decompress stream processor of the xm_zlib module to decompress log data. Using the Exec directive,
messages with the stdout string in their body are selected. The selected messages are then compressed and
encrypted with the compress and aes_encrypt stream processors. The result is saved to a file.

nxlog.conf
 1 <Extension zlib>
 2 Module xm_zlib
 3 Format gzip
 4 CompressionLevel 9
 5 CompBufSize 16384
 6 DecompBufSize 16384
 7 </Extension>
 8
 9 <Extension crypto>
10 Module xm_crypto
11 UseSalt TRUE
12 PasswordFile /tmp/passwordfile
13 </Extension>
14
15 <Input in>
16 Module im_file
17 File '/tmp/input'
18 InputType crypto.aes_decrypt, zlib.decompress, LineBased
19 Exec if not ($raw_event =~ /stdout/) drop();
20 </Input>
21
22 <Output out>
23 Module om_file
24 File '/tmp/output'
25 OutputType LineBased, zlib.compress, crypto.aes_encrypt
26 </Output>

Chapter 121. Input Modules
Input modules are responsible for collecting event log data from various sources.

Each module provides a set of fields for each log message; these are documented in the corresponding sections
below. The NXLog core creates a set of core fields which are available to each module.

121.1. Process Accounting (im_acct)


This module can be used to collect process accounting logs from a Linux or BSD kernel.

See the list of installer packages that provide the im_acct module in the Available Modules chapter of the NXLog
User Guide.

121.1.1. Configuration
The im_acct module accepts the following directives in addition to the common module directives.

AcctOff
This boolean directive specifies that accounting should be disabled when im_acct stops. If AcctOff is set to
FALSE, accounting will not be disabled; events will continue to be written to the log file for NXLog to collect
later. The default is FALSE.

AcctOn
This boolean directive specifies that accounting should be enabled when im_acct starts. If AcctOn is set to
FALSE, accounting will not be enabled automatically. The default is TRUE.

File
This optional directive specifies the path where the kernel writes accounting data.

FileSizeLimit
NXLog will automatically truncate the log file when it reaches this size, specified as an integer in bytes (see
Integer). The default is 1 MB.

121.1.2. Fields
The following fields are used by im_acct.

$raw_event (type: string)


A string containing a list of key/value pairs from the event.

$CharactersTransferred (type: string)


The number of characters transferred.

$Command (type: string)


The first 16 characters of the command name.

$CompatFlag (type: boolean)


Set to TRUE if a COMPAT flag is associated with the process event (used compatibility mode).

$CoreDumpedFlag (type: boolean)


Set to TRUE if a CORE flag is associated with the process event (dumped core).

$EventTime (type: datetime)

The process start time.

$ExitCode (type: integer)


The process exit code.

$ForkFlag (type: boolean)


Set to TRUE if a FORK flag is associated with the process event (has executed fork, but no exec).

$Group (type: string)


The system group corresponding to the $GroupID.

$GroupID (type: integer)


The group ID of the process.

$MajorPageFaults (type: string)


The number of major page faults.

$MemoryUsage (type: integer)


The average memory usage of the process (on BSD).

$MemoryUsage (type: string)


The average memory usage of the process (on Linux).

$MinorPageFaults (type: string)


The number of minor page faults.

$RealTime (type: string)


The total elapsed time.

$RWBlocks (type: string)


The number of blocks read or written.

$Severity (type: string)


The severity name: INFO.

$SeverityValue (type: integer)


The INFO severity level value: 2.

$SuFlag (type: boolean)


Set to TRUE if a SU flag is associated with the process event (used superuser privileges).

$SysTime (type: string)


The total system processing time elapsed.

$User (type: string)


The system user corresponding to the $UserID.

$UserID (type: integer)


The user ID of the process.

$UserTime (type: string)


The total user processing time elapsed.

$XSIGFlag (type: boolean)
Set to TRUE if an XSIG flag is associated with the process event (killed by a signal).

121.1.3. Examples
Example 606. Collecting Process Accounting Logs

With this configuration, the im_acct module will collect process accounting logs. Process accounting will be
automatically enabled and configured to write logs to the file specified. NXLog will allow the file to grow to a
maximum size of 10 MB before truncating it.

nxlog.conf
1 <Input acct>
2 Module im_acct
3 File '/var/log/acct.log'
4 FileSizeLimit 10M
5 </Input>

121.2. AIX Auditing (im_aixaudit)


This module parses events in the AIX Audit format. It reads directly from the kernel. See also
xm_aixaudit.

See the list of installer packages that provide the im_aixaudit module in the Available Modules chapter of the
NXLog User Guide.

121.2.1. Configuration
The im_aixaudit module accepts the following directives in addition to the common module directives.

DeviceFile
This optional directive specifies the device file from which to read audit events. If this is not specified, it
defaults to /dev/audit.

EventsConfigFile
This optional directive contains the path to the file with a list of audit events. This file should contain events in
AuditEvent = FormatCommand format. The AuditEvent is a reference to the audit object which is defined
under the /etc/security/audit/objects path. The FormatCommand defines the auditpr output for the
object. For more information, see The Audit Subsystem in AIX section on the IBM website.

121.2.2. Fields
See the xm_aixaudit Fields.

121.2.3. Examples

Example 607. Reading AIX Audit Events From the Kernel

This configuration reads AIX audit events directly from the kernel via the (default) /dev/audit device file.

nxlog.conf
1 <Input in>
2 Module im_aixaudit
3 DeviceFile /dev/audit
4 </Input>

121.3. Azure (im_azure)


This module can be used to collect logs from Microsoft Azure applications.

See the list of installer packages that provide the im_azure module in the Available Modules chapter of the NXLog
User Guide.

121.3.1. Storage Setup


Azure web application logging and storage can be configured with the Azure Management Portal.

1. After logging in to the Portal, click New on the left panel, select the Storage category, and choose the
Storage account - blob, file, table, queue.
2. Create the new storage account. Provide a storage name, location, and replication type.
3. Click [ Create Storage Account ] and wait for storage setup to complete.
4. Go to Apps, select the application for which to enable logging, and click Configure.
5. Scroll down to the application diagnostic section and configure the table and blob storage options
corresponding with the storage account created above.
6. Confirm the changes by clicking Save, then restart the service. Note that it may take a while for Azure to
create the table and/or blob in the storage.

121.3.2. Configuration
The im_azure module accepts the following directives in addition to the common module directives. The AuthKey
and StorageName directives are required, along with either BlobName or TableName.

AuthKey
This mandatory directive specifies the authentication key to use for connecting to Azure.

BlobName
This directive specifies the storage blob to connect to. One of BlobName and TableName must be defined
(but not both).

SSLCompression
This Boolean directive allows you to enable data compression when sending data over the network. The
compression mechanism is based on the zlib compression library. If the directive is not specified, it defaults
to FALSE (the compression is disabled).

NOTE
Some Linux packages (for example, Debian) use the OpenSSL library provided by the OS and
may not support the zlib compression mechanism. The module will emit a warning on startup
if the compression support is missing. The generic deb/rpm packages are bundled with a
zlib-enabled libssl library.

StorageName
This mandatory directive specifies the name of the storage account from which to collect logs.

TableName
This directive specifies the storage table to connect to. One of BlobName and TableName must be defined
(but not both).

Address
This directive specifies the URL for connecting to the storage account and corresponding table or blob. If this
directive is not specified, it defaults to http://<table|blob>.<storagename>.core.windows.net. If
defined, the value must start with http:// or https://.

HTTPSAllowUntrusted
This boolean directive specifies that the remote connection should be allowed without certificate verification.
If set to TRUE the remote will be able to connect with an unknown or self-signed certificate. The default value
is FALSE: all HTTPS connections must present a trusted certificate.

HTTPSCADir
This specifies the path to a directory containing certificate authority (CA) certificates, which will be used to
check the certificate of the remote HTTPS client. The certificate filenames in this directory must be in the
OpenSSL hashed format. A remote’s self-signed certificate (which is not signed by a CA) can also be trusted by
including a copy of the certificate in this directory.

HTTPSCAFile
This specifies the path of the certificate authority (CA) certificate, which will be used to check the certificate of
the remote HTTPS client. To trust a self-signed certificate presented by the remote (which is not signed by a
CA), provide that certificate instead.

HTTPSCAThumbprint
This optional directive specifies the certificate thumbprint of the certificate authority (CA), which is used to
look up the CA certificate from the Windows certificate store. The hexadecimal fingerprint string can be
copied straight from Windows Certificate Manager (certmgr.msc), whitespaces are automatically removed.
This directive is only supported on Windows. This directive and the HTTPSCADir and HTTPSCAFile directives
are mutually exclusive.

HTTPSCertFile
This specifies the path of the certificate file to be used for the HTTPS handshake.

HTTPSCertKeyFile
This specifies the path of the certificate key file to be used for the HTTPS handshake.

HTTPSCertThumbprint
This optional directive specifies the certificate thumbprint to be used for the SSL handshake. The hexadecimal
fingerprint string can be copied straight from Windows Certificate Manager (certmgr.msc), whitespaces are
automatically removed. This directive is only supported on Windows. This directive and the HTTPSCertFile and
HTTPSCertKeyFile directives are mutually exclusive.

HTTPSCRLDir
This specifies the path to a directory containing certificate revocation lists (CRLs), which will be consulted
when checking the certificate of the remote HTTPS client. The certificate filenames in this directory must be in
the OpenSSL hashed format.

HTTPSCRLFile
This specifies the path of the certificate revocation list (CRL) which will be consulted when checking the
certificate of the remote HTTPS client.

HTTPSKeyPass
With this directive, a password can be supplied for the certificate key file defined in HTTPSCertKeyFile. This
directive is not needed for passwordless private keys.

HTTPSRequireCert
This boolean directive specifies that the remote HTTPS client must present a certificate. If set to TRUE and
there is no certificate presented during the connection handshake, the connection will be refused. The
default value is TRUE: each connection must use a certificate.

HTTPSSSLProtocol
This directive can be used to set the allowed SSL/TLS protocol(s). It takes a comma-separated list of values
which can be any of the following: SSLv2, SSLv3, TLSv1, TLSv1.1, TLSv1.2 and TLSv1.3. By default, the
TLSv1.2 and TLSv1.3 protocols are allowed. Note that the OpenSSL library shipped by Linux distributions may
not support SSLv2 and SSLv3, and these will not work even if enabled with this directive.

PollInterval
This directive specifies how frequently the module will check for new events, in seconds. If this directive is not
specified, it defaults to 1 second. Fractional seconds may be specified (PollInterval 0.5 will check twice
every second).
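As a minimal sketch of these directives (the storage account name, table name, and authentication key below are placeholders, not working values):

```
<Input azure>
    Module       im_azure
    StorageName  mystorage
    TableName    WADLogsTable
    AuthKey      BASE64-ENCODED-ACCOUNT-KEY
    PollInterval 30
</Input>
```

Since TableName is specified here, BlobName must be omitted; the two directives are mutually exclusive.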

121.3.3. Fields
The following fields are used by im_azure.

$raw_event (type: string)


The raw string from the event.

$EventTime (type: datetime)


The timestamp of the event.

$ProcessID (type: integer)


The ID of the process which generated the event.

$Severity (type: string)


The severity of the event, if available. The severity is mapped as follows.

Azure Severity    Normalized Severity
Critical          5/CRITICAL
Warning           3/WARNING
Information       2/INFO
Verbose           1/DEBUG

$SeverityValue (type: integer)


The severity value of the event, if available; see the $Severity field.

$SourceName (type: string)


The name of the application which generated the event, if available.

$ThreadID (type: integer)


The ID of the thread which generated the event.

121.4. Batched Compression (im_batchcompress)
The im_batchcompress module provides a compressed network transport with optional SSL encryption. It uses its
own protocol to receive and decompress a batch of messages sent by om_batchcompress.

See the list of installer packages that provide the im_batchcompress module in the Available Modules chapter of
the NXLog User Guide.

121.4.1. Configuration
The im_batchcompress module accepts the following directives in addition to the common module directives.

ListenAddr
The module will accept connections on this IP address or a DNS hostname. The default is localhost. Add the
port number to listen on to the end of a host using a colon as a separator (host:port).

Port
The module instance will listen on this port for incoming connections. The default is port 2514.

IMPORTANT
The Port directive will become deprecated in this context from NXLog EE 6.0. Provide the
port in ListenAddr instead.

AllowUntrusted
This boolean directive specifies whether the remote connection should be allowed without certificate
verification. If set to TRUE the remote will be able to connect with an unknown or self-signed certificate. The
default value is FALSE: by default, all connections must present a trusted certificate.

CADir
This specifies the path to a directory containing certificate authority (CA) certificates, which will be used to
check the certificate of the remote socket. The certificate filenames in this directory must be in the OpenSSL
hashed format. A remote’s self-signed certificate (which is not signed by a CA) can also be trusted by including
a copy of the certificate in this directory.

CAFile
This specifies the path of the certificate authority (CA) certificate, which will be used to check the certificate of
the remote socket. To trust a self-signed certificate presented by the remote (which is not signed by a CA),
provide that certificate instead.

CAThumbprint
This optional directive specifies the certificate thumbprint of the certificate authority (CA), which is used to
look up the CA certificate from the Windows certificate store. The hexadecimal fingerprint string can be
copied straight from Windows Certificate Manager (certmgr.msc), whitespaces are automatically removed.
This directive is only supported on Windows. This directive and the CADir and CAFile directives are mutually
exclusive.

CertFile
This specifies the path of the certificate file to be used for the SSL handshake.

CertKeyFile
This specifies the path of the certificate key file to be used for the SSL handshake.

CertThumbprint
This optional directive specifies the certificate thumbprint to be used for the SSL handshake. The hexadecimal
fingerprint string can be copied straight from Windows Certificate Manager (certmgr.msc), whitespaces are
automatically removed. This directive is only supported on Windows. This directive and the CertFile and
CertKeyFile directives are mutually exclusive.

CRLDir
This specifies the path to a directory containing certificate revocation lists (CRLs), which will be consulted
when checking the certificate of the remote socket. The filenames in this directory must be in the OpenSSL
hashed format.

CRLFile
This specifies the path of the certificate revocation list (CRL) which will be consulted when checking the
certificate of the remote socket.

KeyPass
With this directive, a password can be supplied for the certificate key file defined in CertKeyFile. This directive
is not needed for passwordless private keys.

RequireCert
This boolean directive specifies that the remote must present a certificate. If set to TRUE and there is no
certificate presented during the connection handshake, the connection will be refused. The default value is
TRUE: by default, each connection must present a certificate.

SSLCipher
This optional directive can be used to set the permitted SSL cipher list, overriding the default. Use the format
described in the ciphers(1ssl) man page.

SSLCiphersuites
This optional directive can be used to define the permitted SSL cipher list in case the SSLProtocol directive is
set to TLSv1.3. Use the same format as in the SSLCipher directive.

SSLProtocol
This directive can be used to set the allowed SSL/TLS protocol(s). It takes a comma-separated list of values
which can be any of the following: SSLv2, SSLv3, TLSv1, TLSv1.1, TLSv1.2, and TLSv1.3. By default, the
TLSv1.2 and TLSv1.3 protocols are allowed. Note that the OpenSSL library shipped by Linux distributions
may not support SSLv2 and SSLv3, and these will not work even if enabled with this directive.

121.4.2. Fields
The following fields are used by im_batchcompress.

$MessageSourceAddress (type: string)


The IP address of the remote host.

121.4.3. Examples

Example 608. Reading Batch Compressed Data

This configuration listens on port 2514 for incoming log batches and writes them to file.

nxlog.conf
 1 <Input batchcompress>
 2 Module im_batchcompress
 3 ListenAddr 1.1.1.1:2514
 4 </Input>
 5
 6 # old syntax
 7 #<Input batchcompress>
 8 # Module im_batchcompress
 9 # ListenAddr 0.0.0.0
10 # Port 2514
11 #</Input>
12
13 <Output file>
14 Module om_file
15 File "/tmp/output"
16 </Output>
17
18 <Route batchcompress_to_file>
19 Path batchcompress => file
20 </Route>

121.5. Basic Security Module Auditing (im_bsm)


This module provides support for parsing events logged using Sun’s Basic Security Module (BSM) Auditing API.
This module reads directly from the kernel. See also xm_bsm.

The BSM /dev/auditpipe device file is available on FreeBSD and macOS. On Solaris, the device file is not
available and the log files must be read and parsed with im_file and xm_bsm as shown in the example.

See the list of installer packages that provide the im_bsm module in the Available Modules chapter of the NXLog
User Guide.

121.5.1. Setup
For information about setting up BSM Auditing, see the xm_bsm Setup section.

121.5.2. Configuration
The im_bsm module accepts the following directives in addition to the common module directives.

DeviceFile
This optional directive specifies the device file from which to read BSM events. If this is not specified, it
defaults to /dev/auditpipe.

EventFile
This optional directive can be used to specify the path to the audit event database containing a mapping
between event names and numeric identifiers. The default location is /etc/security/audit_event which is
used when the directive is not specified.

121.5.3. Fields
See the xm_bsm Fields.

121.5.4. Examples
Example 609. Reading BSM Audit Events From the Kernel

This configuration reads BSM audit events directly from the kernel via the (default) /dev/auditpipe device
file (which is not available on Solaris, see the xm_bsm example instead).

nxlog.conf
1 <Input in>
2 Module im_bsm
3 DeviceFile /dev/auditpipe
4 </Input>

121.6. Check Point OPSEC LEA (im_checkpoint)


This module provides support for collecting logs remotely from Check Point devices over the OPSEC LEA
protocol. The OPSEC LEA protocol makes it possible to establish a trusted, secure, and authenticated connection
with the remote device.

NOTE
The OPSEC SDK provides libraries only in 32-bit versions, which makes it impossible to compile
a 64-bit application. For this reason, the im_checkpoint module uses a helper program called
nx-im-checkpoint. This helper is responsible for collecting the logs and transmitting them over
a pipe to the im_checkpoint module.

CheckPoint uses a certificate export method with an activation password so that certificate keys can be securely
transferred over the network in order to establish trust relationships between the entities involved when using
SSL-based authenticated connections. The following entities (hosts) are involved in the log generation and
collection process:

SmartDashboard
The firewall administrator can use the SmartDashboard management interface to connect to and manage the
firewall.

SecurePlatform based FireWall-1


The SecurePlatform based FireWall-1 device will be generating the logs (SPLAT).

NXLog
The log collector running NXLog which connects to SPLAT over the OPSEC LEA protocol utilizing the
im_checkpoint module.

The following steps are required to configure the LEA connection between SPLAT and NXLog.

1. Enable the LEA service on SPLAT. Log in to SPLAT, enter expert mode, and run vi
$FWDIR/conf/fwopsec.conf. Make sure the file contains the following lines. Then restart the firewall with
the cprestart command (or cpstop and cpstart).

fwopsec.conf
lea_server auth_port 18184
lea_server auth_type sslca

2. Make sure SPLAT will accept ICA pull requests and the LEA connection (port 18184), and can generate logs.
For testing purposes, it is easiest to create a single rule to accept all connections and log these. For this, the
SmartDashboard host must be added as a GUI Client on SPLAT and a user needs to be configured to be able
to log onto SPLAT remotely from SmartDashboard.
3. Create the certificates for NXLog in SmartDashboard. Select Manage › Servers › OPSEC Applications, then
click [ New ] and select OPSEC Application. A dialog window should appear. Fill in the following properties
and then click [ OK ].

Name
Set to nxlog.

Description
Set to NXLog log collector or something similar.

Host
Click on [ New ] to create a new host and name it accordingly (nxloghost, for example).

Client Entities
Check LEA. All other options should be unchecked.

Secure Internal Communication
Click on [ Communication ]. Another dialog window will appear. Enter and re-enter the activation keys,
then click [ Initialize ]. Trust state should change from Uninitialized to Initialized but trust not established.
Click [ Close ]. Now in the OPSEC Application Properties window the DN should appear. This generated
string looks like this: CN=nxlog,O=splat..ebo9pf. This value will be used in our lea.conf file as the
opsec_sic_name parameter.

4. Retrieve the OPSEC application certificate. From the NXLog host, run the following command:
/opt/nxlog/bin/opsec_pull_cert -h SPLAT_IP_ADDR -n nxlog -p ACTIVATION_KEY. Make sure to
substitute the correct values in place of SPLAT_IP_ADDR and ACTIVATION_KEY. If the command is successful,
the certificate file opsec.p12 should appear in the current directory. Copy this file to /opt/nxlog/etc.
5. Get the DN of SPLAT. In SmartDashboard, double-click on Network Objects › Check Point › SPLAT. The
properties window will contain a similar DN under Secure Internal Communication such as
CN=cp_mgmt,o=splat..ebo9pf.

6. Retrieve the sic_policy.conf file from SPLAT. Initiate a secure copy from the firewall in expert mode. Then
move the file to the correct location.

[Expert@checkpoint]# scp $CPDIR/conf/sic_policy.conf user@rhel:/home/user


[root@rhel ~]# mv /home/user/sic_policy.conf /opt/nxlog/etc

7. Edit /opt/nxlog/etc/sic_policy.conf, and add the necessary policy to the [Outbound rules] section.

sic_policy.conf
1 [Outbound rules]
2 # apply_to peer(s) port(s) service(s) auth-method(s)
3 # --------------------------------------------------------
4
5 # OPSEC configurations - place here (and in [Inbound rules] too)
6 ANY ; ANY ; 18184 ; fwn1_opsec, ssl_opsec, ssl_clear_opsec, lea ; any_method

8. Edit /opt/nxlog/etc/lea.conf. The file should contain the following. Make sure to substitute the correct
value in place of SPLAT_IP_ADDR and use the correct DN values for opsec_sic_name and lea_server
opsec_entity_sic_name.

lea.conf
lea_server ip SPLAT_IP_ADDR
lea_server auth_port 18184
lea_server auth_type sslca
opsec_sic_name "CN=nxlog,O=splat..ebo9pf"
opsec_sslca_file /opt/nxlog/etc/opsec.p12
lea_server opsec_entity_sic_name "CN=cp_mgmt,o=splat..ebo9pf"
opsec_sic_policy_file /opt/nxlog/etc/sic_policy.conf

Refer to the Check Point documentation for more information regarding the LEA log service configuration.

To test whether the log collection works, execute the following command: /opt/nxlog/bin/nx-im-checkpoint
--readfromlast FALSE > output.bin. The process should not exit. Press Ctrl+C to interrupt it. The created
file output.bin should contain logs in NXLog’s Binary format.

NOTE  The OPSEC_DEBUG_LEVEL environment variable can be set to get debugging information if
something goes wrong and there is no output produced. Run OPSEC_DEBUG_LEVEL=1
/opt/nxlog/bin/nx-im-checkpoint --readfromlast FALSE > output.bin and check the
debug logs printed to standard error.

NOTE  The two files sslauthkeys.C and sslsess.C are used during the key-based authentication.
These files are stored in the same directory where lea.conf resides. To override this, set the
OPSECDIR environment variable.

If the log collection is successful, you can now try running NXLog with the im_checkpoint module.

See the list of installer packages that provide the im_checkpoint module in the Available Modules chapter of the
NXLog User Guide.

121.6.1. Configuration
The im_checkpoint module accepts the following directives in addition to the common module directives.

Command
This optional directive specifies the path of the nx-im-checkpoint binary. If not specified, the default is
/opt/nxlog/bin/nx-im-checkpoint on Linux.

LEAConfigFile
This optional directive specifies the path of the LEA configuration file. If not specified, the default is
/opt/nxlog/etc/lea.conf. This file must be edited in order for the OPSEC LEA connection to work.

LogFile
This can be used to specify the log file to be read. If not specified, it defaults to fw.log. To collect the audit
log, use LogFile fw.adtlog which would then be passed to the nx-im-checkpoint binary as --logfile
fw.adtlog.
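
For instance, a minimal input instance that collects the audit log might look like the following sketch (the instance name is illustrative):

nxlog.conf
<Input checkpoint_audit>
    Module   im_checkpoint
    LogFile  fw.adtlog
</Input>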

ReadFromLast
This optional boolean directive instructs the module to only read logs which arrived after NXLog was started if
the saved position could not be read (for example on first start). When SavePos is TRUE and a previously
saved record number could be read, the module will resume reading from this saved record number. If
ReadFromLast is FALSE, the module will read all logs from the LEA source. This can result in quite a lot of
messages, and is usually not the expected behavior. If this directive is not specified, it defaults to TRUE.

Restart
Restart the nx-im-checkpoint process if it exits. There is a one second delay before it is restarted to avoid a
denial-of-service if the process is not behaving. This boolean directive defaults to FALSE.

SavePos
This boolean directive specifies that the last record number should be saved when NXLog exits. The record
number will be read from the cache file upon startup. The default is TRUE: the record number is saved if this
directive is not specified. Even if SavePos is enabled, it can be explicitly turned off with the global NoCache
directive.

121.6.2. Fields
The following fields are used by im_checkpoint.

The LEA protocol provides Check Point device logs in a structured format. For the list of LEA fields, see LEA Fields
Update on CheckPoint.com. Some of the field names are mapped to normalized names which NXLog uses in
other modules (such as $EventTime). The list of these fields is provided below. The other LEA fields are
reformatted such that non-alphanumeric characters are replaced with an underscore (_) in field names. The
$raw_event field contains the list of all fields and their respective values without any modification to the original
LEA field naming.

$raw_event (type: string)


Contains the $EventTime, $Hostname, $Severity, and the original LEA fields in fieldname=value pairs.

$AccountName (type: string)


The user name. Originally called user.

$ApplicationName (type: string)


The application that the user is trying to access. Originally called app_name.

$DestinationIPv4Address (type: ipaddr)


The destination IP address of the connection. Originally called dst.

$DestinationPort (type: integer)


The destination port number. Originally called d_port.

$Direction (type: string)


The direction of the connection with respect to the interface. Can be either inbound or outbound. Originally
called i/f_dir.

$EventDuration (type: string)


The duration of the connection. Originally called elapsed.

$EventTime (type: datetime)


The date and time of the event. Originally called time.

$Hostname (type: string)


The IP address or hostname of the device which generated the log. Originally called orig.

$Interface (type: string)


The name of the interface the connection passed through. Originally called i/f_name.

$RecordNumber (type: integer)


The record number which identifies the log entry. Originally called loc.

$Severity (type: string)
The IPS protection severity level setting. Originally called severity. Set to INFO if it was not provided in the logs.

$SourceIPv4Address (type: ipaddr)


The source IP address of the connection. Originally called src.

$SourceName (type: string)


The name of the device which generated the log. Originally called product.

$SourcePort (type: integer)


The source port number of the connection. Originally called s_port.

121.6.3. Examples
Example 610. Converting Check Point LEA Input to JSON

This configuration instructs NXLog to collect logs from Check Point devices over the LEA protocol and store
the logs in a file in JSON format.

nxlog.conf
 1 <Extension json>
 2 Module xm_json
 3 </Extension>
 4
 5 <Input checkpoint>
 6 Module im_checkpoint
 7 Command /opt/nxlog/bin/nx-im-checkpoint
 8 LEAConfigFile /opt/nxlog/etc/lea.conf
 9 </Input>
10
11 <Output file>
12 Module om_file
13 File 'tmp/output'
14 Exec $raw_event = to_json();
15 </Output>
16
17 <Route checkpoint_to_file>
18 Path checkpoint => file
19 </Route>

121.7. DBI (im_dbi)


The im_dbi module allows NXLog to pull log data from external databases. This module utilizes the libdbi
database abstraction library, which supports various database engines such as MySQL, PostgreSQL, MSSQL,
Sybase, Oracle, SQLite, and Firebird. A SELECT statement can be specified, which will be executed periodically to
check for new records.

NOTE  The im_dbi and om_dbi modules support GNU/Linux only because of the libdbi library. The
im_odbc and om_odbc modules provide native database access on Windows.

NOTE  libdbi needs drivers to access the database engines. These are in the libdbd-* packages on
Debian and Ubuntu. CentOS 5.6 has a libdbi-drivers RPM package, but this package does not
contain any driver binaries under /usr/lib64/dbd. The drivers for both MySQL and PostgreSQL
are in libdbi-dbd-mysql. If these are not installed, NXLog will return a libdbi driver
initialization error.

See the list of installer packages that provide the im_dbi module in the Available Modules chapter of the NXLog
User Guide.

121.7.1. Configuration
The im_dbi module accepts the following directives in addition to the common module directives.

Driver
This mandatory directive specifies the name of the libdbi driver which will be used to connect to the
database. A DRIVER name must be provided here for which a loadable driver module exists under the name
libdbdDRIVER.so (usually under /usr/lib/dbd/). The MySQL driver is in the libdbdmysql.so file.

SQL
This directive should specify the SELECT statement to be executed every PollInterval seconds. The module
automatically appends a WHERE id > ? LIMIT 10 clause to the statement. The result set returned by the
SELECT statement must contain an id column which is then stored and used for the next query.

Option
This directive can be used to specify additional driver options such as connection parameters. The manual of
the libdbi driver should contain the options available for use here.

PollInterval
This directive specifies how frequently the module will check for new records, in seconds. If this directive is
not specified, the default is 1 second. Fractional seconds may be specified (PollInterval 0.5 will check
twice every second).

SavePos
If this boolean directive is set to TRUE, the position will be saved when NXLog exits. The position will be read
from the cache file upon startup. The default is TRUE: the position will be saved if this directive is not
specified. Even if SavePos is enabled, it can be explicitly turned off with the global NoCache directive.

121.7.2. Examples

Example 611. Reading From a MySQL Database

This example uses libdbi and the MySQL driver to connect to the logdb database on the local host and
execute the provided statement.

nxlog.conf
 1 <Input dbi>
 2 Module im_dbi
 3 Driver mysql
 4 Option host 127.0.0.1
 5 Option username mysql
 6 Option password mysql
 7 Option dbname logdb
 8 SQL SELECT id, facility, severity, hostname, \
 9 timestamp, application, message \
10 FROM log
11 </Input>
12
13 <Output file>
14 Module om_file
15 File "tmp/output"
16 </Output>
17
18 <Route dbi_to_file>
19 Path dbi => file
20 </Route>

121.8. Event Tracing for Windows (im_etw)


This module can be used to collect events through Event Tracing for Windows (ETW).

ETW is a mechanism in Windows designed for efficient logging of both kernel and user-mode applications. Debug
and Analytical channels are based on ETW and cannot be collected as regular Windows Eventlog channels via the
im_msvistalog module. Various Windows services such as the Windows Firewall and DNS Server can be
configured to log events through Windows Event Tracing.

The im_etw module reads event tracing data directly for maximum efficiency. Unlike other solutions, im_etw does
not save ETW data into intermediary trace files that need to be parsed again.

NOTE The im_etw module is only available on the Windows platform.

See the list of installer packages that provide the im_etw module in the Available Modules chapter of the NXLog
User Guide.

121.8.1. Configuration
The im_etw module accepts the following directives in addition to the common module directives. One of
KernelFlags and Provider must be specified.

KernelFlags
This directive specifies that kernel trace logs should be collected, and accepts a comma-separated list of flags
to use for filtering the logs. The Provider and KernelFlags directives are mutually exclusive (but one must be
specified). The following values are allowed: ALPC, CSWITCH, DBGPRINT, DISK_FILE_IO, DISK_IO,
DISK_IO_INIT, DISPATCHER, DPC, DRIVER, FILE_IO, FILE_IO_INIT, IMAGE_LOAD, INTERRUPT,
MEMORY_HARD_FAULTS, MEMORY_PAGE_FAULTS, NETWORK_TCPIP, NO_SYSCONFIG, PROCESS, PROCESS_COUNTERS,
PROFILE, REGISTRY, SPLIT_IO, SYSTEMCALL, THREAD, VAMAP, and VIRTUAL_ALLOC.

Provider
This directive specifies the name (not GUID) of the ETW provider from which to collect trace logs. Providers
available for tracing can be listed with logman query providers. The Provider and KernelFlags directives
are mutually exclusive (but one must be specified). The Windows Kernel Trace provider is not supported;
instead, the KernelFlags directive should be used to open a kernel logger session.

Level
This optional directive specifies the log level for collecting trace events. Because kernel log sessions do not
provide log levels, this directive is only available in combination with the Provider directive. Valid values
include Critical, Error, Warning, Information, and Verbose. If this directive is not specified, the verbose
log level is used.

MatchAllKeyword
This optional directive is used for filtering ETW events based on keywords. Defaults to 0x00000000. For more
information, see System ETW Provider Event Keyword-Level Settings in Microsoft documentation.

MatchAnyKeyword
This optional directive is used for filtering ETW events based on keywords. Defaults to 0x00000000. For more
information, see System ETW Provider Event Keyword-Level Settings in Microsoft documentation.

121.8.2. Fields
The following fields are used by im_etw.

Depending on the ETW provider from which NXLog collects trace logs, the set of fields generated by the im_etw
module may vary slightly. In addition to the fields listed below, the module can generate special provider-specific
fields. If the module is configured to collect trace logs from a custom provider (for example, from a custom user-
mode application), the module will also generate fields derived from the custom provider trace logs.

$raw_event (type: string)


A string containing a field=value pair for each field in the event.

$AccountName (type: string)


The username associated with the event.

$AccountType (type: string)


The type of the account. Possible values are: User, Group, Domain, Alias, Well Known Group, Deleted
Account, Invalid, Unknown, and Computer.

$ActivityID (type: string)


The ID of the activity corresponding to the event.

$ChannelID (type: integer)


The channel to which the event log should be directed.

$Domain (type: string)


The domain name of the user.

$EventId (type: integer)


The Event ID, corresponding to the provider, that indicates the type of event.

$EventTime (type: datetime)


The time when the event was generated.

$EventType (type: string)
One of CRITICAL, ERROR, WARNING, DEBUG, AUDIT_FAILURE, AUDIT_SUCCESS, or INFO.

$ExecutionProcessID (type: integer)


The ID of the process that generated the event.

$ExecutionThreadID (type: integer)


The ID of the thread that generated the event.

$Keywords (type: string)


A keyword bit mask corresponding to the current event.

$OpcodeValue (type: integer)


An integer indicating the operation corresponding to the event.

$ProviderGuid (type: string)


The GUID of the trace provider, corresponding to the $SourceName.

$Severity (type: string)


The normalized severity name of the event. See $SeverityValue.

$SeverityValue (type: integer)


The normalized severity number of the event, mapped as follows.

Event Log Severity     Normalized Severity
0/Audit Success        2/INFO
0/Audit Failure        4/ERROR
1/Critical             5/CRITICAL
2/Error                4/ERROR
3/Warning              3/WARNING
4/Information          2/INFO
5/Verbose              1/DEBUG

$SourceName (type: string)


The name of the trace provider.

$TaskValue (type: integer)


An integer indicating a particular component of the provider.

$Version (type: integer)


The version of the event type.

121.8.3. Examples

Example 612. Collecting Events From the Windows Kernel Trace

With this configuration, NXLog will collect trace events from the Windows kernel. Only events matching the
PROCESS and THREAD flags will be collected.

nxlog.conf
1 <Input etw>
2 Module im_etw
3 KernelFlags PROCESS, THREAD
4 </Input>

Example 613. Collecting Events From an ETW Provider

With this configuration, NXLog will collect events from the Microsoft-Windows-Firewall trace provider.

nxlog.conf
1 <Input etw>
2 Module im_etw
3 Provider Microsoft-Windows-Firewall
4 </Input>

Example 614. Setting Level Directive

With this configuration, NXLog will assign event log level for a specified provider.

nxlog.conf
1 <Input etw>
2 Module im_etw
3 Provider Microsoft-Windows-DNSServer
4 Level verbose
5 MatchAnyKeyword 0xFFFFFFFFFFFFFFFF
6 MatchAllKeyword 0x0
7 </Input>

121.9. External Programs (im_exec)


This module will execute a program or script on startup and read its standard output. It can be used to easily
integrate with exotic log sources which can be read only with the help of an external script or program.

WARNING  If you are using a Perl script, consider using im_perl instead or turning on autoflush with
$| = 1;, otherwise im_exec might not receive data immediately due to Perl’s internal buffering.
See the Perl language reference for more information about $|.

See the list of installer packages that provide the im_exec module in the Available Modules chapter of the NXLog
User Guide.

121.9.1. Configuration
The im_exec module accepts the following directives in addition to the common module directives. The Command
directive is required.

Command
This mandatory directive specifies the name of the program or script to be executed.

Arg
This is an optional parameter. Arg can be specified multiple times, once for each argument that needs to be
passed to the Command. Note that specifying multiple arguments with one Arg directive, with arguments
separated by spaces, will not work (the Command would receive it as one argument).

InputType
See the InputType description in the global module configuration section.

Restart
Restart the process if it exits. There is a one second delay before it is restarted to avoid a denial-of-service
when a process is not behaving. Looping should be implemented in the script itself; this directive only
provides some safety against malfunctioning scripts and programs. This boolean directive defaults to FALSE:
the Command will not be restarted if it exits.

121.9.2. Examples
Example 615. Emulating im_file

This configuration uses the tail command to read from a file.

NOTE  The im_file module should be used to read log messages from files. This example only
demonstrates the use of the im_exec module.

nxlog.conf
 1 <Input messages>
 2 Module im_exec
 3 Command /usr/bin/tail
 4 Arg -f
 5 Arg /var/log/messages
 6 </Input>
 7
 8 <Output file>
 9 Module om_file
10 File "tmp/output"
11 </Output>
12
13 <Route messages_to_file>
14 Path messages => file
15 </Route>

121.10. Files (im_file)


This module can be used to read log messages from files. The file position can be persistently saved across
restarts in order to avoid reading from the beginning again when NXLog is restarted. External rotation tools are
also supported. When the module is not able to read any more data from the file, it checks whether the opened
file descriptor belongs to the same filename it opened originally. If the inodes differ, the module assumes the file
was moved and reopens its input.

im_file uses a one second interval to monitor files for new messages. This method was implemented because
polling a regular file is not supported on all platforms. If there is no more data to read, the module will sleep for
1 second.

By using wildcards, the module can read multiple files simultaneously and will open new files as they appear. It
will also enter newly created directories if recursion is enabled.

NOTE  The module needs to scan the directory content for wildcarded file monitoring. This can present
a significant load if there are many files (hundreds or thousands) in the monitored directory. For
this reason it is highly recommended to rotate files out of the monitored directory either using
the built-in log rotation capabilities of NXLog or with external tools.

See the list of installer packages that provide the im_file module in the Available Modules chapter of the NXLog
User Guide.

121.10.1. Configuration
The im_file module accepts the following directives in addition to the common module directives. The File
directive is required.

File
This mandatory directive specifies the name of the input file to open. It may be given more than once in a
single im_file module instance. The value must be a string type expression. For relative filenames you should
be aware that NXLog changes its working directory to "/" unless the global SpoolDir is set to something else.
On Windows systems the directory separator is the backslash (\). For compatibility reasons the forward slash
(/) character can also be used as the directory separator, but this only works for filenames not containing
wildcards. If the filename is specified using wildcards, the backslash (\) should be used for the directory
separator. Filenames on Windows systems are treated case-insensitively, but case-sensitively on Unix/Linux.

Wildcards are supported in filenames and directories. Wildcards are not regular expressions, but are patterns
commonly used by Unix shells to expand filenames (also known as "globbing").

?
Matches a single character only.

*
Matches zero or more characters.

\*
Matches the asterisk (*) character.

\?
Matches the question mark (?) character.

[…]
Used to specify a single character. The class description is a list containing single characters and ranges of
characters separated by the hyphen (-). If the first character of the class description is ^ or !, the sense of
the description is reversed (any character not in the list is accepted). Any character can have a backslash (
\) preceding it, which is ignored, allowing the characters ] and - to be used in the character class, as well
as ^ and ! at the beginning.

NOTE  By default, the backslash character (\) is used as an escape sequence. This character is also
the directory separator on Windows. Because of this, escaping of wildcard characters is not
supported on Windows; see the EscapeGlobPatterns directive. However, string literals are
evaluated differently depending on the quotation type. Single quoted strings are interpreted
as-is without escaping, e.g. 'C:\t???\*.log' stays C:\t???\*.log. Escape sequences in
double quoted strings are processed, for example "C:\\t???\*.log" becomes
C:\t???\*.log after evaluation. In both cases, the evaluated string is the same and gets
separated into parts with different glob patterns at different levels. In the previous example
the parts are C:, t???, and *.log. NXLog matches these at the proper directory levels to find
all matching files.
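
As an illustration of wildcarded paths, a configuration might watch a whole directory while excluding one file; the directory and patterns below are hypothetical:

nxlog.conf
<Input apache>
    Module   im_file
    File     '/var/log/apache2/*.log'
    Exclude  '/var/log/apache2/debug.log'
</Input>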

ActiveFiles
This directive specifies the maximum number of files NXLog will actively monitor. If there are modifications to
more files in parallel than the value of this directive, then modifications to files above this limit will only get
noticed after the DirCheckInterval (all data should be collected eventually). Typically there are only a few log
sources actively appending data to log files, and the rest of the files are dormant after being rotated, so the
default value of 10 files should be sufficient in most cases. This directive is also only relevant in case of a
wildcarded File path.

CloseWhenIdle
If set to TRUE, this boolean directive specifies that open input files should be closed as soon as possible after
there is no more data to read. Some applications request an exclusive lock on the log file when written or
rotated, and this directive can possibly help if the application tries again to acquire the lock. The default is
FALSE.

DirCheckInterval
This directive specifies how frequently, in seconds, the module will check the monitored directory for
modifications to files and new files in case of a wildcarded File path. The default is twice the value of the
PollInterval directive (if PollInterval is not set, the default is 2 seconds). Fractional seconds may be specified. It
is recommended to increase the default if there are many files which cannot be rotated out and the NXLog
process is causing high CPU load.

Exclude
This directive can specify a file or a set of files (using wildcards) to be excluded. More than one occurrence of
the Exclude directive can be specified.

InputType
See the InputType directive in the list of common module directives. If this directive is not specified the
default is LineBased (the module will use CRLF as the record terminator on Windows, or LF on Unix).

This directive also supports stream processors, see the description in the InputType section.

NoEscape
This boolean directive specifies whether the backslash (\) in file paths should be disabled as an escape
sequence. This is especially useful for file paths on Windows. By default, NoEscape is FALSE (backslash
escaping is enabled and the path separator on Windows must be escaped).

OnEOF
This optional block directive can be used to specify a group of statements to execute when a file has been
fully read (on end-of-file). Only one OnEOF block can be specified per im_file module instance. The following
directives are used inside this block.

Exec
This mandatory directive specifies the actions to execute after EOF has been detected and the grace
period has passed. Like the normal Exec directive, the OnEOF Exec can be specified as a normal directive
or a block directive.

GraceTimeout
This optional directive specifies the time in seconds to wait before executing the actions configured in the
Exec block or directive. The default is 1 second.
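
A sketch of an OnEOF block follows; the path and log message are illustrative, and file_name() is the function exported by this module:

nxlog.conf
<Input batch>
    Module  im_file
    File    '/var/spool/import/*.log'
    <OnEOF>
        GraceTimeout  5
        Exec          log_info('finished reading ' + file_name());
    </OnEOF>
</Input>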

PollInterval
This directive specifies how frequently the module will check for new files and new log entries, in seconds. If
this directive is not specified, it defaults to 1 second. Fractional seconds may be specified (PollInterval 0.5
will check twice every second).

ReadFromLast
This optional boolean directive instructs the module to only read logs which arrived after NXLog was started if
the saved position could not be read (for example on first start). When SavePos is TRUE and a previously
saved position value could be read, the module will resume reading from this saved position. If
ReadFromLast is FALSE, the module will read all logs from the file. This can result in quite a lot of messages,
and is usually not the expected behavior. If this directive is not specified, it defaults to TRUE.

Recursive
If set to TRUE, this boolean directive specifies that input files set with the File directive should be searched
recursively under sub-directories. For example, /var/log/error.log will match
/var/log/apache2/error.log. Wildcards can be used in combination with Recursive: /var/log/*.log will
match /var/log/apache2/access.log. This directive only causes scanning under the given path and does
not affect the processing of wildcarded directories: /var/*/qemu/debian.log will not match
/var/log/libvirt/qemu/debian.log. The default is FALSE.

RenameCheck
If set to TRUE, this boolean directive specifies that input files should be monitored for possible file rotation via
renaming in order to avoid re-reading the file contents. A file is considered to be rotated when NXLog detects
a new file whose inode and size matches that of another watched file which has just been deleted. Note that
this does not always work correctly and can yield false positives when a log file is deleted and another is
added with the same size. The file system is likely to reuse the inode number of the deleted file and thus the
module will falsely detect this as a rename/rotation. For this reason the default value of RenameCheck is
FALSE: renamed files are considered to be new and the file contents will be re-read.

NOTE  It is recommended to use a naming scheme for rotated files so names of rotated files do not
match the wildcard and are not monitored anymore after rotation, instead of trying to solve
the renaming issue with this directive.

SavePos
If this boolean directive is set to TRUE, the file position will be saved when NXLog exits. The file position will
be read from the cache file upon startup. The default is TRUE: the file position will be saved if this directive is
not specified. Even if SavePos is enabled, it can be explicitly turned off with the global NoCache directive.

121.10.2. Functions
The following functions are exported by im_file.

string file_name()
Return the name of the currently open file which the log was read from.

integer record_number()
Returns the number of processed records (including the current record) of the currently open file since it was
opened or truncated.
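
For example, these functions can be called from an Exec directive to annotate each record; the field names below are chosen for illustration:

nxlog.conf
<Input messages>
    Module  im_file
    File    '/var/log/messages'
    Exec    $SourceFile = file_name(); $SourceRecord = record_number();
</Input>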

121.10.3. Examples

Example 616. Forwarding Logs From a File to a Remote Host

This configuration will read from a file and forward messages via TCP. No additional processing is done.

nxlog.conf
 1 <Input messages>
 2 Module im_file
 3 File "/var/log/messages"
 4 </Input>
 5
 6 <Output tcp>
 7 Module om_tcp
 8 Host 192.168.1.1
 9 Port 514
10 </Output>
11
12 <Route messages_to_tcp>
13 Path messages => tcp
14 </Route>

121.11. File Integrity Monitoring (im_fim)


This module is capable of scanning files and directories and reporting detected changes and deletions. On the
first scan, the checksum of each file is recorded. This checksum is then compared to the checksum value
calculated during successive scans. The im_fim module works on the filesystem level, so it only has access to file
information such as ownership and last modification date, and no information about which user made a change.

Files are checked periodically, not in real-time. If there are multiple changes between two scans, only the
cumulative effect is logged. For example, if one user modifies a file and another user reverts the changes before
the next scan occurs, only the change in modification time is detected.

For real-time monitoring, auditing must be enabled on the host operating system. See the File Integrity
Monitoring chapter in the User Guide for more information.

See the list of installer packages that provide the im_fim module in the Available Modules chapter of the NXLog
User Guide.

121.11.1. Configuration
The im_fim module accepts the following directives in addition to the common module directives. The File
directive is required.

File
This mandatory directive specifies the name of the input file to scan. It must be a string type expression. See
the im_file File directive for more details on how files can be specified. Wildcards are supported. More than
one occurrence of the File directive can be used.

Digest
This specifies the digest method (hash function) to be used to calculate the checksum. The default is sha1.
The following message digest methods can be used: md2, md5, mdc2, rmd160, sha, sha1, sha224, sha256,
sha384, and sha512.
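
For instance, a stronger hash than the default sha1 can be selected with the Digest directive. A minimal sketch (the monitored path is illustrative):

```
<Input fim_sha256>
    Module  im_fim
    File    "/etc/*"
    Digest  sha256
</Input>
```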

Exclude
This directive can specify a file or a set of files (using wildcards) to be excluded from the scan. More than one
occurrence of the Exclude directive can be specified.

NoEscape
This boolean directive specifies whether the backslash (\) in file paths should be disabled as an escape
sequence. By default, NoEscape is FALSE (the path separator on Windows needs to be escaped).
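
With NoEscape set to TRUE, Windows paths can be written with single backslashes. A hedged sketch (the path is hypothetical):

```
<Input fim_windows>
    Module    im_fim
    NoEscape  TRUE
    # Backslashes are treated literally, not as escape sequences
    File      "C:\Program Files\MyApp\*.ini"
</Input>
```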

Recursive
If set to TRUE, this boolean directive specifies that files set with the File directive should be searched
recursively under sub-directories. For example, /var/log/error.log will match
/var/log/apache2/error.log. Wildcards can be used in combination with Recursive: /var/log/*.log will
match /var/log/apache2/access.log. This directive only causes scanning under the given path and does
not affect the processing of wildcarded directories: /var/*/qemu/debian.log will not match
/var/log/libvirt/qemu/debian.log. The default is FALSE.

ScanInterval
This directive specifies how long the module will wait between scans for modifications, in seconds. The
default is 86400 seconds (1 day). The value of ScanInterval can be set to 0 to disable periodic scanning and
instead invoke scans via the start_scan() procedure.

121.11.2. Functions
The following functions are exported by im_fim.

boolean is_scanning()
Returns TRUE if scanning is in progress.

121.11.3. Procedures
The following procedures are exported by im_fim.

start_scan();
Start the file integrity scan. This could be invoked from the Schedule block, for example.

121.11.4. Fields
The following fields are used by im_fim.

$raw_event (type: string)


A string containing the $EventTime, $Hostname, $EventType, $Object, and other fields (as applicable) from
the event.

$Digest (type: string)


The calculated digest (checksum) value.

$DigestName (type: string)


The name of the digest used to calculate the checksum value (for example, SHA1).

$EventTime (type: datetime)


The time when the modification was detected.

$EventType (type: string)


One of the following values: CHANGE, DELETE, RENAME, or NEW.

$FileName (type: string)
The name of the file that the changes were detected on.

$FileSize (type: integer)


The size of the file in bytes after the modification.

$Hostname (type: string)


The name of the originating computer.

$ModificationTime (type: datetime)


The modification time (mtime) of the file when the change is detected.

$Object (type: string)


One of the following values: DIRECTORY or FILE.

$PrevDigest (type: string)


The calculated digest (checksum) value from the previous scan.

$PrevFileName (type: string)


The name of the file from the previous scan.

$PrevFileSize (type: integer)


The size of the file in bytes from the previous scan.

$PrevModificationTime (type: datetime)


The modification time (mtime) of the file from the previous scan.

$Severity (type: string)


The severity name: WARNING.

$SeverityValue (type: integer)


The WARNING severity level value: 3.

121.11.5. Examples
Example 617. Periodic File Integrity Monitoring

With this configuration, NXLog will monitor the specified directories recursively. Scans will occur hourly.

nxlog.conf
 1 <Input fim>
 2 Module im_fim
 3 File "/etc/*"
 4 Exclude "/etc/mtab"
 5 File "/bin/*"
 6 File "/sbin/*"
 7 File "/usr/bin/*"
 8 File "/usr/sbin/*"
 9 Recursive TRUE
10 ScanInterval 3600
11 </Input>

Example 618. Scheduled Scan

The im_fim module provides a start_scan() procedure that can be called to invoke the scan. The following
configuration sets ScanInterval to zero to disable periodic scanning and uses a Schedule block instead to
trigger the scan every day at midnight.

nxlog.conf
 1 <Input fim>
 2 Module im_fim
 3 File "/bin/*"
 4 File "/sbin/*"
 5 File "/usr/bin/*"
 6 File "/usr/sbin/*"
 7 Recursive TRUE
 8 ScanInterval 0
 9 <Schedule>
10 When @daily
11 Exec start_scan();
12 </Schedule>
13 </Input>

121.12. Go (im_go)
This module provides support for collecting log data with methods written in the Go language. The file specified
by the ImportLib directive should contain one or more methods which can be called from the Exec directive of
any module. See also the xm_go and om_go modules.

NOTE: For the system requirements, installation details, and environmental configuration requirements
of Go, see the Getting Started section in the Go documentation. The Go environment is only
needed for compiling the Go file. NXLog does not need the Go environment for its operation.

The Go script imports the NXLog module, and will have access to the following classes and functions.

class nxModule
This class is instantiated by NXLog and can be accessed via the nxLogdata.module attribute. This can be used
to set or access variables associated with the module (see the example below).

nxmodule.NxLogdataNew(*nxLogdata)
This function creates a new log data record.

nxmodule.Post(ld *nxLogdata)
This function posts the log data struct for further processing.

nxmodule.AddEvent()
This function adds a READ event to NXLog, allowing the READ event to be called later.

nxmodule.AddEventDelayed(mSec C.int)
This function adds a delayed READ event to NXLog, allowing the delayed READ event to be called later.

class nxLogdata
This class represents an event. It is instantiated by NXLog and passed to the function specified by the
ImportFunc directive.

nxlogdata.Get(field string) (interface{}, bool)


This function returns the value/exists pair for the logdata field.

nxlogdata.GetString(field string) (string, bool)
This function returns the value/exists pair for the string representation of the logdata field.

nxlogdata.Set(field string, val interface{})


This function sets the logdata field value.

nxlogdata.Delete(field string)
This function removes the field from logdata.

nxlogdata.Fields() []string
This function returns an array of fields names in the logdata record.

module
This attribute is set to the module object associated with the event.

See the list of installer packages that provide the im_go module in the Available Modules chapter of the NXLog
User Guide.

121.12.1. Installing the gonxlog.go File


For the Go environment to work with NXLog, the gonxlog.go file has to be installed.

NOTE: This applies to Linux only.

1. Copy the gonxlog.go file from the /opt/nxlog/lib/nxlog/modules/extension/go/gopkg/nxlog.co/gonxlog/
directory to the $GOPATH/src/nxlog.co/gonxlog/ directory.

2. Change directory to $GOPATH/src/nxlog.co/gonxlog/.

3. Execute the go install gonxlog.go command to install the file.

121.12.2. Compiling the Go File


To call Go functions, the Go file must be compiled into a shared object file with the .so
extension. The syntax for compiling the Go file is as follows.

go build -o /path/to/yoursofile.so -buildmode=c-shared /path/to/yourgofile.go

121.12.3. Configuration
The im_go module accepts the following directives in addition to the common module directives.

ImportLib
This mandatory directive specifies the file containing the Go code compiled into a shared library .so file.

ImportFunc
This mandatory directive specifies the function to call when the module tries to read data. The function
must accept an unsafe.Pointer object as its only argument.

121.12.4. Configuration Template

In this Go file template, the read function is called via the ImportFunc directive.

im_go Template
//export read
func read(ctx unsafe.Pointer) {
  // get reference to caller module
  if module, ok := gonxlog.GetModule(ctx); ok {
  // generate new logdata for NXLog
  ld := module.NxLogdataNew()
  // set 'raw_event' value
  ld.Set("raw_event", "some string data")
  // send logdata to NXLog input module
  module.Post(ld)
  }
}

121.12.5. Examples

Example 619. Using im_go to Generate Event Data

This configuration reads log data from the /var/log/syslog file on a remote server via SSH. The
code defined in the shared object library obtains the module reference from the context pointer and reads
data from a channel. It then generates new log data by setting the raw_event value and posts it to
the input module from the read function. Finally, the output is saved to a file.

nxlog.conf
 1 <Input in1>
 2 Module im_go
 3 ImportLib "input/input.so"
 4 ImportFunc read
 5 </Input>
 6
 7 <Output out>
 8 Module om_file
 9 File "output/file"
10 Exec log_info($raw_event);
11 </Output>

im_go file Sample


//export read
func read(ctx unsafe.Pointer) {
  var str string
  gonxlog.LogDebug("Read called")
  if module, ok := gonxlog.GetModule(ctx); ok {
  if strings == nil {
  gonxlog.LogError("Channel is not initialized!")
  } else {
  select {
  case str, _ = <-strings:
  ld := module.NxLogdataNew()
  ld.Set("raw_event", str)
  module.Post(ld)
  gonxlog.LogInfo("has data")
  module.AddEvent()
  default:
  gonxlog.LogInfo("no data")
  module.AddEventDelayed(50)
  }
  }
  }
}

121.13. HTTP(s) (im_http)


This module can be configured to accept HTTP or HTTPS connections. It expects HTTP POST requests from the
client. The event message must be in the request body and will be available in the $raw_event field. The size of
the event message must be indicated with the Content-Length header. To operate in Keep-Alive mode, the module
does not close the connection while valid requests are being received. It responds with HTTP/1.1 201 Created to
each valid POST request; this acknowledgment ensures reliable message delivery.

See the list of installer packages that provide the im_http module in the Available Modules chapter of the NXLog
User Guide.

121.13.1. Configuration
The im_http module accepts the following directives in addition to the common module directives.

ListenAddr
The module will accept connections on this IP address or a DNS hostname. The default is localhost. Add the
port number to listen on to the end of a host using a colon as a separator (host:port).
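
For example, a minimal plain-HTTP listener using the host:port form could look like this (address and port are illustrative):

```
<Input http>
    Module      im_http
    ListenAddr  0.0.0.0:8080
</Input>
```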

Port
The module instance will listen for incoming connections on this port. The default is port 80.

IMPORTANT: The Port directive will become deprecated in this context from NXLog EE 6.0. Provide the port
in ListenAddr instead.

HTTPSAllowUntrusted
This boolean directive specifies that the remote connection should be allowed without certificate verification.
If set to TRUE the remote will be able to connect with an unknown or self-signed certificate. The default value
is FALSE: all HTTPS connections must present a trusted certificate.

HTTPSCADir
This specifies the path to a directory containing certificate authority (CA) certificates, which will be used to
check the certificate of the remote HTTPS client. The certificate filenames in this directory must be in the
OpenSSL hashed format. A remote’s self-signed certificate (which is not signed by a CA) can also be trusted by
including a copy of the certificate in this directory.

HTTPSCAFile
This specifies the path of the certificate authority (CA) certificate, which will be used to check the certificate of
the remote HTTPS client. To trust a self-signed certificate presented by the remote (which is not signed by a
CA), provide that certificate instead.

HTTPSCAThumbprint
This optional directive specifies the certificate thumbprint of the certificate authority (CA), which is used to
look up the CA certificate from the Windows certificate store. The hexadecimal fingerprint string can be
copied straight from Windows Certificate Manager (certmgr.msc), whitespaces are automatically removed.
This directive is only supported on Windows. This directive and the HTTPSCADir and HTTPSCAFile directives
are mutually exclusive.

HTTPSCertFile
This specifies the path of the certificate file to be used for the HTTPS handshake.

HTTPSCertKeyFile
This specifies the path of the certificate key file to be used for the HTTPS handshake.

HTTPSCertThumbprint
This optional directive specifies the certificate thumbprint to be used for the SSL handshake. The hexadecimal
fingerprint string can be copied straight from Windows Certificate Manager (certmgr.msc), whitespaces are
automatically removed. This directive is only supported on Windows. This directive and the HTTPSCertFile and
HTTPSCertKeyFile directives are mutually exclusive.

HTTPSCRLDir
This specifies the path to a directory containing certificate revocation lists (CRLs), which will be consulted
when checking the certificate of the remote HTTPS client. The certificate filenames in this directory must be in
the OpenSSL hashed format.

HTTPSCRLFile
This specifies the path of the certificate revocation list (CRL) which will be consulted when checking the
certificate of the remote HTTPS client.

HTTPSKeyPass
With this directive, a password can be supplied for the certificate key file defined in HTTPSCertKeyFile. This
directive is not needed for passwordless private keys.

HTTPSRequireCert
This boolean directive specifies that the remote HTTPS client must present a certificate. If set to TRUE and
there is no certificate presented during the connection handshake, the connection will be refused. The
default value is TRUE: each connection must use a certificate.

HTTPSSSLCipher
This optional directive can be used to set the permitted SSL cipher list, overriding the default. Use the format
described in the ciphers(1ssl) man page.

HTTPSSSLCiphersuites
This optional directive can be used to define the permitted SSL cipher list in case the HTTPSSSLProtocol
directive is set to TLSv1.3. Use the same format as in the HTTPSSSLCipher directive.

HTTPSSSLCompression
This boolean directive allows you to enable data compression when sending data over the network. The
compression mechanism is based on the zlib compression library. If the directive is not specified, it defaults
to FALSE (the compression is disabled).

NOTE: Some Linux packages (for example, Debian) use the OpenSSL library provided by the OS, which
may not support the zlib compression mechanism. The module will emit a warning on
startup if compression support is missing. The generic deb/rpm packages are bundled
with a zlib-enabled libssl library.

HTTPSSSLProtocol
This directive can be used to set the allowed SSL/TLS protocol(s). It takes a comma-separated list of values
which can be any of the following: SSLv2, SSLv3, TLSv1, TLSv1.1, TLSv1.2, and TLSv1.3. By default, the
TLSv1.2 and TLSv1.3 protocols are allowed. Note that the OpenSSL library shipped by Linux distributions may
not support SSLv2 and SSLv3, and these will not work even if enabled with this directive.

121.13.2. Fields
The following fields are used by im_http.

$raw_event (type: string)


The content received in the POST request.

$MessageSourceAddress (type: string)


The IP address of the remote host.

121.13.3. Examples
Old syntax examples are included; they will become invalid with NXLog EE 6.0.

Example 620. Receiving Logs over HTTPS

This configuration listens for HTTPS connections from localhost. Received log messages are written to file.

nxlog.conf
 1 <Input http>
 2 Module im_http
 3 ListenAddr 127.0.0.1:8888
 4 HTTPSCertFile %CERTDIR%/server-cert.pem
 5 HTTPSCertKeyFile %CERTDIR%/server-key.pem
 6 HTTPSCAFile %CERTDIR%/ca.pem
 7 HTTPSRequireCert TRUE
 8 HTTPSAllowUntrusted FALSE
 9 </Input>
10
11 # old syntax
12 #<Input http>
13 # Module im_http
14 # ListenAddr 127.0.0.1
15 # Port 8888
16 # HTTPSCertFile %CERTDIR%/server-cert.pem
17 # HTTPSCertKeyFile %CERTDIR%/server-key.pem
18 # HTTPSCAFile %CERTDIR%/ca.pem
19 #</Input>

Example 621. Receiving Logs over HTTPS using Certificate Thumbprints

This configuration uses the HTTPSCAThumbprint and HTTPSCertThumbprint directives for the verification
of the Certificate Authority and the SSL handshake respectively.

nxlog.conf
 1 <Input in_https>
 2 Module im_http
 3 ListenAddr 127.0.0.1:443
 4 HTTPSCAThumbprint c2c902f736d39d37fd65c458afe0180ea799e443
 5 HTTPSCertThumbprint 7c2cc5a5fb59d4f46082a510e74df17da95e2152
 6 HTTPSSSLProtocol TLSv1.2
 7 </Input>
 8
 9 # old syntax
10 #<Input in_https>
11 # Module im_http
12 # ListenAddr 127.0.0.1
13 # Port 443
14 # HTTPSCAThumbprint c2c902f736d39d37fd65c458afe0180ea799e443
15 # HTTPSCertThumbprint 7c2cc5a5fb59d4f46082a510e74df17da95e2152
16 # HTTPSSSLProtocol TLSv1.2
17 #</Input>

121.14. Internal (im_internal)


NXLog produces its own logs about its operations, including errors and debug messages. This module makes it
possible to insert those internal log messages into a route. Internal messages can also be generated from the
NXLog language using the log_info(), log_warning(), and log_error() procedures.

NOTE: Only messages with log level INFO and above are supported. Debug messages are ignored due
to technical reasons. For debugging purposes, the direct logging facility should be used: see the
global LogFile and LogLevel directives.

WARNING: One must be careful about the use of the im_internal module because it is easy to cause
message loops. For example, consider the situation where internal log messages are sent to
a database. If the database is experiencing errors which result in internal error messages,
these are again routed to the database, triggering further error messages and
resulting in a loop. To avoid resource exhaustion, the im_internal module will
drop its messages when the queue of the next module in the route is full. It is
recommended to always put the im_internal module instance in a separate route.

NOTE: If internal messages are required in Syslog format, they must be explicitly converted with
pm_transformer or the to_syslog_bsd() procedure of the xm_syslog module, because the
$raw_event field is not generated in Syslog format.

See the list of installer packages that provide the im_internal module in the Available Modules chapter of the
NXLog User Guide.

121.14.1. Configuration
The im_internal module accepts the following directive in addition to the common module directives.

LogqueueSize
This optional directive specifies the maximum number of internal log messages that can be queued by this
module. When the queue becomes full (which can happen, for example, when FlowControl is in effect), a
warning will be logged, and older queued messages will be dropped in favor of new ones. The default value
for this directive is inherited from the value of the global level LogqueueSize directive.

121.14.2. Fields
The following fields are used by im_internal.

$raw_event (type: string)


The string passed to the log_info() or other log_* procedure.

$ErrorCode (type: integer)


The error number provided by the Apache portable runtime library, if an error is logged resulting from an
operating system error.

$EventTime (type: datetime)


The current time.

$Hostname (type: string)


The hostname where the log was produced.

$Message (type: string)


The same value as $raw_event.

$ModuleName (type: string)


The name of the module instance which generated the internal log event. Not to be confused with
$SourceModuleName, which will identify the current im_internal instance.

$ModuleType (type: string)

The type of the module (such as im_file) which generated the internal log event. Not to be confused with
$SourceModuleType, which will be im_internal.

$ProcessID (type: integer)


The process ID of the NXLog process.

$Severity (type: string)


The severity name of the event.

$SeverityValue (type: integer)


Depending on the log level of the internal message, the value corresponding to "debug", "info", "warning",
"error", or "critical".

$SourceName (type: string)


Set to nxlog.

121.14.3. Examples
Example 622. Forwarding Internal Messages over Syslog UDP

This configuration collects NXLog internal messages, adds BSD Syslog headers, and forwards via UDP.

nxlog.conf
 1 <Extension syslog>
 2 Module xm_syslog
 3 </Extension>
 4
 5 <Input internal>
 6 Module im_internal
 7 </Input>
 8
 9 <Output udp>
10 Module om_udp
11 Host 192.168.1.1
12 Port 514
13 Exec to_syslog_bsd();
14 </Output>
15
16 <Route internal_to_udp>
17 Path internal => udp
18 </Route>

121.15. Java (im_java)


This module provides support for processing NXLog log data with methods written in the Java language. The Java
classes specified via the ClassPath directives may define one or more class methods which can be called from the
Run or Exec directives of this module. Such methods must be declared with the public and static modifiers in
the Java code to be accessible from NXLog, and the first parameter must be of NXLog.Logdata type. See also the
om_java and xm_java modules.

NOTE: For the system requirements, installation details, and environmental configuration requirements
of Java, see the Installing Java section in the Java documentation.

The NXLog Java class provides access to the NXLog functionality in the Java code. This class contains nested
classes Logdata and Module with log processing methods, as well as methods for sending messages to the
internal logger.

To have access to log processing methods, the public static method should accept an NXLog.Logdata or
NXLog.Module object as a parameter.

class NXLog.Logdata
This Java class provides the methods to interact with an NXLog event record object:

getField(name)
This method returns the value of the field name in the event.

setField(name, value)
This method sets the value of field name to value.

deleteField(name)
This method removes the field name from the event record.

getFieldnames()
This method returns an array with the names of all the fields currently in the event record.

getFieldtype(name)
This method retrieves the field type using the value from the name field.

post(module)
This method will submit the LogData event to NXLog for processing by the next module in the route.

class NXLog.Module
The methods below allow setting and accessing variables associated with the module instance.

logdataNew()
This method returns a new NXLog.Logdata object.

setReadTimer(delay)
This method sets a trigger for another read after a specified delay in milliseconds.

saveCtx(key,value)
This method saves user data in the module data storage using values from the key and value fields.

loadCtx(key)
This method retrieves data from the module data storage using the value from the key field.

Below is the list of methods for sending messages to the internal logger.

NXLog.logInfo(msg)
This method sends the message msg to the internal logger at INFO log level. It does the same as the core
log_info() procedure.

NXLog.logDebug(msg)
This method sends the message msg to the internal logger at DEBUG log level. It does the same as the core
log_debug() procedure.

NXLog.logWarning(msg)
This method sends the message msg to the internal logger at WARNING log level. It does the same as the
core log_warning() procedure.

NXLog.logError(msg)
This method sends the message msg to the internal logger at ERROR log level. It does the same as the core
log_error() procedure.

121.15.1. Configuration
The NXLog process maintains only one JVM instance for all running im_java, om_java, and xm_java instances. This
means all Java classes loaded by the ClassPath directive will be available to all running instances.

The im_java module accepts the following directives in addition to the common module directives.

ClassPath
This mandatory directive defines the path to the .class files or a .jar file. This directive should be defined at
least once within a module block.

VMOption
This optional directive defines a single Java Virtual Machine (JVM) option.

VMOptions
This optional block directive serves the same purpose as the VMOption directive, but also allows specifying
multiple Java Virtual Machine (JVM) options, one per line.

JavaHome
This optional directive defines the path to the Java Runtime Environment (JRE). The path is used to search for
the libjvm shared library. If this directive is not defined, the Java home directory will be set to the build-time
value. Only one JRE can be defined for one or multiple NXLog Java instances. Defining multiple JRE instances
causes an error.

Run
This mandatory directive specifies the static method which should be called, defined in a class loaded via the ClassPath directive.

121.15.2. Example of Usage


Example 623. Using the im_java Module for Processing Logs

This example parses the input, keeps only the entries which belong to the PATH type, and generates log
records line-by-line. Using NXLog facilities, these entries are divided into key-value pairs and converted to
JSON format.

The doInput method of the Input Java class is used to run the processing.

Below is the NXLog configuration.

nxlog.conf
 1 <Input javain>
 2 Module im_java
 3 # Path to the compiled class
 4 Classpath /tmp/Input.jar
 5 # Static method which will be called by the im_java module
 6 Run Input.doInput
 7 # Path to Java Runtime
 8 JavaHome /usr/lib/jvm/java-11-openjdk-amd64
 9 </Input>
10
11 <Output javaout>
12 Module om_file
13 File "/tmp/output.txt"
14 <Exec>
15 kvp->parse_kvp();
16 delete($EventReceivedTime);
17 delete($SourceModuleName);
18 delete($SourceModuleType);
19 to_json();
20 </Exec>
21 </Output>

Below is the Java class with comments.

Input.java
import java.io.BufferedReader;
import java.io.File;
import java.io.FileReader;
import java.io.IOException;
import java.nio.file.Paths;
import java.util.ArrayList;
import java.util.List;

public class Input {

    static String fileName = "/tmp/input.txt";
    static File file = Paths.get(fileName).toFile();
    static List<String> lines = null;
    static int current = 0;

    // This is a static method called by the Run directive in nxlog.conf.
    // The NXLog.Module parameter is mandatory.
    static public void doInput(NXLog.Module module) {

        if (lines == null) {
            lines = new ArrayList<>();
            try (BufferedReader br = new BufferedReader(new FileReader(file))) {
                for (String line; (line = br.readLine()) != null; ) {
                    // Check whether the entry belongs to the PATH type
                    if (line.contains("type=PATH")) {
                        lines.add(line);
                    }
                }
            } catch (IOException e) {
                e.printStackTrace();
            }
        }

        if (current >= lines.size()) {
            return;
        }
        // Create a new logdata record
        NXLog.Logdata ld = module.logdataNew();
        // Set the $raw_event field
        ld.setField("raw_event", lines.get(current));
        current++;
        // Pass the record for further processing by the next module in the route
        ld.post(module);
        // Schedule the next read call
        module.setReadTimer(1);
    }
}

Below are the log samples before and after processing.

Input Sample
type=CWD msg=audit(1489999368.711:35724): cwd="/root/nxlog"↵

type=PATH msg=audit(1489999368.711:35724): item=0 name="/root/test" inode=528869 dev=08:01
mode=040755 ouid=0 ogid=0 rdev=00:00↵

type=SYSCALL msg=audit(1489999368.711:35725): arch=c000003e syscall=2 success=yes exit=3
a0=12dcc40 a1=90800 a2=0 a3=0 items=1 ppid=15391 pid=12309 auid=0 uid=0 gid=0 euid=0 suid=0
fsuid=0 egid=0 sgid=0 fsgid=0 tty=pts4 ses=583 comm="ls" exe="/bin/ls" key=(null)↵

Output Sample
{
  "type":"PATH",
  "msg":"audit(1489999368.711:35724):",
  "item":0,"name":"/root/test",
  "inode":528869,"dev":"08:01",
  "mode":040755,"ouid":0,
  "ogid":0,
  "rdev":"00:00"
}

121.16. Kafka (im_kafka)


This module implements an Apache Kafka consumer for collecting event records from a Kafka topic. See also the
om_kafka module.

See the list of installer packages that provide the im_kafka module in the Available Modules chapter of the NXLog
User Guide.

121.16.1. Configuration
The im_kafka module accepts the following directives in addition to the common module directives. The
BrokerList and Topic directives are required.

BrokerList
This mandatory directive specifies the list of Kafka brokers to connect to for collecting logs. The list should
include ports and be comma-delimited (for example, localhost:9092,192.168.88.35:19092).

Topic
This mandatory directive specifies the Kafka topic to collect records from.

CAFile
This specifies the path of the certificate authority (CA) certificate, which will be used to check the certificate of
the remote brokers. CAFile is required if Protocol is set to ssl. To trust a self-signed certificate presented by
the remote (which is not signed by a CA), provide that certificate instead.

CertFile
This specifies the path of the certificate file to be used for the SSL handshake.

CertKeyFile
This specifies the path of the certificate key file to be used for the SSL handshake.

KeyPass
With this directive, a password can be supplied for the certificate key file defined in CertKeyFile. This directive
is not needed for passwordless private keys.

Option
This directive can be used to pass a custom configuration property to the Kafka library (librdkafka). For
example, the group ID string can be set with Option group.id mygroup. This directive may be used more
than once to specify multiple options. For a list of configuration properties, see the librdkafka
CONFIGURATION.md file.

WARNING: Passing librdkafka configuration properties via the Option directive should be done with
care, since these properties are used for fine-tuning librdkafka performance
and may result in various side effects.
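
As a sketch of the Option directive, the consumer group ID mentioned above can be set as follows (broker and topic are illustrative):

```
<Input kafka>
    Module      im_kafka
    BrokerList  localhost:9092
    Topic       nxlog
    # Passed through to librdkafka as the group.id property
    Option      group.id mygroup
</Input>
```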

Partition
This optional integer directive specifies the topic partition to read from. If this directive is not given, messages
are collected from partition 0.

Protocol
This optional directive specifies the protocol to use for connecting to the Kafka brokers. Accepted values
include plaintext (the default) and ssl. If Protocol is set to ssl, then the CAFile directive must also be
provided.

121.16.2. Examples
Example 624. Using the im_kafka Module

This configuration collects events from a Kafka cluster using the brokers specified. Events are read from the
first partition of the nxlog topic.

nxlog.conf
 1 <Input in>
 2 Module im_kafka
 3 BrokerList localhost:9092,192.168.88.35:19092
 4 Topic nxlog
 5 Partition 0
 6 Protocol ssl
 7 CAFile /root/ssl/ca-cert
 8 CertFile /root/ssl/client_debian-8.pem
 9 CertKeyFile /root/ssl/client_debian-8.key
10 KeyPass thisisasecret
11 </Input>

121.17. Kernel (im_kernel)


This module collects kernel log messages from the kernel log buffer. This module works on Linux, the BSDs, and
macOS.

WARNING: In order for NXLog to read logs from the kernel buffer, it may be necessary to disable the
system logger (systemd, klogd, or logd) or configure it to not read events from the kernel.

Special privileges are required for reading kernel logs. For this, NXLog needs to be started as root. With the User
and Group global directives, NXLog can then drop its root privileges while keeping the CAP_SYS_ADMIN capability
for reading the kernel log buffer.

NOTE: Unfortunately, it is not possible for an unprivileged process to read from the /proc/kmsg pseudo
file, even if the CAP_SYS_ADMIN capability is kept. For this reason, the /proc/kmsg interface
is not supported by the im_kernel module. The im_file module should work fine with the
/proc/kmsg pseudo file if one wishes to collect kernel logs this way, though this will require
NXLog to be running as root.

Log Sample
<6>Some message from the kernel.↵

Kernel messages are valid BSD Syslog messages, with a priority from 0 (emerg) to 7 (debug), but do not contain
timestamp and hostname fields. These can be parsed with the xm_syslog parse_syslog_bsd() procedure, and the
timestamp and hostname fields will be added by NXLog.
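
The parsing step described above can be sketched as follows; parse_syslog_bsd() extracts the priority and fills in the missing timestamp and hostname fields:

```
<Extension syslog>
    Module  xm_syslog
</Extension>

<Input kernel>
    Module  im_kernel
    Exec    parse_syslog_bsd();
</Input>
```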

See the list of installer packages that provide the im_kernel module in the Available Modules chapter of the NXLog
User Guide.

121.17.1. Configuration
The im_kernel module accepts the following directives in addition to the common module directives.

DeviceFile
This directive sets the device file from which to read events, for non-Linux platforms. If this directive is not
specified, the default is /dev/klog.

PollInterval
This directive specifies how frequently the module will check for new events, in seconds, on Linux. If this
directive is not specified, the default is 1 second. Fractional seconds may be specified (PollInterval 0.5 will
check twice every second).

121.17.2. Examples
Example 625. Reading Messages From the Kernel

This configuration collects log messages from the kernel and writes them to file. This should work on Linux,
the BSDs, and macOS (but the system logger may need to be disabled or reconfigured).

nxlog.conf
# Drop privileges after being started as root
User nxlog
Group nxlog

<Input kernel>
    Module im_kernel
</Input>

<Output file>
    Module om_file
    File "tmp/output"
</Output>

121.18. Linux Audit System (im_linuxaudit)


With this module, NXLog can set up Audit rules and collect the resulting logs directly from the kernel without
requiring auditd or other userspace software. If the auditd service is installed, it must not be running.

Rules must be provided using at least one of the LoadRule and Rules directives. Rules should be specified using
the format documented in the Defining Persistent Audit Rules section of the Red Hat Enterprise Linux Security
Guide.

The -e control rule should be included in the ruleset to enable the Audit system (as -e 1 or -e 2). Rules are not
automatically removed, either before applying a ruleset or when NXLog exits. To clear the current ruleset before
setting rules, begin the ruleset with the -D rule. If the Audit configuration is locked when im_linuxaudit starts,
NXLog will print a warning and collect events generated by the active ruleset.

WARNING: It is recommended that FlowControl be disabled for im_linuxaudit module instances. If the
im_linuxaudit module instance is suspended and the Audit backlog limit is exceeded, all
processes that generate Audit messages will be blocked.

See the list of installer packages that provide the im_linuxaudit module in the Available Modules chapter of the
NXLog User Guide.

121.18.1. Configuration
The im_linuxaudit module accepts the following directives in addition to the common module directives. At least
one of LoadRule and Rules must be specified.

LoadRule
Use this directive to load a ruleset from an external rules file. This directive can be used more than once.
Wildcards can be used to read rules from multiple files.

Rules
This directive, specified as a block, can be used to provide Audit rules directly from the NXLog configuration
file. The following control rules are supported: -b, -D, -e, -f, -r, --loginuid-immutable,
--backlog_wait_time, and --reset-lost; see auditctl(8) for more information.

Include
This directive can be used inside a Rules block to read rules from a separate file. Like the LoadRule
directive, wildcards are supported.
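As an illustrative sketch (the rules file path is hypothetical), the Include directive can be combined with inline control rules inside a Rules block:

<Input audit>
    Module im_linuxaudit
    FlowControl FALSE
    <Rules>
        # Clear the current ruleset and enable the Audit system
        -D
        -e 1
        # Load additional rules from an external file (hypothetical path)
        Include /etc/nxlog/audit-extra.rules
    </Rules>
</Input>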

LockConfig
If this boolean directive is set to TRUE, NXLog will lock the Audit system configuration after the rules have
been set. It will not be possible to modify the Audit configuration until after a reboot. The default is FALSE: the
Audit configuration will not be locked.

121.18.2. Fields
The following fields are used by im_linuxaudit.

$a0 (type: string)

The first argument of the system call, encoded in hexadecimal notation.

$a1 (type: string)

The second argument of the system call, encoded in hexadecimal notation.

$a2 (type: string)

The third argument of the system call, encoded in hexadecimal notation.

$a3 (type: string)

The fourth argument of the system call, encoded in hexadecimal notation.

$acct (type: string)
A user’s account name.

$addr (type: string)


The IPv4 or IPv6 address. This field usually follows a hostname field and contains the address the host name
resolves to.

$arch (type: string)


Information about the CPU architecture of the system, encoded in hexadecimal notation.

$auid (type: integer)


The Audit user ID. This ID is assigned to a user upon login and is inherited by every process even when the
user’s identity changes (for example, by switching user accounts with su - john).

$cap_fi (type: string)


Data related to the setting of an inherited file system-based capability.

$cap_fp (type: string)


Data related to the setting of a permitted file system-based capability.

$cap_pe (type: string)


Data related to the setting of an effective process-based capability.

$cap_pi (type: string)


Data related to the setting of an inherited process-based capability.

$cap_pp (type: string)


Data related to the setting of a permitted process-based capability.

$capability (type: integer)


The number of bits that were used to set a particular Linux capability. For more information on Linux
capabilities, see the capabilities(7) man page.

$cgroup (type: string)


The path to the cgroup that contains the process at the time the Audit event was generated.

$cmd (type: string)


The entire command line that is executed. This is useful in case of shell interpreters where the exe field
records, for example, /bin/bash as the shell interpreter and the cmd field records the rest of the command
line that is executed, for example helloworld.sh --help.

$comm (type: string)


The command that is executed. This is useful in case of shell interpreters where the exe field records, for
example, /bin/bash as the shell interpreter and the comm field records the name of the script that is
executed, for example helloworld.sh.

$cwd (type: string)


The path to the directory in which a system call was invoked.

$data (type: string)


Data associated with TTY records.

$dev (type: string)

The minor and major ID of the device that contains the file or directory recorded in an event.

$devmajor (type: string)


The major device ID.

$devminor (type: string)


The minor device ID.

$egid (type: integer)


The effective group ID of the user who started the analyzed process.

$euid (type: integer)


The effective user ID of the user who started the analyzed process.

$exe (type: string)


The path to the executable that was used to invoke the analyzed process.

$exit (type: integer)


The exit code returned by a system call. This value varies by system call. You can interpret the value to its
human-readable equivalent with the following command: ausearch --interpret --exit exit_code

$family (type: string)


The type of address protocol that was used, either IPv4 or IPv6.

$filetype (type: string)


The type of the file.

$flags (type: integer)


The file system name flags.

$fsgid (type: integer)


The file system group ID of the user who started the analyzed process.

$fsuid (type: integer)


The file system user ID of the user who started the analyzed process.

$gid (type: integer)


The group ID.

$hostname (type: string)


The host name.

$icmptype (type: string)


The type of an Internet Control Message Protocol (ICMP) packet that is received. Audit messages containing
this field are usually generated by iptables.

$id (type: integer)


The user ID of an account that was changed.

$inode (type: integer)


The inode number associated with the file or directory recorded in an Audit event.

$inode_gid (type: integer)

The group ID of the inode’s owner.

$inode_uid (type: integer)


The user ID of the inode’s owner.

$items (type: integer)


The number of path records that are attached to this record.

$key (type: string)


The user defined string associated with a rule that generated a particular event in the Audit log.

$list (type: string)


The Audit rule list ID. The following is a list of known IDs: 0 (user), 1 (task), 4 (exit), 5 (exclude).

$mode (type: string)


The file or directory permissions, encoded in numerical notation.

$msg (type: string)


A time stamp and a unique ID of a record, or various event-specific <name>=<value> pairs provided by the
kernel or user space applications.

$msgtype (type: string)


The message type that is returned in case of a user-based AVC denial. The message type is determined by
D-Bus.

$name (type: string)


The full path of the file or directory that was passed to the system call as an argument.

$new-disk (type: string)


The name of a new disk resource that is assigned to a virtual machine.

$new-mem (type: integer)


The amount of a new memory resource that is assigned to a virtual machine.

$new-net (type: string)


The MAC address of a new network interface resource that is assigned to a virtual machine.

$new-vcpu (type: integer)


The number of a new virtual CPU resource that is assigned to a virtual machine.

$new_gid (type: integer)


A group ID that is assigned to a user.

$oauid (type: integer)


The user ID of the user that has logged in to access the system (as opposed to, for example, using su) and has
started the target process. This field is exclusive to the record of type OBJ_PID.

$obj (type: string)


The SELinux context of an object. An object can be a file, a directory, a socket, or anything that is receiving the
action of a subject.

$obj_gid (type: integer)


The group ID of an object.

$obj_lev_high (type: string)
The high SELinux level of an object.

$obj_lev_low (type: string)


The low SELinux level of an object.

$obj_role (type: string)


The SELinux role of an object.

$obj_uid (type: integer)


The UID of an object.

$obj_user (type: string)


The user that is associated with an object.

$ocomm (type: string)


The command that was used to start the target process. This field is exclusive to the record of type OBJ_PID.

$ogid (type: integer)


The object owner’s group ID.

$old-disk (type: string)


The name of an old disk resource when a new disk resource is assigned to a virtual machine.

$old-mem (type: integer)


The amount of an old memory resource when a new amount of memory is assigned to a virtual machine.

$old-net (type: string)


The MAC address of an old network interface resource when a new network interface is assigned to a virtual
machine.

$old-vcpu (type: integer)


The number of an old virtual CPU resource when a new virtual CPU is assigned to a virtual machine.

$old_prom (type: integer)


The previous value of the network promiscuity flag.

$opid (type: integer)


The process ID of the target process. This field is exclusive to the record of type OBJ_PID.

$oses (type: string)


The session ID of the target process. This field is exclusive to the record of type OBJ_PID.

$ouid (type: integer)


Records the real user ID of the user who started the target process.

$path (type: string)


The full path of the file or directory that was passed to the system call as an argument in case of AVC-related
Audit events.

$perm (type: string)


The file permission that was used to generate an event (that is, read, write, execute, or attribute change).

$pid (type: integer)
The pid field semantics depend on the origin of the value in this field. In fields generated from user-space,
this field holds a process ID. In fields generated by the kernel, this field holds a thread ID. The thread ID is
equal to process ID for single-threaded processes. Note that the value of this thread ID is different from the
values of pthread_t IDs used in user-space. For more information, see the gettid(2) man page.

$ppid (type: integer)


The Parent Process ID (PID).

$prom (type: string)


The network promiscuity flag.

$proto (type: string)


The networking protocol that was used. This field is specific to Audit events generated by iptables.

$res (type: string)


The result of the operation that triggered the Audit event.

$result (type: string)


The result of the operation that triggered the Audit event.

$saddr (type: string)


The socket address.

$sauid (type: integer)


The sender Audit login user ID. This ID is provided by D-Bus as the kernel is unable to see which user is
sending the original auid.

$ses (type: string)


The session ID of the session from which the analyzed process was invoked.

$sgid (type: integer)


The set group ID of the user who started the analyzed process.

$sig (type: string)


The number of a signal that causes a program to end abnormally. Usually, this is a sign of a system intrusion.

$subj (type: string)


The SELinux context of a subject. A subject can be a process, a user, or anything that is acting upon an object.

$subj_clr (type: string)


The SELinux clearance of a subject.

$subj_role (type: string)


The SELinux role of a subject.

$subj_sen (type: string)


The SELinux sensitivity of a subject.

$subj_user (type: string)


The user that is associated with a subject.

$success (type: string)
Whether a system call was successful or failed.

$suid (type: integer)


The set user ID of the user who started the analyzed process.

$syscall (type: string)


The type of the system call that was sent to the kernel.

$terminal (type: string)


The terminal name (without /dev/).

$tty (type: string)


The name of the controlling terminal. The value (none) is used if the process has no controlling terminal.

$uid (type: integer)


The real user ID of the user who started the analyzed process.

$vm (type: string)


The name of a virtual machine from which the Audit event originated.

121.18.3. Examples
Example 626. Collecting Audit Logs With LoadRule Directive

This configuration uses a set of external rule files to configure the Audit system.

nxlog.conf
<Input audit>
    Module im_linuxaudit
    FlowControl FALSE
    LoadRule 'im_linuxaudit_*.rules'
</Input>

Example 627. Collecting Audit Logs With Rules Block

This configuration lists the rules inside the NXLog configuration file instead of using a separate Audit rules
file.

nxlog.conf
<Input audit>
    Module im_linuxaudit
    FlowControl FALSE
    <Rules>
        # Watch /etc/passwd for modifications and tag with 'passwd'
        -w /etc/passwd -p wa -k passwd
    </Rules>
</Input>

121.19. Mark (im_mark)


Mark messages are used to indicate periodic activity to assure that the logger is running when there are no log
messages coming in from other sources.

By default, if no module-specific directives are set, a log message will be generated every 30 minutes containing
-- MARK --.

NOTE: The $raw_event field is not generated in Syslog format. If mark messages are required in Syslog
format, they must be explicitly converted with the to_syslog_bsd() procedure.

NOTE: The functionality of the im_mark module can also be achieved using the Schedule block with a
log_info("--MARK--") Exec statement, which would insert the messages via the im_internal
module into a route. Using a single module for this task can simplify configuration.

See the list of installer packages that provide the im_mark module in the Available Modules chapter of the NXLog
User Guide.

121.19.1. Configuration
The im_mark module accepts the following directives in addition to the common module directives.

Mark
This optional directive sets the string for the mark message. The default is -- MARK --.

MarkInterval
This optional directive sets the interval for mark messages, in minutes. The default is 30 minutes.

121.19.2. Fields
The following fields are used by im_mark.

$raw_event (type: string)


The value defined by the Mark directive, -- MARK -- by default.

$EventTime (type: datetime)


The current time.

$Message (type: string)


The same value as $raw_event.

$ProcessID (type: integer)


The process ID of the NXLog process.

$Severity (type: string)


The severity name: INFO.

$SeverityValue (type: integer)


The INFO severity level value: 2.

$SourceName (type: string)


Set to nxlog.

121.19.3. Examples

Example 628. Using the im_mark Module

Here, NXLog will write the specified string to file every minute.

nxlog.conf
<Input mark>
    Module im_mark
    MarkInterval 1
    Mark -=| MARK |=-
</Input>

<Output file>
    Module om_file
    File "tmp/output"
</Output>

<Route mark_to_file>
    Path mark => file
</Route>

121.20. EventLog for Windows XP/2000/2003 (im_mseventlog)


This module can be used to collect EventLog messages on Microsoft Windows platforms. The module looks up
the available EventLog sources stored under the registry key SYSTEM\CurrentControlSet\Services\Eventlog
and polls logs from each of these sources or only the sources defined with the Sources directive.

NOTE: Windows Vista, Windows 2008, and later use a new EventLog API which is not backward
compatible. Messages in some events produced by sources in this new format cannot be
resolved with the old API which is used by this module. If such an event is encountered, a
$Message similar to the following will be set: The description for EventID XXXX from
source SOURCE cannot be read by im_mseventlog because this does not support
the newer WIN2008/Vista EventLog API. Consider using the im_msvistalog module
instead.

Though the majority of event messages can be read with this module even on Windows
2008/Vista and later, it is recommended to use the im_msvistalog module instead.

NOTE: Strings are stored in DLL and executable files and need to be read by the module when reading
EventLog messages. If a program (DLL/EXE) is already uninstalled and is not available for looking
up a string, the following message will appear instead:

The description for EventID XXXX from source SOURCE cannot be found.

See the list of installer packages that provide the im_mseventlog module in the Available Modules chapter of the
NXLog User Guide.

121.20.1. Configuration
The im_mseventlog module accepts the following directives in addition to the common module directives.

ReadFromLast
This optional boolean directive instructs the module to only read logs which arrived after NXLog was started if
the saved position could not be read (for example on first start). When SavePos is TRUE and a previously
saved position value could be read, the module will resume reading from this saved position. If
ReadFromLast is FALSE, the module will read all logs from the EventLog. This can result in quite a lot of
messages, and is usually not the expected behavior. If this directive is not specified, it defaults to TRUE.

SavePos
This boolean directive specifies that the file position should be saved when NXLog exits. The file position will
be read from the cache file upon startup. The default is TRUE: the file position will be saved if this directive is
not specified. Even if SavePos is enabled, it can be explicitly turned off with the global NoCache directive.

Sources
This optional directive takes a comma-separated list of EventLog filenames, such as Security,
Application, to select specific EventLog sources for reading. If this directive is not specified, then all
available EventLog sources are read (as listed in the registry). This directive should not be confused with the
$SourceName field contained within the EventLog; it is not a list of such names. The value is stored
in the $FileName field.

UTF8
If this optional boolean directive is set to TRUE, all strings will be converted to UTF-8 encoding. Internally this
calls the convert_fields procedure. The xm_charconv module must be loaded for the character set conversion
to work. The default is TRUE, but conversion will only occur if the xm_charconv module is loaded, otherwise
strings will be in the local codepage.
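As a hedged sketch, the Sources and UTF8 directives might be combined like this, with xm_charconv loaded so the character set conversion can take place:

<Extension charconv>
    Module xm_charconv
</Extension>

<Input eventlog>
    Module im_mseventlog
    Sources Security, Application
    UTF8    TRUE
</Input>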

121.20.2. Fields
The following fields are used by im_mseventlog.

$raw_event (type: string)


A string containing the $EventTime, $Hostname, $Severity, and $Message from the event.

$AccountName (type: string)


The username associated with the event.

$AccountType (type: string)


The type of the account. Possible values are: User, Group, Domain, Alias, Well Known Group, Deleted
Account, Invalid, Unknown, and Computer.

$Category (type: string)


The category name resolved from CategoryNumber.

$CategoryNumber (type: integer)


The category number, stored as Category in the EventRecord.

$Domain (type: string)


The domain name of the user.

$EventID (type: integer)


The event ID of the EventRecord.

$EventTime (type: datetime)


The TimeGenerated field of the EventRecord.

$EventTimeWritten (type: datetime)


The TimeWritten field of the EventRecord.

$EventType (type: string)


The type of the event, which is a string describing the severity. Possible values are: ERROR, AUDIT_FAILURE,
AUDIT_SUCCESS, INFO, WARNING, and UNKNOWN.

$FileName (type: string)


The logfile source of the event (for example, Security or Application).

$Hostname (type: string)


The host or computer name field of the EventRecord.

$Message (type: string)


The message from the event.

$RecordNumber (type: integer)


The number of the event record.

$Severity (type: string)


The normalized severity name of the event. See $SeverityValue.

$SeverityValue (type: integer)


The normalized severity number of the event, mapped as follows.

Event Log Severity    Normalized Severity
0/Audit Success       2/INFO
0/Audit Failure       4/ERROR
1/Critical            5/CRITICAL
2/Error               4/ERROR
3/Warning             3/WARNING
4/Information         2/INFO
5/Verbose             1/DEBUG

$SourceName (type: string)


The event source which produced the event (the subsystem or application name).

121.20.3. Examples

Example 629. Forwarding EventLogs from a Windows Machine to a Remote Host

This configuration collects Windows EventLog and forwards the messages to a remote host via TCP.

nxlog.conf
<Input eventlog>
    Module im_mseventlog
</Input>

<Output tcp>
    Module om_tcp
    Host 192.168.1.1
    Port 514
</Output>

<Route eventlog_to_tcp>
    Path eventlog => tcp
</Route>

121.21. EventLog for Windows 2008/Vista and Later (im_msvistalog)

This module can be used to collect EventLog messages on Microsoft Windows platforms which support the
newer EventLog API (also known as the Crimson EventLog subsystem), namely Windows 2008/Vista and later. See
the official Microsoft documentation about Event Logs. The module supports reading all System, Application, and
Custom events. It looks up the available channels and monitors events in each unless the Query and Channel
directives are explicitly defined. Event logs can be collected from remote servers over MSRPC.

NOTE: For Windows 2003 and earlier, use the im_mseventlog module because the new Windows Event
Log API is only available in Windows Vista, Windows 2008, and later.

NOTE: Use the im_etw module to collect Analytic and Debug logs, as the Windows Event Log subsystem,
which im_msvistalog uses, does not support subscriptions to Debug or Analytic channels.

In addition to the standard set of fields which are listed under the System section, event providers can define
their own additional schema which enables logging additional data under the EventData section. The Security log
makes use of this new feature and such additional fields can be seen as in the following XML snippet:

<EventData>
  <Data Name="SubjectUserSid">S-1-5-18</Data>
  <Data Name="SubjectUserName">WIN-OUNNPISDHIG$</Data>
  <Data Name="SubjectDomainName">WORKGROUP</Data>
  <Data Name="SubjectLogonId">0x3e7</Data>
  <Data Name="TargetUserSid">S-1-5-18</Data>
  <Data Name="TargetUserName">SYSTEM</Data>
  <Data Name="TargetDomainName">NT AUTHORITY</Data>
  <Data Name="TargetLogonId">0x3e7</Data>
  <Data Name="LogonType">5</Data>
  <Data Name="LogonProcessName">Advapi</Data>
  <Data Name="AuthenticationPackageName">Negotiate</Data>
  <Data Name="WorkstationName" />
  <Data Name="LogonGuid">{00000000-0000-0000-0000-000000000000}</Data>
  <Data Name="TransmittedServices">-</Data>
  <Data Name="LmPackageName">-</Data>
  <Data Name="KeyLength">0</Data>
  <Data Name="ProcessId">0x1dc</Data>
  <Data Name="ProcessName">C:\Windows\System32\services.exe</Data>
  <Data Name="IpAddress">-</Data>
  <Data Name="IpPort">-</Data>
</EventData>

NXLog can extract this data when fields are logged using this schema. The values will be available in the fields of
the internal NXLog log structure. This is especially useful because there is no need to write pattern matching
rules to extract this data from the message. These fields can be used in filtering rules, be written into SQL tables,
or be used to trigger actions. The Exec directive can be used for filtering:

<Input in>
    Module im_msvistalog
    Exec if ($TargetUserName == 'SYSTEM') OR \
            ($EventType == 'VERBOSE') drop();
</Input>

See the list of installer packages that provide the im_msvistalog module in the Available Modules chapter of the
NXLog User Guide.

121.21.1. Configuration
The im_msvistalog module accepts the following directives in addition to the common module directives.

AddPrefix
If this boolean directive is set to TRUE, names of fields parsed from the <EventData> portion of the event
XML will be prefixed with EventData.. For example, $EventData.SubjectUserName will be added to the
event record instead of $SubjectUserName. The same applies to <UserData>. This directive defaults to
FALSE: field names will not be prefixed.

ReadBatchSize
This optional directive can be used to specify the number of event records the EventLog API will pass to the
module for processing. Larger sizes may increase throughput. Note that there is a known issue in the
Windows EventLog subsystem: when this value is higher than 31 it may fail to retrieve some events on busy
systems, returning the error "EvtNext failed with error 1734: The array bounds are invalid." For this reason,
increasing this value is not recommended. The default is 31.

CaptureEventXML
This boolean directive defines whether the module should store raw XML-formatted event data. If set to
TRUE, the module stores raw XML data in the $EventXML field. By default, the value is set to FALSE, and the
$EventXML field is not added to the record.

Channel
The name of the Channel to query. If not specified, the module will read from all sources defined in the
registry. See the MSDN documentation about Event Selection.

File
This optional directive can be used to specify a full path to a log file. Log file types that can be used have the
following extensions: .evt, .evtx, and .etl. The path of the file must not be quoted (as opposed to im_file
and om_file). If the File directive is specified, the SavePos directive will be overridden to TRUE. The File
directive can be specified multiple times to read from multiple files. This module finds files only when the
module instance is started; any files added later will not be read until it is restarted. If the log file specified by
this directive is updated with new event records while NXLog is running (the file size or modification date
attribute changes), the module detects the newly appended records on the fly without requiring the module
instance to be restarted. Reading an EventLog file directly is mostly useful for forensics purposes. The System
log would be read directly with the following:

File C:\Windows\System32\winevt\Logs\System.evtx

You can use wildcards to specify file names and directories. Wildcards are not regular expressions, but are
patterns commonly used by Unix shells to expand filenames (also known as "globbing").

?
Matches any single character.

*
Matches any string, including the empty string.

\*
Matches the asterisk (*) character.

\?
Matches the question mark (?) character.

[…]
Matches one character specified within the brackets. The brackets should contain a single character (for
example, [a]) or a range of characters ([a-z]). If the first character in the brackets is ^ or !, it reverses the
wildcard matching logic (the wildcard matches any character not in the brackets). The backslash (\)
characters are ignored and should be used to escape ] and - characters as well as ^ and ! at the
beginning of the pathname.
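Applying these wildcard rules, a sketch that reads a set of archived logs might look like the following (the archive file names are illustrative):

<Input archive>
    Module im_msvistalog
    File C:\Windows\System32\winevt\Logs\Archive-Security-*.evtx
</Input>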

Language
This optional directive specifies a language to use for rendering the events. The language should be given as a
hyphen-separated language/region code (for example, fr-FR for French). Note that the required language
support must be installed on the system. If this directive is not given, the system’s default locale is used.

PollInterval
This directive specifies how frequently the module will check for new events, in seconds. If this directive is not
specified, the default is 1 second. Fractional seconds may be specified (PollInterval 0.5 will check twice
every second).

Query
This directive specifies the query for pulling only specific EventLog sources. See the MSDN documentation
about Event Selection. Note that this directive requires a single-line parameter, so multi-line query XML
should be specified using line continuation:

Query <QueryList> \
          <Query Id='1'> \
              <Select Path='Security'>*[System/Level=4]</Select> \
          </Query> \
      </QueryList>

When the Query contains an XPath style expression, the Channel must also be specified. Otherwise if an XML
Query is specified, the Channel should not be used.
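For example, an XPath-style expression paired with an explicit Channel might be written as follows (a sketch, not taken from the official examples):

<Input security_warnings>
    Module im_msvistalog
    Channel Security
    Query   *[System/Level=4]
</Input>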

QueryXML
This directive is the same as the Query directive above, except it can be used as a block. Multi-line XML
queries can be used without line continuation, and the XML Query can be copied directly from Event Viewer.

<QueryXML>
    <QueryList>
        <!-- XML-style comments can
             span multiple lines in
             QueryXML blocks like this.
        -->
        <Query Id='1'>
            <Select Path='Security'>*[System/Level=4]</Select>
        </Query>
    </QueryList>
</QueryXML>

CAUTION: Commenting with the # mark does not work within multi-line Query directives or QueryXML
blocks. In this case, use XML-style comments <!-- --> as shown in the example above.
Failure to follow this syntax for comments within queries will render the module instance
useless. Since NXLog does not parse the content of QueryXML blocks, this behavior is
expected.

ReadFromLast
This optional boolean directive instructs the module to only read logs which arrived after NXLog was started if
the saved position could not be read (for example on first start). When SavePos is TRUE and a previously
saved position value could be read, the module will resume reading from this saved position. If
ReadFromLast is FALSE, the module will read all logs from the EventLog. This can result in quite a lot of
messages, and is usually not the expected behavior. If this directive is not specified, it defaults to TRUE.

RemoteAuthMethod
This optional directive specifies the authentication method to use. Available values are Default, Negotiate,
Kerberos, and NTLM. When the directive is not specified, Default is used, which is actually Negotiate.

RemoteDomain
Domain of the user used for authentication when logging on the remote server to collect event logs.

RemotePassword
Password of the user used for authentication when logging on the remote server to collect event logs.

RemoteServer
This optional directive specifies the name of the remote server to collect event logs from. If not specified, the
module will collect locally.

RemoteUser
Name of the user used for authentication when logging on the remote server to collect event logs.
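Taken together, the Remote* directives might be used as in the following sketch, where the server name and credentials are placeholders:

<Input remote_eventlog>
    Module im_msvistalog
    RemoteServer     server.example.com
    RemoteAuthMethod Negotiate
    RemoteDomain     EXAMPLE
    RemoteUser       logcollector
    RemotePassword   secret
</Input>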

ResolveGUID
This optional boolean directive specifies that GUID values should be resolved to their object names in the
$Message field. If ResolveGUID is set to TRUE, it produces two output fields: one that retains the
non-resolved form of the GUID, and another containing the resolved object name. To differentiate
the two output fields, the resolved field name will have the DN suffix added to it. If a field already exists with
the same name, the resolved field will not be added and the original is preserved. The default setting is FALSE;
the module will not resolve GUID values. Windows Event Viewer shows the Message with the GUID values
resolved, and this directive must be enabled to get the same output with NXLog.

ResolveSID
This optional boolean directive specifies that SID values should be resolved to user names in the $Message
field. If ResolveSID is set to TRUE, it produces two output fields: one that retains the non-resolved form of
the SID, and another containing the resolved user name. To differentiate the two output
fields, the resolved field name will have the Name suffix added to it. If a field already exists with the same
name, the resolved field will not be added and the original is preserved. The default setting is FALSE; the
module will not resolve SID values. Windows Event Viewer shows the Message with the SID values resolved,
and this directive must be enabled to get the same output with NXLog.

SavePos
This boolean directive specifies that the file position should be saved when NXLog exits. The file position will
be read from the cache file upon startup. The default is TRUE: the file position is saved if this directive is not
specified. Even if SavePos is enabled, it can be explicitly turned off with the global NoCache directive.

TolerateQueryErrors
This boolean directive specifies that im_msvistalog should ignore any invalid sources in the query. The default
is FALSE: im_msvistalog will fail to start if any source is invalid.

121.21.2. Fields
The following fields are used by im_msvistalog.

$raw_event (type: string)


A string containing the $EventTime, $Hostname, $Severity, $EventID, and $Message from the event.

$AccountName (type: string)


The username associated with the event.

$AccountType (type: string)


The type of the account. Possible values are: User, Group, Domain, Alias, Well Known Group, Deleted
Account, Invalid, Unknown, and Computer.

$ActivityID (type: string)


A globally unique identifier for the current activity, as stored in EvtSystemActivityID.

$Category (type: string)


The category name resolved from Task.

$Channel (type: string)


The Channel of the event source (for example, Security or Application).

$Domain (type: string)


The domain name of the user.

$ERROR_EVT_UNRESOLVED (type: boolean)


This field is set to TRUE if the event message cannot be resolved and the insertion strings are not present.

$EventID (type: integer)
The event ID (specific to the event source) from the EvtSystemEventID field.

$EventTime (type: datetime)


The EvtSystemTimeCreated field.

$EventType (type: string)


The type of the event, which is a string describing the severity. This is translated to its string representation
from EvtSystemLevel. Possible values are: CRITICAL, ERROR, AUDIT_FAILURE, AUDIT_SUCCESS, INFO, WARNING,
and VERBOSE.

$EventXML (type: string)


The raw event data in XML format. This field is available if the module’s CaptureEventXML directive is set to
TRUE.

$ExecutionProcessID (type: integer)


The process identifier of the event producer as in EvtSystemProcessID.

$Hostname (type: string)


The EvtSystemComputer field.

$Keywords (type: string)


The value of the Keywords field from EvtSystemKeywords.

$Message (type: string)


The message from the event.

$Opcode (type: string)


The Opcode string resolved from OpcodeValue.

$OpcodeValue (type: integer)


The Opcode number of the event as in EvtSystemOpcode.

$ProviderGuid (type: string)


The globally unique identifier of the event’s provider as stored in EvtSystemProviderGuid. This corresponds to
the name of the provider in the $SourceName field.

$RecordNumber (type: integer)


The number of the event record.

$RelatedActivityID (type: string)


The RelatedActivityID as stored in EvtSystemRelatedActivityID.

$Severity (type: string)


The normalized severity name of the event. See $SeverityValue.

$SeverityValue (type: integer)


The normalized severity number of the event, mapped as follows.

Event Log Severity    Normalized Severity
0/Audit Success       2/INFO
0/Audit Failure       4/ERROR
1/Critical            5/CRITICAL
2/Error               4/ERROR
3/Warning             3/WARNING
4/Information         2/INFO
5/Verbose             1/DEBUG
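Because the severity is normalized, events can be filtered uniformly on $SeverityValue regardless of the
original Event Log level. As an illustrative sketch (not one of the official examples), the following Exec
drops everything below WARNING:

nxlog.conf
<Input eventlog>
    Module  im_msvistalog
    Exec    if $SeverityValue < 3 drop();
</Input>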

$SourceName (type: string)


The event source which produced the event, from the EvtSystemProviderName field.

$TaskValue (type: integer)


The task number from the EvtSystemTask field.

$ThreadID (type: integer)


The thread identifier of the event producer as in EvtSystemThreadID.

$UserID (type: string)


The Security Identifier (SID) which resolves to $AccountName, stored in EvtSystemUserID.

$Version (type: integer)


The Version number of the event as in EvtSystemVersion.

121.21.3. Examples
NOTE: Due to a bug or limitation of the Windows Event Log API, 23 or more clauses in a query will result in
a failure with the following error message: ERROR failed to subscribe to msvistalog events, the Query
is invalid: This operator is unsupported by this implementation of the filter.; [error code: 15001]

Example 630. Forwarding Windows EventLog from Windows to a Remote Host in Syslog Format

This configuration collects Windows EventLog with the specified query. BSD Syslog headers are added and
the messages are forwarded to a remote host via TCP.

nxlog.conf
<Extension syslog>
    Module  xm_syslog
</Extension>

<Input eventlog>
    Module  im_msvistalog
    <QueryXML>
        <QueryList>
            <Query Id='0'>
                <Select Path='Application'>*</Select>
                <Select Path='Security'>*[System/Level&lt;4]</Select>
                <Select Path='System'>*</Select>
            </Query>
        </QueryList>
    </QueryXML>
</Input>

<Output tcp>
    Module  om_tcp
    Host    192.168.1.1
    Port    514
    Exec    to_syslog_bsd();
</Output>

<Route eventlog_to_tcp>
    Path    eventlog => tcp
</Route>

121.22. Null (im_null)


This module does not generate any input; by itself it does nothing. Nevertheless, it can be useful for
creating a dummy route, for testing purposes, or for Scheduled NXLog code execution. The im_null module
accepts only the common module directives. See this example for usage.

See the list of installer packages that provide the im_null module in the Available Modules chapter of the NXLog
User Guide.
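As an illustrative sketch of scheduled code execution, an im_null instance can carry a Schedule block whose
Exec runs periodically even though the module reads no input (the one-minute interval and the log message
are arbitrary):

nxlog.conf
<Input null>
    Module  im_null
    <Schedule>
        Every  1 min
        Exec   log_info("scheduled check");
    </Schedule>
</Input>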

121.23. Oracle OCI (im_oci)


This module can read input from an Oracle database.

WARNING This module is deprecated, please use the im_odbc module instead.

121.23.1. Configuration
The im_oci module accepts the following directives in addition to the common module directives. The DBname,
Password, and UserName directives are required.

DBname
Name of the database to read the logs from.

Password
Password for authenticating to the database server.

UserName
Username for authenticating to the database server.

ORACLE_HOME
This optional directive specifies the directory of the Oracle installation.

SavePos
This boolean directive specifies that the last row ID should be saved when NXLog exits. The row ID will be
read from the cache file upon startup. The default is TRUE: the row ID is saved if this directive is not specified.
Even if SavePos is enabled, it can be explicitly turned off with the global NoCache directive.

121.23.2. Examples
Example 631. Reading Logs from an Oracle Database

This configuration will read logs from the specified database and write them to file.

nxlog.conf
<Input oci>
    Module       im_oci
    dbname       //192.168.1.1:1521/orcl
    username     user
    password     secret
    #oracle_home /home/oracle/instantclient_11_2
</Input>

<Output file>
    Module  om_file
    File    tmp/output
</Output>

<Route oci_to_file>
    Path    oci => file
</Route>

121.24. ODBC (im_odbc)


ODBC is a database independent abstraction layer for accessing databases. This module uses the ODBC API to
read data from database tables. There are several ODBC implementations available, and this module has been
tested with unixODBC on Linux (available in most major distributions) and Microsoft ODBC on Windows.

Setting up the ODBC data source is not in the scope of this document. Please consult the relevant ODBC guide:
the unixODBC documentation or the Microsoft ODBC Data Source Administrator guide. The data source must be
accessible by the user NXLog is running under.

In order to continue reading only new log entries after a restart, the table must contain an auto increment, serial,
or timestamp column named id in the returned result set. The value of this column is substituted into the ?
contained in the SELECT (see the SQL directive).

Some data types are not supported by im_odbc. If a column of an unsupported type is included in the result set,
im_odbc will log an unsupported odbc type error to the internal log. To read values from data types that are not
directly supported, use the CAST() function to convert to a supported type. See the Reading Unsupported Types
example below. Additionally, due to a change in the internal representation of datetime values in SQL Server,
some timestamp values cannot be compared correctly (when used as the id) without an explicit casting in the
WHERE clause. See the SQL Server Reading Logs by datetime ID example in the User Guide.

See the list of installer packages that provide the im_odbc module in the Available Modules chapter of the NXLog
User Guide.

121.24.1. Configuration
The im_odbc module accepts the following directives in addition to the common module directives. The
ConnectionString and SQL directives are required.

ConnectionString
This specifies the connection string containing the ODBC data source name.

SQL
This mandatory parameter sets the SQL statement the module will execute in order to query data from the
data source. The select statement must contain a WHERE clause using the column aliased as id.

SELECT RecordNumber AS id, DateOccured AS EventTime, data AS Message
    FROM logtable WHERE RecordNumber > ?

Note that WHERE RecordNumber > ? is crucial: without this clause the module will read logs in an endless
loop. The result set returned by the select must contain this id column which is then stored and used for the
next query.

IdIsTimestamp
When this directive is set to TRUE, it instructs the module to treat the id field as TIMESTAMP type. If this
directive is not specified, it defaults to FALSE: the id field is treated as an INTEGER/NUMERIC type.

WARNING This configuration directive has been obsoleted in favor of IdType timestamp.

IdType
This directive specifies the type of the id field and accepts the following values: integer, timestamp, and
uniqueidentifier. If this directive is not specified, it defaults to integer and the id field is treated as an
INTEGER/NUMERIC type.

NOTE: The timestamp type in Microsoft SQL Server is not a real timestamp; see rowversion
(Transact-SQL) on Microsoft Docs. To use an SQL Server timestamp type field as the id, set
IdType to integer.

NOTE: The Microsoft SQL Server uniqueidentifier type is only sequential when initialized with
the NEWSEQUENTIALID function. Even then, the IDs are not guaranteed to be sequential in all
cases. For more information, see uniqueidentifier and NEWSEQUENTIALID on Microsoft Docs.

NOTE: The im_odbc module parses timestamps as local time, converts them to UTC, and then saves
them in the event record. This module does not apply any time offset for fields that include
time zone information.

MaxIdSQL
This directive can be used to specify an SQL select statement for fetching the last record. MaxIdSQL is
required if ReadFromLast is set to TRUE. The statement must alias the ID column as maxid and return at least
one row with at least that column.

SELECT MAX(RecordNumber) AS maxid FROM logtable

PollInterval
This directive specifies how frequently, in seconds, the module will check for new records in the database by
executing the SQL SELECT statement. If this directive is not specified, the default is 1 second. Fractional
seconds may be specified (PollInterval 0.5 will check twice every second).

ReadFromLast
This boolean directive instructs the module to only read logs that arrived after NXLog was started if the saved
position could not be read (for example on first start). When SavePos is TRUE and a previously saved position
value could be read, the module will resume reading from this saved position. If ReadFromLast is TRUE, the
MaxIdSQL directive must be set. If this directive is not specified, it defaults to FALSE.
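Combining these directives, the following sketch starts reading from the newest record when no saved
position exists; the table and column names here are illustrative only:

nxlog.conf
<Input odbc>
    Module            im_odbc
    ConnectionString  DSN=mssql;database=mydb;
    ReadFromLast      TRUE
    MaxIdSQL          SELECT MAX(RecordNumber) AS maxid FROM logtable
    SQL               SELECT RecordNumber AS id, data AS Message \
                      FROM logtable WHERE RecordNumber > ?
</Input>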

SavePos
This boolean directive specifies that the last row id should be saved when NXLog exits. The row id will be read
from the cache file upon startup. The default is TRUE: the row id is saved if this directive is not specified. Even
if SavePos is enabled, it can be explicitly turned off with the global NoCache directive.

121.24.2. Fields
The following fields are used by im_odbc.

In addition to the field below, each column name returned in the result set is mapped directly to an NXLog field
name.

$raw_event (type: string)


This field is constructed from:

• the EventTime column or the current time if EventTime was not returned in the result set;

• the Hostname column or the hostname of the local system if Hostname was not returned in the result set;

• the Severity column or INFO if Severity was not returned in the result set; and

• all other columns as columnname: columnvalue, each starting on a new line.

121.24.3. Examples

Example 632. Reading from an ODBC Data Source

This example uses ODBC to connect to the mydb database and retrieve log messages. The messages are
then forwarded to another agent in the NXLog binary format.

nxlog.conf
<Input odbc>
    Module            im_odbc
    ConnectionString  DSN=mssql;database=mydb;
    SQL  SELECT RecordNumber AS id, \
         DateOccured AS EventTime, \
         data AS Message \
         FROM logtable WHERE RecordNumber > ?
</Input>

<Output tcp>
    Module      om_tcp
    Host        192.168.1.1
    Port        514
    OutputType  Binary
</Output>

Example 633. Reading Unsupported Types

This example reads from an SQL Server database. The LogTime field uses the datetimeoffset type, which is
not directly supported by im_odbc. The following configuration uses a SELECT statement that returns two
columns for this field: EventTime for the timestamp and TZOffset for the time-zone offset value.

nxlog.conf
<Input mssql_datetimeoffset>
    Module            im_odbc
    ConnectionString  Driver={ODBC Driver 17 for SQL Server}; Server=MSSQL-HOST; \
                      Trusted_Connection=yes; Database=TESTDB
    IdType            integer
    SQL  SELECT RecordID AS id, \
         CAST(LogTime AS datetime2) AS EventTime, \
         DATEPART(tz, LogTime) AS TZOffset, \
         Message \
         FROM dbo.test1 WHERE RecordID > ?
    Exec  rename_field($id, $RecordID);
</Input>

121.25. Packet Capture (im_pcap)


This module provides support to passively monitor network traffic by generating logs for various protocols. It
uses the libpcap and winpcap libraries to capture network traffic.

NOTE Multiple instances of im_pcap are not supported currently.

121.25.1. Configuration
The im_pcap module accepts the following directives in addition to the common module directives.

Dev
This optional directive can only occur once. It specifies the name of a network device/interface on which
im_pcap will capture packets. This directive is mutually exclusive with the File directive.

File
This optional directive can only occur once. It specifies the path to the file which contains captured packet
data. The file path does not need to be enclosed in quotation marks, although both single quoted and double
quoted paths are accepted. This directive is mutually exclusive with the Dev directive.

Protocol
This is an optional group directive. It specifies the protocol, port number, and protocol-specific fields
which should be captured. It may be used multiple times in the module definition to specify multiple
protocols. If no Protocol directive is specified, then all protocols will be captured. It has the following
sub-directives:

Type
Defines the name of a protocol to capture. Allowed types are: ethernet, ipv4, ipv6, ip, tcp, udp, http,
arp, vlan, icmp, pppoe, dns, mpls, gre, ppp_pptp, ssl, sll, dhcp, null_loopback, igmp, vxlan, sip, sdp,
and radius.

Port
A comma-separated list of custom port numbers to capture for the protocol specified in this Protocol
group directive. If omitted, the following standard port number(s) corresponding to this protocol will be
used:

DHCP
67, 68

VXLAN
4789

DNS
53, 5353, 5355

SIP
5060, 5061

RADIUS
1812

HTTP
80, 8081

SSL
443, 465, 636, 989, 990, 992, 993, 995

Filter
An optional directive that defines a filter, which can be used to further limit the packets that should be
captured and handled by the module. Filters do not need to be enclosed in quotation marks, although both
single quoted and double quoted filters are accepted. If this directive is not used, then no filtering will be
done.

NOTE: Filtering is done by the libpcap library. See the Manpage of PCAP-FILTER in the libpcap
documentation for the syntax.
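Putting these directives together, a hypothetical capture instance might restrict collection to DNS traffic
on one interface; the device name and filter expression below are placeholders:

nxlog.conf
<Input pcap>
    Module  im_pcap
    Dev     eth0
    <Protocol>
        Type  dns
        Port  53
    </Protocol>
    Filter  udp port 53
</Input>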

121.25.2. Fields
The following fields are used by im_pcap.

$arp.sender_ip (type: string)


arp.sender_ip

$arp.sender_mac (type: string)


arp.sender_mac

$arp.target_ip (type: string)


arp.target_ip

$arp.type (type: string)


arp.type

$dhcp.address_ip (type: string)


dhcp.address_ip

$dhcp.client_ip (type: string)


dhcp.client_ip

$dhcp.client_mac (type: string)


dhcp.client_mac

$dhcp.elapsed (type: string)


dhcp.elapsed

$dhcp.flags (type: string)


dhcp.flags

$dhcp.hardware_type (type: string)


dhcp.hardware_type

$dhcp.hops (type: string)


dhcp.hops

$dhcp.opcode (type: string)


dhcp.opcode

$dhcp.option (type: string)


dhcp.option

$dhcp.relay_ip (type: string)


dhcp.relay_ip

$dhcp.server_ip (type: string)


dhcp.server_ip

$dhcp.server_name (type: string)


dhcp.server_name

$dhcp.transaction_id (type: string)

dhcp.transaction_id

$dns.additional (type: string)


dns.additional

$dns.answer (type: string)


dns.answer

$dns.authority (type: string)


dns.authority

$dns.flags.authentic_data (type: string)


dns.flags.authentic_data

$dns.flags.authoritative (type: string)


dns.flags.authoritative

$dns.flags.checking_disabled (type: string)


dns.flags.checking_disabled

$dns.flags.recursion_available (type: string)


dns.flags.recursion_available

$dns.flags.recursion_desired (type: string)


dns.flags.recursion_desired

$dns.flags.truncated_response (type: string)


dns.flags.truncated_response

$dns.id (type: string)


dns.id

$dns.opcode (type: string)


dns.opcode

$dns.query (type: string)


dns.query

$dns.response (type: string)


dns.response

$dns.response.code (type: string)


dns.response.code

$eth.dest.mac (type: string)


eth.dest.mac

$eth.src_mac (type: string)


eth.src_mac

$http.header (type: string)


http.header

$http.request.method (type: string)
http.request.method

$http.request.size (type: string)


http.request.size

$http.request.uri (type: string)


http.request.uri

$http.request.url (type: string)


http.request.url

$http.request.version (type: string)


http.request.version

$http.response.code (type: string)


http.response.code

$http.response.phrase (type: string)


http.response.phrase

$icmp.type (type: string)


icmp.type

$igmp.type (type: string)


igmp.type

$igmp.type_string (type: string)


igmp.type_string

$igmp.version (type: string)


igmp.version

$ipv4.dst (type: string)


ipv4.dst

$ipv4.fragment (type: string)


ipv4.fragment

$ipv4.src (type: string)


ipv4.src

$ipv6.dst (type: string)


ipv6.dst

$ipv6.options (type: string)


ipv6.options

$ipv6.src (type: string)


ipv6.src

$loopback (type: string)


loopback

$modbus.function_code (type: string)
Modbus function code

$modbus.length (type: integer)


Length of the payload carried by this Modbus packet

$modbus.malformed (type: string)


The reason why a packet was tagged as malformed by the decoder

$modbus.prot_id (type: integer)


Modbus protocol ID. Always 0 for Modbus/TCP

$modbus.query (type: string)


Full details of a Modbus query/request, including function ID and all function-specific parameters.

$modbus.query.diagnostic.data (type: string)


modbus.query.diagnostic.data

$modbus.query.encapsulated_interface.data (type: string)


modbus.query.encapsulated_interface.data

$modbus.query.mask_write_register.and_mask (type: string)


modbus.query.mask_write_register.and_mask

$modbus.query.mask_write_register.or_mask (type: string)


modbus.query.mask_write_register.or_mask

$modbus.query.mask_write_register.ref_address (type: string)


modbus.query.mask_write_register.ref_address

$modbus.query.payload_data (type: string)


Hex dump of the full payload data. Only output if the packet cannot be reliably decoded.

$modbus.query.read_coils.qty_of_inputs (type: string)


modbus.query.read_coils.qty_of_inputs

$modbus.query.read_coils.starting_address (type: string)


modbus.query.read_coils.starting_address

$modbus.query.read_device_id.object_id (type: string)


modbus.query.read_device_id.object_id

$modbus.query.read_device_id.read_device_id_code (type: string)


modbus.query.read_device_id.read_device_id_code

$modbus.query.read_discrete_inputs.qty_of_inputs (type: string)


modbus.query.read_discrete_inputs.qty_of_inputs

$modbus.query.read_discrete_inputs.starting_address (type: string)


modbus.query.read_discrete_inputs.starting_address

$modbus.query.read_fifo_queue.fifo_pointer_address (type: string)


modbus.query.read_fifo_queue.fifo_pointer_address

$modbus.query.read_file_record.byte_count (type: string)
modbus.query.read_file_record.byte_count

$modbus.query.read_file_record.sub_req.*.file_number (type: string)


modbus.query.read_file_record.sub_req.*.file_number

$modbus.query.read_file_record.sub_req.*.record_length (type: string)


modbus.query.read_file_record.sub_req.*.record_length

$modbus.query.read_file_record.sub_req.*.record_number (type: string)


modbus.query.read_file_record.sub_req.*.record_number

$modbus.query.read_file_record.sub_req.*.reference_type (type: string)


modbus.query.read_file_record.sub_req.*.reference_type

$modbus.query.read_holding_regs.qty_of_regs (type: string)


modbus.query.read_holding_regs.qty_of_regs

$modbus.query.read_holding_regs.starting_address (type: string)


modbus.query.read_holding_regs.starting_address

$modbus.query.read_input_regs.qty_of_input_regs (type: string)


modbus.query.read_input_regs.qty_of_input_regs

$modbus.query.read_input_regs.starting_address (type: string)


modbus.query.read_input_regs.starting_address

$modbus.query.rw_multiple_regs.qty_to_read (type: string)


modbus.query.rw_multiple_regs.qty_to_read

$modbus.query.rw_multiple_regs.qty_to_write (type: string)


modbus.query.rw_multiple_regs.qty_to_write

$modbus.query.rw_multiple_regs.read_starting_address (type: string)


modbus.query.rw_multiple_regs.read_starting_address

$modbus.query.rw_multiple_regs.reg.* (type: string)


modbus.query.rw_multiple_regs.reg.*

$modbus.query.rw_multiple_regs.write_byte_count (type: string)


modbus.query.rw_multiple_regs.write_byte_count

$modbus.query.rw_multiple_regs.write_starting_address (type: string)


modbus.query.rw_multiple_regs.write_starting_address

$modbus.query.write_file_record.req_data_len (type: string)


modbus.query.write_file_record.req_data_len

$modbus.query.write_file_record.sub_rec.*.file_number (type: string)


modbus.query.write_file_record.sub_rec.*.file_number

$modbus.query.write_file_record.sub_rec.*.record_length (type: string)


modbus.query.write_file_record.sub_rec.*.record_length

$modbus.query.write_file_record.sub_rec.*.record_number (type: string)


modbus.query.write_file_record.sub_rec.*.record_number

$modbus.query.write_file_record.sub_rec.*.reference_type (type: string)


modbus.query.write_file_record.sub_rec.*.reference_type

$modbus.query.write_multiple_coils.bit.* (type: integer)


modbus.query.write_multiple_coils.bit.*

$modbus.query.write_multiple_coils.byte_count (type: string)


modbus.query.write_multiple_coils.byte_count

$modbus.query.write_multiple_coils.qty_of_outputs (type: string)


modbus.query.write_multiple_coils.qty_of_outputs

$modbus.query.write_multiple_coils.starting_address (type: string)


modbus.query.write_multiple_coils.starting_address

$modbus.query.write_multiple_registers.byte_count (type: string)


modbus.query.write_multiple_registers.byte_count

$modbus.query.write_multiple_registers.qty_of_regs (type: string)
modbus.query.write_multiple_registers.qty_of_regs

$modbus.query.write_multiple_registers.reg.* (type: integer)


modbus.query.write_multiple_registers.reg.*

$modbus.query.write_multiple_registers.starting_address (type: string)


modbus.query.write_multiple_registers.starting_address

$modbus.query.write_single_coil.output_address (type: string)


modbus.query.write_single_coil.output_address

$modbus.query.write_single_coil.output_value (type: string)


modbus.query.write_single_coil.output_value

$modbus.query.write_single_register.reg_address (type: string)


modbus.query.write_single_register.reg_address

$modbus.query.write_single_register.reg_value (type: string)


modbus.query.write_single_register.reg_value

$modbus.response (type: string)


Full details of a Modbus response, including function ID and all function-specific parameters.

$modbus.response.diagnostic.data (type: string)


modbus.response.diagnostic.data

$modbus.response.diagnostic.exception_code (type: string)


modbus.response.diagnostic.exception_code

$modbus.response.enapsulated_interface_transport.exception_code (type: string)


modbus.response.enapsulated_interface_transport.exception_code

$modbus.response.encapsulated_interface.data (type: string)


modbus.response.encapsulated_interface.data

$modbus.response.get_comm_event_counter.event_count (type: string)


modbus.response.get_comm_event_counter.event_count

$modbus.response.get_comm_event_counter.exception_code (type: string)


modbus.response.get_comm_event_counter.exception_code

$modbus.response.get_comm_event_counter.status (type: string)


modbus.response.get_comm_event_counter.status

$modbus.response.get_comm_event_log.byte_count (type: string)


modbus.response.get_comm_event_log.byte_count

$modbus.response.get_comm_event_log.event.* (type: integer)


modbus.response.get_comm_event_log.event.*

$modbus.response.get_comm_event_log.event_count (type: string)


modbus.response.get_comm_event_log.event_count

$modbus.response.get_comm_event_log.excception_code (type: string)


modbus.response.get_comm_event_log.excception_code

$modbus.response.get_comm_event_log.message_count (type: string)


modbus.response.get_comm_event_log.message_count

$modbus.response.get_comm_event_log.status (type: string)


modbus.response.get_comm_event_log.status

$modbus.response.mask_write_register.and_mask (type: string)


modbus.response.mask_write_register.and_mask

$modbus.response.mask_write_register.exc_code (type: string)


modbus.response.mask_write_register.exc_code

$modbus.response.mask_write_register.or_mask (type: string)


modbus.response.mask_write_register.or_mask

$modbus.response.mask_write_register.ref_address (type: string)


modbus.response.mask_write_register.ref_address

$modbus.response.payload_data (type: string)


Hex dump of the full payload data. Only output if the packet cannot be reliably decoded.

$modbus.response.read_coils.bit.* (type: integer)


modbus.response.read_coils.bit.*

$modbus.response.read_coils.byte_count (type: string)


modbus.response.read_coils.byte_count

$modbus.response.read_coils.exc_code (type: string)


modbus.response.read_coils.exc_code

$modbus.response.read_device_id.conformity_level (type: string)


modbus.response.read_device_id.conformity_level

$modbus.response.read_device_id.id_code (type: string)


modbus.response.read_device_id.id_code

$modbus.response.read_device_id.more_follows (type: string)


modbus.response.read_device_id.more_follows

$modbus.response.read_device_id.next_object_id (type: string)


modbus.response.read_device_id.next_object_id

$modbus.response.read_device_id.number_of_objects (type: string)
modbus.response.read_device_id.number_of_objects

$modbus.response.read_device_id.object.*.object_id (type: string)


modbus.response.read_device_id.object.*.object_id

$modbus.response.read_device_id.object.*.object_length (type: string)


modbus.response.read_device_id.object.*.object_length

$modbus.response.read_device_id.object.*.object_value (type: string)


modbus.response.read_device_id.object.*.object_value

$modbus.response.read_discrete_inputs.bit.* (type: integer)


modbus.response.read_discrete_inputs.bit.*

$modbus.response.read_discrete_inputs.byte_count (type: string)


modbus.response.read_discrete_inputs.byte_count

$modbus.response.read_discrete_inputs.exc_code (type: string)


modbus.response.read_discrete_inputs.exc_code

$modbus.response.read_exception_status.data (type: string)


modbus.response.read_exception_status.data

$modbus.response.read_exception_status.exception_code (type: string)


modbus.response.read_exception_status.exception_code

$modbus.response.read_fifo_queue.byte_count (type: string)


modbus.response.read_fifo_queue.byte_count

$modbus.response.read_fifo_queue.exc_code (type: string)


modbus.response.read_fifo_queue.exc_code

$modbus.response.read_fifo_queue.fifo_count (type: string)


modbus.response.read_fifo_queue.fifo_count

$modbus.response.read_fifo_queue.fifo_value_register.* (type: string)


modbus.response.read_fifo_queue.fifo_value_register.*

$modbus.response.read_file_record.exc_code (type: string)


modbus.response.read_file_record.exc_code

$modbus.response.read_file_record.resp_data_len (type: string)


modbus.response.read_file_record.resp_data_len

$modbus.response.read_file_record.sub_rec.*.file_resp_len (type: string)


modbus.response.read_file_record.sub_rec.*.file_resp_len

$modbus.response.read_file_record.sub_rec.*.reference_type (type: string)


modbus.response.read_file_record.sub_rec.*.reference_type

$modbus.response.read_holding_regs.byte_count (type: string)


modbus.response.read_holding_regs.byte_count

$modbus.response.read_holding_regs.exc_code (type: string)


modbus.response.read_holding_regs.exc_code

$modbus.response.read_holding_regs.reg.* (type: string)


modbus.response.read_holding_regs.reg.*

$modbus.response.read_input_regs.byte_count (type: string)


modbus.response.read_input_regs.byte_count

$modbus.response.read_input_regs.exc_code (type: string)


modbus.response.read_input_regs.exc_code

$modbus.response.read_input_regs.reg.* (type: integer)


modbus.response.read_input_regs.reg.*

$modbus.response.report_server_id.byte_count (type: string)


modbus.response.report_server_id.byte_count

$modbus.response.report_server_id.data (type: string)


modbus.response.report_server_id.data

$modbus.response.report_server_id.exception_code (type: string)


modbus.response.report_server_id.exception_code

$modbus.response.rw_multiple_regs.byte_count (type: string)
modbus.response.rw_multiple_regs.byte_count

$modbus.response.rw_multiple_regs.exc_code (type: string)


modbus.response.rw_multiple_regs.exc_code

$modbus.response.rw_multiple_regs.reg.* (type: string)


modbus.response.rw_multiple_regs.reg.*

$modbus.response.write_file_record.exc_code (type: string)


modbus.response.write_file_record.exc_code

$modbus.response.write_file_record.resp_data_len (type: string)


modbus.response.write_file_record.resp_data_len

[[im_pcap_field_modbus_response_write_file_record_sub_rec_*_file_number]]
$modbus.response.write_file_record.sub_rec.*.file_number (type: string)::

modbus.response.write_file_record.sub_rec.*.file_number

[[im_pcap_field_modbus_response_write_file_record_sub_rec_*_record_length]]
$modbus.response.write_file_record.sub_rec.*.record_length (type: string)::

modbus.response.write_file_record.sub_rec.*.record_length

[[im_pcap_field_modbus_response_write_file_record_sub_rec_*_record_number]]
$modbus.response.write_file_record.sub_rec.*.record_number (type: string)::

modbus.response.write_file_record.sub_rec.*.record_number

[[im_pcap_field_modbus_response_write_file_record_sub_rec_*_reference_type]]
$modbus.response.write_file_record.sub_rec.*.reference_type (type: string)::

modbus.response.write_file_record.sub_rec.*.reference_type

$modbus.response.write_multiple_coils.exc_code (type: string)


modbus.response.write_multiple_coils.exc_code

$modbus.response.write_multiple_coils.qty_of_outputs (type: string)


modbus.response.write_multiple_coils.qty_of_outputs

$modbus.response.write_multiple_coils.starting_address (type: string)


modbus.response.write_multiple_coils.starting_address

$modbus.response.write_multiple_registers.exc_code (type: string)

modbus.response.write_multiple_registers.exc_code

$modbus.response.write_multiple_registers.qty_of_regs (type: string)


modbus.response.write_multiple_registers.qty_of_regs

$modbus.response.write_multiple_registers.starting_address (type: string)


modbus.response.write_multiple_registers.starting_address

$modbus.response.write_single_coil.exc_code (type: string)


modbus.response.write_single_coil.exc_code

$modbus.response.write_single_coil.output_address (type: string)


modbus.response.write_single_coil.output_address

$modbus.response.write_single_coil.output_value (type: string)


modbus.response.write_single_coil.output_value

$modbus.response.write_single_register.exc_code (type: string)


modbus.response.write_single_register.exc_code

$modbus.response.write_single_register.reg_address (type: string)


modbus.response.write_single_register.reg_address

$modbus.response.write_single_register.reg_value (type: string)


modbus.response.write_single_register.reg_value

$modbus.rtu.checksum (type: string)


Modbus packet checksum, as presented in the packet.

$modbus.rtu.computed_checksum (type: string)


Modbus packet checksum, as computed by the logging host.

$modbus.rtu.slave_id (type: string)


Modbus RTU over TCP slave ID

$modbus.trans_id (type: integer)


Modbus transaction ID

$modbus.unit_id (type: integer)


Unit ID

$mpls.experimental (type: string)


mpls.experimental

$mpls.is_bottom (type: string)


mpls.is_bottom

$mpls.label (type: string)


mpls.label

$mpls.ttl (type: string)


mpls.ttl

$payload.length (type: string)

payload.length

$pppoe.discovery.code (type: string)


pppoe.discovery.code

$pppoe.discovery.session_id (type: string)


pppoe.discovery.session_id

$pppoe.discovery.type (type: string)


pppoe.discovery.type

$pppoe.discovery.version (type: string)


pppoe.discovery.version

$radius.attr (type: string)


radius.attr

$radius.id (type: string)


radius.id

$radius.message (type: string)


radius.message

$radius.message_code (type: string)


radius.message_code

$radius.message_length (type: string)


radius.message_length

$sip.request.field (type: string)


sip.request.field

$sip.request.method (type: string)


sip.request.method

$sip.request.uri (type: string)


sip.request.uri

$sip.request.version (type: string)


sip.request.version

$sip.response.code (type: string)


sip.response.code

$sip.response.code_str (type: string)


sip.response.code_str

$sip.response.field (type: string)


sip.response.field

$sip.response.version (type: string)


sip.response.version

$spd.field (type: string)

spd.field

$ssl.alert.encrypted (type: string)


ssl.alert.encrypted

$ssl.handshake.message (type: string)


ssl.handshake.message

$ssl.stage (type: string)


ssl.stage

$ssl.version (type: string)


ssl.version

$tcp.dst_port (type: string)


tcp.dst_port

$tcp.flag (type: string)


tcp.flag

$tcp.src_port (type: string)


tcp.src_port

$trailer.data (type: string)


trailer.data

$trailer.length (type: string)


trailer.length

$udp.dst_port (type: string)


udp.dst_port

$udp.src_port (type: string)


udp.src_port

$vlan.cfi (type: string)


vlan.cfi

$vlan.id (type: string)


vlan.id

$vlan.priority (type: string)


vlan.priority

121.25.3. Examples

Example 634. Reading from a PCAP File While Applying a Packet Filter

In this example, the File directive defines the path and filename of a .pcap file containing packets saved by
Wireshark. The Filter directive defines a filter that selects only TCP packets destined for port 443. The output
is formatted as JSON and written to a file.

nxlog.conf
 1 <Extension _json>
 2 Module xm_json
 3 </Extension>
 4
 5 <Input pcap>
 6 Module im_pcap
 7 File "tmp/example.pcap"
 8 Filter tcp dst port 443
 9 </Input>
10
11 <Output file>
12 Module om_file
13 File "tmp/output"
14 Exec to_json();
15 </Output>

Example 635. Capturing TCP, Ethernet, and HTTP Traffic to a Single File

In this example, the configuration illustrates how the Protocol group directive can be defined multiple times
within the same module instance. Three types of network packets are captured: HTTP requests; TCP, for the
source and destination ports of all visible TCP traffic; and Ethernet, to log the source and destination MAC
addresses of packets. The events are formatted as JSON and written to a file.

This approach has two distinct advantages. It produces events that include the fields of all three protocols,
which enables correlating protocols that yield source and destination information with protocols that do not
provide such fields. It also achieves this with a single module instance instead of multiple instances, which
reduces system resource consumption.

nxlog.conf
 1 <Extension _json>
 2 Module xm_json
 3 </Extension>
 4
 5 <Input pcap>
 6 Module im_pcap
 7 Dev enp0s3
 8 <Protocol>
 9 Type http
10 Field http.request.uri
11 Field http.request.method
12 Field http.response.code
13 Field http.response.phrase
14 </Protocol>
15 <Protocol>
16 Type tcp
17 Field tcp.src_port
18 Field tcp.dst_port
19 Field tcp.flag
20 </Protocol>
21 <Protocol>
22 Type ethernet
23 Field eth.src_mac
24 Field eth.dest_mac
25 </Protocol>
26 </Input>
27
28 <Output file>
29 Module om_file
30 File "tmp/output"
31 Exec to_json();
32 </Output>

Example 636. Capturing TCP, Ethernet, and HTTP Traffic to Separate Files

In this example, each of the three protocols is managed by a separate module instance. The events are
formatted as JSON and written to their respective files. This approach can be used when there is a need to
analyze each protocol in isolation. Because three input instances are used, more system resources are
consumed compared to the multi-protocol, single-instance approach.

nxlog.conf (truncated)
 1 <Extension _json>
 2 Module xm_json
 3 </Extension>
 4
 5 <Input pcap_tcp>
 6 Module im_pcap
 7 Dev enp0s3
 8 <Protocol>
 9 Type tcp
10 Field tcp.src_port
11 Field tcp.dst_port
12 Field tcp.flag
13 </Protocol>
14 </Input>
15
16 <Input pcap_http>
17 Module im_pcap
18 Dev enp0s3
19 <Protocol>
20 Type http
21 Field http.request.uri
22 Field http.request.method
23 Field http.response.code
24 Field http.response.phrase
25 </Protocol>
26 </Input>
27
28 <Input pcap_eth>
29 [...]

121.26. Perl (im_perl)


The Perl programming language is widely used for log processing and comes with a broad set of modules
bundled or available from CPAN. Code can be written more quickly in Perl than in C, and code execution is safer
because exceptions (croak/die) are handled properly and will only result in an unfinished attempt at log
processing rather than taking down the whole NXLog process.

This module makes it possible to execute Perl code in an input module to capture and inject event data directly
into NXLog. See also the om_perl and xm_perl modules.

The module will parse the file specified in the PerlCode directive when NXLog starts the module. The Perl code
must implement the read_data subroutine which will be called by the module. To generate event data, the
Log::Nxlog Perl module must be included, which provides the following methods.

NOTE To use the im_perl module on Windows, a separate Perl environment must be installed, such as Strawberry Perl. Currently, the im_perl module on Windows requires Strawberry Perl 5.28.0.1.

log_debug(msg)
Send the message msg to the internal logger on DEBUG log level. This method does the same as the
log_debug() procedure in NXLog.

log_info(msg)
Send the message msg to the internal logger on INFO log level. This method does the same as the log_info()
procedure in NXLog.

log_warning(msg)
Send the message msg to the internal logger on WARNING log level. This method does the same as the
log_warning() procedure in NXLog.

log_error(msg)
Send the message msg to the internal logger on ERROR log level. This method does the same as the
log_error() procedure in NXLog.

add_input_data(event)
Pass the event record to the next module instance in the route. Failure to call this method will result in a
memory leak.

logdata_new()
Create a new event record. The return value can be used with the set_field_*() methods to insert data.

set_field_boolean(event, key, value)


Set the boolean value in the field named key.

set_field_integer(event, key, value)


Set the integer value in the field named key.

set_field_string(event, key, value)


Set the string value in the field named key.

set_read_timer(delay)
Set the timer in seconds to invoke the read_data method again.

NOTE The set_read_timer() method should be called in order to invoke read_data again. This is typically used for polling data. The read_data method must not block.

For the full NXLog Perl API, see the POD documentation in Nxlog.pm. The documentation can be read with
perldoc Log::Nxlog.

See the list of installer packages that provide the im_perl module in the Available Modules chapter of the NXLog
User Guide.

121.26.1. Configuration
The im_perl module accepts the following directives in addition to the common module directives.

PerlCode
This mandatory directive expects a file containing valid Perl code that implements the read_data subroutine.
This file is read and parsed by the Perl interpreter.

NOTE On Windows, the Perl script invoked by the PerlCode directive must define the Perl library paths at the beginning of the script to provide access to the Perl modules.

nxlog-windows.pl
use lib 'c:\Strawberry\perl\lib';
use lib 'c:\Strawberry\perl\vendor\lib';
use lib 'c:\Strawberry\perl\site\lib';
use lib 'c:\Program Files\nxlog\data';

Config
This optional directive allows you to pass configuration strings to the script file defined by the PerlCode
directive. This is a block directive and any text enclosed within <Config></Config> is submitted as a single
string literal to the Perl code.

NOTE If you pass several values using this directive (for example, separated by the \n delimiter), be sure to parse the string correspondingly inside the Perl code.

Call
This optional directive specifies the Perl subroutine to invoke. With this directive, you can call only specific
subroutines from your Perl code. If the directive is not specified, the default subroutine read_data is invoked.

121.26.2. Examples

Example 637. Using im_perl to Generate Event Data

In this example, logs are generated by a Perl function that increments a counter and inserts it into the
generated line.

nxlog.conf
 1 <Output file2>
 2 Module om_file
 3 File 'tmp/output2'
 4 </Output>
 5
 6
 7 <Input perl>
 8 Module im_perl
 9 PerlCode modules/input/perl/perl-input.pl
10 Call read_data1
11 </Input>
12
13 <Input perl2>
14 Module im_perl
15 PerlCode modules/input/perl/perl-input2.pl
16 </Input>
17
18 <Route r1>
19 Path perl => file
20 </Route>
21
22 <Route r2>
23 Path perl2 => file2
24 </Route>

perl-input.pl
use strict;
use warnings;

use Log::Nxlog;

my $counter;

sub read_data1
{
  my $event = Log::Nxlog::logdata_new();
  $counter //= 1;
  my $line = "Input1: this is a test line ($counter) that should appear in the output";
  $counter++;
  Log::Nxlog::set_field_string($event, 'raw_event', $line);
  Log::Nxlog::add_input_data($event);
  if ( $counter <= 100 )
  {
  Log::Nxlog::set_read_timer(0);
  }
}

121.27. Named Pipes (im_pipe)


This module can be used to read log messages from named pipes on UNIX-like operating systems.

121.27.1. Configuration
The im_pipe module accepts the following directives in addition to the common module directives.

Pipe
This mandatory directive specifies the name of the input pipe file. The module checks whether the specified
pipe file exists and creates it if it does not. If the specified file is not a named pipe, the module does not
start.

InputType
This directive specifies the input data format. The default value is LineBased. See the InputType directive in
the list of common module directives.

121.27.2. Examples
This example provides the NXLog configuration for processing messages from a named pipe on a UNIX-like
operating system.

Example 638. Forwarding Logs From a Pipe to a Remote Host

With this configuration, NXLog reads messages from a named pipe and forwards them via TCP. No
additional processing is done.

nxlog.conf
<Input in>
  Module im_pipe
  Pipe "tmp/pipe"
</Input>

<Output out>
  Module om_tcp
  Host 192.168.1.2
  Port 514
</Output>
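
The named pipe itself can be created by the writer before NXLog starts, although im_pipe also creates it if missing. The following Python sketch shows the writer side; the path is an example, not a required location:

```python
import os
import stat
import tempfile

# Example path; the configuration above would use "tmp/pipe" instead.
pipe_path = os.path.join(tempfile.mkdtemp(), "pipe")

# Create the named pipe (FIFO) if it does not already exist.
if not os.path.exists(pipe_path):
    os.mkfifo(pipe_path)

# Confirm the file really is a FIFO; im_pipe refuses to start otherwise.
is_fifo = stat.S_ISFIFO(os.stat(pipe_path).st_mode)

# A writer would then send messages line by line, for example:
#   with open(pipe_path, "w") as pipe:
#       pipe.write("test message\n")
# (opening the pipe for writing blocks until a reader opens it)
```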

121.28. Python (im_python)


This module provides support for collecting log data with methods written in the Python language. The file
specified by the PythonCode directive should contain a read_data() method which is called by the im_python
module instance. See also the xm_python and om_python modules.

The Python script should import the nxlog module, and will have access to the following classes and functions.

nxlog.log_debug(msg)
Send the message msg to the internal logger at DEBUG log level. This function does the same as the core
log_debug() procedure.

nxlog.log_info(msg)
Send the message msg to the internal logger at INFO log level. This function does the same as the core
log_info() procedure.

nxlog.log_warning(msg)
Send the message msg to the internal logger at WARNING log level. This function does the same as the core
log_warning() procedure.

nxlog.log_error(msg)
Send the message msg to the internal logger at ERROR log level. This function does the same as the core
log_error() procedure.

class nxlog.Module
This class will be instantiated by NXLog and passed to the read_data() method in the script.

logdata_new()
This method returns a new LogData event object.

set_read_timer(delay)
This method sets a trigger for another read after a specified delay in seconds (float).

class nxlog.LogData
This class represents a Logdata event object.

delete_field(name)
This method removes the field name from the event record.

field_names()
This method returns a list with the names of all the fields currently in the event record.

get_field(name)
This method returns the value of the field name in the event.

post()
This method will submit the LogData event to NXLog for processing by the next module in the route.

set_field(name, value)
This method sets the value of field name to value.

module
This attribute is set to the Module object associated with the event.

See the list of installer packages that provide the im_python module in the Available Modules chapter of the
NXLog User Guide.
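
For experimentation outside the agent, the calling pattern above can be exercised with minimal stand-in classes. These stubs are illustrative only; inside NXLog, import nxlog provides the real Module and LogData implementations:

```python
# Illustrative stand-ins mimicking nxlog.Module and nxlog.LogData.
class LogData:
    def __init__(self):
        self._fields = {}
        self.posted = False

    def set_field(self, name, value):
        self._fields[name] = value

    def get_field(self, name):
        return self._fields.get(name)

    def field_names(self):
        return list(self._fields)

    def delete_field(self, name):
        self._fields.pop(name, None)

    def post(self):
        # The real method submits the event to the next module in the route.
        self.posted = True

class Module:
    def logdata_new(self):
        return LogData()

    def set_read_timer(self, delay):
        # The real method schedules the next read_data() call after 'delay' seconds.
        pass

def read_data(module):
    event = module.logdata_new()
    event.set_field('raw_event', 'generated test event')
    event.post()
    module.set_read_timer(5.0)
    return event

event = read_data(Module())
```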

121.28.1. Configuration
The im_python module accepts the following directives in addition to the common module directives.

PythonCode
This mandatory directive specifies a file containing Python code. The im_python instance will call a
read_data() function which must accept an nxlog.Module object as its only argument.

Call
This optional directive specifies the Python method to invoke. With this directive, you can call only specific
methods from your Python code. If the directive is not specified, the default method read_data is invoked.

121.28.2. Examples

Example 639. Using im_python to Generate Event Data

In this example, a Python script is used to read Syslog events from multiple log files bundled in tar archives,
which may be compressed. The parse_syslog() procedure is also used to parse the events.

NOTE To avoid re-reading archives, each one should be removed after reading (see the comments in the script) or other similar functionality implemented.

nxlog.conf
1 <Extension _syslog>
2 Module xm_syslog
3 </Extension>
4
5 <Input in>
6 Module im_python
7 PythonCode modules/input/python/2_python.py
8 Exec parse_syslog();
9 </Input>

2_python.py (truncated)
import os
import tarfile

import nxlog

LOG_DIR = 'modules/input/python/2_logdir'
POLL_INTERVAL = 30

def read_data(module):
  nxlog.log_debug('Checking for new archives')
  for file in os.listdir(LOG_DIR):
  path = os.path.join(LOG_DIR, file)
  nxlog.log_debug("Attempting to read from '{}'".format(path))
  try:
  for line in read_tar(path):
  event = module.logdata_new()
  event.set_field('ImportFile', path)
  event.set_field('raw_event', line)
[...]

121.29. Redis (im_redis)


This module can retrieve data stored in a Redis server. The module issues LPOP commands using the Redis
Protocol to pull data.
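
As a sketch of what such a command looks like on the wire, a RESP (Redis Serialization Protocol) request is an array of bulk strings. The encoding below is based on the public RESP specification, not on NXLog internals:

```python
def resp_command(*args):
    """Encode a Redis command as a RESP array of bulk strings."""
    parts = [f"*{len(args)}\r\n"]
    for arg in args:
        parts.append(f"${len(arg)}\r\n{arg}\r\n")
    return "".join(parts).encode()

# im_redis polls with LPOP on its configured key ("nxlog" by default)
wire_bytes = resp_command("LPOP", "nxlog")
print(wire_bytes)  # b'*2\r\n$4\r\nLPOP\r\n$5\r\nnxlog\r\n'
```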

The output counterpart, om_redis, can be used to populate the Redis server with data.

See the list of installer packages that provide the im_redis module in the Available Modules chapter of the NXLog
User Guide.

121.29.1. Configuration
The im_redis module accepts the following directives in addition to the common module directives. The Host
directive is required.

Host

This mandatory directive specifies the IP address or DNS hostname of the Redis server to connect to.

Channel
This optional directive defines the Redis channel this module will subscribe to. This directive can be specified
multiple times within the module definition. When the Command directive is set to PSUBSCRIBE, each
Channel directive specifies a glob that will be matched by the Redis server against its available channels. For
the SUBSCRIBE command, Channel specifies the channel names which will be matched as is (no globbing).
The usage of this directive is mutually exclusive with the usage of the LPOP and RPOP commands in the
Command directive.

Command
This optional directive can be used to choose between the LPOP, RPOP, SUBSCRIBE and PSUBSCRIBE
commands. The default Command is set to LPOP, if this directive is not specified.

InputType
See the InputType directive in the list of common module directives. The default is the Dgram reader function,
which expects a plain string. To preserve structured data, Binary can be used, but it must also be set on the
other end.

Key
This specifies the Key used by the LPOP command. The default is nxlog. The usage of this directive is
mutually exclusive with the usage of the SUBSCRIBE and PSUBSCRIBE commands in the Command directive.

PollInterval
This directive specifies how frequently the module will check for new data, in seconds. If this directive is not
specified, the default is 1 second. Fractional seconds may be specified (PollInterval 0.5 will check twice
every second). The usage of this directive is mutually exclusive with the usage of the SUBSCRIBE and
PSUBSCRIBE commands in the Command directive.

Port
This specifies the port number of the Redis server. The default is port 6379.

121.29.2. Fields
The following fields are used by im_redis.

$raw_event (type: string)


The received string.

$Channel (type: string)


For the SUBSCRIBE and PSUBSCRIBE commands, this is the Redis Pub/Sub channel from which the message
was received. Otherwise, it is undefined.

121.30. Windows Registry Monitoring (im_regmon)


This module periodically scans the Windows registry and generates event records if a change in the monitored
registry entries is detected.

NOTE This module is only available on Windows.

See the list of installer packages that provide the im_regmon module in the Available Modules chapter of the
NXLog User Guide.

121.30.1. Configuration
The im_regmon module accepts the following directives in addition to the common module directives. The
RegValue directive is required.

RegValue
This mandatory directive specifies the name of the registry entry. It must be a string type expression.
Wildcards are also supported. See the File directive of im_file for more details on how wildcarded entries can
be specified. More than one occurrence of the RegValue directive can be specified. The path of the registry
entry specified with this directive must start with one of the following: HKCC, HKU, HKCU, HKCR, or HKLM.

64BitView
If set to TRUE, this boolean directive indicates that the 64 bit registry view should be monitored. The default is
TRUE.

Digest
This specifies the digest method (hash function) to be used to calculate the checksum. The default is sha1.
The following message digest methods can be used: md2, md5, mdc2, rmd160, sha, sha1, sha224, sha256,
sha384, and sha512.
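
Conceptually, each scan hashes the raw bytes of the monitored value and compares the result with the digest from the previous scan. A sketch using Python's hashlib follows; the value bytes are made up for illustration:

```python
import hashlib

# Hypothetical raw bytes of a registry entry's value
value_bytes = b"\x01\x00\x00\x00"

# sha1 is the default Digest; any method listed above works the same way
digest = hashlib.sha1(value_bytes).hexdigest()

# A change is detected when the current digest differs from the stored one
previous_digest = digest
changed = hashlib.sha1(b"\x02\x00\x00\x00").hexdigest() != previous_digest
```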

Exclude
This directive specifies a single registry path or a set of registry values (using wildcards) to be excluded from
the scan. More than one occurrence of the Exclude directive can be used.

Recursive
If set to TRUE, this boolean directive specifies that registry entries set with the RegValue directive should be
scanned recursively under subkeys. For example, HKCU\test\value will match HKCU\test\subkey\value.
Wildcards can be used in combination with Recursive: HKCU\test\value* will match
HKCU\test\subkey\value2. This directive only causes scanning under the given path: HKCU\*\value will not
match HKCU\test\subkey\value. The default is FALSE.

ScanInterval
This directive specifies how frequently, in seconds, the module will check the registry entry or entries for
modifications. The default is 86400 (1 day). The value of ScanInterval can be set to 0 to disable periodic
scanning and instead invoke scans via the start_scan() procedure.

121.30.2. Procedures
The following procedures are exported by im_regmon.

start_scan();
Trigger the Windows registry integrity scan. This procedure returns before the scan is finished.

121.30.3. Fields
The following fields are used by im_regmon.

$raw_event (type: string)


A string containing the $EventTime, $Hostname, and other fields.

$Digest (type: string)


The calculated digest (checksum) value.

$DigestName (type: string)

The name of the digest used to calculate the checksum value (for example, SHA1).

$EventTime (type: datetime)


The current time.

$EventType (type: string)


One of the following values: CHANGE or DELETE.

$Hostname (type: string)


The name of the system where the event was generated.

$PrevDigest (type: string)


The calculated digest (checksum) value from the previous scan.

$PrevValueSize (type: integer)


The size of the registry entry’s value from the previous scan.

$RegistryValueName (type: string)


The name of the registry entry where the changes were detected.

$Severity (type: string)


The severity name: WARNING.

$SeverityValue (type: integer)


The WARNING severity level value: 3.

$ValueSize (type: integer)


The size of the registry entry’s value after the modification.

121.30.4. Examples

Example 640. Periodic Registry Monitoring

This example monitors the registry entry recursively, and scans every 10 seconds. Messages generated by
any detected changes will be written to file in JSON format.

nxlog.conf
 1 <Extension json>
 2 Module xm_json
 3 </Extension>
 4
 5 <Input regmon>
 6 Module im_regmon
 7 RegValue 'HKLM\Software\Policies\*'
 8 ScanInterval 10
 9 </Input>
10
11 <Output file>
12 Module om_file
13 File 'C:\test\regmon.log'
14 Exec to_json();
15 </Output>
16
17 <Route regmon_to_file>
18 Path regmon => file
19 </Route>

Example 641. Scheduled Registry Scan

The im_regmon module provides a start_scan() procedure that can be called to invoke the scan. The
following configuration will trigger the scan every day at midnight.

nxlog.conf
 1 <Extension json>
 2 Module xm_json
 3 </Extension>
 4
 5 <Input regmon>
 6 Module im_regmon
 7 RegValue 'HKLM\Software\*'
 8 Exclude 'HKLM\Software\Program Groups\*'
 9 ScanInterval 0
10 <Schedule>
11 When @daily
12 Exec start_scan();
13 </Schedule>
14 </Input>
15
16 <Output file>
17 Module om_file
18 File 'C:\test\regmon.log'
19 Exec to_json();
20 </Output>
21
22 <Route dailycheck>
23 Path regmon => file
24 </Route>

121.31. Ruby (im_ruby)
This module provides support for collecting log data with methods written in the Ruby language. See also the
xm_ruby and om_ruby modules.

The Nxlog module provides the following classes and methods.

Nxlog.log_debug(msg)
Send the message msg to the internal logger at DEBUG log level. This method does the same as the core
log_debug() procedure.

Nxlog.log_info(msg)
Send the message msg to the internal logger at INFO log level. This method does the same as the core
log_info() procedure.

Nxlog.log_warning(msg)
Send the message msg to the internal logger at WARNING log level. This method does the same as the core
log_warning() procedure.

Nxlog.log_error(msg)
Send the message msg to the internal logger at ERROR log level. This method does the same as the core
log_error() procedure.

class Nxlog.Module
This class will be instantiated by NXLog and passed to the method specified by the Call directive.

logdata_new()
This method returns a new LogData object.

set_read_timer(delay)
This method sets a trigger for another read after a specified delay in seconds (float).

class Nxlog.LogData
This class represents an event.

field_names()
This method returns an array with the names of all the fields currently in the event record.

get_field(name)
This method returns the value of the field name in the event.

post()
This method will submit the event to NXLog for processing by the next module in the route.

set_field(name, value)
This method sets the value of field name to value.

See the list of installer packages that provide the im_ruby module in the Available Modules chapter of the NXLog
User Guide.

121.31.1. Configuration
The im_ruby module accepts the following directives in addition to the common module directives. The RubyCode
directive is required.

RubyCode

This mandatory directive specifies a file containing Ruby code. The im_ruby instance will call the method
specified by the Call directive. The method must accept an Nxlog.Module object as its only argument.

Call
This optional directive specifies the Ruby method to call. The default is read_data.

121.31.2. Examples
Example 642. Using im_ruby to Generate Events

In this example, events are generated by a simple Ruby method that increments a counter. Because this
Ruby method does not set the $raw_event field, it would be reasonable to use to_json() or some other way
to preserve the fields for further processing.

nxlog.conf
1 <Input in>
2 Module im_ruby
3 RubyCode ./modules/input/ruby/input2.rb
4 Call read_data
5 </Input>

input2.rb
$index = 0

def read_data(mod)
  Nxlog.log_debug('Creating new event via input.rb')
  $index += 1
  event = mod.logdata_new
  event.set_field('Counter', $index)
  event.set_field('Message', "This is message #{$index}")
  event.post
  mod.set_read_timer 0.3
end

121.32. TLS/SSL (im_ssl)


The im_ssl module uses the OpenSSL library to provide an SSL/TLS transport. It behaves like the im_tcp module,
except that an SSL handshake is performed at connection time and the data is sent over a secure channel. Log
messages transferred over plain TCP can be eavesdropped or even altered with a man-in-the-middle attack,
while the im_ssl module provides a secure log message transport.

See the list of installer packages that provide the im_ssl module in the Available Modules chapter of the NXLog
User Guide.

121.32.1. Configuration
The im_ssl module accepts the following directives in addition to the common module directives.

ListenAddr
The module will accept connections on this IP address or DNS hostname. The default is localhost. Add the
port number to listen on to the end of a host using a colon as a separator (host:port).

IMPORTANT Formerly called Host, this directive is now ListenAddr. Host in this context will become deprecated from NXLog EE 6.0.

Port
The module will listen for incoming connections on this port number. The default is port 514.

IMPORTANT The Port directive will become deprecated in this context from NXLog EE 6.0. Provide the port in ListenAddr.

AllowUntrusted
This boolean directive specifies that the remote connection should be allowed without certificate verification.
If set to TRUE the remote will be able to connect with an unknown or self-signed certificate. The default value
is FALSE: all connections must present a trusted certificate.

CADir
This specifies the path to a directory containing certificate authority (CA) certificates, which will be used to
check the certificate of the remote socket. The certificate filenames in this directory must be in the OpenSSL
hashed format. A remote’s self-signed certificate (which is not signed by a CA) can also be trusted by including
a copy of the certificate in this directory.

CAFile
This specifies the path of the certificate authority (CA) certificate, which will be used to check the certificate of
the remote socket. To trust a self-signed certificate presented by the remote (which is not signed by a CA),
provide that certificate instead.

CAThumbprint
This optional directive specifies the certificate thumbprint of the certificate authority (CA), which is used to
look up the CA certificate from the Windows certificate store. The hexadecimal fingerprint string can be
copied straight from Windows Certificate Manager (certmgr.msc), whitespaces are automatically removed.
This directive is only supported on Windows. This directive and the CADir and CAFile directives are mutually
exclusive.

CertFile
This specifies the path of the certificate file to be used for the SSL handshake.

CertKeyFile
This specifies the path of the certificate key file to be used for the SSL handshake.

CertThumbprint
This optional directive specifies the certificate thumbprint to be used for the SSL handshake. The hexadecimal
fingerprint string can be copied straight from Windows Certificate Manager (certmgr.msc), whitespaces are
automatically removed. This directive is only supported on Windows. This directive and the CertFile and
CertKeyFile directives are mutually exclusive.

KeyPass
With this directive, a password can be supplied for the certificate key file defined in CertKeyFile. This directive
is not needed for passwordless private keys.

CRLDir
This specifies the path to a directory containing certificate revocation lists (CRLs), which will be consulted
when checking the certificate of the remote socket. The certificate filenames in this directory must be in the
OpenSSL hashed format.

CRLFile

This specifies the path of the certificate revocation list (CRL) which will be consulted when checking the
certificate of the remote socket.

RequireCert
This boolean value specifies that the remote must present a certificate. If set to TRUE and there is no
certificate presented during the connection handshake, the connection will be refused. The default value is
TRUE: each connection must use a certificate.

SSLCipher
This optional directive can be used to set the permitted SSL cipher list, overriding the default. Use the format
described in the ciphers(1ssl) man page.

SSLCiphersuites
This optional directive can be used to define the permitted SSL cipher list in case the SSLProtocol directive is
set to TLSv1.3. Use the same format as in the SSLCipher directive.

SSLCompression
This boolean directive allows you to enable data compression when sending data over the network. The
compression mechanism is based on the zlib compression library. If the directive is not specified, it defaults
to FALSE (the compression is disabled).

NOTE Some Linux packages (for example, Debian) use the OpenSSL library provided by the OS and may not
support the zlib compression mechanism. The module will emit a warning on startup if compression
support is missing. The generic deb/rpm packages are bundled with a zlib-enabled libssl library.

SSLProtocol
This directive can be used to set the allowed SSL/TLS protocol(s). It takes a comma-separated list of values
which can be any of the following: SSLv2, SSLv3, TLSv1, TLSv1.1, TLSv1.2, and TLSv1.3. By default, the
TLSv1.2 and TLSv1.3 protocols are allowed. Note that the OpenSSL library shipped by Linux distributions
may not support SSLv2 and SSLv3, and these will not work even if enabled with this directive.

121.32.2. Fields
The following fields are used by im_ssl.

$raw_event (type: string)


The received string.

$MessageSourceAddress (type: string)


The IP address of the remote host.

121.32.3. Examples
Examples using the old syntax are included below; these will become invalid with NXLog EE 6.0.

Example 643. Accepting Binary Logs From Another NXLog Agent

This configuration accepts secured log messages in the NXLog binary format and writes them to file.

nxlog.conf
 1 <Input ssl>
 2 Module im_ssl
 3 ListenAddr localhost:23456
 4 CAFile %CERTDIR%/ca.pem
 5 CertFile %CERTDIR%/client-cert.pem
 6 CertKeyFile %CERTDIR%/client-key.pem
 7 KeyPass secret
 8 InputType Binary
 9 </Input>
10
11 # old syntax
12 #<Input ssl>
13 # Module im_ssl
14 # ListenAddr localhost
15 # Port 23456
16 # CAFile %CERTDIR%/ca.pem
17 # CertFile %CERTDIR%/client-cert.pem
18 # CertKeyFile %CERTDIR%/client-key.pem
19 # KeyPass secret
20 # InputType Binary
21 #</Input>
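
As an additional sketch (not part of the original examples), the SSLProtocol and SSLCiphersuites directives can restrict a listener to TLS 1.3 with a specific suite. The certificate filenames follow the example above; the ciphersuite shown is one of the standard TLS 1.3 suite names:

```
<Input ssl_tls13>
  Module          im_ssl
  ListenAddr      0.0.0.0:6514
  CAFile          %CERTDIR%/ca.pem
  CertFile        %CERTDIR%/client-cert.pem
  CertKeyFile     %CERTDIR%/client-key.pem
  SSLProtocol     TLSv1.3
  SSLCiphersuites TLS_AES_256_GCM_SHA384
</Input>
```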

121.33. Systemd (im_systemd)


Systemd is a Linux initialization system with parallelization capabilities and dependency-based control logic.
Systemd journal is the logging component of systemd.

The im_systemd module accepts messages from the systemd journal.

NOTE To enable running the im_systemd module under the nxlog user, that user must be added to the
systemd-journal group, for example with the following command:
$ sudo gpasswd -a nxlog systemd-journal

121.33.1. Configuration
The im_systemd module accepts the following directive in addition to the common module directives.

ReadFromLast
If set to TRUE, this optional boolean directive instructs the module to read only new entries from the journal.

121.33.2. Fields
The following fields are used by im_systemd.

$raw_event (type: string)


The received string.

$AuditSession (type: string)


Session of the process the journal entry originates from, as maintained by the kernel audit subsystem.

$AuditUID (type: string)
Login UID of the process the journal entry originates from, as maintained by the kernel audit subsystem.

$BootID (type: string)


Kernel boot ID for the boot the message was generated in, formatted as a 128-bit hexadecimal string.

$Capabilities (type: string)


Effective capabilities of the process the journal entry originates from.

$CodeFile (type: string)


Code location to generate this message, if known. Contains the source filename.

$CodeFunc (type: string)


Code location to generate this message, if known. Contains the function name.

$CodeLine (type: integer)


Code location to generate this message, if known. Contains the line number.

$CoredumpUnit (type: string)


Annotation to the message in case it contains coredumps from system and session units.

$CoredumpUserUnit (type: string)


Annotation to the message in case it contains coredumps from system and session units.

$DevLink (type: string)


Additional symlink names pointing to the device node under the '/dev' directory.

$DevName (type: string)


Device name of the kernel as it shows up in the device tree under the '/sys' directory.

$DevNode (type: string)


Node path of the device under the '/dev' directory.

$Errno (type: integer)


Low-level Unix error number which caused the entry, if any. Contains the numeric value of 'errno' formatted
as a decimal string.

$EventTime (type: datetime)


The earliest trusted timestamp of the message, if any is known that is different from the reception time of the
journal.

$Facility (type: string)


Syslog compatibility fields containing the facility.

$Group (type: string)


Group ID of the process the journal entry originates from.

$Hostname (type: string)


The name of the originating host.

$KernelDevice (type: string)
Device name of the kernel. If the entry is associated with a block device, the field contains the major and minor
numbers of the device node, separated by ":" and prefixed by "b". The same applies to character devices, but
prefixed by "c". For network devices, this is the interface index prefixed by "n". For all other devices, this is the
subsystem name prefixed by "+", followed by ":", followed by the kernel device name.

$KernelSubsystem (type: string)


Subsystem name of the kernel.

$MachineID (type: string)


Machine ID of the originating host.

$Message (type: string)


A human-readable message string for the current entry. This is supposed to be the primary text shown to the
user. This is usually not translated (but might be in some cases), and not supposed to be parsed for
metadata.

$MessageID (type: string)


A 128-bit message identifier for recognizing certain message types, if this is desirable. This should contain a
128-bit identifier formatted as a lower-case hexadecimal string, without any separating dashes or suchlike.
This is recommended to be a UUID-compatible ID, but this is not enforced and it may be formatted differently.

$ObjAuditSession (type: integer)


This field contains the same value as the 'AuditSession', except that the process identified by PID is described,
instead of the process which logged the message.

$ObjAuditUID (type: integer)


This field contains the same value as the 'AuditUID', except that the process identified by PID is described,
instead of the process which logged the message.

$ObjGroup (type: integer)


This field contains the same value as the 'Group', except that the process identified by PID is described,
instead of the process which logged the message.

$ObjProcessCmdLine (type: integer)


This field contains the same value as the 'ProcessCmdLine', except that the process identified by PID is
described, instead of the process which logged the message.

$ObjProcessExecutable (type: integer)


This field contains the same value as the 'ProcessExecutable', except that the process identified by PID is
described, instead of the process which logged the message.

$ObjProcessID (type: integer)


This field contains the same value as the 'ProcessID', except that the process identified by PID is described,
instead of the process which logged the message.

$ObjProcessName (type: integer)


This field contains the same value as the 'ProcessName', except that the process identified by PID is
described, instead of the process which logged the message.

$ObjSystemdCGroup (type: integer)


This field contains the same value as the 'SystemdCGroup', except that the process identified by PID is
described, instead of the process which logged the message.

$ObjSystemdOwnerUID (type: integer)
This field contains the same value as the 'SystemdOwnerUID', except that the process identified by PID is
described, instead of the process which logged the message.

$ObjSystemdSession (type: integer)


This field contains the same value as the 'SystemdSession', except that the process identified by PID is
described, instead of the process which logged the message.

$ObjSystemdUnit (type: integer)


This field contains the same value as the 'SystemdUnit', except that the process identified by PID is described,
instead of the process which logged the message.

$ObjUser (type: integer)


This field contains the same value as the 'User', except that the process identified by PID is described, instead
of the process which logged the message.

$ProcessCmdLine (type: string)


Command line of the process the journal entry originates from.

$ProcessExecutable (type: string)


Executable path of the process the journal entry originates from.

$ProcessID (type: string)


Syslog compatibility field containing the client PID.

$ProcessName (type: string)


Name of the process the journal entry originates from.

$SelinuxContext (type: string)


SELinux security context (label) of the process the journal entry originates from.

$Severity (type: string)


A priority value between 0 ("emerg") and 7 ("debug") formatted as a string. This field is compatible with
syslog’s priority concept.

$SeverityValue (type: integer)


A priority value between 0 ("emerg") and 7 ("debug") formatted as a decimal string. This field is compatible
with syslog’s priority concept.

$SourceName (type: string)


Syslog compatibility field containing the identifier string (i.e. "tag").

$SysInvID (type: string)


Invocation ID for the runtime cycle of the unit the message was generated in, as available to processes of the
unit in $INVOCATION_ID.

$SystemdCGroup (type: string)


Control group path in the systemd hierarchy of the process the journal entry originates from.

$SystemdOwnerUID (type: string)
Owner UID of the systemd session (if any) of the process the journal entry originates from.

$SystemdSession (type: string)


Systemd session ID (if any) of the process the journal entry originates from.

$SystemdSlice (type: string)


Systemd slice unit of the process the journal entry originates from.

$SystemdUnit (type: string)


Systemd unit name (if any) of the process the journal entry originates from.

$SystemdUserUnit (type: string)


Systemd user session unit name (if any) of the process the journal entry originates from.

$Transport (type: string)


Transport of the entry to the journal service. Available values are: audit, driver, syslog, journal, stdout, kernel.

$User (type: string)


User ID of the process the journal entry originates from.

121.33.3. Examples
Example 644. Using the im_systemd Module to Read the Systemd Journal

In this example, NXLog reads only new messages from the systemd journal.

nxlog.conf
1 <Input systemd>
2 Module im_systemd
3 ReadFromLast TRUE
4 </Input>

Below is a sample of a systemd journal message after it has been accepted by the im_systemd module
and converted into JSON format using the xm_json module.

Event Sample
{"Severity":"info","SeverityValue":6,"Facility":"auth","FacilityValue":3,↵
"Message":"Reached target User and Group Name Lookups.","SourceName":"systemd",↵
"ProcessID":"1","BootID":"179e1f0a40c64b6cb126ed97278aef89",↵
"MachineID":"0823d4a95f464afeb0021a7e75a1b693","Hostname":"user",↵
"Transport":"kernel","EventReceivedTime":"2020-02-05T14:46:09.809554+00:00",↵
"SourceModuleName":"systemd","SourceModuleType":"im_systemd"}↵

121.34. TCP (im_tcp)


This module accepts TCP connections on the configured address and port. It can handle multiple simultaneous
connections. The TCP transfer protocol provides more reliable log transmission than UDP. If security is a concern,
consider using the im_ssl module instead.

NOTE This module provides no access control. Firewall rules can be used to deny connections from
certain hosts.

See the list of installer packages that provide the im_tcp module in the Available Modules chapter of the NXLog
User Guide.

121.34.1. Configuration
The im_tcp module accepts the following directives in addition to the common module directives.

ListenAddr
The module will accept connections on this IP address or DNS hostname. For security, the default listen
address is localhost (the loopback address is not accessible from the outside). To receive logs from
remote hosts, the address specified here must be accessible. The "any" address (0.0.0.0) is commonly used
here. Add the port number to the end of the host using a colon as a separator (host:port).

IMPORTANT Formerly called Host, this directive is now ListenAddr. Host in this context will be
deprecated from NXLog EE 6.0.

Port
The module will listen for incoming connections on this port number. The default port is 514 if this directive is
not specified.

IMPORTANT The Port directive will be deprecated from NXLog EE 6.0. Provide the port using
ListenAddr.

ReusePort
This optional boolean directive enables synchronous listening on the same port by multiple module
instances. Each module instance runs in its own thread, allowing NXLog to process incoming data
simultaneously to take better advantage of multiprocessor systems. The default value is FALSE.

To enable synchronous listening, the configuration file should contain multiple im_tcp module instances
listening on the same port with the ReusePort directive set to TRUE; see the Examples section.

121.34.2. Fields
The following fields are used by im_tcp.

$raw_event (type: string)


The received string.

$MessageSourceAddress (type: string)


The IP address of the remote host.

121.34.3. Examples
Examples using the old syntax are included below; these will become invalid with NXLog EE 6.0.

Example 645. Using the im_tcp Module

With this configuration, NXLog listens for TCP connections on port 1514 and writes the received log
messages to a file.

nxlog.conf
 1 <Input tcp>
 2 Module im_tcp
 3 Host 0.0.0.0
 4 Port 1514
 5 </Input>
 6
 7 <Output file>
 8 Module om_file
 9 File "tmp/output"
10 </Output>
11
12 <Route tcp_to_file>
13 Path tcp => file
14 </Route>

Example 646. Reusing a Single Port by Multiple Module Instances

The configuration below provides two im_tcp module instances to reuse port 1514 via the ReusePort
directive. Received messages are written to the /tmp/output file.

nxlog.conf
 1 <Input tcp_one>
 2 Module im_tcp
 3 Host 192.168.31.11
 4 Port 1514
 5 ReusePort TRUE
 6 </Input>
 7
 8 <Input tcp_two>
 9 Module im_tcp
10 Host 192.168.31.11
11 Port 1514
12 ReusePort TRUE
13 </Input>
14
15 <Output file>
16 Module om_file
17 File "tmp/output"
18 </Output>
19
20 <Route tcp_to_file>
21 Path tcp_one, tcp_two => file
22 </Route>

121.35. Test Generator (im_testgen)


This module generates simple events for testing purposes, each containing an integer that is incremented up
to the number of events specified by the MaxCount directive.

See the list of installer packages that provide the im_testgen module in the Available Modules chapter of the
NXLog User Guide.

121.35.1. Configuration
The im_testgen module accepts the following directives in addition to the common module directives.

MaxCount
The module will generate this many events, and then stop generating events. If this directive is not specified,
im_testgen will continue generating events until the module is stopped or NXLog exits.
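
Since im_testgen has no other module-specific directives, a minimal configuration is enough to test a route; the sketch below generates 100 events and then stops:

```
<Input testgen>
  Module    im_testgen
  MaxCount  100
</Input>
```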

121.36. UDP (im_udp)


This module accepts UDP datagrams on the configured address and port. UDP is the transport protocol of the
legacy BSD Syslog as described in RFC 3164, so this module can be particularly useful to receive such messages
from older devices which do not support other transports.

WARNING UDP is an unreliable transport protocol and does not guarantee delivery. Messages may
not be received or may be truncated. It is recommended to use the TCP or SSL transport
modules instead, if possible.

To reduce the likelihood of message loss, consider:

• increasing the socket buffer size with SockBufSize,


• raising the route priority by setting the Priority directive (to a low number such as 1), and
• adding additional buffering by increasing the LogqueueSize or adding a pm_buffer instance.

NOTE This module provides no access control. Firewall rules can be used to drop log events from
certain hosts.

For parsing Syslog messages, see the pm_transformer module or the parse_syslog_bsd() procedure of xm_syslog.

See the list of installer packages that provide the im_udp module in the Available Modules chapter of the NXLog
User Guide.

121.36.1. Configuration
The im_udp module accepts the following directives in addition to the common module directives.

ListenAddr
The module will accept datagrams on this IP address or DNS hostname. The default is localhost. Add the
port number to listen on to the end of the host using a colon as a separator (host:port).

IMPORTANT Formerly called Host, this directive is now ListenAddr. Host in this context will be
deprecated from NXLog EE 6.0.

Port
The module will listen for incoming connections on this port number. The default is port 514.

IMPORTANT The Port directive will be deprecated in this context from NXLog EE 6.0. Provide the port
in ListenAddr.

ReusePort
This optional boolean directive enables synchronous listening on the same port by multiple module
instances. Each module instance runs in its own thread, allowing NXLog to process incoming data
simultaneously to take better advantage of multiprocessor systems. The default value is FALSE.

To enable synchronous listening, the configuration file should contain multiple im_udp module instances
listening on the same port with the ReusePort directive set to TRUE; see the Examples section.

SockBufSize
This optional directive sets the socket buffer size (SO_RCVBUF) to the value specified. If not set, the operating
system defaults are used. If UDP packet loss is occurring at the kernel level, setting this to a high value (such
as 150000000) may help. On Windows systems the default socket buffer size is extremely low, and using this
option is highly recommended.

UseRecvmmsg
This boolean directive specifies that the recvmmsg() system call should be used, if available, to receive
multiple messages per call to improve performance. The default is TRUE.

121.36.2. Fields
The following fields are used by im_udp.

$raw_event (type: string)


The received string.

$MessageSourceAddress (type: string)


The IP address of the remote host.

121.36.3. Examples
Examples using the old syntax are included below; these will become invalid with NXLog EE 6.0.

Example 647. Using the im_udp Module

This configuration accepts log messages via UDP and writes them to a file.

nxlog.conf
 1 <Input udp>
 2 Module im_udp
 3 ListenAddr 192.168.1.1:514
 4 </Input>
 5
 6 # old syntax
 7 #<Input udp>
 8 # Module im_udp
 9 # Host 192.168.1.1
10 # Port 514
11 #</Input>
12
13 <Output file>
14 Module om_file
15 File "tmp/output"
16 </Output>
17
18 <Route udp_to_file>
19 Path udp => file
20 </Route>

Example 648. Reusing the Single Port by Multiple Module Instances

The configuration below provides two im_udp module instances to reuse port 514 via the ReusePort
directive. Received messages are written to the /tmp/output file.

nxlog.conf
 1 <Input udp_one>
 2 Module im_udp
 3 Host 192.168.1.1
 4 Port 514
 5 ReusePort TRUE
 6 </Input>
 7
 8 <Input udp_two>
 9 Module im_udp
10 Host 192.168.1.1
11 Port 514
12 ReusePort TRUE
13 </Input>
14
15 <Output file>
16 Module om_file
17 File "tmp/output"
18 </Output>
19
20 <Route udp_to_file>
21 Path udp_one, udp_two => file
22 </Route>
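
The SockBufSize directive described above can be combined with either style of configuration. A sketch that raises the kernel receive buffer to the value suggested earlier to reduce packet loss:

```
<Input udp>
  Module       im_udp
  ListenAddr   0.0.0.0:514
  SockBufSize  150000000
</Input>
```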

121.37. Unix Domain Sockets (im_uds)


This module allows log messages to be received over a Unix domain socket. Unix systems traditionally have a
/dev/log or similar socket used by the system logger to accept messages. Applications use the syslog(3) system
call to send messages to the system logger.

NOTE It is recommended to disable FlowControl when this module is used to collect local Syslog
messages from the /dev/log Unix domain socket. Otherwise, if the corresponding Output queue
becomes full, the syslog() system call will block in any program trying to write to the system log,
and an unresponsive system may result.

For parsing Syslog messages, see the pm_transformer module or the parse_syslog_bsd() procedure of xm_syslog.

See the list of installer packages that provide the im_uds module in the Available Modules chapter of the NXLog
User Guide.

121.37.1. Configuration
The im_uds module accepts the following directives in addition to the common module directives.

UDS
This specifies the path of the Unix domain socket. The default is /dev/log.

CreateDir
If set to TRUE, this optional boolean directive instructs the module to create the directory where the UDS
socket file is located, if it does not already exist. The default is FALSE.

UDSType
This directive specifies the domain socket type. Supported values are dgram and stream. The default is dgram.

InputType
See the InputType directive in the list of common module directives. This defaults to dgram if UDSType is set
to dgram or to linebased if UDSType is set to stream.

UDSGroup
Use this directive to set the group ownership for the created socket. By default, this is the group NXLog is
running as (which may be specified by the global Group directive).

UDSOwner
Use this directive to set the user ownership for the created socket. By default, this is the user NXLog is
running as (which may be specified by the global User directive).

UDSPerms
This directive specifies the permissions to use for the created socket. This must be a four-digit octal value
beginning with a zero. By default, universal read/write permissions will be set (octal value 0666).

121.37.2. Examples
Example 649. Using the im_uds Module

This configuration will accept logs via the specified socket and write them to file.

nxlog.conf
1 <Input uds>
2 Module im_uds
3 UDS /dev/log
4 FlowControl False
5 </Input>

Example 650. Setting Socket Ownership With im_uds

This configuration accepts logs via the specified socket, and also specifies ownership and permissions to
use for the socket.

nxlog.conf
1 <Input uds>
2 Module im_uds
3 UDS /opt/nxlog/var/spool/nxlog/socket
4 UDSOwner root
5 UDSGroup adm
6 UDSPerms 0660
7 </Input>
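
Both examples above use the default dgram socket type. For an application that writes to a stream socket, UDSType can be changed accordingly; the socket path below is illustrative:

```
<Input uds_stream>
  Module   im_uds
  UDS      /var/run/app.sock
  UDSType  stream
</Input>
```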

121.38. Windows Performance Counters (im_winperfcount)


This module periodically retrieves the values of the specified Windows Performance Counters to create an event
record. Each event record contains a field for each counter. Each field is named according to the name of the
corresponding counter.

NOTE This module is only available on Microsoft Windows.

TIP If performance counters are not working or some counters are missing, it may be necessary to
rebuild the performance counter registry settings by running C:\windows\system32\lodctr.exe /R.
See How to rebuild performance counters on Windows Vista/Server2008/7/Server2008R2 on
TechNet for more details, including how to save a backup before rebuilding.

See the list of installer packages that provide the im_winperfcount module in the Available Modules chapter of the
NXLog User Guide.

121.38.1. Configuration
The im_winperfcount module accepts the following directives in addition to the common module directives. The
Counter directive is required.

Counter
This mandatory directive specifies the name of the performance counter that should be polled, such as
\Memory\Available Bytes. More than one Counter directive can be specified to poll multiple counters at
once. Available counter names can be listed with typeperf -q (see the typeperf command reference on
Microsoft Docs).
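
For example, the memory-related counters can be listed from a Command Prompt by filtering the typeperf output (findstr /C: performs a literal string match):

```
> typeperf -q | findstr /C:"\Memory"
```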

PollInterval
This directive specifies how frequently, in seconds, the module will poll the performance counters. If this
directive is not specified, the default is 1 second. Fractional seconds may be specified (PollInterval 0.5 will
check twice every second).

UseEnglishCounters
This optional boolean directive specifies whether to use English counter names. This makes it possible to use
the same NXLog configuration across all deployments even if the localization differs. If this directive is not
specified it defaults to FALSE (native names will be used).

AllowInvalidCounters
If set to TRUE, invalid counter names will be ignored and a warning will be logged instead of stopping with an
error. If this directive is not specified it defaults to FALSE.

121.38.2. Fields
The following fields are used by im_winperfcount.

$raw_event (type: string)


A string containing a header (composed of the $EventTime and $Hostname fields) followed by a list of key-
value pairs for each counter.

$EventTime (type: datetime)


The current time.

$Hostname (type: string)


The name of the system where the event was generated.

$ProcessID (type: integer)


The process ID of the NXLog process.

$Severity (type: string)
The severity name: INFO.

$SeverityValue (type: integer)


The INFO severity level value: 2.

$SourceName (type: string)


Set to nxlog.

121.38.3. Examples
Example 651. Polling Windows Performance Counters

With this configuration, NXLog will retrieve the specified counters every 60 seconds. The resulting messages
will be written to file in JSON format.

nxlog.conf
 1 <Extension json>
 2 Module xm_json
 3 </Extension>
 4
 5 <Input counters>
 6 Module im_winperfcount
 7 Counter \Memory\Available Bytes
 8 Counter \Process(_Total)\Working Set
 9 PollInterval 60
10 </Input>
11
12 <Output file>
13 Module om_file
14 File 'C:\test\counter.log'
15 Exec to_json();
16 </Output>
17
18 <Route perfcount>
19 Path counters => file
20 </Route>

121.39. Windows Event Collector (im_wseventing)


This module can be used to collect Windows EventLog from Microsoft Windows clients that have Windows Event
Forwarding (WEF) configured. This module takes the role of the collector (Subscription Manager) to accept
eventlog records from Windows clients over the WS-Management protocol. WS-Eventing is a subset of WS-
Management used to forward Windows EventLog.

The im_mseventlog module requires NXLog to be installed as an agent on the source host. The im_msvistalog
module can be configured to pull Windows EventLog remotely from Windows hosts, but requires an NXLog
agent running on Windows. The im_wseventing module can be used on all supported platforms, including
GNU/Linux systems, to remotely collect Windows EventLog without requiring any software to be installed on
the source host. Windows clients can be configured through Group Policy to forward EventLog to the system
running the im_wseventing module, without the need to list each client machine individually in the configuration.

The WS-Eventing protocol and im_wseventing support HTTPS using X509 certificates and Kerberos to authenticate
and securely transfer EventLog.

NOTE While there are other products implementing the WS-Eventing protocol (such as IBM
WebSphere DataPower), this module was implemented with the primary purpose of collecting
and parsing forwarded events from Microsoft Windows. Compatibility with other products has
not been assessed.

See the list of installer packages that provide the im_wseventing module in the Available Modules chapter of the
NXLog User Guide.

121.39.1. Kerberos Setup


Follow these steps to set up Windows Event Forwarding with Kerberos.

The steps and examples below assume these systems:

• Windows domain controller ad.domain.com at 192.168.0.2

• RHEL Linux node linux.domain.com at 192.168.0.3

1. Join the Linux node to the domain.


a. Set the hostname:

# hostnamectl set-hostname linux

b. Set the nameserver and static IP address (substitute the correct interface name).

# nano /etc/sysconfig/network-scripts/ifcfg-enp0s3

Set to:

BOOTPROTO=static
IPADDR=192.168.0.3
NETMASK=255.255.255.0
GATEWAY=192.168.0.1
DNS1=192.168.0.2

2. Synchronize the time on linux.domain.com with ad.domain.com. For example:

# ntpdate ad.domain.com

3. Go to the domain controller ad.domain.com and create a new user linux (the name of the user should
match the hostname of the Linux node).
a. Go to Administrative Tools → Active Directory Users and Computers → ad.domain.com → Users.
b. Right click and choose New → User.
i. First name: linux

ii. Full name: linux

iii. User logon name: linux

iv. Set a password on the next page.


v. Uncheck User must change password at next logon.
vi. Check Password never expires.
c. Right click on the new user, click Properties, and open the Account tab.
i. Check This account supports Kerberos AES 128 bit encryption.
ii. Check This account supports Kerberos AES 256 bit encryption.

4. In the DNS settings on the domain controller, create an A record for linux.domain.com.

a. Go to Administrative Tools → DNS → Forward Lookup Zones → ad.domain.com.


b. Right click and choose New Host (A or AAAA)….
c. Add a record with name linux and IP address 192.168.0.3.

5. Open a Command Prompt on ad.domain.com and execute these commands. Use the same <password> as
in step 3b.

> ktpass /princ hosts/linux.domain.com@DOMAIN.COM /pass <password> /mapuser DOMAIN\linux -pType KRB5_NT_PRINCIPAL /out hosts-nxlog.keytab /crypto AES256-SHA1
> ktpass /princ http/linux.domain.com@DOMAIN.COM /pass <password> /mapuser DOMAIN\linux -pType KRB5_NT_PRINCIPAL /out nxlog.keytab /crypto AES256-SHA1

6. Copy the resulting hosts-nxlog.keytab and nxlog.keytab files to linux.domain.com.

7. Update the Group Policy on the domain controller.


a. Run gpedit.msc and go to Computer Configuration → Administrative Templates → Windows
Components → Event Forwarding.
b. Open and enable the Configure target Subscription Manager setting.
c. Click Show… beside the SubscriptionManagers option.
d. Type into the Value field: Server=http://linux.domain.com:80,Refresh=30.

e. In the Command Prompt, run gpupdate /force.

8. Set up Kerberos on linux.domain.com.

a. Confirm that the Kerberos krb5 client and utility software are installed on the system. The required
package can be installed with (for example) yum install krb5-workstation or apt install krb5-
user.

b. Edit the default configuration file at /etc/krb5.conf.

i. In section [domain_realm], add:

.domain.com = DOMAIN.COM
domain.com = DOMAIN.COM

ii. In section [realms], add:

DOMAIN.COM = {
 kdc = domain.com
 admin_server = domain.com
}

c. Use ktutil to merge the two keytab files generated in step 5.

# ktutil
ktutil: rkt /root/hosts-nxlog.keytab
ktutil: rkt /root/nxlog.keytab
ktutil: wkt /root/nxlog-result.keytab
ktutil: q

d. Validate the merged keytab.

# klist -e -k -t /root/nxlog-result.keytab
Keytab name: FILE:/root/nxlog-result.keytab
KVNO Timestamp Principal
---- ------------------- ------------------------------------------------------
  5 17.02.2016 04:16:37 hosts/linux.domain.com@DOMAIN.COM (aes256-cts-hmac-sha1-96)
  4 17.02.2016 04:16:37 http/linux.domain.com@DOMAIN.COM (aes256-cts-hmac-sha1-96)

e. Either copy the keytab into place, or merge if there are already keys in /etc/krb5.keytab.

▪ To simply copy the keytab:

cp /root/nxlog-result.keytab /etc/krb5.keytab

▪ To merge the keytab and validate the result:

# ktutil
ktutil: rkt /etc/krb5.keytab
ktutil: rkt /root/nxlog-result.keytab
ktutil: wkt /etc/krb5.keytab
ktutil: q
# klist -e -k -t /etc/krb5.keytab
Keytab name: FILE:/etc/krb5.keytab
KVNO Timestamp Principal
---- ------------------- ------------------------------------------------------
  5 31.12.1969 15:00:00 HTTP/linux.domain.com@DOMAIN.COM (aes256-cts-hmac-sha1-96)
  5 17.02.2016 04:20:08 HTTP/linux.domain.com@DOMAIN.COM (aes256-cts-hmac-sha1-96)
  5 17.02.2016 04:20:08 hosts/linux.domain.com@DOMAIN.COM (aes256-cts-hmac-sha1-96)
  4 17.02.2016 04:20:08 http/linux.domain.com@DOMAIN.COM (aes256-cts-hmac-sha1-96)

9. Make sure the port defined in the im_wseventing configuration is accessible from the Windows clients. The
local firewall rules on the Linux node may need to be updated.
10. Configure and run NXLog. See the configuration example below.
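
As a sketch for step 9, on a RHEL system running firewalld the port configured in step 7d (80 in this guide) could be opened as follows:

```
# firewall-cmd --permanent --add-port=80/tcp
# firewall-cmd --reload
```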

121.39.2. HTTPS Setup


To set up Windows Event Forwarding over HTTPS, the following steps are required:

• X509 certificate generation using either OpenSSL or the Windows certificate manager,
• configuration of the NXLog im_wseventing module, and
• configuration of Windows Remote Management (WinRM) on each Windows source host.

These steps are covered in greater detail below.

NOTE We will refer to the host running NXLog with the im_wseventing module as the server. Under Windows, the Subscription Manager refers to the same entity, since im_wseventing is what manages the subscription. We will use the name client when referring to the Windows hosts sending the logs using WEF.

The client certificate must have the X509 v3 Extended Key Usage: TLS Web Client Authentication
extension and the server certificate needs the X509 v3 Extended Key Usage: TLS Web Server
Authentication extension. You will likely encounter an error when trying to configure WEF and the connection
to the server will fail without these extended key usage attributes. Also make sure that the intended purpose of
the certificates are set to Server Authentication and Client Authentication respectively.

When generating the certificates, please ensure that the CN in the server certificate subject matches the reverse
DNS name, otherwise you may get errors in the Microsoft Windows/Event-ForwardingPlugin/Operational
eventlog saying The SSL certificate contains a common name (CN) that does not match the hostname.

Generating the certificates with OpenSSL

If you prefer Windows, skip to the next section.

For OpenSSL based certificate generation see the scripts in our public git repository.

Generate the CA certificate and private key:

SUBJ="/CN=NXLog-WEF-CA/O=nxlog.org/C=HU/ST=state/L=location"
openssl req -x509 -nodes -newkey rsa:2048 -keyout ca-key.pem -out ca-cert.pem -batch \
    -subj "$SUBJ" -config gencert.cnf
openssl x509 -outform der -in ca-cert.pem -out ca-cert.crt

Generate the client certificate and export it together with the CA in PFX format to be imported into the Windows
certificate store:

CLIENTSUBJ="/CN=winclient.domain.corp/O=nxlog.org/C=HU/ST=state/L=location"
openssl req -new -newkey rsa:2048 -nodes -keyout client-key.pem -out req.pem -batch \
    -subj "$CLIENTSUBJ" -config gencert.cnf
openssl x509 -req -days 1024 -in req.pem -CA ca-cert.pem -CAkey ca-key.pem \
    -out client-cert.pem -set_serial 01 -extensions client_cert -extfile gencert.cnf
openssl pkcs12 -export -out client.pfx -inkey client-key.pem -in client-cert.pem \
    -certfile ca-cert.pem

Generate the server certificate to be used by the im_wseventing module:

SERVERSUBJ="/CN=nxlogserver.domain.corp/O=nxlog.org/C=HU/ST=state/L=location"
openssl req -new -newkey rsa:2048 -nodes -keyout server-key.pem -out req.pem -batch \
    -subj "$SERVERSUBJ" -config gencert.cnf
openssl x509 -req -days 1024 -in req.pem -CA ca-cert.pem -CAkey ca-key.pem \
    -out server-cert.pem -set_serial 01 -extensions server_cert -extfile gencert.cnf
openssl x509 -outform der -in server-cert.pem -out server-cert.crt

In order to generate the certificates with the correct extensions the following is needed in gencert.cnf:

[ server_cert ]
basicConstraints=CA:FALSE
nsCertType = server
subjectKeyIdentifier=hash
authorityKeyIdentifier=keyid,issuer:always
keyUsage = digitalSignature, keyEncipherment
extendedKeyUsage = serverAuth
#crlDistributionPoints=URI:http://127.0.0.1/crl.pem

[ client_cert ]
basicConstraints=CA:FALSE
nsCertType = client
subjectKeyIdentifier=hash
authorityKeyIdentifier=keyid,issuer:always
keyUsage = digitalSignature, keyEncipherment
extendedKeyUsage = clientAuth

NOTE If you are using an intermediary CA, please make sure that the ca-cert.pem file contains, in the correct order, the public part of every issuer's certificate. The easiest way to achieve this is to concatenate (cat) the PEM certificates together.
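
Before moving on, it may be worth checking that the extensions actually made it into the generated certificates and that both leaf certificates chain back to the CA. A sketch, assuming the files produced by the commands above are in the current directory:

```shell
# Print the Extended Key Usage of each certificate; expect
# "TLS Web Server Authentication" and "TLS Web Client Authentication".
openssl x509 -in server-cert.pem -noout -text | grep -A1 'Extended Key Usage'
openssl x509 -in client-cert.pem -noout -text | grep -A1 'Extended Key Usage'

# Both certificates should verify against the CA certificate.
openssl verify -CAfile ca-cert.pem server-cert.pem client-cert.pem
```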

If you have more complex requirements, follow this guide on how to set up a CA and generate certificates with
OpenSSL.

Generating the certificates with the Windows certificate manager

For more information on creating certificates under Windows, see this document: Request Certificates by Using
the Certificate Request Wizard.

Make sure to create the certificates with the required extensions as noted above. Once you have issued the
certificates, you will need to export the server certificate in PFX format. The PFX must also contain the private key;
the password may be omitted. The PFX file can then be converted to the PEM format required by im_wseventing
using openssl:

openssl pkcs12 -in server.pfx -nocerts -nodes -out server-key.pem
openssl pkcs12 -in server.pfx -nokeys -nodes -out server-cert.pem

You will also need to export the CA certificate (without the private key) the same way and convert it into
ca-cert.pem.
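
A quick sanity check after the conversion: the extracted private key must belong to the certificate. For RSA keys, comparing the modulus digests is one way to confirm this; the two outputs below should be identical:

```shell
# The two digests must match, otherwise the key and certificate
# do not form a pair and the TLS handshake will fail.
openssl x509 -noout -modulus -in server-cert.pem | openssl md5
openssl rsa  -noout -modulus -in server-key.pem  | openssl md5
```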

Configure NXLog with the im_wseventing module

You will need to use server-key.pem, server-cert.pem and ca-cert.pem for the HTTPSCertKeyFile,
HTTPSCertFile and HTTPSCAFile respectively.

Optionally you can use the QueryXML option to filter on specific channels or events.

See the configuration example below for how your nxlog.conf should look.

Once the configuration is complete you may start the nxlog service.

Configuring WinRM and WEF

1. Install, configure, and enable Windows Remote Management (WinRM) on each source host.
a. Make sure the Windows Remote Management (WS-Management) service is installed, running, and set to
Automatic startup type.
b. If WinRM is not already installed, see these instructions on MSDN: Installation and Configuration for
Windows Remote Management.
c. Check that the proper client authentication method (Certificate) is enabled for WinRM. Issue the
following command:

winrm get winrm/config/Client/Auth

This should produce the following output:

 Auth
 Basic = false
 Digest = true
 Kerberos = true
 Negotiate = true
 Certificate = true
 CredSSP = true [Source="GPO"]

If Certificate authentication is set to false, it should be enabled with the following:

winrm set winrm/config/client/auth @{Certificate="true"}

NOTE Windows Remoting does not support event forwarding over unsecured transport (such as HTTP). Therefore it is recommended to disable Basic authentication:

winrm set winrm/config/client/auth @{Basic="false"}

d. Import the client authentication certificate if you used OpenSSL to generate these. In the Certificate
MMC snap-in for the Local Computer click More actions - All Tasks - Import…. Import the
client.pfx file. Enter the private key password (if set) and make sure the Include all extended
properties check-box is selected.

NOTE After importing is completed, open the Certificates MMC snap-in, select Computer account, and double-click the client certificate to check whether the full certificate chain is available and trusted. You may want to move the CA certificate under Trusted Root Certification Authorities in order to make the client certificate trusted.

e. Grant the NetworkService account the proper permissions to access the client certificate using the
Windows HTTP Services Certificate Configuration Tool (WinHttpCertCfg.exe) and check that the
NetworkService account has access to the private key file of the client authentication certificate by
running the following command:

winhttpcertcfg -l -c LOCAL_MACHINE\my -s <certificate subject name>

If NetworkService is not listed in the output, grant it permissions by running the following command:

winhttpcertcfg -g -c LOCAL_MACHINE\my -s <certificate subject name> -a NetworkService

f. In order to access the Security EventLog, the NetworkService account needs to be added to the Event Log
Readers group.
g. Configure the source host security policy to enable event forwarding:
i. Run the Group Policy MMC snap-in (gpedit.msc) and go to Computer Configuration ›
Administrative Templates › Windows Components › Event Forwarding.
ii. Right-click the SubscriptionManager setting and select Properties. Enable the
SubscriptionManager setting and click Show to add a server address.
iii. Add at least one setting that specifies the NXLog collector system. The SubscriptionManager
Properties window contains an Explain tab that describes the syntax for the setting. If you have
used the gencert-server.sh script it should print the subscription manager string that has the
following format:

Server=HTTPS://<FQDN of im_wseventing><:port>/wsman/,Refresh=<Refresh interval in
seconds>,IssuerCA=<certificate authority certificate thumbprint>

An example would be as follows:

Server=HTTPS://nxlogserver.domain.corp:5985/wsman/,Refresh=14400,IssuerCA=57F5048548A6A983C3A14DA80E0626E4A462FC04

iv. To find the IssuerCA fingerprint, open MMC, add the Certificates snap-in, select the Local Computer
account, and find the Issuing CA certificate. Copy the Thumbprint from the Details tab. Make sure to
eliminate spaces and the invisible non-breaking space that precedes the first character of the
thumbprint on Windows 2008.
v. After the SubscriptionManager setting has been added, ensure the policy is applied by running:

gpupdate /force

vi. At this point the WinRM service on the Windows client should connect to NXLog; a connection
attempt should be logged in nxlog.log, and you should soon start seeing events arriving.
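
If the CA was generated with OpenSSL, the IssuerCA thumbprint needed in step iv can also be computed directly from ca-cert.pem instead of copying it from the MMC. This assumes the thumbprint shown by Windows is the SHA-1 fingerprint of the certificate, which is the usual case:

```shell
# Print the SHA-1 fingerprint of the CA certificate without colons,
# matching the IssuerCA= format used in the SubscriptionManager string.
openssl x509 -in ca-cert.pem -noout -fingerprint -sha1 | cut -d= -f2 | tr -d ':'
```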

121.39.3. Forwarding Security Events


In adherence to C2-level security, access to audit data of security-related events is limited to authorized
administrators. WinRM runs as a network service and may not have access to the Security log; in that case it
cannot forward Security events. To give it access to the Security log:

1. Open Group Policy Editor by running gpedit.msc.

2. Go to Computer Configuration → Policies → Administrative Templates → Windows Components →
Event Log Service → Configure Log Access.
3. In the Configure Log Access policy setting, enter:

O:BAG:SYD:(A;;0xf0005;;;SY)(A;;0x5;;;BA)(A;;0x1;;;S-1-5-32-573)(A;;0x1;;;NS)

4. Run gpupdate /force to apply changes.
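
On a single host, the same channel ACL can presumably also be applied directly with wevtutil rather than through Group Policy; treat this as an untested sketch, run from an elevated prompt:

```
wevtutil sl Security /ca:"O:BAG:SYD:(A;;0xf0005;;;SY)(A;;0x5;;;BA)(A;;0x1;;;S-1-5-32-573)(A;;0x1;;;NS)"
```

Note that a later Group Policy refresh may overwrite a value set this way.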

121.39.4. Troubleshooting
WEF is not easy to configure, and many things can go wrong. To troubleshoot WEF, check the Windows Eventlog
under the following channels:

• Applications and Services Logs/Microsoft Windows/Event-ForwardingPlugin

• Applications and Services Logs/Microsoft Windows/Windows Remote Management

• Applications and Services Logs/Microsoft Windows/CAPI2
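
From an elevated prompt on a client, the forwarding channel can also be inspected on the command line. The exact channel name below is an assumption and may vary by Windows version:

```
wevtutil qe "Microsoft-Windows-Eventlog-ForwardingPlugin/Operational" /c:20 /rd:true /f:text
```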

The CN in the server certificate subject must match the reverse DNS name; otherwise you may get errors in the
Microsoft Windows/Event-ForwardingPlugin/Operational eventlog saying The SSL certificate
contains a common name (CN) that does not match the hostname. In that case the WinRM service
may also try to use a CRL URL to download the revocation list. If it cannot check the CRL, there will be error
messages under Applications and Services Logs/Microsoft Windows/CAPI2 such as this:

<Result value="80092013">The revocation function was unable to check
revocation because the revocation server was offline.</Result>

In our experience, if the FQDN and the reverse DNS of the server are properly set up, the CRL check shouldn't
fail.

Unfortunately, the diagnostic messages in the Windows Eventlog are in some cases rather sloppy. You may see
messages such as The forwarder is having a problem communicating with the subscription manager
at address https://nxlog:5985/wsman/. Error code is 42424242 and the Error Message is . Note
the empty error message. Other than guessing, you may try looking up the error code on the internet.

If the IssuerCA thumbprint is incorrect, or the client can't locate the certificate in the certificate store, the above
error will be logged in the Windows Eventlog with error code 2150858882.

The Refresh interval in the GPO Subscription Manager settings should be set to a higher value (for example,
Refresh=1200); otherwise the Windows client will reconnect too frequently, resulting in a lot of
connection/disconnection messages in nxlog.log.

By default, the module does not log connection attempts, which would otherwise be useful for troubleshooting
purposes. This can be turned on with the LogConnections configuration directive. The Windows Event Forwarding
service may disconnect during the TLS handshake, with the following message logged in nxlog.log when
LogConnections is enabled. This is normal as long as there is another connection attempt right after the
disconnection.

2017-09-28 12:16:01 INFO connection accepted from 10.2.0.161:49381
2017-09-28 12:16:01 ERROR im_wseventing got disconnected during SSL handshake
2017-09-28 12:16:01 INFO connection accepted from 10.2.0.161:49381

See the article on Technet titled Windows Event Forwarding to a workgroup Collector Server for further
instructions and troubleshooting tips.

121.39.5. Configuration
The im_wseventing module accepts the following directives in addition to the common module directives. The
Address and ListenAddr directives are required.

Address
This mandatory directive accepts a URL address. This address is sent to the client to notify it where the events
should be sent. For example, Address https://nxlogserver.domain.corp:5985/wsman.

ListenAddr
This mandatory directive specifies the address of the interface where the module should listen for client
connections. Normally the any address 0.0.0.0 is used.

AddPrefix
If this boolean directive is set to TRUE, names of fields parsed from the <EventData> portion of the event
XML will be prefixed with EventData.. For example, $EventData.SubjectUserName will be added to the
event record instead of $SubjectUserName. The same applies to <UserData>. This directive defaults to
FALSE: field names will not be prefixed.

CaptureEventXML
This boolean directive defines whether the module should store raw XML-formatted event data. If set to
TRUE, the module stores raw XML data in the $EventXML field. By default, the value is set to FALSE, and the
$EventXML field is not added to the event record.

ConnectionRetry
This optional directive specifies the reconnection interval. The default value is PT60.0S (60 seconds).

ConnectionRetryTotal
This optional directive specifies the maximum number of reconnection attempts. The default is 5 attempts. If
the client exceeds the retry count it will consider the subscription to be stale and will not attempt to
reconnect.

Expires
This optional directive can be used to specify a duration after which the subscription will expire, or an
absolute time when the subscription will expire. By default, the subscription will never expire.

HeartBeats
Heartbeats are dummy events that do not appear in the output. These are used by the client to signal that
logging is still functional if no events are generated during the specified time period. The default heartbeat
value of PT3600.000S may be overridden with this optional directive.

HTTPSAllowUntrusted
This boolean directive specifies that the remote connection should be allowed without certificate verification.
If set to TRUE the remote will be able to connect with an unknown or self-signed certificate. The default value
is FALSE: all HTTPS connections must present a trusted certificate.

HTTPSCADir
This specifies the path to a directory containing certificate authority (CA) certificates, which will be used to
check the certificate of the remote HTTPS client. The certificate filenames in this directory must be in the
OpenSSL hashed format. A remote’s self-signed certificate (which is not signed by a CA) can also be trusted by
including a copy of the certificate in this directory.

HTTPSCAFile
This specifies the path of the certificate authority (CA) certificate, which will be used to check the certificate of
the remote HTTPS client. To trust a self-signed certificate presented by the remote (which is not signed by a
CA), provide that certificate instead.

HTTPSCAThumbprint
This optional directive specifies the certificate thumbprint of the certificate authority (CA), which is used to
look up the CA certificate from the Windows certificate store. The hexadecimal fingerprint string can be
copied straight from Windows Certificate Manager (certmgr.msc), whitespaces are automatically removed.
This directive is only supported on Windows. This directive and the HTTPSCADir and HTTPSCAFile directives
are mutually exclusive.

HTTPSCertFile
This specifies the path of the certificate file to be used for the HTTPS handshake.

HTTPSCertKeyFile
This specifies the path of the certificate key file to be used for the HTTPS handshake.

HTTPSCertThumbprint
This optional directive specifies the certificate thumbprint to be used for the SSL handshake. The hexadecimal
fingerprint string can be copied straight from Windows Certificate Manager (certmgr.msc), whitespaces are
automatically removed. This directive is only supported on Windows. This directive and the HTTPSCertFile and
HTTPSCertKeyFile directives are mutually exclusive.

HTTPSCRLDir
This specifies the path to a directory containing certificate revocation lists (CRLs), which will be consulted
when checking the certificate of the remote HTTPS client. The certificate filenames in this directory must be in
the OpenSSL hashed format.

HTTPSCRLFile
This specifies the path of the certificate revocation list (CRL) which will be consulted when checking the
certificate of the remote HTTPS client.

HTTPSKeyPass
With this directive, a password can be supplied for the certificate key file defined in HTTPSCertKeyFile. This
directive is not needed for passwordless private keys.

HTTPSSSLCipher
This optional directive can be used to set the permitted SSL cipher list, overriding the default. Use the format
described in the ciphers(1ssl) man page.

HTTPSSSLCiphersuites
This optional directive can be used to define the permitted SSL cipher list in case the HTTPSSSLProtocol
directive is set to TLSv1.3. Use the same format as in the HTTPSSSLCipher directive.

HTTPSSSLCompression
This boolean directive allows you to enable data compression when sending data over the network. The
compression mechanism is based on the zlib compression library. If the directive is not specified, it defaults
to FALSE (the compression is disabled).

Some Linux packages (for example, Debian) use the OpenSSL library provided by the OS and
may not support the zlib compression mechanism. The module will emit a warning on
NOTE
startup if the compression support is missing. The generic deb/rpm packages are bundled
with a zlib-enabled libssl library.

HTTPSSSLProtocol
This directive can be used to set the allowed SSL/TLS protocol(s). It takes a comma-separated list of values
which can be any of the following: SSLv2, SSLv3, TLSv1, TLSv1.1, TLSv1.2 and TLSv1.3. By default, the
TLSv1.2 and TLSv1.3 protocols are allowed. Note that the OpenSSL library shipped by Linux distributions may
not support SSLv2 and SSLv3, and these will not work even if enabled with this directive.

LogConnections
This boolean directive can be used to turn on logging of connections. Since WEF connections can be quite
frequent and excessive it could generate a lot of noise. On the other hand it can be useful to troubleshoot
clients. This is disabled by default.

MaxElements
This optional directive specifies the maximum number of event records to be batched by the client. If this is
not specified the default value is decided by the client.

MaxEnvelopeSize
This optional directive can be used to set a limit on the size of the allowed responses, in bytes. The default
size is 153600 bytes. Event records exceeding this size will be dropped by the client and replaced by a drop
notification.

MaxTime
This optional directive specifies the maximum amount of time allowed to elapse for the client to batch events.
The default value is PT30.000S (30 seconds).

Port
This specifies the port on which the module will listen for incoming connections. The default is port 5985.

Query
This directive specifies the query for pulling only specific EventLog sources. See the MSDN documentation
about Event Selection. Note that this directive requires a single-line parameter, so multi-line query XML
should be specified using line continuation:

1 Query <QueryList> \
2 <Query Id='1'> \
3 <Select Path='Security'>*[System/Level=4]</Select> \
4 </Query> \
5 </QueryList>

QueryXML
This directive is the same as the Query directive above, except it can be used as a block. Multi-line XML
queries can be used without line continuation, and the XML Query can be directly copied from Event Viewer.

 1 <QueryXML>
 2 <QueryList>
 3 <!-- XML-style comments can
 4 span multiple lines in
 5 QueryXML blocks like this.
 6 -->
 7 <Query Id='1'>
 8 <Select Path='Security'>*[System/Level=4]</Select>
 9 </Query>
10 </QueryList>
11 </QueryXML>

CAUTION Commenting with the # mark does not work within multi-line Query directives or QueryXML blocks. In this case, use XML-style comments <!-- --> as shown in the example above. Failure to follow this syntax for comments within queries will render the module instance useless. Since NXLog does not parse the content of QueryXML blocks, this behavior is expected.

SubscriptionName
The default value of NXLog Subscription may be overridden with this optional directive. This name will
appear in the client logs.

121.39.6. Fields
The following fields are used by im_wseventing.

The actual fields generated will vary depending on the particular event’s source data.

$raw_event (type: string)


A string containing the $EventID, $EventType, $EventTime, $Hostname, and $Message from the event.

$ActivityID (type: string)


A globally unique identifier for the current activity.

$Channel (type: string)


The Channel of the event source (for example, Security or Application).

$EventData (type: string)


Event-specific data. This field is mutually exclusive with $UserData.

$EventID (type: integer)


The event ID specific to the event source.

$EventTime (type: datetime)


The timestamp that indicates when the event was logged.

$EventType (type: string)


The type of the event, which is a string describing the severity. This is translated to its string representation.
Possible values are: CRITICAL, ERROR, AUDIT_FAILURE, AUDIT_SUCCESS, INFO, WARNING, and VERBOSE.

$EventXML (type: string)


The raw event data in XML format. This field is available if the module’s CaptureEventXML directive is set to
TRUE.

$ExecutionProcessID (type: integer)
The ID identifying the process that generated the event.

$ExecutionThreadID (type: integer)


The ID identifying the thread that generated the event.

$Hostname (type: string)


The name of the computer that generated the event.

$Keywords (type: string)


The keywords used to classify the event, as a hexadecimal number.

$Level (type: string)


The level of the event as a string, resolved from $LevelValue. Possible values include: Success, Information,
Warning, Error, Audit Success, and Audit Failure.

$LevelValue (type: integer)


The level of the event.

$Message (type: string)


The message from the event.

$MessageID (type: string)


The unique identifier of the message.

$Opcode (type: string)


The Opcode string resolved from OpcodeValue.

$OpcodeValue (type: integer)


The Opcode number of the event as in EvtSystemOpcode.

$param1 (type: string)


Additional event-specific data ($param1, $param2, and so on).

$ProviderGuid (type: string)


The globally unique identifier of the event’s provider. This corresponds to the name of the provider in the
$SourceName field.

$RecordNumber (type: integer)


The number of the event record.

$Severity (type: string)


The normalized severity name of the event. See $SeverityValue.

$SeverityValue (type: integer)


The normalized severity number of the event, mapped as follows.

Event Log Severity   Normalized Severity
0/Audit Success      2/INFO
0/Audit Failure      4/ERROR
1/Critical           5/CRITICAL
2/Error              4/ERROR
3/Warning            3/WARNING
4/Information        2/INFO
5/Verbose            1/DEBUG

$SourceName (type: string)


The event source which produced the event.

$Task (type: string)


The task defined in the event.

$UserData (type: string)


Event-specific data. This field is mutually exclusive with $EventData.

$UserID (type: string)


The Security Identifier (SID) of the account associated with the event.

$Version (type: integer)


The version number of the event.

121.39.7. Examples

Example 652. Collecting Forwarded Events Using Kerberos

This example collects Windows EventLog data using Kerberos.

nxlog.conf
 1 SuppressRepeatingLogs FALSE
 2
 3 <Extension json>
 4 Module xm_json
 5 </Extension>
 6
 7 <Input wseventin>
 8 Module im_wseventing
 9 Address http://LINUX.DOMAIN.COM:80/wsman
10 ListenAddr 0.0.0.0
11 Port 80
12 SubscriptionName test
13 Exec log_info(to_json());
14 <QueryXML>
15 <QueryList>
16 <Query Id="0" Path="Application">
17 <Select Path="Application">*</Select>
18 <Select Path="Security">*</Select>
19 <Select Path="Setup">*</Select>
20 <Select Path="System">*</Select>
21 <Select Path="ForwardedEvents">*</Select>
22 <Select Path="Windows PowerShell">*</Select>
23 </Query>
24 </QueryList>
25 </QueryXML>
26 </Input>

Example 653. Collecting Forwarded Events Using HTTPS

This example Input module instance collects Windows EventLog remotely. Two EventLog queries are
specified, the first for hostnames matching foo* and the second for other hostnames.

nxlog.conf
 1 <Input wseventing>
 2 Module im_wseventing
 3 ListenAddr 0.0.0.0
 4 Port 5985
 5 Address https://linux.corp.domain.com:5985/wsman
 6 HTTPSCertFile %CERTDIR%/server-cert.pem
 7 HTTPSCertKeyFile %CERTDIR%/server-key.pem
 8 HTTPSCAFile %CERTDIR%/ca.pem
 9 <QueryXML>
10 <QueryList>
11 <Computer>foo*</Computer>
12 <Query Id="0" Path="Application">
13 <Select Path="Application">*</Select>
14 </Query>
15 </QueryList>
16 </QueryXML>
17 <QueryXML>
18 <QueryList>
19 <Query Id="0" Path="Application">
20 <Select Path="Application">*</Select>
21 <Select Path="Microsoft-Windows-Winsock-AFD/Operational">*</Select>
22 <Select Path="Microsoft-Windows-Wired-AutoConfig/Operational">*</Select>
23 <Select Path="Microsoft-Windows-Wordpad/Admin">*</Select>
24 <Select Path="Windows PowerShell">*</Select>
25 </Query>
26 </QueryList>
27 </QueryXML>
28 </Input>

121.40. ZeroMQ (im_zmq)


This module provides message transport over ZeroMQ, a scalable high-throughput messaging library.

The corresponding output module is om_zmq.

See the list of installer packages that provide the im_zmq module in the Available Modules chapter of the NXLog
User Guide.

121.40.1. Configuration
The im_zmq module accepts the following directives in addition to the common module directives. The Address,
ConnectionType, Port, and SocketType directives are required.

Address
This directive specifies the ZeroMQ socket address.

ConnectionType
This mandatory directive specifies the underlying transport protocol. It may be one of the following: TCP, PGM,
or EPGM.

Port
This directive specifies the ZeroMQ socket port.

SocketType
This mandatory directive defines the type of the socket to be used. It may be one of the following: REQ,
DEALER, SUB, XSUB, or PULL. This must be set to SUB if ConnectionType is set to PGM or EPGM.

Connect
If this boolean directive is set to TRUE, im_zmq will connect to the Address specified. If FALSE, im_zmq will bind
to the Address and listen for connections. The default is FALSE.

InputType
See the InputType directive in the list of common module directives. The default is Dgram.

Interface
This directive specifies the ZeroMQ socket interface.

SockOpt
This directive can be used to set ZeroMQ socket options. For example, SockOpt ZMQ_SUBSCRIBE
ANIMALS.CATS. This directive may be used more than once to set multiple options.

121.40.2. Examples
Example 654. Using the im_zmq Module

This example configuration accepts ZeroMQ messages over TCP and writes them to file.

nxlog.conf
 1 <Input zmq>
 2 Module im_zmq
 3 SocketType PULL
 4 ConnectionType TCP
 5 Address 10.0.0.1
 6 Port 1415
 7 </Input>
 8
 9 <Output file>
10 Module om_file
11 File "/var/log/zmq-messages.log"
12 </Output>
13
14 <Route zmq_to_file>
15 Path zmq => file
16 </Route>

Chapter 122. Processor Modules
Processor modules can be used to process log messages in the log message path between configured Input and
Output modules.

122.1. Blocker (pm_blocker)


This module blocks log messages and can be used to simulate a blocked route. When the module blocks the data
flow, log messages are first accumulated in the buffers, and then the flow control mechanism pauses the input
modules. Using the block() procedure, it is possible to programmatically stop or resume the data flow. It can be
useful for real-world scenarios as well as testing. See the examples below. When the module starts, the blocking
mode is disabled by default (it operates like pm_null would).

See the list of installer packages that provide the pm_blocker module in the Available Modules chapter of the
NXLog User Guide.

122.1.1. Configuration
The pm_blocker module accepts only the common module directives.

122.1.2. Functions
The following functions are exported by pm_blocker.

boolean is_blocking()
Return TRUE if the module is currently blocking the data flow, FALSE otherwise.

122.1.3. Procedures
The following procedures are exported by pm_blocker.

block(boolean mode);
When mode is TRUE, the module will block. A block(FALSE) call should be made from a Schedule block or
from another module; if it were called from the blocked route itself, it might not get invoked because the
queue is already full.

122.1.4. Examples

Example 655. Using the pm_blocker Module

In this example messages are received over UDP and forwarded to another host via TCP. The log data is
forwarded during non-working hours (between 7pm and 8am). During working hours, the data is buffered
on the disk.

nxlog.conf (truncated)
 1 <Input udp>
 2 Module im_udp
 3 Host 0.0.0.0
 4 Port 1514
 5 </Input>
 6
 7 <Processor buffer>
 8 Module pm_buffer
 9 # 100 MB disk buffer
10 MaxSize 102400
11 Type disk
12 </Processor>
13
14 <Processor blocker>
15 Module pm_blocker
16 <Schedule>
17 When 0 8 * * *
18 Exec blocker->block(TRUE);
19 </Schedule>
20 <Schedule>
21 When 0 19 * * *
22 Exec blocker->block(FALSE);
23 </Schedule>
24 </Processor>
25
26 <Output tcp>
27 Module om_tcp
28 Host 192.168.1.1
29 [...]

122.2. Buffer (pm_buffer)


Messages received over UDP may be dropped by the operating system if packets are not read from the message
buffer fast enough. Some logging subsystems use a small circular buffer and can overwrite old logs if the buffer
is not read in time, also resulting in the loss of log data. Buffering can help in such situations.

The pm_buffer module supports disk- and memory-based log message buffering. If both are required, multiple
pm_buffer instances can be used with different settings. Because a memory buffer is faster but limited in size,
combining memory-based and disk-based buffering can be a good idea if buffering is needed frequently.

The disk-based buffering mode stores the log message data in chunks. When all the data is successfully
forwarded from a chunk, it is then deleted in order to save disk space.

NOTE Using pm_buffer is only recommended when there is a chance of message loss. The built-in flow control in NXLog ensures that messages will not be read by the input module until the output side can send, store, or forward. When reading from files (with im_file) or the Windows EventLog (with im_mseventlog or im_msvistalog), it is rarely necessary to use the pm_buffer module unless log rotation is used. During a rotation, there is a possibility of dropping some data while the output module (om_tcp, for example) is being blocked.

See the list of installer packages that provide the pm_buffer module in the Available Modules chapter of the
NXLog User Guide.

122.2.1. Configuration
The pm_buffer module accepts the following directives in addition to the common module directives. The
MaxSize and Type directives are required.

CreateDir
If set to TRUE, this optional boolean directive instructs the module to create the output directory before
opening the file for writing if it does not exist. The default is FALSE.

MaxSize
This mandatory directive specifies the size of the buffer in kilobytes.

Type
This directive can be set to either Mem or Disk to select memory- or disk-based buffering.

Directory
This directory will be used to store the disk buffer file chunks. This is only valid if Type is set to Disk.

WarnLimit
This directive specifies an optional limit, smaller than MaxSize, which will trigger a warning message when
reached. To protect against a flood of warnings, the log message will not be generated again until the buffer
size drops to half of WarnLimit and reaches the limit again.

122.2.2. Functions
The following functions are exported by pm_buffer.

integer buffer_count()
Return the number of log messages held in the memory buffer.

integer buffer_size()
Return the size of the memory buffer in bytes.
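These functions are evaluated in the context of the pm_buffer instance that exports them, so a Schedule block inside the instance can use them for periodic monitoring. A minimal sketch follows; the 10-second interval and the log message text are illustrative assumptions, not documented defaults:

```
<Processor buffer>
    Module      pm_buffer
    MaxSize     1024
    Type        Mem
    # Hypothetical monitoring: log buffer usage every 10 seconds
    <Schedule>
        Every   10 sec
        Exec    log_info("buffered events: " + buffer_count() + ", bytes: " + buffer_size());
    </Schedule>
</Processor>
```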

122.2.3. Examples

Example 656. Using a Memory Buffer to Protect Against UDP Message Loss

This configuration accepts log messages via UDP and forwards them via TCP. An intermediate memory-
based buffer allows the im_udp module instance to continue accepting messages even if the om_tcp output
stops working (caused by downtime of the remote host or network issues, for example).

nxlog.conf
 1 <Input udp>
 2 Module im_udp
 3 Host 0.0.0.0
 4 Port 514
 5 </Input>
 6
 7 <Processor buffer>
 8 Module pm_buffer
 9 # 1 MB buffer
10 MaxSize 1024
11 Type Mem
12 # warn at 512k
13 WarnLimit 512
14 </Processor>
15
16 <Output tcp>
17 Module om_tcp
18 Host 192.168.1.1
19 Port 1514
20 </Output>
21
22 <Route udp_to_tcp>
23 Path udp => buffer => tcp
24 </Route>
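A disk-based variant of the same buffer instance can be sketched as follows; the spool directory path is an assumption for illustration, and the CreateDir directive (documented above) makes the module create the directory if it does not exist:

```
<Processor buffer>
    Module      pm_buffer
    # 100 MB disk-based buffer, stored as chunk files
    MaxSize     102400
    Type        Disk
    Directory   /var/spool/nxlog/buffer
    CreateDir   TRUE
</Processor>
```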

122.3. Event Correlator (pm_evcorr)


The pm_evcorr module provides event correlation functionality in addition to the already available NXLog
language features, such as variables and statistical counters, which can also be used for event correlation
purposes.

This module was greatly inspired by the Perl based correlation tool SEC. Some of the rules of the pm_evcorr
module were designed to mimic those available in SEC. This module aims to be a better alternative to SEC with
the following advantages:

• The correlation rules in SEC work with the current time. With pm_evcorr it is possible to specify a time field
which is used for elapsed time calculation, making offline event correlation possible.
• SEC uses regular expressions extensively, which can become quite slow if there are many correlation rules. In
contrast, this module can correlate pre-processed messages using fields from, for example, the pattern
matcher and Syslog parsers without requiring the use of regular expressions (though these are also available
for use by correlation rules). Thus testing conditions can be significantly faster when simple comparison is
used instead of regular expression based pattern matching.
• This module was designed to operate on fields, making it possible to correlate structured logs in addition to
simple free-form log messages.
• Most importantly, this module is written in C, providing performance benefits (whereas SEC is written in pure
Perl).

The rulesets of this module can use a context. A context is an expression which is evaluated at runtime to a
value, and the correlation rule is checked in the context of this value. For example, to count the number of failed
logins per user and alert if the failed logins exceed 3 for a user, the $AccountName field would be used as the
context. There is a separate context storage for each correlation rule instance. For global contexts accessible
from all rule instances, see module variables and statistical counters.

See the list of installer packages that provide the pm_evcorr module in the Available Modules chapter of the
NXLog User Guide.

122.3.1. Configuration
The pm_evcorr module accepts the following directives in addition to the common module directives.

The pm_evcorr configuration contains correlation rules which are evaluated for each log message processed by
the module. Currently there are seven rule types supported by pm_evcorr: Absence, Group, Pair, Simple, Stop,
Suppressed, and Thresholded. These rules are defined in configuration blocks. The rules are evaluated in the
order they are defined. For example, a correlation rule can change a state, variable, or field which can then be
used by a later rule. File inclusion can be useful for storing correlation rules in a separate file.

Absence
This rule type does the opposite of Pair. When TriggerCondition evaluates to TRUE, this rule type will wait
Interval seconds for RequiredCondition to become TRUE. If it does not become TRUE, it executes the
statement(s) in the Exec directive(s).

Context
This optional directive specifies an expression to be used as the context. It must evaluate to a value.
Usually a field is specified here.

Exec
One or more Exec directives must be specified, each taking a statement as argument.

NOTE The evaluation of this Exec is not triggered by a log event; thus it does not make sense to
use log data related operations such as accessing fields.

Interval
This mandatory directive takes an integer argument specifying the number of seconds to wait for
RequiredCondition to become TRUE. Its value must be greater than 0. The TimeField directive is used to
calculate time.

RequiredCondition
This mandatory directive takes an expression as argument which must evaluate to a boolean value. When
this evaluates to TRUE after TriggerCondition evaluated to TRUE within Interval seconds, the statement(s)
in the Exec directive(s) are NOT executed.

TriggerCondition
This mandatory directive takes an expression as argument which must evaluate to a boolean value.

Group
This rule type groups messages together based on the specified correlation context. The Exec block is
executed for each event. The last log data of each context group is available through get_prev_event_data().
This way, fields and information can be propagated from the previous event of the group to the following one.

Context
This mandatory directive specifies an expression to be used as the context. It must evaluate to a value.
Usually a field is specified here.

Exec
One or more Exec directives must be specified, each taking a statement as an argument.

Pair
When TriggerCondition evaluates to TRUE, this rule type will wait Interval seconds for RequiredCondition to
become TRUE. It then executes the statement(s) in the Exec directive(s).

Context
This optional directive specifies an expression to be used as the context. It must evaluate to a value.
Usually a field is specified here.

Exec
One or more Exec directives must be specified, each taking a statement as argument.

Interval
This directive takes an integer argument specifying the number of seconds to wait for RequiredCondition
to become TRUE. If this directive is 0 or not specified, the rule will wait indefinitely for RequiredCondition
to become TRUE. The TimeField directive is used to calculate time.

RequiredCondition
This mandatory directive takes an expression as argument which must evaluate to a boolean value. When
this evaluates to TRUE after TriggerCondition evaluated to TRUE within Interval seconds, the statement(s)
in the Exec directive(s) are executed.

TriggerCondition
This mandatory directive takes an expression as argument which must evaluate to a boolean value.

Simple
This rule type is essentially the same as the Exec directive supported by all modules. Because Exec directives
are evaluated before the correlation rules, the Simple rule is needed in order to evaluate a statement in rule
order, as the other rule types do. The Simple block has one directive, also with the same name.

Exec
One or more Exec directives must be specified, with a statement as argument.

Stop
This rule type will stop evaluating successive rules if the Condition evaluates to TRUE. The optional Exec
directive will be evaluated in this case.

Condition
This mandatory directive takes an expression as argument which must evaluate to a boolean value. When
it evaluates to TRUE, the correlation rule engine will stop checking any further rules.

Exec
One or more Exec directives may be specified, each taking a statement as argument. This will be
evaluated when the specified Condition is satisfied. This directive is optional.

Suppressed
This rule type matches the given condition. If the condition evaluates to TRUE, the statement specified with
the Exec directive is evaluated. The rule will then ignore any matching log messages for the time specified
with the Interval directive. This rule is useful for avoiding the creation of multiple alerts in a short period
when a condition is satisfied.

Condition
This mandatory directive takes an expression as argument which must evaluate to a boolean value.

Context
This optional directive specifies an expression to be used as the context. It must evaluate to a value.
Usually a field is specified here.

Exec
One or more Exec directives must be specified, each taking a statement as argument.

Interval
This mandatory directive takes an integer argument specifying the number of seconds to ignore the
condition. The TimeField directive is used to calculate time.

Thresholded
This rule type will execute the statement(s) in the Exec directive(s) if the Condition evaluates to TRUE
Threshold or more times during the Interval specified. The advantage of this rule over the use of statistical
counters is that the time window is dynamic and shifts as log messages are processed.

Condition
This mandatory directive takes an expression as argument which must evaluate to a boolean value.

Context
This optional directive specifies an expression to be used as the context. It must evaluate to a value.
Usually a field is specified here.

Exec
One or more Exec directives must be specified, each taking a statement as argument.

Interval
This mandatory directive takes an integer argument specifying a time window for Condition to become
TRUE. Its value must be greater than 0. The TimeField directive is used to calculate time. This time window
is dynamic, meaning that it will shift.

Threshold
This mandatory directive takes an integer argument specifying the number of times Condition must
evaluate to TRUE within the given time Interval. When the threshold is reached, the module executes the
statement(s) in the Exec directive(s).

ContextCleanTime
When a Context is used in the correlation rules, expired contexts must be purged from memory, otherwise
the accumulation of context values could result in high memory usage. This optional directive specifies the
interval between context cleanups, in seconds. By default, a 60 second cleanup interval is used if any rules
use a Context and this directive is not specified.

TimeField
This specifies the name of the field to use for calculating elapsed time, such as EventTime. The name of the
field must be specified without the leading dollar sign ($). If this parameter is not specified, the current time is
assumed. This directive makes it possible to accurately correlate events based on the event time recorded in
the logs and to do non-real-time event correlation.

122.3.2. Functions
The following functions are exported by pm_evcorr.

unknown get_prev_event_data(string field_name)


When the correlation rule triggers an Exec, the current event data might not be available. This function can
be used to retrieve fields of the event that triggered the rule. The field must be specified as a string (for
example, get_prev_event_data("EventTime")). This is applicable only to the Absence, Group, and Pair rule types.
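As a sketch, a Pair rule's Exec could pull a field from the triggering event like this; the message patterns and the AccountName field are assumptions about the parsed input, not part of the module's defaults:

```
<Pair>
    TriggerCondition    $Message =~ /^session opened/
    RequiredCondition   $Message =~ /^session closed/
    Interval            60
    # Reference a field of the triggering ("session opened") event
    Exec                log_info("session pair completed for " + get_prev_event_data("AccountName"));
</Pair>
```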

122.3.3. Examples
Example 657. The Absence Directive

The following configuration shows the Absence directive. In this case, if TriggerCondition evaluates to
TRUE, the rule waits the number of seconds defined in Interval for RequiredCondition to become TRUE. If
RequiredCondition does not become TRUE within the specified interval, the rule executes the statement
defined in Exec.

nxlog.conf
 1 <Input internal>
 2 Module im_internal
 3 <Exec>
 4 $raw_event = $Message;
 5 $EventTime = 2010-01-01 00:01:00;
 6 </Exec>
 7 </Input>
 8
 9 <Processor evcorr>
10 Module pm_evcorr
11 TimeField EventTime
12 <Absence>
13 TriggerCondition $Message =~ /^absence-trigger/
14 RequiredCondition $Message =~ /^absence-required/
15 Interval 10
16 <Exec>
17 log_info("'absence-required' not received within 10 secs");
18 </Exec>
19 </Absence>
20 </Processor>

Input Sample
2010-01-01 00:00:26 absence-trigger↵
2010-01-01 00:00:29 absence-required - will not log 'got absence'↵
2010-01-01 00:00:46 absence-trigger↵
2010-01-01 00:00:57 absence-required - will log an additional 'absence-required not received
within 10 secs'↵

Output Sample
absence-trigger↵
absence-required - will not log 'got absence'↵
absence-trigger↵
absence-required - will log an additional 'absence-required not received within 10 secs'↵
'absence-required' not received within 10 secs↵

Example 658. The Group Directive

The following configuration shows rules for the Group directive. It rewrites the events to exclude the date
and time, then rewrites the $raw_event with the context and message. After that, for every matched event,
it adds the $Message field of the newly matched event to it.

nxlog.conf
 1 <Processor evcorr>
 2 Module pm_evcorr
 3 TimeField EventTime
 4 ContextCleanTime 10
 5 <Group>
 6 Context $Context
 7 <Exec>
 8 if defined get_prev_event_data("raw_event")
 9 {
10 $raw_event = get_prev_event_data("raw_event") + ", " + $Message;
11 }
12 else
13 {
14 $raw_event = "Context: " + $Context + " Messages: " + $Message;
15 }
16 </Exec>
17 </Group>
18 </Processor>

Input Sample
2010-01-01 00:00:01 [a] suppressed1↵
2010-01-01 00:00:02 [b] suppressed2↵
2010-01-01 00:00:03 [a] suppressed3↵
2010-01-01 00:00:04 [b] suppressed4↵
2010-01-01 00:00:04 [b] suppressed5↵
2010-01-01 00:00:05 [c] suppressed6↵
2010-01-01 00:00:06 [c] suppressed7↵
2010-01-01 00:00:34 [b] suppressed8↵
2010-01-01 00:01:00 [a] pair-first1↵

Output Sample
Context: a Messages: suppressed1↵
Context: b Messages: suppressed2↵
Context: a Messages: suppressed1, suppressed3↵
Context: b Messages: suppressed2, suppressed4↵
Context: b Messages: suppressed2, suppressed4, suppressed5↵
Context: c Messages: suppressed6↵
Context: c Messages: suppressed6, suppressed7↵
Context: b Messages: suppressed2, suppressed4, suppressed5, suppressed8↵
Context: a Messages: suppressed1, suppressed3, pair-first1↵

Example 659. The Pair Directive

The following configuration shows rules for the Pair directive. In this case, if TriggerCondition evaluates to
TRUE, the rule waits the number of seconds defined in Interval for RequiredCondition to become TRUE, then
executes the statement defined in Exec. If Interval is 0, the rule waits indefinitely for RequiredCondition to
become TRUE.

nxlog.conf
 1 <Processor evcorr>
 2 Module pm_evcorr
 3 TimeField EventTime
 4 <Pair>
 5 TriggerCondition $Message =~ /^pair-first/
 6 RequiredCondition $Message =~ /^pair-second/
 7 Interval 30
 8 Exec $raw_event = "got pair";
 9 </Pair>
10 </Processor>

Input Sample
2010-01-01 00:00:12 pair-first - now look for pair-second↵
2010-01-01 00:00:22 pair-second - will log 'got pair'↵
2010-01-01 00:00:25 pair-first↵
2010-01-01 00:00:56 pair-second - will not log 'got pair' because it is over the interval↵

Output Sample
pair-first - now look for pair-second↵
got pair↵
pair-first↵

Example 660. The Simple Directive

The following configuration shows rules for the Simple directive. In this case, if the $Message field starts
with "simple", the $raw_event field is rewritten to "got simple".

nxlog.conf
1 <Processor evcorr>
2 Module pm_evcorr
3 TimeField EventTime
4 <Simple>
5 Exec if $Message =~ /^simple/ $raw_event = "got simple";
6 </Simple>
7 </Processor>

Input Sample
2010-01-01 00:00:00 Not simple↵
2010-01-01 00:00:05 Not simple again↵
2010-01-01 00:00:10 simple1↵
2010-01-01 00:00:15 simple2↵

Output Sample
Not simple↵
Not simple again↵
got simple↵
got simple↵

Example 661. The Stop Directive

The following configuration shows a rule for the Stop directive in conjunction with the Simple directive. In
this case, the Stop condition evaluates to FALSE for every event, so rule evaluation continues and the Simple
directive rewrites the output.

nxlog.conf
 1 <Processor evcorr>
 2 Module pm_evcorr
 3 TimeField EventTime
 4 <Stop>
 5 Condition $EventTime < 2010-01-01 00:00:00
 6 Exec log_debug("got stop");
 7 </Stop>
 8 <Simple>
 9 Exec $raw_event = "rewritten";
10 </Simple>
11 </Processor>

Input Sample
2010-01-02 00:00:00 this will be rewritten↵
2010-01-02 00:00:10 this too↵
2010-01-02 00:00:15 as well as this↵

Output Sample
rewritten↵
rewritten↵
rewritten↵

Example 662. The Suppressed Directive

The following configuration shows a rule for the Suppressed directive. In this case, when the condition first
matches, the directive executes the corresponding action; matching events are then ignored by the rule for
the number of seconds defined in the Interval directive and are logged as is.

nxlog.conf
1 <Processor evcorr>
2 Module pm_evcorr
3 TimeField EventTime
4 <Suppressed>
5 Condition $Message =~ /^to be suppressed/
6 Interval 30
7 Exec $raw_event = "suppressed..";
8 </Suppressed>
9 </Processor>

Input Sample
2010-01-01 00:00:01 to be suppressed1 - Suppress kicks in, will log 'suppressed..'↵
2010-01-01 00:00:21 to be suppressed2 - suppressed and logged as is↵
2010-01-01 00:00:23 to be suppressed3 - suppressed and logged as is↵

Output Sample
suppressed..↵
to be suppressed2 - suppressed and logged as is↵
to be suppressed3 - suppressed and logged as is↵
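With the optional Context directive, suppression is tracked separately per context value. A hedged sketch that alerts at most once per host every five minutes; the "disk full" pattern and the $Hostname field are assumptions about the parsed events:

```
<Suppressed>
    Condition   $Message =~ /^disk full/
    # Suppress per host rather than globally (assumes a parsed $Hostname field)
    Context     $Hostname
    Interval    300
    Exec        log_warning("disk full reported by host " + $Hostname);
</Suppressed>
```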

Example 663. The Thresholded Directive

The following configuration shows rules for the Thresholded directive. In this case, if the number of matching
events reaches the given threshold within the interval period, the action defined in Exec is carried out.

nxlog.conf
 1 <Processor evcorr>
 2 Module pm_evcorr
 3 TimeField EventTime
 4 <Thresholded>
 5 Condition $Message =~ /^thresholded/
 6 Threshold 3
 7 Interval 60
 8 Exec $raw_event = "got thresholded";
 9 </Thresholded>
10 </Processor>

Input Sample
2010-01-01 00:00:13 thresholded1 - not thresholded will log as is↵
2010-01-01 00:00:15 thresholded2 - not thresholded will log as is↵
2010-01-01 00:00:20 thresholded3 - will log 'got thresholded'↵
2010-01-01 00:00:25 thresholded4 - will log 'got thresholded' again↵

Output Sample
thresholded1 - not thresholded will log as is↵
thresholded2 - not thresholded will log as is↵
got thresholded↵
got thresholded↵

122.4. Filter (pm_filter)


This is a simple module which forwards log messages if the specified condition is TRUE.

NOTE This module has been deprecated and will be removed in a future release. Filtering is now
possible in any module with a conditional drop() procedure in an Exec block or directive.

Example 664. Filtering Events With drop()

This statement drops the current event if the $raw_event field matches the specified regular expression.

1 if $raw_event =~ /^Debug/ drop();

See the list of installer packages that provide the pm_filter module in the Available Modules chapter of the NXLog
User Guide.

122.4.1. Configuration
The pm_filter module accepts the following directives in addition to the common module directives.

Condition
This mandatory directive takes an expression as argument which must evaluate to a boolean value. If the
expression does not evaluate to TRUE, the log message is discarded.

122.4.2. Examples
Example 665. Filtering Messages

This configuration retains only log messages that match one of the regular expressions, all others are
discarded.

nxlog.conf
 1 <Input uds>
 2 Module im_uds
 3 UDS /dev/log
 4 </Input>
 5
 6 <Processor filter>
 7 Module pm_filter
 8 Condition $raw_event =~ /failed/ or $raw_event =~ /error/
 9 </Processor>
10
11 <Output file>
12 Module om_file
13 File "/var/log/error"
14 </Output>
15
16 <Route uds_to_file>
17 Path uds => filter => file
18 </Route>

122.5. HMAC Message Integrity (pm_hmac)


In order to protect log messages, this module provides cryptographic checksumming on messages using the
HMAC algorithm with a specific hash function. Messages protected this way cannot be altered, deleted, or
inserted without detection. A separate verification procedure using the pm_hmac_check module is necessary for
the receiver.

NOTE This module has been deprecated and will be removed in a future release.

When the module starts, it creates an initial random hash value which is signed with the private key and stored in
$nxlog.hmac_initial field. As messages pass through the module, it calculates a hash value using the previous
hash value, the initial hash value, and the fields of the log message. This calculated value is added to the log
message as a new field called $nxlog.hmac, and can be used to later verify the integrity of the message.

WARNING If an attacker can insert messages at the source, this module will add an HMAC value and
the activity will go unnoticed. This method only secures messages that are already
protected with an HMAC value.

NOTE For this method to work more securely, the private key should be protected by a password, and
the password should not be stored with the key (the configuration file should not contain the
password). This will force the agent to prompt for the password when it is started.

See the list of installer packages that provide the pm_hmac module in the Available Modules chapter of the
NXLog User Guide.

122.5.1. Configuration
The pm_hmac module accepts the following directives in addition to the common module directives. The
CertKeyFile directive is required.

CertKeyFile
This mandatory directive specifies the path of the private key file to be used to sign the initial hash value.

Fields
This directive accepts a comma-separated list of fields. These fields will be used for calculating the HMAC
value. This directive is optional, and the $raw_event field will be used if it is not specified.

HashMethod
This directive sets the hash function. The following message digest methods can be used: md2, md5, mdc2,
rmd160, sha, sha1, sha224, sha256, sha384, and sha512. The default is md5.

KeyPass
This specifies the password of the CertKeyFile.

122.5.2. Fields
The following fields are used by pm_hmac.

$nxlog.hmac (type: string)


The digest value calculated from the log message fields.

$nxlog.hmac_initial (type: string)


The initial HMAC value which starts the chain.

$nxlog.hmac_sig (type: string)


The signature of nxlog.hmac_initial created with the private key.

122.5.3. Examples

Example 666. Protecting Messages with a HMAC Value

This configuration uses the im_uds module to read log messages from a socket. It then adds a hash value
to each message. Finally it forwards them via TCP to another NXLog agent in the binary format.

nxlog.conf
 1 <Input uds>
 2 Module im_uds
 3 UDS /dev/log
 4 </Input>
 5
 6 <Processor hmac>
 7 Module pm_hmac
 8 CertKeyFile %CERTDIR%/client-key.pem
 9 KeyPass secret
10 HashMethod SHA1
11 </Processor>
12
13 <Output tcp>
14 Module om_tcp
15 Host 192.168.1.1
16 Port 1514
17 OutputType Binary
18 </Output>
19
20 <Route uds_to_tcp>
21 Path uds => hmac => tcp
22 </Route>

122.6. HMAC Message Integrity Checker (pm_hmac_check)


This module is the pair of pm_hmac to check message integrity.

NOTE This module has been deprecated and will be removed in a future release.

See the list of installer packages that provide the pm_hmac_check module in the Available Modules chapter of the
NXLog User Guide.

122.6.1. Configuration
The pm_hmac_check module accepts the following directives in addition to the common module directives. The
CertFile directive is required.

CertFile
This mandatory directive specifies the path of the certificate file to be used to verify the signature of the initial
hash value.

HashMethod
This directive sets the hash function. The following message digest methods can be used: md2, md5, mdc2,
rmd160, sha, sha1, sha224, sha256, sha384, and sha512. The default is md5. This must be the same as the
hash method used for creating the HMAC values.

CADir
This optional directive specifies the path to a directory containing certificate authority (CA) certificates, which
will be used to verify the certificate. The certificate filenames in this directory must be in the OpenSSL hashed
format. A remote’s self-signed certificate (which is not signed by a CA) can also be trusted by including a copy
of the certificate in this directory.

CAFile
This optional directive specifies the path of the certificate authority (CA) certificate, which will be used to
verify the certificate. To trust a self-signed certificate presented by the remote (which is not signed by a CA),
provide that certificate instead.

CRLDir
This optional directive specifies the path to a directory containing certificate revocation lists (CRLs), which will
be consulted when checking the certificate. The certificate filenames in this directory must be in the OpenSSL
hashed format.

CRLFile
This optional directive specifies the path of the certificate revocation list (CRL), which will be consulted when
checking the certificate.

Fields
This directive accepts a comma-separated list of fields. These fields will be used for calculating the HMAC
value. This directive is optional, and the $raw_event field will be used if it is not specified.

122.6.2. Fields
The following fields are used by pm_hmac_check.

$nxlog.hmac (type: string)


The HMAC value stored in this field is compared against the calculated value. This field is generated by the
pm_hmac module.

$nxlog.hmac_initial (type: string)


The initial HMAC value which starts the chain. This is generated by the pm_hmac module.

$nxlog.hmac_sig (type: string)


The signature of nxlog.hmac_initial to be verified with the certificate’s public key. This field is generated by the
pm_hmac module.

122.6.3. Examples

Example 667. Verifying Message Integrity

This configuration accepts log messages in the NXLog binary format. The HMAC values are checked, then
the messages are written to file.

nxlog.conf
 1 <Input tcp>
 2 Module im_tcp
 3 Host 192.168.1.1
 4 Port 1514
 5 InputType Binary
 6 </Input>
 7
 8 <Processor hmac_check>
 9 Module pm_hmac_check
10 CertFile %CERTDIR%/client-cert.pem
11 CAFile %CERTDIR%/ca.pem
12 # CRLFile %CERTDIR%/crl.pem
13 HashMethod SHA1
14 </Processor>
15
16 <Output file>
17 Module om_file
18 File "/var/log/msg"
19 </Output>
20
21 <Route tcp_to_file>
22 Path tcp => hmac_check => file
23 </Route>

122.7. De-Duplicator (pm_norepeat)


This module can be used to filter out repeating messages. Like Syslog daemons, this module checks the previous
message against the current. If they match, the current message is dropped. The module waits one second for
duplicated messages to arrive. If duplicates are detected, the first message is forwarded, the rest are dropped,
and a message containing "last message repeated n times" is sent instead.

NOTE This module has been deprecated and will be removed in a future release. The functionality of
this module can be implemented with Variables.
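A minimal sketch of the Variables-based approach, which drops consecutive duplicates without producing the "last message repeated n times" summary; it uses the NXLog get_var() function and set_var() procedure, and the variable name is arbitrary:

```
<Input uds>
    Module  im_uds
    UDS     /dev/log
    <Exec>
        # Drop the event if it repeats the previously seen message
        if get_var('last_msg') == $Message drop();
        else set_var('last_msg', $Message);
    </Exec>
</Input>
```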

See the list of installer packages that provide the pm_norepeat module in the Available Modules chapter of the
NXLog User Guide.

122.7.1. Configuration
The pm_norepeat module accepts the following directives in addition to the common module directives.

CheckFields
This optional directive takes a comma-separated list of field names which are used to compare log messages.
Only the fields listed here are compared, the others are ignored. For example, the $EventTime field will be
different in repeating messages, so this field should not be used in the comparison. If this directive is not
specified, the default field to be checked is $Message.

122.7.2. Fields
The following fields are used by pm_norepeat.

$raw_event (type: string)


A string containing the "last message repeated n times" message.

$EventTime (type: datetime)


The time of the last event or the current time if EventTime was not present in the last event.

$Message (type: string)


The same value as $raw_event.

$ProcessID (type: integer)


The process ID of the NXLog process.

$Severity (type: string)


The severity name: INFO.

$SeverityValue (type: integer)


The INFO severity level value: 2.

$SourceName (type: string)


Set to nxlog.

122.7.3. Examples
Example 668. Filtering Out Duplicated Messages

This configuration reads log messages from the socket. The $Hostname, $SourceName, and $Message fields
are used to detect duplicates. Then the messages are written to file.

nxlog.conf
 1 <Input uds>
 2 Module im_uds
 3 UDS /dev/log
 4 </Input>
 5
 6 <Processor norepeat>
 7 Module pm_norepeat
 8 CheckFields Hostname, SourceName, Message
 9 </Processor>
10
11 <Output file>
12 Module om_file
13 File "/var/log/messages"
14 </Output>
15
16
17 <Route uds_to_file>
18 Path uds => norepeat => file
19 </Route>

122.8. Null (pm_null)
This module does not perform any special processing, so essentially it does nothing. Yet it can be used with the Exec
and Schedule directives, like any other module.

The pm_null module accepts only the common module directives.

See this example for usage.

See the list of installer packages that provide the pm_null module in the Available Modules chapter of the NXLog
User Guide.
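For instance, pm_null can act as a placeholder processor that merely hosts an Exec directive in a route; the filter condition below is illustrative only:

```
<Processor null>
    Module  pm_null
    # pm_null performs no processing itself; the Exec statement does the work
    Exec    if $raw_event =~ /^DEBUG/ drop();
</Processor>
```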

122.9. Pattern Matcher (pm_pattern)


This module makes it possible to execute pattern matching with a pattern database file in XML format. The
pm_pattern module has been replaced by an extension module, xm_pattern, which provides nearly identical
functionality with the improved flexibility of an extension module.

See the list of installer packages that provide the pm_pattern module in the Available Modules chapter of the
NXLog User Guide.

122.9.1. Configuration
The pm_pattern module accepts the following directives in addition to the common module directives.

PatternFile
This mandatory directive specifies the name of the pattern database file.

122.9.2. Fields
The following fields are used by pm_pattern.

$PatternID (type: integer)


The ID number of the pattern which matched the message.

$PatternName (type: string)


The name of the pattern which matched the message.

122.9.3. Examples
Example 669. Using the pm_pattern Module

This configuration reads BSD Syslog messages from the socket, processes the messages with a pattern file,
and then writes them to file in JSON format.

nxlog.conf
 1 <Extension json>
 2 Module xm_json
 3 </Extension>
 4
 5 <Extension syslog>
 6 Module xm_syslog
 7 </Extension>
 8
 9 <Input uds>
10 Module im_uds
11 UDS /dev/log
12 Exec parse_syslog_bsd();
13 </Input>
14
15 <Processor pattern>
16 Module pm_pattern
17 PatternFile /var/lib/nxlog/patterndb.xml
18 </Processor>
19
20 <Output file>
21 Module om_file
22 File "/var/log/out"
23 Exec to_json();
24 </Output>
25
26 <Route uds_to_file>
27 Path uds => pattern => file
28 </Route>

The following pattern database contains two patterns to match SSH authentication messages. The patterns
are under a group named ssh which checks whether the $SourceName field is sshd and only tries to match
the patterns if the logs are indeed from sshd. The patterns both extract AuthMethod, AccountName, and
SourceIP4Address from the log message when the pattern matches the log. Additionally TaxonomyStatus and
TaxonomyAction are set. The second pattern utilizes the Exec block, which is evaluated when the pattern
matches.

NOTE For this pattern to work, the logs must be parsed with parse_syslog() prior to processing by
the pm_pattern module (as in the above example), because it uses the $SourceName and
$Message fields.

patterndb.xml
<?xml version='1.0' encoding='UTF-8'?>
<patterndb>
 <created>2010-01-01 01:02:03</created>
 <version>42</version>

 <group>
  <name>ssh</name>
  <id>42</id>
  <matchfield>
  <name>SourceName</name>
  <type>exact</type>
  <value>sshd</value>
  </matchfield>

  <pattern>
  <id>1</id>

  <name>ssh auth success</name>

  <matchfield>
  <name>Message</name>
  <type>regexp</type>
  <!-- Accepted publickey for nxlogfan from 192.168.1.1 port 4242 ssh2 -->
  <value>^Accepted (\S+) for (\S+) from (\S+) port \d+ ssh2</value>
  <capturedfield>
  <name>AuthMethod</name>
  <type>string</type>
  </capturedfield>
  <capturedfield>
  <name>AccountName</name>
  <type>string</type>
  </capturedfield>
  <capturedfield>
  <name>SourceIP4Address</name>
  <type>string</type>
  </capturedfield>
  </matchfield>

  <set>
  <field>
  <name>TaxonomyStatus</name>
  <value>success</value>
  <type>string</type>
  </field>
  <field>
  <name>TaxonomyAction</name>
  <value>authenticate</value>
  <type>string</type>
  </field>
  </set>
  </pattern>

  <pattern>
  <id>2</id>
  <name>ssh auth failure</name>

  <matchfield>
  <name>Message</name>
  <type>regexp</type>
  <value>^Failed (\S+) for invalid user (\S+) from (\S+) port \d+ ssh2</value>

  <capturedfield>
  <name>AuthMethod</name>
  <type>string</type>
  </capturedfield>
  <capturedfield>
  <name>AccountName</name>
  <type>string</type>
  </capturedfield>
  <capturedfield>
  <name>SourceIP4Address</name>
  <type>string</type>
  </capturedfield>
  </matchfield>

  <set>
  <field>

  <name>TaxonomyStatus</name>
  <value>failure</value>
  <type>string</type>
  </field>
  <field>
  <name>TaxonomyAction</name>
  <value>authenticate</value>
  <type>string</type>
  </field>
  </set>

  <exec>
  $TestField = 'test';
  </exec>
  <exec>
  $TestField = $TestField + 'value';
  </exec>
  </pattern>

 </group>

</patterndb>

122.10. Format Converter (pm_transformer)


The pm_transformer module provides parsers for BSD Syslog, IETF Syslog, CSV, JSON, and XML formatted data
and can also convert between these formats.

NOTE This module has been deprecated and will be removed in a future release. Format conversion is
now possible in any module by using functions and procedures provided by the following
modules: xm_syslog, xm_csv, xm_json, and xm_xml.
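
As a sketch of the non-deprecated approach, the same BSD Syslog to CSV conversion can be performed with xm_syslog and xm_csv procedures directly in the input module. The file path and field list below are illustrative, not prescribed:

```
<Extension syslog>
    Module  xm_syslog
</Extension>

<Extension csv>
    Module  xm_csv
    Fields  $SyslogFacility, $SyslogSeverity, $EventTime, $Hostname, $SourceName, $Message
</Extension>

<Input filein>
    Module  im_file
    File    "/tmp/input"
    # Parse the syslog line into fields, then rewrite $raw_event as CSV
    Exec    parse_syslog_bsd(); csv->to_csv();
</Input>
```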

See the list of installer packages that provide the pm_transformer module in the Available Modules chapter of the
NXLog User Guide.

122.10.1. Configuration
The pm_transformer module accepts the following directives in addition to the common module directives. For
conversion to occur, the InputFormat and OutputFormat directives must be specified.

InputFormat
This directive specifies the input format of the $raw_event field so that it is further parsed into fields. If this
directive is not specified, no parsing will be performed.

CSV
Input is parsed as a comma-separated list of values. See xm_csv for similar functionality. The input fields
must be defined by CSVInputFields.

JSON
Input is parsed as JSON. This does the same as the parse_json() procedure.

syslog_bsd
Same as syslog_rfc3164.

syslog_ietf
Same as syslog_rfc5424.

syslog_rfc3164
Input is parsed in the BSD Syslog format as defined by RFC 3164. This does the same as the
parse_syslog_bsd() procedure.

syslog_rfc5424
Input is parsed in the IETF Syslog format as defined by RFC 5424. This does the same as the
parse_syslog_ietf() procedure.

XML
Input is parsed as XML. This does the same as the parse_xml() procedure.

OutputFormat
This directive specifies the output transformation. If this directive is not specified, fields are not converted
and $raw_event is left unmodified.

CSV
Output in $raw_event is formatted as a comma-separated list of values. See xm_csv for similar
functionality.

JSON
Output in $raw_event is formatted as JSON. This does the same as the to_json() procedure.

syslog_bsd
Same as syslog_rfc3164.

syslog_ietf
Same as syslog_rfc5424.

syslog_rfc3164
Output in $raw_event is formatted in the BSD Syslog format as defined by RFC 3164. This does the same
as the to_syslog_bsd() procedure.

syslog_rfc5424
Output in $raw_event is formatted in the IETF Syslog format as defined by RFC 5424. This does the same
as the to_syslog_ietf() procedure.

syslog_snare
Output in $raw_event is formatted in the SNARE Syslog format. This does the same as the
to_syslog_snare() procedure. This should be used in conjunction with the im_mseventlog or im_msvistalog
module to produce an output compatible with Snare Agent for Windows.

XML
Output in $raw_event is formatted in XML. This does the same as the to_xml() procedure.

CSVInputFields
This is a comma-separated list of fields which will be set from the parsed input. The field names must have
the dollar sign ($) prepended.

CSVInputFieldTypes
This optional directive specifies the list of types corresponding to the field names defined in CSVInputFields. If
specified, the number of types must match the number of field names specified with CSVInputFields. If this
directive is omitted, all fields will be stored as strings. This directive has no effect on the fields-to-CSV
conversion.
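
For example, a pm_transformer instance that parses typed CSV input and emits JSON might combine these directives as follows. This is a minimal sketch; the field names and types are illustrative:

```
<Processor csv_to_json>
    Module              pm_transformer
    InputFormat         csv
    # One type per field; unlisted types would default to string
    CSVInputFields      $Hostname, $StatusCode, $Bytes
    CSVInputFieldTypes  string, integer, integer
    OutputFormat        json
</Processor>
```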

CSVOutputFields
This is a comma-separated list of message fields which are placed in the CSV lines. The field names must have
the dollar sign ($) prepended.

122.10.2. Examples
Example 670. Using the pm_transformer Module

This configuration reads BSD Syslog messages from file and writes them to another file in CSV format.

nxlog.conf
 1 <Extension syslog>
 2 Module xm_syslog
 3 </Extension>
 4
 5 <Input filein>
 6 Module im_file
 7 File "tmp/input"
 8 </Input>
 9
10 <Processor transformer>
11 Module pm_transformer
12 InputFormat syslog_rfc3164
13 OutputFormat csv
14 CSVOutputFields $facility, $severity, $timestamp, $hostname, \
15 $application, $pid, $message
16 </Processor>
17
18 <Output fileout>
19 Module om_file
20 File "tmp/output"
21 </Output>
22
23 <Route filein_to_fileout>
24 Path filein => transformer => fileout
25 </Route>

122.11. Timestamping (pm_ts)


This module provides support for the Time-Stamp Protocol as defined in RFC 3161. A cryptographic hash value of
the log messages is sent over an HTTP or HTTPS channel to a Time-Stamp Authority server, which creates a
cryptographic Time-Stamp signature to prove that the message existed at that time and to allow its validity to be
verified at a later time. This may be mandatory for regulatory compliance, financial transactions, and legal
evidence.

NOTE This module has been deprecated and will be removed in a future release.

A timestamp request is created for each log message received by this module, and the response is appended to
the tsa_response field. The module does not request the certificate to be included in the response as this would
greatly increase the size of the responses. The certificate used by the server for creating timestamps should be
saved manually for later verification. The module establishes one HTTP connection to the server for the time-
stamping by using HTTP Keep-Alive requests and will reconnect if the remote closes the connection.

NOTE Since each log message generates an HTTP(S) request to the Time-Stamp server, the message
throughput can be greatly affected. It is recommended that only messages of relevant
importance are time-stamped, through the use of proper filtering rules applied to messages
before they reach the pm_ts module instance.

Creating timestamps in batch mode (requesting one timestamp for multiple messages) is not
supported at this time.

See the list of installer packages that provide the pm_ts module in the Available Modules chapter of the NXLog
User Guide.

122.11.1. Configuration
The pm_ts module accepts the following directives in addition to the common module directives. The URL
directive is required.

URL
This mandatory directive specifies the URL of the Time-Stamp Authority server. The URL must begin with
either http:// for plain HTTP over TCP or https:// for HTTP over SSL.

Digest
This specifies the digest method (hash function) to be used. The SHA1 hash function is used by default. The
following message digest methods can be used: md2, md5, mdc2, rmd160, sha, sha1, sha224, sha256, sha384,
and sha512. Note that the Time-Stamp server must support the digest method specified.

Fields
This directive accepts a comma-separated list of fields. These fields will be used for calculating the hash value
sent to the TSA server. This directive is optional, and the $raw_event field is used if it is not specified.
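
A pm_ts instance that hashes only selected fields might look like the following sketch. The TSA URL is a placeholder, and the field selection is an assumption for illustration:

```
<Processor ts>
    Module  pm_ts
    URL     https://tsa.example.com/tsa
    Digest  sha256
    # Hash only these fields instead of the whole $raw_event
    Fields  $EventTime, $Hostname, $Message
</Processor>
```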

HTTPSAllowUntrusted
This boolean directive specifies that the connection to the Time-Stamp Authority server should be allowed
without certificate verification. If set to TRUE, the connection will be allowed even if the server provides an
unknown or self-signed certificate. The default value is FALSE: all Time-Stamp Authority servers must present
a trusted certificate.

HTTPSCADir
This specifies the path to a directory containing certificate authority (CA) certificates, which will be used to
check the certificate of the remote Time-Stamp Authority server. The certificate filenames in this directory
must be in the OpenSSL hashed format. This directive can only be specified if the URL begins with https. A
remote’s self-signed certificate (which is not signed by a CA) can also be trusted by including a copy of the
certificate in this directory.

HTTPSCAFile
This specifies the path of the certificate authority (CA) certificate, which will be used to check
the certificate of the remote Time-Stamp Authority server. This directive can only be specified if the URL
begins with https. To trust a self-signed certificate presented by the remote (which is not signed by a CA),
provide that certificate instead.

HTTPSCAThumbprint
This optional directive specifies the certificate thumbprint of the certificate authority (CA), which is used to
look up the CA certificate from the Windows certificate store. The hexadecimal fingerprint string can be
copied straight from Windows Certificate Manager (certmgr.msc); whitespace is automatically removed.
This directive is only supported on Windows. This directive and the HTTPSCADir and HTTPSCAFile directives
are mutually exclusive.

HTTPSCertFile
This specifies the path of the certificate file to be used for the SSL handshake. This directive can only be
specified if the URL begins with https. If this directive is not specified but the URL begins with https, then an
anonymous SSL connection is attempted without presenting a client-side certificate.

HTTPSCertKeyFile
This specifies the path of the certificate key file to be used for the SSL handshake. This directive can only be
specified if the URL begins with https. If this directive is not specified but the URL begins with https, then an
anonymous SSL connection is attempted without presenting a client-side certificate.

HTTPSCertThumbprint
This optional directive specifies the certificate thumbprint to be used for the SSL handshake. The hexadecimal
fingerprint string can be copied straight from Windows Certificate Manager (certmgr.msc); whitespace is
automatically removed. This directive is only supported on Windows. This directive and the HTTPSCertFile and
HTTPSCertKeyFile directives are mutually exclusive.

HTTPSCRLDir
This specifies the path to a directory containing certificate revocation lists (CRLs), which will be consulted
when checking the certificate of the remote Time-Stamp Authority server. The certificate filenames in this
directory must be in the OpenSSL hashed format. This directive can only be specified if the URL begins with
https.

HTTPSCRLFile
This specifies the path of the certificate revocation list (CRL) which will be consulted when checking the
certificate of the remote Time-Stamp Authority server. This directive can only be specified if the URL begins
with https.

HTTPSKeyPass
With this directive, a password can be supplied for the certificate key file defined in HTTPSCertKeyFile. This
directive is not needed for passwordless private keys.

HTTPSSSLCipher
This optional directive can be used to set the permitted SSL cipher list, overriding the default. Use the format
described in the ciphers(1ssl) man page.

HTTPSSSLCiphersuites
This optional directive can be used to define the permitted SSL cipher list in case the HTTPSSSLProtocol
directive is set to TLSv1.3. Use the same format as in the HTTPSSSLCipher directive.

HTTPSSSLCompression
This boolean directive allows you to enable data compression when sending data over the network. The
compression mechanism is based on the zlib compression library. If the directive is not specified, it defaults
to FALSE (the compression is disabled).

NOTE Some Linux packages (for example, Debian) use the OpenSSL library provided by the OS and
may not support the zlib compression mechanism. The module will emit a warning on
startup if the compression support is missing. The generic deb/rpm packages are bundled
with a zlib-enabled libssl library.

HTTPSSSLProtocol
This directive can be used to set the allowed SSL/TLS protocol(s). It takes a comma-separated list of values
which can be any of the following: SSLv2, SSLv3, TLSv1, TLSv1.1, TLSv1.2, and TLSv1.3. By default, the
TLSv1.2 and TLSv1.3 protocols are allowed. Note that the OpenSSL library shipped by Linux distributions may
not support SSLv2 and SSLv3, and these will not work even if enabled with this directive.

122.11.2. Fields
The following fields are used by pm_ts.

$TSAResponse (type: binary)


The response for the Time-Stamp request from the server. This does not include the certificate.

122.11.3. Examples
Example 671. Storing Requested Timestamps in a CSV File

With this configuration, NXLog will read BSD Syslog messages from the socket, add timestamps, and then
save the messages to file in CSV format.

nxlog.conf
 1 <Extension syslog>
 2 Module xm_syslog
 3 </Extension>
 4
 5 <Input uds>
 6 Module im_uds
 7 UDS /dev/log
 8 Exec parse_syslog_bsd();
 9 </Input>
10
11 <Processor ts>
12 Module pm_ts
13 URL https://tsa-server.com:8080/tsa
14 Digest md5
15 </Processor>
16
17 <Processor csv>
18 Module pm_transformer
19 OutputFormat csv
20 CSVOutputFields $facility, $severity, $timestamp, $hostname, \
21 $application, $pid, $message, $tsa_response
22 </Processor>
23
24 <Output file>
25 Module om_file
26 File "/dev/stdout"
27 </Output>
28
29 <Route uds_to_file>
30 Path uds => ts => csv => file
31 </Route>

Chapter 123. Output Modules
Output modules are responsible for writing event log data to various destinations.

123.1. Batched Compression (om_batchcompress)


This module uses its own protocol to send batches of log messages to a remote NXLog instance configured with
the im_batchcompress module. The messages are compressed in batches in order to achieve better
compression ratios than would be possible individually. The module serializes and sends all fields across the
network so that structured data is preserved. It can be configured to send data using SSL for secure and
encrypted data transfer. The protocol contains an acknowledgment in order to ensure that the data is received
by the remote server. The batch will be resent if the server does not respond with an acknowledgment.

See the list of installer packages that provide the om_batchcompress module in the Available Modules chapter of
the NXLog User Guide.

123.1.1. Configuration
The om_batchcompress module accepts the following directives in addition to the common module directives. The
Host directive is required.

Host
The module will connect to the IP address or hostname defined in this directive. If additional hosts are
specified on new lines, the module works in a failover configuration. If a destination becomes unavailable, the
module automatically fails over to the next one. If the last destination becomes unavailable, the module will
fail over to the first destination. Add the destination port number to the end of a host using a colon as a
separator (host:port). For each destination with no port number defined here, the port number specified in
the Port directive will be used. Port numbers defined here take precedence over any port number defined in
the Port directive.

Port
The module will connect to the port number on the destination host defined in this directive. This
configuration is only used for any destination that does not have a port number specified in the Host
directive. If no port is configured for a destination in either directive, the default port is used, which is port
2514.

IMPORTANT The Port directive will be deprecated in this context from NXLog EE 6.0. Provide the port
in Host.

AllowUntrusted
This boolean directive specifies that the remote connection should be allowed without certificate verification.
If set to TRUE the remote will be able to connect with unknown and self-signed certificates. The default value
is FALSE: all connections must present a trusted certificate.

CADir
This specifies the path to a directory containing certificate authority (CA) certificates, which will be used to
check the certificate of the remote socket. The certificate filenames in this directory must be in the OpenSSL
hashed format. A remote’s self-signed certificate (which is not signed by a CA) can also be trusted by including
a copy of the certificate in this directory.

CAFile
This specifies the path of the certificate authority (CA) certificate, which will be used to check the certificate of
the remote socket. To trust a self-signed certificate presented by the remote (which is not signed by a CA),
provide that certificate instead.

CAThumbprint
This optional directive specifies the certificate thumbprint of the certificate authority (CA), which is used to
look up the CA certificate from the Windows certificate store. The hexadecimal fingerprint string can be
copied straight from Windows Certificate Manager (certmgr.msc); whitespace is automatically removed.
This directive is only supported on Windows. This directive and the CADir and CAFile directives are mutually
exclusive.

CertFile
This specifies the path of the certificate file to be used for the SSL handshake.

CertKeyFile
This specifies the path of the certificate key file to be used for the SSL handshake.

CertThumbprint
This optional directive specifies the certificate thumbprint to be used for the SSL handshake. The hexadecimal
fingerprint string can be copied straight from Windows Certificate Manager (certmgr.msc); whitespace is
automatically removed. This directive is only supported on Windows. This directive and the CertFile and
CertKeyFile directives are mutually exclusive.

CRLDir
This specifies the path to a directory containing certificate revocation lists (CRLs), which will be consulted
when checking the certificate of the remote socket. The certificate filenames in this directory must be in the
OpenSSL hashed format.

CRLFile
This specifies the path of the certificate revocation list (CRL) which will be consulted when checking the
certificate of the remote socket.

FlushInterval
The module will send a batch of data to the remote destination after this amount of time in seconds, unless
FlushLimit is reached first. This defaults to 5 seconds.

FlushLimit
When the number of events in the output buffer reaches the value specified by this directive, the module will
compress and send the batch to the remote. This defaults to 500 events. The FlushInterval directive may
trigger sending the batch before this limit is reached if the log volume is low to ensure that data is sent
promptly.

KeyPass
With this directive, a password can be supplied for the certificate key file defined in CertKeyFile. This directive
is not needed for passwordless private keys.

LocalPort
This optional directive specifies the local port number of the connection. If this is not specified a random high
port number will be used, which is not always ideal in firewalled network environments.

SSLCipher
This optional directive can be used to set the permitted SSL cipher list, overriding the default. Use the format
described in the ciphers(1ssl) man page.

SSLCiphersuites
This optional directive can be used to define the permitted SSL cipher list in case the SSLProtocol directive is
set to TLSv1.3. Use the same format as in the SSLCipher directive.

SSLProtocol
This directive can be used to set the allowed SSL/TLS protocol(s). It takes a comma-separated list of values
which can be any of the following: SSLv2, SSLv3, TLSv1, TLSv1.1, TLSv1.2, and TLSv1.3. By default, the

TLSv1.2 and TLSv1.3 protocols are allowed. Note that the OpenSSL library shipped by Linux distributions
may not support SSLv2 and SSLv3, and these will not work even if enabled with this directive.

UseSSL
This boolean directive specifies that SSL transfer mode should be enabled. The default is FALSE.
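
As a minimal sketch of an SSL-enabled sender with certificate verification, the following combines UseSSL with the certificate directives described above. The certificate paths are placeholders, not defaults:

```
<Output batchcompress_ssl>
    Module       om_batchcompress
    Host         rcvr:2514
    UseSSL       TRUE
    # CA used to verify the im_batchcompress server's certificate
    CAFile       /opt/nxlog/cert/ca.pem
    # Client certificate presented during the SSL handshake
    CertFile     /opt/nxlog/cert/client.pem
    CertKeyFile  /opt/nxlog/cert/client.key
</Output>
```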

123.1.2. Examples
Example 672. Sending Logs With om_batchcompress

This configuration forwards logs in compressed batches to a remote NXLog agent over the default port.
Batches are sent at least once every two seconds, or more frequently if the buffer reaches 100 events.

nxlog.conf
 1 <Output batchcompress>
 2 Module om_batchcompress
 3 Host rcvr:2514
 4 FlushLimit 100
 5 FlushInterval 2
 6 </Output>
 7
 8 # old syntax
 9 #<Output batchcompress>
10 # Module om_batchcompress
11 # Host 10.0.0.1
12 # Port 2514
13 # FlushLimit 100
14 # FlushInterval 2
15 #</Output>

Example 673. Sending Batch Compressed Logs with Failover

This configuration sends logs in compressed batches to a remote NXLog agent in a failover configuration
(multiple Host directives defined). The destinations used in this case are example1:2514 and
example2:2514.

nxlog.conf
1 <Output batchcompress>
2 Module om_batchcompress
3 # destination host / IP and destination port
4 Host example1:2514
5 # first fail-over
6 Host example2:2514
7 # originating port
8 LocalPort 15000
9 </Output>

123.2. Blocker (om_blocker)


This module is mostly for testing purposes. It will block log messages in order to simulate a blocked route, like
when a network transport output module such as om_tcp blocks because of a network problem.

The sleep() procedure can also be used for testing by simulating log message delays.

See the list of installer packages that provide the om_blocker module in the Available Modules chapter of the

NXLog User Guide.

123.2.1. Configuration
The om_blocker module accepts only the common module directives.

123.2.2. Examples
Example 674. Testing Buffering With the om_blocker Module

Because the route in this configuration is blocked, this will test the behavior of the configured memory-
based buffer.

nxlog.conf
 1 <Input uds>
 2 Module im_uds
 3 UDS /dev/log
 4 </Input>
 5
 6 <Processor buffer>
 7 Module pm_buffer
 8 WarnLimit 512
 9 MaxSize 1024
10 Type Mem
11 </Processor>
12
13 <Output blocker>
14 Module om_blocker
15 </Output>
16
17 <Route uds_to_blocker>
18 Path uds => buffer => blocker
19 </Route>

123.3. DBI (om_dbi)


The om_dbi module allows NXLog to store log data in external databases. This module utilizes the libdbi database
abstraction library, which supports various database engines such as MySQL, PostgreSQL, MSSQL, Sybase,
Oracle, SQLite, and Firebird. An INSERT statement can be specified, which will be executed for each log message
to insert the data into any table schema.

NOTE The im_dbi and om_dbi modules support GNU/Linux only because of the libdbi library. The
im_odbc and om_odbc modules provide native database access on Windows.

NOTE libdbi needs drivers to access the database engines. These are in the libdbd-* packages on
Debian and Ubuntu. CentOS 5.6 has a libdbi-drivers RPM package, but this package does not
contain any driver binaries under /usr/lib64/dbd. The drivers for both MySQL and PostgreSQL
are in libdbi-dbd-mysql. If these are not installed, NXLog will return a libdbi driver initialization
error.

See the list of installer packages that provide the om_dbi module in the Available Modules chapter of the NXLog
User Guide.

123.3.1. Configuration
The om_dbi module accepts the following directives in addition to the common module directives.

Driver
This mandatory directive specifies the name of the libdbi driver which will be used to connect to the
database. A DRIVER name must be provided here for which a loadable driver module exists under the name
libdbdDRIVER.so (usually under /usr/lib/dbd/). The MySQL driver is in the libdbdmysql.so file.

SQL
This directive should specify the INSERT statement to be executed for each log message. The field names
(names beginning with $) will be replaced with the value they contain. String types will be quoted.

Option
This directive can be used to specify additional driver options such as connection parameters. The manual of
the libdbi driver should contain the options available for use here.

123.3.2. Examples
These two examples are for the plain Syslog fields. Other fields generated by parsers, regular expression rules,
the pm_pattern pattern matcher module, or input modules, can also be used. Notably, the im_msvistalog and
im_mseventlog modules generate different fields than those shown in these examples.

Example 675. Storing Syslog in a PostgreSQL Database

Below is a table schema which can be used to store Syslog data:

CREATE TABLE log (
  id serial,
  timestamp timestamp not null,
  hostname varchar(32) default NULL,
  facility varchar(10) default NULL,
  severity varchar(10) default NULL,
  application varchar(10) default NULL,
  message text,
  PRIMARY KEY (id)
);

The following configuration accepts log messages via TCP and uses libdbi to insert log messages into the
database.

nxlog.conf
 1 <Extension syslog>
 2 Module xm_syslog
 3 </Extension>
 4
 5 <Input tcp>
 6 Module im_tcp
 7 Port 1234
 8 Host 0.0.0.0
 9 Exec parse_syslog_bsd();
10 </Input>
11
12 <Output dbi>
13 Module om_dbi
14 SQL INSERT INTO log (facility, severity, hostname, timestamp, \
15 application, message) \
16 VALUES ($SyslogFacility, $SyslogSeverity, $Hostname, '$EventTime', \
17 $SourceName, $Message)
18 Driver pgsql
19 Option host 127.0.0.1
20 Option username dbuser
21 Option password secret
22 Option dbname logdb
23 </Output>
24
25 <Route tcp_to_dbi>
26 Path tcp => dbi
27 </Route>

Example 676. Storing Logs in a MySQL Database

This configuration reads log messages from the socket and inserts them into a MySQL database.

nxlog.conf
 1 <Extension syslog>
 2 Module xm_syslog
 3 </Extension>
 4
 5 <Input uds>
 6 Module im_uds
 7 UDS /dev/log
 8 Exec parse_syslog_bsd();
 9 </Input>
10
11 <Output dbi>
12 Module om_dbi
13 SQL INSERT INTO log (facility, severity, hostname, timestamp, \
14 application, message) \
15 VALUES ($SyslogFacility, $SyslogSeverity, $Hostname, '$EventTime', \
16 $SourceName, $Message)
17 Driver mysql
18 Option host 127.0.0.1
19 Option username mysql
20 Option password mysql
21 Option dbname logdb
22 </Output>
23
24 <Route uds_to_dbi>
25 Path uds => dbi
26 </Route>

123.4. Elasticsearch (om_elasticsearch)


This module allows logs to be stored in an Elasticsearch server. It will connect to the URL specified in the
configuration in either plain HTTP or HTTPS mode. This module supports bulk data operations and dynamic
indexing. Event data is sent in batches, reducing the latency caused by the HTTP responses, thus improving
Elasticsearch server performance.

NOTE This module requires the xm_json extension module to be loaded in order to convert the
payload to JSON. If the $raw_event field does not start with a left curly bracket ({), the module
will automatically convert the data to JSON.

See the list of installer packages that provide the om_elasticsearch module in the Available Modules chapter of the
NXLog User Guide.

123.4.1. Using Elasticsearch With NXLog Enterprise Edition 3.x


Some setup is required when using Elasticsearch with NXLog Enterprise Edition 3.x. Consider the following
points. None of this is required with NXLog Enterprise Edition 4.1 and later.

• By default, Elasticsearch will not automatically detect the date format used by NXLog Enterprise Edition 3.x.
As a result, NXLog datetime values, such as $EventTime, will be mapped as strings rather than dates. To fix
this, add an Elasticsearch template for indices matching the specified pattern (nxlog*). Extend the
dynamic_date_formats setting to include additional date formats. For compatibility with indices created
with Elasticsearch 5.x or older, use _default_ instead of _doc (but _default_ will not be supported by

Elasticsearch 7.0.0).

$ curl -X PUT localhost:9200/_template/nxlog?pretty \
  -H 'Content-Type: application/json' -d '
  {
    "index_patterns" : ["nxlog*"],
    "mappings" : {
      "_doc": {
        "dynamic_date_formats": [
          "strict_date_optional_time",
          "YYYY-MM-dd HH:mm:ss.SSSSSSZ",
          "YYYY-MM-dd HH:mm:ss"
        ]
      }
    }
  }'

• The IndexType directive should be set to _doc (the default in NXLog Enterprise Edition 3.x is logs). However,
for compatibility with indices created with Elasticsearch 5.x or older, set IndexType as required for the
configured mapping types. See the IndexType directive below for more information.

123.4.2. Configuration
The om_elasticsearch module accepts the following directives in addition to the common module directives. The
URL directive is required.

URL
This mandatory directive specifies the URL for the module to POST the event data. If multiple URL directives
are specified, the module works in a failover configuration. If a destination becomes unavailable, the module
automatically fails over to the next one. If the last destination becomes unavailable, the module will fail over
to the first destination. The module operates in plain HTTP or HTTPS mode depending on the URL provided. If
the port number is not explicitly indicated in the URL, it defaults to port 80 for HTTP and port 443 for HTTPS.
The URL should point to the _bulk endpoint, or Elasticsearch will return 400 Bad Request.

FlushInterval
The module will send a bulk index command to the defined endpoint after this amount of time in seconds,
unless FlushLimit is reached first. This defaults to 5 seconds.

FlushLimit
When the number of events in the output buffer reaches the value specified by this directive, the module will
send a bulk index command to the endpoint defined in URL. This defaults to 500 events. The FlushInterval
directive may trigger sending the bulk index request before this limit is reached if the log volume is low to
ensure that data is promptly sent to the indexer.

Index
This directive specifies the index to insert the event data into. It must be a string type expression. If the
expression in the Index directive is not a constant string (it contains functions, field names, or operators), it
will be evaluated for each event to be inserted. The default is nxlog. Typically, an expression with strftime() is
used to generate an index name based on the event’s time or the current time (for example,
strftime(now(), "nxlog-%Y%m%d")).

IndexType
This directive specifies the index type to use in the bulk index command. It must be a string type expression.
If the expression in the IndexType directive is not a constant string (it contains functions, field names, or
operators), it will be evaluated for each event to be inserted. The default is _doc. Note that index mapping
types have been deprecated and will be removed in Elasticsearch 7.0.0 (see Removal of mapping types in the

1050
Elasticsearch Reference). IndexType should only be used if required for indices created with Elasticsearch 5.x
or older.

ID
This directive allows a custom _id field to be specified for Elasticsearch documents. If the directive is not defined,
Elasticsearch uses a GUID for the _id field. Setting custom _id fields can be useful for correlating
Elasticsearch documents in the future and can help to prevent storing duplicate events in the Elasticsearch
storage. The directive’s argument must be a string type expression. If the expression in the ID directive is not
a constant string (it contains functions, field names, or operators), it will be evaluated for each event to be
submitted. You can use a concatenation of event fields and the event timestamp to uniquely and
informatively identify events in the Elasticsearch storage.
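
As an illustrative sketch combining the Index and ID directives, the following output block derives a daily index name and a deterministic document ID from event fields. The URL and field choices are assumptions, and the xm_json extension is assumed to be loaded as noted above:

```
<Output elasticsearch>
    Module  om_elasticsearch
    URL     http://localhost:9200/_bulk
    # Daily index based on the event timestamp
    Index   strftime($EventTime, "nxlog-%Y%m%d")
    # Deterministic _id built from event fields to help avoid duplicates
    ID      $Hostname + strftime($EventTime, "-%Y%m%d%H%M%S")
</Output>
```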

HTTPSAllowUntrusted
This boolean directive specifies that the remote connection should be allowed without certificate verification.
If set to TRUE, the connection will be allowed even if the remote HTTPS server presents an unknown or self-
signed certificate. The default value is FALSE: the remote HTTPS server must present a trusted certificate.

HTTPSCADir
This specifies the path to a directory containing certificate authority (CA) certificates, which will be used to
check the certificate of the remote HTTPS server. The certificate filenames in this directory must be in the
OpenSSL hashed format. A remote’s self-signed certificate (which is not signed by a CA) can also be trusted by
including a copy of the certificate in this directory.

HTTPSCAFile
This specifies the path of the certificate authority (CA) certificate, which will be used to check the certificate of
the remote HTTPS server. To trust a self-signed certificate presented by the remote (which is not signed by a
CA), provide that certificate instead.

HTTPSCAThumbprint
This optional directive specifies the certificate thumbprint of the certificate authority (CA), which is used to
look up the CA certificate from the Windows certificate store. The hexadecimal fingerprint string can be
copied straight from Windows Certificate Manager (certmgr.msc); whitespace is removed automatically.
This directive is only supported on Windows. This directive and the HTTPSCADir and HTTPSCAFile directives
are mutually exclusive.

HTTPSCertFile
This specifies the path of the certificate file to be used for the HTTPS handshake.

HTTPSCertKeyFile
This specifies the path of the certificate key file to be used for the HTTPS handshake.

HTTPSCertThumbprint
This optional directive specifies the certificate thumbprint to be used for the SSL handshake. The hexadecimal
fingerprint string can be copied straight from Windows Certificate Manager (certmgr.msc); whitespace is
removed automatically. This directive is only supported on Windows. This directive and the HTTPSCertFile and
HTTPSCertKeyFile directives are mutually exclusive.

HTTPSCRLDir
This specifies the path to a directory containing certificate revocation lists (CRLs), which will be consulted
when checking the certificate of the remote HTTPS server. The certificate filenames in this directory must be
in the OpenSSL hashed format.

HTTPSCRLFile
This specifies the path of the certificate revocation list (CRL) which will be consulted when checking the
certificate of the remote HTTPS server.

HTTPSKeyPass
With this directive, a password can be supplied for the certificate key file defined in HTTPSCertKeyFile. This
directive is not needed for passwordless private keys.

HTTPSSSLCipher
This optional directive can be used to set the permitted SSL cipher list, overriding the default. Use the format
described in the ciphers(1ssl) man page.

HTTPSSSLCiphersuites
This optional directive can be used to define the permitted SSL cipher list in case the HTTPSSSLProtocol
directive is set to TLSv1.3. Use the same format as in the HTTPSSSLCipher directive.

HTTPSSSLCompression
This boolean directive allows you to enable data compression when sending data over the network. The
compression mechanism is based on the zlib compression library. If the directive is not specified, it defaults
to FALSE (compression is disabled).

NOTE Some Linux packages (for example, Debian) use the OpenSSL library provided by the OS and may not
support the zlib compression mechanism. The module will emit a warning on startup if the compression
support is missing. The generic deb/rpm packages are bundled with a zlib-enabled libssl library.

HTTPSSSLProtocol
This directive can be used to set the allowed SSL/TLS protocol(s). It takes a comma-separated list of values
which can be any of the following: SSLv2, SSLv3, TLSv1, TLSv1.1, TLSv1.2 and TLSv1.3. By default, the
TLSv1.2 and TLSv1.3 protocols are allowed. Note that the OpenSSL library shipped by Linux distributions may
not support SSLv2 and SSLv3, and these will not work even if enabled with this directive.
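For example, an HTTPS destination restricted to TLS 1.2/1.3 with mutual authentication might be sketched as follows; the hostname and certificate paths are placeholders:

```
<Output elasticsearch>
    Module            om_elasticsearch
    URL               https://elastic.example.com:9200/_bulk
    HTTPSCAFile       %CERTDIR%/ca.pem
    HTTPSCertFile     %CERTDIR%/client-cert.pem
    HTTPSCertKeyFile  %CERTDIR%/client-key.pem
    HTTPSSSLProtocol  TLSv1.2, TLSv1.3
</Output>
```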

ProxyAddress
This optional directive is used to specify the IP address of the proxy server in case the module should connect
to the Elasticsearch server through a proxy.

NOTE The om_elasticsearch module supports HTTP proxying only. SOCKS4/SOCKS5 proxying is not supported.

ProxyPort
This optional directive is used to specify the port number required to connect to the proxy server.

SNI
This optional directive specifies the host name used for Server Name Indication (SNI) in HTTPS mode.
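To route traffic through an HTTP proxy, ProxyAddress and ProxyPort can be combined as in this sketch (the addresses are illustrative):

```
<Output elasticsearch>
    Module        om_elasticsearch
    URL           http://elastic.internal:9200/_bulk
    # Hypothetical proxy endpoint; SOCKS proxies are not supported
    ProxyAddress  10.0.0.1
    ProxyPort     3128
</Output>
```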

123.4.3. Examples

Example 677. Sending Logs to an Elasticsearch Server

This configuration reads log messages from a file and forwards them to the Elasticsearch server on localhost.

nxlog.conf
 1 <Extension json>
 2 Module xm_json
 3 </Extension>
 4
 5 <Input file>
 6 Module im_file
 7 File '/var/log/myapp*.log'
 8 # Parse log here if needed
 9 # $EventTime should be set here
10 </Input>
11
12 <Output elasticsearch>
13 Module om_elasticsearch
14 URL http://localhost:9200/_bulk
15 FlushInterval 2
16 FlushLimit 100
17 # Create an index daily
18 Index strftime($EventTime, "nxlog-%Y%m%d")
19 # Or use the following if $EventTime is not set
20 # Index strftime(now(), "nxlog-%Y%m%d")
21 </Output>

Example 678. Sending Logs to an Elasticsearch Server with Failover

This configuration sends log messages to an Elasticsearch server in a failover configuration (multiple URLs
defined). The actual destinations used in this case are http://localhost:9200/_bulk,
http://192.168.1.1:9200/_bulk, and http://example.com:9200/_bulk.

nxlog.conf
 1 <Extension json>
 2 Module xm_json
 3 </Extension>
 4
 5 <Output elasticsearch>
 6 Module om_elasticsearch
 7 URL http://localhost:9200/_bulk
 8 URL http://192.168.1.1:9200/_bulk
 9 URL http://example.com:9200/_bulk
10 </Output>

123.5. EventDB (om_eventdb)


This custom output module uses libdrizzle to insert log message data into a MySQL database with a special
schema. It also supports Unix domain socket connections to the database for faster throughput.

See the list of installer packages that provide the om_eventdb module in the Available Modules chapter of the
NXLog User Guide.

123.5.1. Configuration
The om_eventdb module accepts the following directives in addition to the common module directives. The
DBname, Password, and UserName directives are required, along with either Host or UDS.

DBname
Name of the database to write the logs to.

Host
This specifies the IP address or a DNS hostname the module should connect to (the hostname of the MySQL
server). This directive cannot be used with UDS.

Password
Password for authenticating to the database server.

UDS
For Unix domain socket connections, this directive can be used to specify the path of the socket such as
/var/run/mysqld.sock. This directive cannot be used with the Host and Port directives.

UserName
Username for authenticating to the database server.

BulkLoad
If set to TRUE, this optional boolean directive instructs the module to use a bulk-loading technique to load
data into the database; otherwise traditional INSERT statements are issued to the server. The default is TRUE.

LoadInterval
This directive specifies how frequently bulk loading should occur, in seconds. It can only be used when
BulkLoad is set to TRUE. The default bulk load interval is 20 seconds.

Port
This specifies the port the module should connect to, on which the database is accepting connections. This
directive cannot be used with UDS. The default is port 3306.

TempDir
This directive sets a directory where temporary files are written. It can only be used when BulkLoad is set to
TRUE. If this directive is not specified, the default directory is /tmp. If the chosen directory does not exist, the
module will try to create it.
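A Unix domain socket connection can be sketched as follows; the socket path depends on the local MySQL setup and the credentials are placeholders:

```
<Output eventdb>
    Module    om_eventdb
    # UDS replaces Host/Port for a local connection
    UDS       /var/run/mysqld/mysqld.sock
    UserName  joe
    Password  secret
    DBname    eventdb_test
</Output>
```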

123.5.2. Examples

Example 679. Storing Logs in an EventDB Database

This configuration accepts log messages via TCP in the NXLog binary format and inserts them into a
database using libdrizzle.

nxlog.conf
 1 <Input tcp>
 2 Module im_tcp
 3 Host localhost
 4 Port 2345
 5 InputType Binary
 6 </Input>
 7
 8 <Output eventdb>
 9 Module om_eventdb
10 Host localhost
11 Port 3306
12 Username joe
13 Password secret
14 Dbname eventdb_test2
15 </Output>
16
17 <Route tcp_to_eventdb>
18 Path tcp => eventdb
19 </Route>

123.6. Program (om_exec)


This module will execute a program or script on startup and write (pipe) log data to its standard input. Unless
OutputType is set to something else, only the contents of the $raw_event field are sent over the pipe. The
execution of the program or script will terminate when the module is stopped, which usually happens when
NXLog exits and the pipe is closed.

NOTE The program or script is started when NXLog starts and must not exit until the module is stopped. To
invoke a program or script for each log message, use xm_exec instead.

See the list of installer packages that provide the om_exec module in the Available Modules chapter of the NXLog
User Guide.

123.6.1. Configuration
The om_exec module accepts the following directives in addition to the common module directives. The
Command directive is required.

Command
This mandatory directive specifies the name of the program or script to be executed.

Arg
This is an optional parameter. Arg can be specified multiple times, once for each argument that needs to be
passed to the Command. Note that specifying multiple arguments with one Arg directive, with arguments
separated by spaces, will not work (the Command will receive it as one argument).
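For instance, to pass the two arguments -f and /dev/null to a command, each must appear in its own Arg directive (the command shown is illustrative):

```
<Output someprog>
    Module   om_exec
    Command  /usr/bin/someprog
    # One argument per Arg directive; "Arg -f /dev/null" would be passed as a single argument
    Arg      -f
    Arg      /dev/null
</Output>
```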

Restart
Restart the process if it exits. There is a one second delay before it is restarted to avoid a denial-of-service
when a process is not behaving. Looping should be implemented in the script itself. This directive is only to
provide some safety against malfunctioning scripts and programs. This boolean directive defaults to FALSE:
the Command will not be restarted if it exits.

123.6.2. Examples
Example 680. Piping Logs to an External Program

With this configuration, NXLog will start the specified command, read logs from a socket, and write those logs
to the standard input of the command.

nxlog.conf
 1 <Input uds>
 2 Module im_uds
 3 UDS /dev/log
 4 </Input>
 5
 6 <Output someprog>
 7 Module om_exec
 8 Command /usr/bin/someprog
 9 Arg -
10 </Output>
11
12 <Route uds_to_someprog>
13 Path uds => someprog
14 </Route>

123.7. Files (om_file)


This module can be used to write log messages to a file.

See the list of installer packages that provide the om_file module in the Available Modules chapter of the NXLog
User Guide.

123.7.1. Configuration
The om_file module accepts the following directives in addition to the common module directives. The File
directive is required.

File
This mandatory directive specifies the name of the output file to open. It must be a string type expression. If
the expression in the File directive is not a constant string (it contains functions, field names, or operators), it
will be evaluated before each event is written to the file (and after the Exec is evaluated). Note that the
filename must be quoted to be a valid string literal, unlike in other directives which take a filename argument.
For relative filenames, note that NXLog changes its working directory to "/" unless the global SpoolDir is set to
something else.

Below are three variations for specifying the same output file on a Windows system:

File 'C:\logs\logmsg.txt'
File "C:\\logs\\logmsg.txt"
File 'C:/logs/logmsg.txt'

CacheSize
In case of dynamic filenames, a cache can be utilized to keep files open. This increases performance by
reducing the overhead caused by many open/close operations. It is recommended to set this to the number
of expected files to be written. Note that this should not be set to more than the number of open files
allowed by the system. This caching provides performance benefits on Windows only. Caching is disabled by
default.

CreateDir
If set to TRUE, this optional boolean directive instructs the module to create the output directory before
opening the file for writing if it does not exist. The default is FALSE.
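A dynamic filename combined with CreateDir and CacheSize might be sketched like this; the $Hostname field is assumed to be set by earlier parsing:

```
<Output file>
    Module     om_file
    # Evaluated per event because the expression is not a constant string
    File       '/var/log/hosts/' + $Hostname + '.log'
    CreateDir  TRUE
    # Keep up to 10 destination files open at once
    CacheSize  10
</Output>
```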

OutputType
See the OutputType directive in the list of common module directives. If this directive is not specified the
default is LineBased (the module will use CRLF as the record terminator on Windows, or LF on Unix).
This directive also supports stream processors, see the description in the OutputType section.

Sync
This optional boolean directive instructs the module to sync the file after each log message is written,
ensuring that it is really written to disk from the buffers. Because this can hurt performance, the default is
FALSE.

Truncate
This optional boolean directive instructs the module to truncate the file before each write, causing only the
most recent log message to be saved. The default is FALSE: messages are appended to the output file.

123.7.2. Functions
The following functions are exported by om_file.

string file_name()
Return the name of the currently open file which was specified using the File directive. Note that this will be
the old name if the filename changes dynamically; for the new name, use the expression specified for the File
directive instead of using this function.

integer file_size()
Return the size of the currently open output file in bytes. Returns undef if the file is not open. This can
happen if File is not a string literal expression and there was no log message.

123.7.3. Procedures
The following procedures are exported by om_file.

reopen();
Reopen the current file. This procedure should be called if the file has been removed or renamed, for
example with the file_cycle(), file_remove(), or file_rename() procedures of the xm_fileop module. This does
not need to be called after rotate_to() because that procedure reopens the file automatically.
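As an illustration, a scheduled file_cycle() followed by reopen() could be configured as below; the rotation interval and the number of retained files are arbitrary choices:

```
<Extension fileop>
    Module  xm_fileop
</Extension>

<Output file>
    Module  om_file
    File    "/var/log/messages"
    <Schedule>
        Every  1 hour
        # Rotate, keeping 7 old files, then reopen the renamed output file
        Exec   file_cycle('/var/log/messages', 7); file->reopen();
    </Schedule>
</Output>
```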

rotate_to(string filename);
Rotate the current file to the filename specified. The module will then open the original file specified with the
File directive. Note that the rename(2) system call is used internally which does not support moving files
across different devices on some platforms. If this is a problem, first rotate the file on the same device. Then
use the xm_exec exec_async() procedure to copy it to another device or file system, or use the xm_fileop
file_copy() procedure.

123.7.4. Examples

Example 681. Storing Raw Syslog Messages into a File

This configuration reads log messages from a socket and writes them to a file. No additional
processing is done.

nxlog.conf
 1 <Input uds>
 2 Module im_uds
 3 UDS /dev/log
 4 </Input>
 5
 6 <Output file>
 7 Module om_file
 8 File "/var/log/messages"
 9 </Output>
10
11 <Route uds_to_file>
12 Path uds => file
13 </Route>

Example 682. File Rotation Based on Size

With this configuration, NXLog accepts log messages via TCP and parses them as BSD Syslog. A separate
output file is used for log messages from each host. When the output file size exceeds 15 MB, it will be
automatically rotated and compressed.

nxlog.conf
 1 <Extension exec>
 2 Module xm_exec
 3 </Extension>
 4
 5 <Extension syslog>
 6 Module xm_syslog
 7 </Extension>
 8
 9 <Input tcp>
10 Module im_tcp
11 Port 1514
12 Host 0.0.0.0
13 Exec parse_syslog_bsd();
14 </Input>
15
16 <Output file>
17 Module om_file
18 File "tmp/output_" + $Hostname + "_" + month(now())
19 <Exec>
20 if file->file_size() > 15M
21 {
22 $newfile = "tmp/output_" + $Hostname + "_" +
23 strftime(now(), "%Y%m%d%H%M%S");
24 file->rotate_to($newfile);
25 exec_async("/bin/bzip2", $newfile);
26 }
27 </Exec>
28 </Output>
29
30 <Route tcp_to_file>
31 Path tcp => file
32 </Route>

123.8. Go (om_go)
This module provides support for forwarding log data with methods written in the Go language. The file specified
by the ImportLib directive should contain one or more methods which can be called from the Exec directive of
any module. See also the xm_go and im_go modules.

NOTE For the system requirements, installation details, and environmental configuration requirements of Go,
see the Getting Started section in the Go documentation. The Go environment is only needed for compiling
the Go file. NXLog does not need the Go environment for its operation.

The Go script imports the NXLog module, and will have access to the following classes and functions.

class nxModule
This class is instantiated by NXLog and can be accessed via the nxLogdata.module attribute. This can be used
to set or access variables associated with the module (see the example below).

nxmodule.NxLogdataNew(*nxLogdata)
This function creates a new log data record.

nxmodule.Post(ld *nxLogdata)
This function submits the log data struct for further processing.

nxmodule.AddEvent()
This function adds a READ event to NXLog, allowing the READ event to be called later.

nxmodule.AddEventDelayed(mSec C.int)
This function adds a delayed READ event to NXLog, allowing the delayed READ event to be called later.

class nxLogdata
This class represents an event. It is instantiated by NXLog and passed to the function specified by the
ImportFunc directive.

nxlogdata.Get(field string) (interface{}, bool)


This function returns the value/exists pair for the logdata field.

nxlogdata.GetString(field string) (string, bool)


This function returns the value/exists pair for the string representation of the logdata field.

nxlogdata.Set(field string, val interface{})


This function sets the logdata field value.

nxlogdata.Delete(field string)
This function removes the field from logdata.

nxlogdata.Fields() []string
This function returns an array of field names in the logdata record.

module
This attribute is set to the module object associated with the event.

See the list of installer packages that provide the om_go module in the Available Modules chapter of the NXLog
User Guide.

123.8.1. Installing the gonxlog.go File


NOTE This applies to Linux only.

For the Go environment to work with NXLog, the gonxlog.go file has to be installed.

1. Copy the gonxlog.go file from the /opt/nxlog/lib/nxlog/modules/extension/go/gopkg/nxlog.co/gonxlog/
directory to the $GOPATH/src/nxlog.co/gonxlog/ directory.

2. Change directory to $GOPATH/src/nxlog.co/gonxlog/.

3. Execute the go install gonxlog.go command to install the file.

123.8.2. Compiling the Go File


In order to be able to call Go functions, the Go file must be compiled into a shared object file that has the .so
extension. The syntax for compiling the Go file is the following.

go build -o /path/to/yoursofile.so -buildmode=c-shared /path/to/yourgofile.go

123.8.3. Configuration
The om_go module accepts the following directives in addition to the common module directives.

ImportLib
This mandatory directive specifies the file containing the Go code compiled into a shared library .so file.

ImportFunc
This mandatory directive calls the specified function, which must accept an unsafe.Pointer object as its only
argument. This function is called when the module attempts to write data.

123.8.4. Configuration Template

In this Go file template, the write function is called via the ImportFunc directive.

om_go Template
//export write
func write(ctx unsafe.Pointer) {
  // get logdata from the context
  if ld, ok := gonxlog.GetLogdata(ctx); ok {
  // place your code here
  }
}

123.8.5. Examples

Example 683. Using om_go for Forwarding Events

This configuration connects to and sends log data to a MongoDB database.

nxlog.conf
 1 <Input in>
 2 Module im_testgen
 3 MaxCount 10
 4 </Input>
 5
 6 <Output out>
 7 Module om_go
 8 ImportLib "file/output.so"
 9 ImportFunc write
10 </Output>

om_go file Sample


//export write
func write(ctx unsafe.Pointer) {
  if collection == nil {
  gonxlog.LogDebug("not connected, skip record")
  return
  }
  if logdata, ok := gonxlog.GetLogdata(ctx); ok {
  if rawEvent, ok := logdata.GetString("raw_event"); ok {
  insertResult, err := collection.InsertOne(context.TODO(), bson.M{
  "source": "nxlog",
  "created_at": time.Now(),
  "raw_event": rawEvent,
  })
  if err != nil {
  gonxlog.LogError(fmt.Sprintf("Insert error: %v", err.Error()))
  } else {
  gonxlog.LogDebug(fmt.Sprintf("Insert '%v'", insertResult))
  }
  }
  }
}

123.9. HTTP(s) (om_http)


This module will connect to the specified URL in either plain HTTP or HTTPS mode. The module then waits for a
response containing a successful status code (200, 201, or 202). If the remote host closes the connection or a
timeout is exceeded while waiting for the response, it will reconnect and retry the delivery. This HTTP-level
acknowledgment ensures that no messages are lost during the transfer. By default, each event is transferred in a
single POST request. However, the module can be configured to send event data in batches to reduce the latency
caused by the HTTP responses, thus improving throughput.

See the list of installer packages that provide the om_http module in the Available Modules chapter of the NXLog
User Guide.

123.9.1. Configuration
The om_http module accepts the following directives in addition to the common module directives. The URL
directive is required.

URL
This mandatory directive specifies the URL for the module to POST the event data. If multiple URL directives
are specified, the module works in a failover configuration. If a destination becomes unavailable, the module
automatically fails over to the next one. If the last destination becomes unavailable, the module will fail over
to the first destination. The module operates in plain HTTP or HTTPS mode depending on the URL provided. If
the port number is not explicitly indicated in the URL, it defaults to port 80 for HTTP and port 443 for HTTPS.

AddHeader
This optional directive specifies an additional header to be added to each HTTP request.

BatchMode
This optional directive sets whether the data should be sent as a single event per POST request or a batch of
events per POST request. The default setting is none, meaning that data will be sent as a single event per
POST request. The other available values are multipart and multiline. For multipart, the generated POST
request will use the multipart/mixed content type, where each batched event will be added as a separate
body part to the request body. For multiline, batched events will be added to the POST request one per line
(separated by CRLF (\r\n) characters).

NOTE The add_http_header() and set_http_request_path() procedures may cause the current batch to be
flushed immediately. For the multiline batching mode, this happens whenever the value of the URL path or
the value of an HTTP header changes, because this requires a new HTTP request to be built. In multipart
batching mode, only set_http_request_path() will cause a batch flush when the path value changes, because
add_http_header() only modifies the HTTP header for the HTTP body part corresponding to the event record
that is currently being processed.

ContentType
This directive sets the Content-Type HTTP header to the string specified. The Content-Type is set to text/plain
by default. Note: If the BatchMode directive is set to multipart, then the value specified here will be used as
the Content-Type header for each part of the multipart/mixed HTTP request.

FlushInterval
This directive specifies the time period after which the accumulated data should be sent out in batched mode
in a POST request. This defaults to 50 milliseconds. This directive only takes effect if BatchMode is set to
multipart or multiline.

FlushLimit
This directive specifies the number of events that are merged into a single POST request. This defaults to 500
events. This directive only takes effect if BatchMode is set to multipart or multiline.

HTTPSAllowUntrusted
This boolean directive specifies that the connection should be allowed without certificate verification. If set to
TRUE, the connection will be allowed even if the remote HTTPS server presents an unknown or self-signed
certificate. The default value is FALSE: the remote HTTPS server must present a trusted certificate.

HTTPSCADir
This specifies the path to a directory containing certificate authority (CA) certificates, which will be used to
check the certificate of the remote HTTPS server. The certificate filenames in this directory must be in the
OpenSSL hashed format. A remote’s self-signed certificate (which is not signed by a CA) can also be trusted by
including a copy of the certificate in this directory.

HTTPSCAFile
This specifies the path of the certificate authority (CA) certificate, which will be used to check the certificate of
the remote HTTPS server. To trust a self-signed certificate presented by the remote (which is not signed by a
CA), provide that certificate instead.

HTTPSCAThumbprint
This optional directive specifies the certificate thumbprint of the certificate authority (CA), which is used to
look up the CA certificate from the Windows certificate store. The hexadecimal fingerprint string can be
copied straight from Windows Certificate Manager (certmgr.msc); whitespace is removed automatically.
This directive is only supported on Windows. This directive and the HTTPSCADir and HTTPSCAFile directives
are mutually exclusive.

HTTPSCertFile
This specifies the path of the certificate file to be used for the HTTPS handshake.

HTTPSCertKeyFile
This specifies the path of the certificate key file to be used for the HTTPS handshake.

HTTPSCertThumbprint
This optional directive specifies the certificate thumbprint to be used for the SSL handshake. The hexadecimal
fingerprint string can be copied straight from Windows Certificate Manager (certmgr.msc); whitespace is
removed automatically. This directive is only supported on Windows. This directive and the HTTPSCertFile and
HTTPSCertKeyFile directives are mutually exclusive.

HTTPSCRLDir
This specifies the path to a directory containing certificate revocation lists (CRLs), which will be consulted
when checking the certificate of the remote HTTPS server. The certificate filenames in this directory must be
in the OpenSSL hashed format.

HTTPSCRLFile
This specifies the path of the certificate revocation list (CRL) which will be consulted when checking the
certificate of the remote HTTPS server.

HTTPSKeyPass
With this directive, a password can be supplied for the certificate key file defined in HTTPSCertKeyFile. This
directive is not needed for passwordless private keys.

HTTPSSSLCipher
This optional directive can be used to set the permitted SSL cipher list, overriding the default. Use the format
described in the ciphers(1ssl) man page.

HTTPSSSLCiphersuites
This optional directive can be used to define the permitted SSL cipher list in case the HTTPSSSLProtocol
directive is set to TLSv1.3. Use the same format as in the HTTPSSSLCipher directive.

HTTPSSSLCompression
This boolean directive allows you to enable data compression when sending data over the network. The
compression mechanism is based on the zlib compression library. If the directive is not specified, it defaults
to FALSE (compression is disabled).

NOTE Some Linux packages (for example, Debian) use the OpenSSL library provided by the OS and may not
support the zlib compression mechanism. The module will emit a warning on startup if the compression
support is missing. The generic deb/rpm packages are bundled with a zlib-enabled libssl library.

HTTPSSSLProtocol
This directive can be used to set the allowed SSL/TLS protocol(s). It takes a comma-separated list of values
which can be any of the following: SSLv2, SSLv3, TLSv1, TLSv1.1, TLSv1.2 and TLSv1.3. By default, the
TLSv1.2 and TLSv1.3 protocols are allowed. Note that the OpenSSL library shipped by Linux distributions may
not support SSLv2 and SSLv3, and these will not work even if enabled with this directive.

ProxyAddress
This optional directive is used to specify the IP address of the proxy server in case the module should send
event data through a proxy.

NOTE The om_http module supports HTTP proxying only. SOCKS4/SOCKS5 proxying is not supported.

ProxyPort
This optional directive is used to specify the port number required to connect to the proxy server.

SNI
This optional directive specifies the host name used for Server Name Indication (SNI) in HTTPS mode.

123.9.2. Procedures
The following procedures are exported by om_http.

add_http_header(string name, string value);


Dynamically add a custom HTTP header to HTTP requests.

NOTE This procedure impacts the way batching works. See the BatchMode directive for more information.

set_http_request_path(string path);
Set the path in the HTTP request to the string specified. This is useful if the URL is dynamic and parameters
such as event ID need to be included in the URL. Note that the string must be URL encoded if it contains
reserved characters.

NOTE This procedure impacts the way batching works. See the BatchMode directive for more information.
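For example, a per-event request path could be set from the Exec directive as sketched below; the $EventID field is hypothetical and must be URL encoded if it contains reserved characters:

```
<Output http>
    Module  om_http
    URL     http://server:8080/
    # Append the event identifier to the request path for each event
    Exec    set_http_request_path('/events/' + $EventID);
</Output>
```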

123.9.3. Examples
Example 684. Sending Logs over HTTPS

This configuration reads log messages from a file and forwards them via HTTPS.

nxlog.conf
 1 <Output http>
 2 Module om_http
 3 URL https://server:8080/
 4 AddHeader Auth-Token: 4ddf1d3c9
 5 HTTPSCertFile %CERTDIR%/client-cert.pem
 6 HTTPSCertKeyFile %CERTDIR%/client-key.pem
 7 HTTPSCAFile %CERTDIR%/ca.pem
 8 HTTPSAllowUntrusted FALSE
 9 BatchMode multipart
10 FlushLimit 100
11 FlushInterval 2
12 </Output>

123.10. Java (om_java)
This module provides support for processing NXLog log data with methods written in the Java language. The Java
classes specified via the ClassPath directives may define one or more class methods which can be called from the
Run or Exec directives of this module. Such methods must be declared with the public and static modifiers in
the Java code to be accessible from NXLog, and the first parameter must be of NXLog.Logdata type. See also the
im_java and xm_java modules.

NOTE For the system requirements, installation details, and environmental configuration requirements of
Java, see the Installing Java section in the Java documentation.

The NXLog Java class provides access to the NXLog functionality in the Java code. This class contains nested
classes Logdata and Module with log processing methods, as well as methods for sending messages to the
internal logger.

class NXLog.Logdata
This Java class provides the methods to interact with an NXLog event record object:

getField(name)
This method returns the value of the field name in the event.

setField(name, value)
This method sets the value of field name to value.

deleteField(name)
This method removes the field name from the event record.

getFieldnames()
This method returns an array with the names of all the fields currently in the event record.

getFieldtype(name)
This method returns the type of the field name.

class NXLog.Module
The methods below allow setting and accessing variables associated with the module instance.

saveCtx(key,value)
This method saves user data in the module data storage using the given key and value.

loadCtx(key)
This method retrieves data from the module data storage using the given key.

Below is the list of methods for sending messages to the internal logger.

NXLog.logInfo(msg)
This method sends the message msg to the internal logger at INFO log level. It does the same as the core
log_info() procedure.

NXLog.logDebug(msg)
This method sends the message msg to the internal logger at DEBUG log level. It does the same as the core
log_debug() procedure.

NXLog.logWarning(msg)
This method sends the message msg to the internal logger at WARNING log level. It does the same as the
core log_warning() procedure.

NXLog.logError(msg)
This method sends the message msg to the internal logger at ERROR log level. It does the same as the core
log_error() procedure.

123.10.1. Configuration
The NXLog process maintains only one JVM instance for all running om_java, im_java, or xm_java module
instances. This means all classes loaded by the ClassPath directive will be available to all running Java instances.

The om_java module accepts the following directives in addition to the common module directives.

ClassPath
This mandatory directive defines the path to the .class files or a .jar file. This directive should be defined at
least once within a module block.

VMOption
This optional directive defines a single Java Virtual Machine (JVM) option.

VMOptions
This optional block directive serves the same purpose as the VMOption directive, but allows specifying
multiple Java Virtual Machine (JVM) options, one per line.

JavaHome
This optional directive defines the path to the Java Runtime Environment (JRE). The path is used to search for
the libjvm shared library. If this directive is not defined, the Java home directory will be set to the build-time
value. Only one JRE can be defined for one or multiple NXLog Java instances. Defining multiple JRE instances
causes an error.

Run
This mandatory directive specifies the static method which should be called, within the class loaded via the ClassPath directive.

123.10.2. Example of Usage


Example 685. Using the om_java Module for Processing Logs

This is an example of a configuration for adding a timestamp field and writing log processing results to a
file. The run method of the Writer Java class is being used to handle the processing.

Below is the NXLog configuration.

nxlog.conf
1 <Output javaout>
2 Module om_java
3 # The Run directive includes the full method name with
4 # the nested and outer classes
5 # The mandatory parameter will be passed automatically
6 Run Output$Writer.run
7 ClassPath /tmp/Output.jar
8 </Output>

Below is the Java class with comments.

Output.java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Paths;
import java.nio.file.StandardOpenOption;
import java.text.SimpleDateFormat;
import java.util.Date;

public class Output {

    // The Output class utilizes a nested static class
    public static class Writer {
        static String fileName = "/tmp/output.txt";

        // yyyy and HH avoid the week-year and 12-hour pitfalls of YYYY and hh
        static SimpleDateFormat df = new SimpleDateFormat("MM.dd.yyyy.HH:mm:ss");

        // This is the method for the output module
        // The NXLog.Logdata ld parameter is mandatory
        static public void run(NXLog.Logdata ld) {
            try {
                // 1. Retrieves the $raw_event field from the NXLog data record
                // 2. Adds the timestamp field with the current time
                // 3. Writes the results into the file
                if (((String) ld.getField("raw_event")).contains("type=")) {
                    Files.write(Paths.get(fileName),
                            ("timestamp=" + df.format(new Date()) + " "
                                    + (String) ld.getField("raw_event") + "\n").getBytes(),
                            StandardOpenOption.CREATE, StandardOpenOption.APPEND);
                }
            } catch (IOException e) {
                e.printStackTrace();
            }
        }
    }
}

Below are the log samples before and after processing.

Input sample
type=CWD msg=audit(1489999368.711:35724): cwd="/root/nxlog"↵

type=PATH msg=audit(1489999368.711:35724): item=0 name="/root/test" inode=528869 dev=08:01
mode=040755 ouid=0 ogid=0 rdev=00:00↵

type=SYSCALL msg=audit(1489999368.711:35725): arch=c000003e syscall=2 success=yes exit=3
a0=12dcc40 a1=90800 a2=0 a3=0 items=1 ppid=15391 pid=12309 auid=0 uid=0 gid=0 euid=0 suid=0
fsuid=0 egid=0 sgid=0 fsgid=0 tty=pts4 ses=583 comm="ls" exe="/bin/ls" key=(null)↵

Output Sample
timestamp=02.20.2020.09:19:58 type=CWD msg=audit(1489999368.711:35724): cwd="/root/nxlog"
timestamp=02.20.2020.09:19:58 type=PATH msg=audit(1489999368.711:35724): item=0
name="/root/test" inode=528869 dev=08:01 mode=040755 ouid=0 ogid=0 rdev=00:00
timestamp=02.20.2020.09:19:58 type=SYSCALL msg=audit(1489999368.711:35725): arch=c000003e
syscall=2 success=yes exit=3 a0=12dcc40 a1=90800 a2=0 a3=0 items=1 ppid=15391 pid=12309 auid=0
uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=pts4 ses=583 comm="ls"
exe="/bin/ls" key=(null)

123.11. Kafka (om_kafka)


This module implements an Apache Kafka producer for publishing event records to a Kafka topic. See also the
im_kafka module.

WARNING: The om_kafka module is not supported as the underlying librdkafka library is unstable on AIX. Use it on IBM AIX at your own risk.

The module uses an internal persistent queue to back up event records that should be pushed to a Kafka broker.
Once the module receives an acknowledgement from the Kafka server that the message has been delivered
successfully, the module removes the corresponding message from the internal queue. If the module is unable
to deliver a message to a Kafka broker (for example, due to connectivity issues or the Kafka server being down),
this message is retained in the internal queue (including cases when NXLog restarts) and the module will attempt
to re-deliver the message again.

The number of re-delivery attempts can be specified by passing the message.send.max.retries property via
the Option directive (for example, Option message.send.max.retries 5). By default, the number of retries is
set to 2 and the time interval between two subsequent retries is 5 minutes. Thus, by altering the number of
retries, it is possible to control the total time for a message to remain in the internal queue. If a message cannot
be delivered within the allowed retry attempts, the message is dropped. The maximum size of the internal queue
is controlled by the LogqueueSize directive, which defaults to 100 messages. To increase the size of the internal
queue, follow these steps:

1. Specify the required queue size value using the LogqueueSize directive.
2. Set the directive Option queue.buffering.max.messages N. When this option is not set, the default value
used by librdkafka is 10000000.

For optimum performance, the LogqueueSize directive should be set to a value that is slightly larger than the
value used for the queue.buffering.max.messages option.
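
Putting these steps together, a configuration along the following lines could be used. The broker address, topic, and queue sizes below are illustrative placeholders, not recommended values:

```
<Output kafka>
    Module        om_kafka
    BrokerList    localhost:9092
    Topic         nxlog
    # Allow five delivery attempts before a message is dropped
    Option        message.send.max.retries 5
    # librdkafka's own queue; LogqueueSize is set slightly larger
    Option        queue.buffering.max.messages 200000
    LogqueueSize  210000
</Output>
```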

See the list of installer packages that provide the om_kafka module in the Available Modules chapter of the
NXLog User Guide.

123.11.1. Configuration
The om_kafka module accepts the following directives in addition to the common module directives. The
BrokerList and Topic directives are required.

BrokerList
This mandatory directive specifies the list of Kafka brokers to connect to for publishing logs. The list should
include ports and be comma-delimited (for example, localhost:9092,192.168.88.35:19092).

Topic
This mandatory directive specifies the Kafka topic to publish records to.

CAFile
This specifies the path of the certificate authority (CA) certificate, which will be used to check the certificate of
the remote brokers. CAFile is required if Protocol is set to ssl or sasl_ssl. To trust a self-signed certificate
presented by the remote (which is not signed by a CA), provide that certificate instead.

CertFile
This specifies the path of the certificate file to be used for the SSL handshake.

CertKeyFile
This specifies the path of the certificate key file to be used for the SSL handshake.

Compression
This directive specifies the compression types to use during transfer. Available types depend on the Kafka
library, and should include none (the default), gzip, snappy, and lz4.

KeyPass
With this directive, a password can be supplied for the certificate key file defined in CertKeyFile. This directive
is not needed for passwordless private keys.

Option
This directive can be used to pass a custom configuration property to the Kafka library (librdkafka). For
example, the group ID string can be set with Option group.id mygroup. This directive may be used more
than once to specify multiple options. For a list of configuration properties, see the librdkafka
CONFIGURATION.md file.

WARNING: Passing librdkafka configuration properties via the Option directive should be done with care, since these properties are used for the fine-tuning of librdkafka performance and may result in various side effects.

Partition
This optional integer directive specifies the topic partition to write to. If this directive is not given, messages
are sent without a partition specified.

Protocol
This optional directive specifies the protocol to use for connecting to the Kafka brokers. Accepted values
include plaintext (the default), ssl, sasl_plaintext and sasl_ssl. If Protocol is set to ssl or sasl_ssl,
then the CAFile directive must also be provided.

SASLKerberosServiceName
This directive specifies the Kerberos service name to be used for SASL authentication. The service name is
required for the sasl_plaintext and sasl_ssl protocols.

SASLKerberosPrincipal
This specifies the client’s Kerberos principal name for the sasl_plaintext and sasl_ssl protocols. This
directive is only available and mandatory on Linux/UNIX. See note below.

SASLKerberosKeytab
Specifies the path to the kerberos keytab file which contains the client’s allocated principal name. This
directive is only available and mandatory on Linux/UNIX.

NOTE: The SASLKerberosServiceName and SASLKerberosPrincipal directives are only available on
Linux/UNIX. On Windows, the login user’s principal name and credentials are used for
SASL/Kerberos authentication.

For details about configuring Apache Kafka brokers to accept SASL/Kerberos authentication
from clients, please follow the instructions provided by the librdkafka project:

• For Kafka brokers running on Linux and UNIX-likes: Using SASL with librdkafka
• For Kafka brokers running on Windows: Using SASL with librdkafka on Windows

123.11.2. Examples
Example 686. Using the om_kafka Module

This configuration sends events to a Kafka cluster using the brokers specified. Events are published to the
first partition of the nxlog topic.

nxlog.conf
 1 <Output out>
 2 Module om_kafka
 3 BrokerList localhost:9092,192.168.88.35:19092
 4 Topic nxlog
 5 Partition 0
 6 Protocol ssl
 7 CAFile /root/ssl/ca-cert
 8 CertFile /root/ssl/client_debian-8.pem
 9 CertKeyFile /root/ssl/client_debian-8.key
10 KeyPass thisisasecret
11 </Output>

123.12. Null (om_null)


Log messages sent to the om_null module instance are discarded; this module does not write its output
anywhere. It can be useful for creating a dummy route, for testing purposes, or for Scheduled NXLog code
execution. The om_null module accepts only the common module directives. See this example for usage.

See the list of installer packages that provide the om_null module in the Available Modules chapter of the NXLog
User Guide.

123.13. Oracle OCI (om_oci)


This module can write log messages to an Oracle database.

WARNING This module is deprecated, please use the om_odbc module instead.

123.13.1. Configuration
The om_oci module accepts the following directives in addition to the common module directives. The DBname,
Password, and UserName directives are required.

DBname
Name of the database to write the logs to.

Password
Password for authenticating to the database server.

UserName
Username for authenticating to the database server.

ORACLE_HOME
This optional directive specifies the directory of the Oracle installation.

SQL
An optional SQL statement to override the default.

123.13.2. Examples
Example 687. Storing Logs in an Oracle Database

This configuration reads BSD Syslog messages from a UNIX domain socket, parses the messages, and inserts
them into the database.

nxlog.conf
 1 <Extension _syslog>
 2 Module xm_syslog
 3 </Extension>
 4
 5 <Input uds>
 6 Module im_uds
 7 UDS /dev/log
 8 </Input>
 9
10 <Output oci>
11 Module om_oci
12 dbname //192.168.1.1:1521/orcl
13 username joe
14 password secret
15 SQL INSERT INTO log ("id", "facility", "severity", "hostname", \
16 "timestamp", "application", "message") \
17 VALUES (log_seq.nextval, $SyslogFacility, $SyslogSeverity, \
18 $Hostname, to_date($rcvd_timestamp, \
19 'YYYY-MM-DD HH24:MI:SS'), \
20 $SourceName, $Message)
21 Exec parse_syslog();
22 </Output>
23
24 <Route uds_to_oci>
25 Path uds => oci
26 </Route>

123.14. ODBC (om_odbc)


ODBC is a database independent abstraction layer for accessing databases. This module uses the ODBC API to
write data to database tables. There are several ODBC implementations available, and this module has been
tested with unixODBC on Linux (available in most major distributions) and Microsoft ODBC on Windows.

Setting up the ODBC data source is not in the scope of this document. Please consult the relevant ODBC guide:
the unixODBC documentation or the Microsoft ODBC Data Source Administrator guide. The data source must be
accessible by the user NXLog is running under.

NOTE: The "SQL Server" ODBC driver is unsupported and does not work. Instead, use the "SQL Server
Native Client" or the "ODBC Driver for SQL Server" to insert records into a Microsoft SQL Server
database.

In addition to the SQL directive, this module provides two functions, sql_exec() and sql_fetch(), which can be
executed using the Exec directive. This allows more complex processing rules to be used and also makes it
possible to insert records into more than one table.

NOTE: Both sql_exec() and sql_fetch() can take bind parameters as function arguments. It is
recommended to use bind parameters instead of concatenating the SQL statement with the
value. For example, these two are equivalent, but the first is dangerous due to the lack of
escaping:

$retval = sql_exec("INSERT INTO log (id) VALUES (" + $id + ")");

$retval = sql_exec("INSERT INTO log (id) VALUES (?)", $id);

See the list of installer packages that provide the om_odbc module in the Available Modules chapter of the NXLog
User Guide.
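
The danger the note above warns about can be demonstrated outside NXLog with Python's built-in sqlite3 module; any DB API with bind parameters behaves similarly. This is an analogy for illustration, not NXLog code:

```python
import sqlite3

# A value containing a quote breaks (or hijacks) a concatenated statement,
# while a bound parameter is escaped by the driver.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE log (msg TEXT)")

msg = "it's broken"  # the apostrophe terminates a hand-built string literal

try:
    # Unsafe: statement assembled by concatenation
    conn.execute("INSERT INTO log (msg) VALUES ('" + msg + "')")
except sqlite3.OperationalError as e:
    print("concatenation failed:", e)

# Safe: the value is passed as a bind parameter
conn.execute("INSERT INTO log (msg) VALUES (?)", (msg,))
print(conn.execute("SELECT msg FROM log").fetchone()[0])
```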

123.14.1. Configuration
The om_odbc module accepts the following directives in addition to the common module directives.

ConnectionString
This mandatory directive specifies the ODBC data source connection string.

SQL
This optional directive can be used to specify the INSERT statement to be executed for each log message. If
the statement fails for an event, it will be attempted again. If the SQL directive is not given, then an Exec
directive should be used to execute the sql_exec() function.

123.14.2. Functions
The following functions are exported by om_odbc.

string sql_error()
Return the error message from the last failed ODBC operation.

boolean sql_exec(string statement, varargs args)


Execute the SQL statement. Bind parameters should be passed separately after the statement string. Returns
FALSE if the SQL execution failed: sql_error() can be used to retrieve the error message.

boolean sql_fetch(string statement, varargs args)


Fetch the first row of the result set specified with a SELECT query in statement. The function will create or
populate fields named after the columns in the result set. Bind parameters should be passed separately after
the statement string. Returns FALSE if the SQL execution failed: sql_error() can be used to retrieve the error
message.

123.14.3. Examples

Example 688. Write Events to SQL Server

This configuration uses a DSN-less connection and SQL Authentication to connect to an SQL Server
database. Records are inserted into the dbo.test1 table’s timestamp and message columns, using the
$EventTime and $Message fields from the current event.

nxlog.conf
1 <Output mssql>
2 Module om_odbc
3 ConnectionString Driver={ODBC Driver 13 for SQL Server}; Server=MSSQL-HOST; \
4 UID=test; PWD=testpass; Database=TESTDB
5 SQL "INSERT INTO dbo.test1 (timestamp, message) VALUES (?,?)", \
6 $EventTime, $Message
7 </Output>

Example 689. Complex Write to an ODBC Data Source

In this example, the events read from the TCP input are inserted into the message column. The table has an
auto_increment id column, which is used to fetch and print the newly inserted line.

nxlog.conf
 1 <Input tcp>
 2 Module im_tcp
 3 Port 1234
 4 Host 0.0.0.0
 5 </Input>
 6
 7 <Output odbc>
 8 Module om_odbc
 9 ConnectionString DSN=mysql_ds;username=mysql;password=mysql;database=logdb;
10 <Exec>
11 if ( sql_exec("INSERT INTO log (facility, severity, hostname, timestamp, " +
12 "application, message) VALUES (?, ?, ?, ?, ?, ?)",
13 1, 2, "host", now(), "app", $raw_event) == TRUE )
14 {
15 if ( sql_fetch("SELECT max(id) as id from log") == TRUE )
16 {
17 log_info("ID: " + $id);
18 if ( sql_fetch("SELECT message from log WHERE id=?", $id) == TRUE )
19 {
20 log_info($message);
21 }
22 }
23 }
24 </Exec>
25 </Output>
26
27 <Route tcp_to_odbc>
28 Path tcp => odbc
29 </Route>

123.15. Perl (om_perl)


The Perl programming language is widely used for log processing and comes with a broad set of modules
bundled or available from CPAN. Code can be written more quickly in Perl than in C, and code execution is safer
because exceptions (croak/die) are handled properly and will only result in an unfinished attempt at log
processing rather than taking down the whole NXLog process.

This module makes it possible to execute Perl code in an output module that can handle the data directly in Perl.
See also the im_perl and xm_perl modules.

The module will parse the file specified in the PerlCode directive when NXLog starts the module. The Perl code
must implement the write_data subroutine which will be called by the module when there is data to process. This
subroutine is called for each event record and the event record is passed as an argument. To access event data,
the Log::Nxlog Perl module must be included, which provides the following methods.

NOTE: To use the om_perl module on Windows, a separate Perl environment must be installed, such as
Strawberry Perl. Currently, the om_perl module on Windows requires Strawberry Perl 5.28.0.1.

log_debug(msg)
Send the message msg to the internal logger on DEBUG log level. This method does the same as the
log_debug() procedure in NXLog.

log_info(msg)
Send the message msg to the internal logger on INFO log level. This method does the same as the log_info()
procedure in NXLog.

log_warning(msg)
Send the message msg to the internal logger on WARNING log level. This method does the same as the
log_warning() procedure in NXLog.

log_error(msg)
Send the message msg to the internal logger on ERROR log level. This method does the same as the
log_error() procedure in NXLog.

get_field(event, key)
Retrieve the value associated with the field named key. The method returns a scalar value if the key exists and
the value is defined, otherwise it returns undef.

For the full NXLog Perl API, see the POD documentation in Nxlog.pm. The documentation can be read with
perldoc Log::Nxlog.

See the list of installer packages that provide the om_perl module in the Available Modules chapter of the NXLog
User Guide.

123.15.1. Configuration
The om_perl module accepts the following directives in addition to the common module directives.

PerlCode
This mandatory directive expects a file containing valid Perl code. This file is read and parsed by the Perl
interpreter.

NOTE: On Windows, the Perl script invoked by the PerlCode directive must define the Perl library
paths at the beginning of the script to provide access to the Perl modules.

nxlog-windows.pl
use lib 'c:\Strawberry\perl\lib';
use lib 'c:\Strawberry\perl\vendor\lib';
use lib 'c:\Strawberry\perl\site\lib';
use lib 'c:\Program Files\nxlog\data';

Config
This optional directive allows you to pass configuration strings to the script file defined by the PerlCode
directive. This is a block directive and any text enclosed within <Config></Config> is submitted as a single
string literal to the Perl code.

NOTE: If you pass several values using this directive (for example, separated by the \n delimiter),
be sure to parse the string correspondingly inside the Perl code.

Call
This optional directive specifies the Perl subroutine to invoke. With this directive, you can call only specific
subroutines from your Perl code. If the directive is not specified, the default subroutine write_data is
invoked.

123.15.2. Examples
Example 690. Handling Event Data in om_perl

This output module sends events to the Perl script, which simply writes the data from the $raw_event field
into a file.

nxlog.conf
1 <Output out>
2 Module om_perl
3 PerlCode modules/output/perl/perl-output.pl
4 Call write_data1
5 </Output>

perl-output.pl
use strict;
use warnings;

use Log::Nxlog;

sub write_data1
{
  my ($event) = @_;
  my $rawevt = Log::Nxlog::get_field($event, 'raw_event');
  open(OUT, '>', 'tmp/output') || die("cannot open tmp/output: $!");
  print OUT $rawevt, "(from perl)", "\n";
  close(OUT);
}

123.16. Named Pipes (om_pipe)


This module allows log messages to be sent to named pipes on UNIX-like operating systems.

123.16.1. Configuration
The om_pipe module accepts the following directives in addition to the common module directives.

Pipe
This mandatory directive specifies the name of the output pipe file. The module checks if the specified pipe
file exists and creates it in case it does not. If the specified pipe file is not a named pipe, the module does not
start.

OutputType
This directive specifies the output data format. The default value is LineBased. See the OutputType directive in
the list of common module directives.

123.16.2. Examples
This example provides the NXLog configuration for forwarding messages to a named pipe on a UNIX-like
operating system.

Example 691. Forwarding Logs From a File to a Pipe

With this configuration, NXLog reads messages from a file and forwards them to a pipe. No additional
processing is done.

nxlog.conf
<Input in>↵
  Module im_file↵
  File "/tmp/input"↵
</Input>↵

<Output out>↵
  Module om_pipe↵
  Pipe "/tmp/output"↵
</Output>↵

123.17. Python (om_python)


This module provides support for forwarding log data with methods written in the Python language. The file
specified by the PythonCode directive should contain a write_data() method which is called by the om_python
module instance. See also the xm_python and im_python modules.

The Python script should import the nxlog module, and will have access to the following classes and functions.

nxlog.log_debug(msg)
Send the message msg to the internal logger at DEBUG log level. This function does the same as the core
log_debug() procedure.

nxlog.log_info(msg)
Send the message msg to the internal logger at INFO log level. This function does the same as the core
log_info() procedure.

nxlog.log_warning(msg)
Send the message msg to the internal logger at WARNING log level. This function does the same as the core
log_warning() procedure.

nxlog.log_error(msg)
Send the message msg to the internal logger at ERROR log level. This function does the same as the core
log_error() procedure.

class nxlog.Module
This class is instantiated by NXLog and can be accessed via the LogData.module attribute. This can be used to
set or access variables associated with the module (see the example below).

class nxlog.LogData
This class represents an event. It is instantiated by NXLog and passed to the write_data() method.

delete_field(name)
This method removes the field name from the event record.

field_names()
This method returns a list with the names of all the fields currently in the event record.

get_field(name)
This method returns the value of the field name in the event.

set_field(name, value)
This method sets the value of field name to value.

module
This attribute is set to the Module object associated with the LogData event.

See the list of installer packages that provide the om_python module in the Available Modules chapter of the
NXLog User Guide.
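
To illustrate the shape of this API, the sketch below drives a write_data() function through a minimal stand-in class. FakeLogData is a hypothetical stub for demonstration only, since the real LogData is instantiated by NXLog itself:

```python
# FakeLogData is a hypothetical stand-in mimicking the LogData methods
# documented above; it exists only so the call pattern can be shown.
class FakeLogData:
    def __init__(self, fields):
        self._fields = dict(fields)

    def get_field(self, name):
        return self._fields.get(name)

    def set_field(self, name, value):
        self._fields[name] = value

    def delete_field(self, name):
        self._fields.pop(name, None)

    def field_names(self):
        return list(self._fields)

def write_data(event):
    # What an om_python script might do: tag the record and drop a field
    event.set_field("processed", True)
    event.delete_field("internal_id")

event = FakeLogData({"raw_event": "test message", "internal_id": 42})
write_data(event)
print(event.field_names())
```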

123.17.1. Configuration
The om_python module accepts the following directives in addition to the common module directives.

PythonCode
This mandatory directive specifies a file containing Python code. The om_python instance will call a
write_data() function which must accept an nxlog.LogData object as its only argument.

Call
This optional directive specifies the Python method to invoke. With this directive, you can call only specific
methods from your Python code. If the directive is not specified, the default method write_data is invoked.

123.17.2. Examples

Example 692. Forwarding Events With om_python

This example shows an alerter implemented as an output module instance in Python. First, any event with
a normalized severity lower than 4/ERROR is dropped; see the Exec directive (xm_syslog and most other
modules set a normalized $SeverityValue field). Then the Python function generates a custom email and
sends it via SMTP.

nxlog.conf
1 <Output out>
2 Module om_python
3 PythonCode /opt/nxlog/etc/output.py
4 Exec if $SeverityValue < 4 drop();
5 </Output>

output.py (truncated)
from email.mime.text import MIMEText
import pprint
import smtplib
import socket

import nxlog

HOSTNAME = socket.gethostname()
FROM_ADDR = 'nxlog@{}'.format(HOSTNAME)
TO_ADDR = 'you@example.com'

def write_data(event):
  nxlog.log_debug('Python alerter received event')

  # Convert field list to dictionary


  all = {}
  for field in event.field_names():
  all.update({field: event.get_field(field)})
[...]

123.18. Raijin (om_raijin)


This module allows logs to be stored in a Raijin server. It will connect to the URL specified in the configuration in
either plain HTTP or HTTPS mode. Raijin accepts HTTP POST requests with multiple JSON records in the request
body, assuming that the target database table has already been created on the Raijin side. Note that Raijin only
supports flat JSON (i.e., a list of key-value pairs) and does not accept nested data structures such as arrays and
maps. Raijin currently does not support authorization/SSL, but the om_raijin module supports TLS since TLS can
be enabled with an HTTP proxy. For more information, see the Raijin website.

NOTE: This module requires the xm_json extension module to be loaded in order to convert the
payload to JSON. If the $raw_event field is empty, the fields will be automatically converted to
JSON. If $raw_event contains a valid JSON string, it will be sent as-is; otherwise, a JSON record will
be generated in the following structure: { "raw_event": "escaped raw_event content" }
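
Those three payload rules can be sketched in Python (an illustration of the described behavior, not the module's implementation):

```python
import json

def raijin_payload(raw_event, fields):
    """Build one JSON record following the rules in the note above."""
    if not raw_event:
        # Empty $raw_event: serialize the event fields to JSON
        return json.dumps(fields)
    try:
        # $raw_event already holds valid JSON: send it as-is
        json.loads(raw_event)
        return raw_event
    except ValueError:
        # Anything else: wrap the escaped content in a raw_event key
        return json.dumps({"raw_event": raw_event})
```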

See the list of installer packages that provide the om_raijin module in the Available Modules chapter of the NXLog
User Guide.

123.18.1. Configuration
The om_raijin module accepts the following directives in addition to the common module directives. The URL
directive is required.

DBName
This mandatory directive specifies the database name to insert data into.

DBTable
This mandatory directive specifies the database table to insert data into.

URL
This mandatory directive specifies the URL for the module to POST the event data. If multiple URL directives
are specified, the module works in a failover configuration. If a destination becomes unavailable, the module
automatically fails over to the next one. If the last destination becomes unavailable, the module will fail over
to the first destination. The module operates in plain HTTP or HTTPS mode depending on the URL provided. If
the port number is not explicitly indicated in the URL, it defaults to port 80 for HTTP and port 443 for HTTPS.
The URL should point to the _bulk endpoint, otherwise Raijin will return 400 Bad Request.

FlushInterval
The module will send an INSERT command to the defined endpoint after this amount of time in seconds,
unless FlushLimit is reached first. This defaults to 5 seconds.

FlushLimit
When the number of events in the output buffer reaches the value specified by this directive, the module will
send an INSERT command to the endpoint defined in URL. This defaults to 500 events. If the log volume is low,
the FlushInterval directive may trigger sending the INSERT request before this limit is reached, to ensure that
data is sent promptly.
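
The interplay of FlushLimit and FlushInterval can be sketched as follows (a simplified Python illustration of the batching behavior, not NXLog code):

```python
import time

class FlushBuffer:
    """Flush when the buffer reaches flush_limit events, or when
    flush_interval seconds have passed since the last flush."""

    def __init__(self, flush_interval=5.0, flush_limit=500, now=time.monotonic):
        self.flush_interval = flush_interval
        self.flush_limit = flush_limit
        self._now = now
        self.buffer = []
        self.batches = []            # batches "sent" so far
        self._last_flush = self._now()

    def add(self, event):
        self.buffer.append(event)
        if len(self.buffer) >= self.flush_limit:
            self._flush()

    def tick(self):
        # Called periodically; flushes a low-volume buffer after the interval
        if self.buffer and self._now() - self._last_flush >= self.flush_interval:
            self._flush()

    def _flush(self):
        self.batches.append(list(self.buffer))
        self.buffer.clear()
        self._last_flush = self._now()
```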

HTTPSAllowUntrusted
This boolean directive specifies that the remote connection should be allowed without certificate verification.
If set to TRUE, the connection will be allowed even if the remote HTTPS server presents an unknown or self-
signed certificate. The default value is FALSE: the remote HTTPS server must present a trusted certificate.

HTTPSCADir
This specifies the path to a directory containing certificate authority (CA) certificates, which will be used to
check the certificate of the remote HTTPS server. The certificate filenames in this directory must be in the
OpenSSL hashed format. A remote’s self-signed certificate (which is not signed by a CA) can also be trusted by
including a copy of the certificate in this directory.

HTTPSCAFile
This specifies the path of the certificate authority (CA) certificate, which will be used to check the certificate of
the remote HTTPS server. To trust a self-signed certificate presented by the remote (which is not signed by a
CA), provide that certificate instead.

HTTPSCertFile
This specifies the path of the certificate file to be used for the HTTPS handshake.

HTTPSCertKeyFile
This specifies the path of the certificate key file to be used for the HTTPS handshake.

HTTPSCRLDir
This specifies the path to a directory containing certificate revocation lists (CRLs), which will be consulted
when checking the certificate of the remote HTTPS server. The certificate filenames in this directory must be
in the OpenSSL hashed format.

HTTPSCRLFile
This specifies the path of the certificate revocation list (CRL) which will be consulted when checking the
certificate of the remote HTTPS server.

HTTPSKeyPass
With this directive, a password can be supplied for the certificate key file defined in HTTPSCertKeyFile. This
directive is not needed for passwordless private keys.

HTTPSSSLCipher
This optional directive can be used to set the permitted SSL cipher list, overriding the default. Use the format
described in the ciphers(1ssl) man page.

HTTPSSSLCiphersuites
This optional directive can be used to define the permitted SSL cipher list in case the HTTPSSSLProtocol
directive is set to TLSv1.3. Use the same format as in the HTTPSSSLCipher directive.

HTTPSSSLCompression
This boolean directive allows you to enable data compression when sending data over the network. The
compression mechanism is based on the zlib compression library. If the directive is not specified, it defaults
to FALSE (the compression is disabled).

NOTE: Some Linux packages (for example, Debian) use the OpenSSL library provided by the OS and
may not support the zlib compression mechanism. The module will emit a warning on
startup if the compression support is missing. The generic deb/rpm packages are bundled
with a zlib-enabled libssl library.

HTTPSSSLProtocol
This directive can be used to set the allowed SSL/TLS protocol(s). It takes a comma-separated list of values
which can be any of the following: SSLv2, SSLv3, TLSv1, TLSv1.1, TLSv1.2 and TLSv1.3. By default, the
TLSv1.2 and TLSv1.3 protocols are allowed. Note that the OpenSSL library shipped by Linux distributions may
not support SSLv2 and SSLv3, and these will not work even if enabled with this directive.

ProxyAddress
This optional directive is used to specify the IP address of the proxy server in case the module should connect
to the Raijin server through a proxy.

NOTE: The om_raijin module supports HTTP proxying only. SOCKS4/SOCKS5 proxying is not supported.

ProxyPort
This optional directive is used to specify the port number required to connect to the proxy server.

SNI
This optional directive specifies the host name used for Server Name Indication (SNI) in HTTPS mode.

123.18.2. Examples

Example 693. Sending Logs to a Raijin Server

This configuration reads log messages from file and forwards them to the Raijin server on localhost.

nxlog.conf
 1 <Extension json>
 2 Module xm_json
 3 </Extension>
 4
 5 <Output raijin>
 6 Module om_raijin
 7 URL http://localhost:9200/_bulk
 8 FlushInterval 2
 9 FlushLimit 100
10 </Output>

Example 694. Sending Logs to a Raijin Server with Failover

This configuration sends logs to a Raijin server in a failover configuration (multiple URLs defined). The
actual destinations used in this case are http://localhost:9200/_bulk,
http://192.168.1.1:9200/_bulk, and http://example.com:9200/_bulk.

nxlog.conf
 1 <Extension json>
 2 Module xm_json
 3 </Extension>
 4
 5 <Output raijin>
 6 Module om_raijin
 7 URL http://localhost:9200/_bulk
 8 URL http://192.168.1.1:9200/_bulk
 9 URL http://example.com:9200/_bulk
10 </Output>

123.19. Redis (om_redis)


This module can store data in a Redis server. It issues RPUSH commands using the Redis Protocol to send data.

The input counterpart, im_redis, can be used to retrieve data from a Redis server.

See the list of installer packages that provide the om_redis module in the Available Modules chapter of the NXLog
User Guide.
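
For reference, a command such as RPUSH travels to the server in the RESP wire format, an array of bulk strings; the encoding can be sketched as follows (illustration only, not the module's client code):

```python
def resp_command(*parts):
    """Encode a Redis command as a RESP array of bulk strings:
    "*<count>\r\n" followed by "$<len>\r\n<bytes>\r\n" per argument."""
    out = b"*%d\r\n" % len(parts)
    for part in parts:
        data = part if isinstance(part, bytes) else str(part).encode()
        out += b"$%d\r\n%s\r\n" % (len(data), data)
    return out

# An RPUSH of one event onto the default "nxlog" key
wire = resp_command("RPUSH", "nxlog", "event data")
```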

123.19.1. Configuration
The om_redis module accepts the following directives in addition to the common module directives. The Host
directive is required.

Host
This mandatory directive specifies the IP address or DNS hostname of the Redis server to connect to.

Channel
This directive is interpreted the same way as the Key directive (it can be an expression which evaluates to a
string), except that its evaluated value will be used as the name of the Redis channel to which this module will
publish records. The usage of this directive is mutually exclusive with the usage of the LPUSH, RPUSH, LPUSHX,
and RPUSHX commands in the Command directive.

Command
This optional directive specifies the command to be used. The possible commands are LPUSH, RPUSH (the
default), LPUSHX, RPUSHX and PUBLISH.

Key
This specifies the Key used by the RPUSH command. It must be a string type expression. If the expression in
the Key directive is not a constant string (it contains functions, field names, or operators), it will be evaluated
for each event to be inserted. The default is nxlog. The usage of this directive is mutually exclusive with the
usage of the PUBLISH command in the Command directive.

OutputType
See the OutputType directive in the list of common module directives. If this directive is unset, the default
Dgram formatter function is used, which writes the value of $raw_event without a line terminator. To
preserve structured data Binary can be used, but it must also be set on the other end.

Port
This specifies the port number of the Redis server. The default is port 6379.
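
As a sketch of how these directives combine, the following instance publishes events to a Redis channel; the host and channel names are placeholders. Because Channel is used, Command is set to PUBLISH (the push commands are mutually exclusive with Channel).

```
<Output redis>
    Module   om_redis
    Host     redis.example.com
    Port     6379
    Command  PUBLISH
    Channel  'nxlog'
</Output>
```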

123.20. Ruby (om_ruby)


This module provides support for forwarding log data with methods written in the Ruby language. See also the
xm_ruby and im_ruby modules.

The Nxlog module provides the following classes and methods.

Nxlog.log_info(msg)
Send the message msg to the internal logger at INFO log level. This method does the same as the core
log_info() procedure.

Nxlog.log_debug(msg)
Send the message msg to the internal logger at DEBUG log level. This method does the same as the core
log_debug() procedure.

Nxlog.log_warning(msg)
Send the message msg to the internal logger at WARNING log level. This method does the same as the core
log_warning() procedure.

Nxlog.log_error(msg)
Send the message msg to the internal logger at ERROR log level. This method does the same as the core
log_error() procedure.

class Nxlog.LogData
This class represents an event. It is instantiated by NXLog and passed to the method specified by the Call
directive.

field_names()
This method returns an array with the names of all the fields currently in the event record.

get_field(name)
This method returns the value of the field name in the event.

set_field(name, value)
This method sets the value of field name to value.

See the list of installer packages that provide the om_ruby module in the Available Modules chapter of the NXLog
User Guide.

123.20.1. Configuration
The om_ruby module accepts the following directives in addition to the common module directives. The
RubyCode directive is required.

RubyCode
This mandatory directive specifies a file containing Ruby code. The om_ruby instance will call the method
specified by the Call directive. The method must accept an Nxlog.LogData object as its only argument.

Call
This optional directive specifies the Ruby method to call. The default is write_data.

123.20.2. Examples
Example 695. Forwarding Events With om_ruby

This example uses a Ruby script to choose an output file according to the severity of the event. Normalized
severity fields are added by most modules; see, for example, the xm_syslog $SeverityValue field.

TIP See Using Dynamic Filenames for a way to implement this functionality natively.

nxlog.conf
1 <Output out>
2 Module om_ruby
3 RubyCode ./modules/output/ruby/proc2.rb
4 Call write_data
5 </Output>

proc2.rb
def write_data event
  if event.get_field('SeverityValue') >= 4
  Nxlog.log_debug('Writing out high severity event')
  File.open('tmp/high_severity', 'a') do |file|
  file.write("#{event.get_field('raw_event')}\n")
  file.flush
  end
  else
  Nxlog.log_debug('Writing out low severity event')
  File.open('tmp/low_severity', 'a') do |file|
  file.write("#{event.get_field('raw_event')}\n")
  file.flush
  end
  end
end

123.21. TLS/SSL (om_ssl)


The om_ssl module uses the OpenSSL library to provide an SSL/TLS transport. It behaves like the om_tcp module,
except that an SSL handshake is performed at connection time and the data is received over a secure channel.
Log messages transferred over plain TCP can be eavesdropped or even altered with a man-in-the-middle attack,
while the om_ssl module provides a secure log message transport.

See the list of installer packages that provide the om_ssl module in the Available Modules chapter of the NXLog
User Guide.

123.21.1. Configuration
The om_ssl module accepts the following directives in addition to the common module directives. The Host
directive is required.

Host
The module will connect to the IP address or hostname defined in this directive. If additional hosts are
specified on new lines, the module works in a failover configuration. If a destination becomes unavailable, the
module automatically fails over to the next one. If the last destination becomes unavailable, the module will
fail over to the first destination. Add the destination port number to the end of a host using a colon as a
separator (host:port). For each destination with no port number defined here, the port number specified in
the Port directive will be used. Port numbers defined here take precedence over any port number defined in
the Port directive.

Port
The module will connect to the port number on the destination host defined in this directive. This
configuration is only used for any destination that does not have a port number specified in the Host
directive. If no port is configured for a destination in either directive, the default port is used, which is port
514.

IMPORTANT The Port directive will become deprecated in this context from NXLog EE 6.0. Provide the port in Host.

AllowUntrusted
This boolean directive specifies that the connection should be allowed without certificate verification. If set to
TRUE the connection will be allowed even if the remote server presents an unknown or self-signed certificate.
The default value is FALSE: the remote socket must present a trusted certificate.

CADir
This specifies the path to a directory containing certificate authority (CA) certificates, which will be used to
check the certificate of the remote socket. The certificate filenames in this directory must be in the OpenSSL
hashed format. A remote’s self-signed certificate (which is not signed by a CA) can also be trusted by including
a copy of the certificate in this directory.

CAFile
This specifies the path of the certificate authority (CA) certificate, which will be used to check the certificate of
the remote socket. To trust a self-signed certificate presented by the remote (which is not signed by a CA),
provide that certificate instead.

CAThumbprint
This optional directive specifies the certificate thumbprint of the certificate authority (CA), which is used to
look up the CA certificate from the Windows certificate store. The hexadecimal fingerprint string can be
copied straight from Windows Certificate Manager (certmgr.msc), whitespaces are automatically removed.
This directive is only supported on Windows. This directive and the CADir and CAFile directives are mutually
exclusive.

CertFile
This specifies the path of the certificate file to be used for the SSL handshake.

CertKeyFile
This specifies the path of the certificate key file to be used for the SSL handshake.

CertThumbprint
This optional directive specifies the certificate thumbprint to be used for the SSL handshake. The hexadecimal
fingerprint string can be copied straight from Windows Certificate Manager (certmgr.msc), whitespaces are
automatically removed. This directive is only supported on Windows. This directive and the CertFile and
CertKeyFile directives are mutually exclusive.

CRLDir
This specifies the path to a directory containing certificate revocation lists (CRLs), which will be consulted
when checking the certificate of the remote socket. The certificate filenames in this directory must be in the
OpenSSL hashed format.

CRLFile
This specifies the path of the certificate revocation list (CRL) which will be used to check the certificate of the
remote socket against.

KeyPass
With this directive, a password can be supplied for the certificate key file defined in CertKeyFile. This directive
is not needed for passwordless private keys.

LocalPort
This optional directive specifies the local port number of the connection. If this is not specified a random high
port number will be used, which is not always ideal in firewalled network environments.

OutputType
See the OutputType directive in the list of common module directives. The default is LineBased_LF.

Reconnect
This optional directive sets the reconnect interval in seconds. If it is set, the module attempts to reconnect in
every defined second. If it is not set, the reconnect interval will start at 1 second and doubles on every
attempt. If the duration of the successful connection is greater than the current reconnect interval, then the
reconnect interval will be reset to 1 sec.

SNI
This optional directive specifies the host name used for Server Name Indication (SNI).

SSLCipher
This optional directive can be used to set the permitted SSL cipher list, overriding the default. Use the format
described in the ciphers(1ssl) man page.

SSLCiphersuites
This optional directive can be used to define the permitted SSL cipher list in case the SSLProtocol directive is
set to TLSv1.3. Use the same format as in the SSLCipher directive.

SSLCompression
This boolean directive allows you to enable data compression when sending data over the network. The
compression mechanism is based on the zlib compression library. If the directive is not specified, it defaults
to FALSE (the compression is disabled).

NOTE Some Linux packages (for example, Debian) use the OpenSSL library provided by the OS and may not
support the zlib compression mechanism. The module will emit a warning on startup if the compression
support is missing. The generic deb/rpm packages are bundled with a zlib-enabled libssl library.

SSLProtocol
This directive can be used to set the allowed SSL/TLS protocol(s). It takes a comma-separated list of values
which can be any of the following: SSLv2, SSLv3, TLSv1, TLSv1.1, TLSv1.2, and TLSv1.3. By default, the
TLSv1.2 and TLSv1.3 protocols are allowed. Note that the OpenSSL library shipped by Linux distributions
may not support SSLv2 and SSLv3, and these will not work even if enabled with this directive.

TCPNoDelay
This boolean directive is used to turn off the network optimization performed by Nagle’s algorithm. Nagle’s
algorithm is a network optimization tweak that tries to reduce the number of small packets sent out to the
network, by merging them into bigger frames, and by not sending them to the other side of the session
before receiving the ACK. If this directive is unset, the TCP_NODELAY socket option will not be set.

123.21.2. Procedures
The following procedures are exported by om_ssl.

reconnect();
Force a reconnection. This can be used from a Schedule block to periodically reconnect to the server.
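
For example, a Schedule block inside the module instance can invoke this procedure periodically; the interval, host, and certificate path below are placeholders for illustration.

```
<Output ssl>
    Module  om_ssl
    Host    logserver.example.com:6514
    CAFile  %CERTDIR%/ca.pem
    <Schedule>
        Every  1 hour
        Exec   reconnect();
    </Schedule>
</Output>
```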

123.21.3. Examples
Old syntax examples are included; they will become invalid with NXLog EE 6.0.

Example 696. Sending Binary Data to Another NXLog Agent

This configuration reads log messages from a socket and sends them in the NXLog binary format to another
NXLog agent.

nxlog.conf
 1 <Input uds>
 2 Module im_uds
 3 UDS tmp/socket
 4 </Input>
 5
 6 <Output ssl>
 7 Module om_ssl
 8 Host localhost:23456
 9 LocalPort 15014
10 CAFile %CERTDIR%/ca.pem
11 CertFile %CERTDIR%/client-cert.pem
12 CertKeyFile %CERTDIR%/client-key.pem
13 KeyPass secret
14 AllowUntrusted TRUE
15 OutputType Binary
16 </Output>
17
18 # old syntax
19 #<Output ssl>
20 # Module om_ssl
21 # Host localhost
22 # Port 23456
23 # CAFile %CERTDIR%/ca.pem
24 # CertFile %CERTDIR%/client-cert.pem
25 # CertKeyFile %CERTDIR%/client-key.pem
26 # KeyPass secret
27 # AllowUntrusted TRUE
28 # OutputType Binary
29 #</Output>

Example 697. Sending Logs to Another NXLog Agent with Failover

This configuration sends logs to another NXLog agent in a failover configuration (multiple Hosts defined).

nxlog.conf
1 <Output ssl>
2 Module om_ssl
3 Host localhost:23456
4 Host 192.168.1.1:23456
5 Host example.com:1514
6 LocalPort 15014
7 </Output>

123.22. TCP (om_tcp)


This module initiates a TCP connection to a remote host and transfers log messages. Or, in Listen mode, this
module accepts client connections and multiplexes data to all connected clients. The TCP transfer protocol
provides more reliable log transmission than UDP. If security is a concern, consider using the om_ssl module
instead.

See the list of installer packages that provide the om_tcp module in the Available Modules chapter of the NXLog
User Guide.

123.22.1. Configuration
The om_tcp module accepts the following directives in addition to the common module directives. The Host or
ListenAddr directive is required.

IMPORTANT Use either Host for connect mode or ListenAddr for listen mode.

Host
The module will connect to the IP address or hostname defined in this directive. If additional hosts are
specified on new lines, the module works in a failover configuration. If a destination becomes unavailable, the
module automatically fails over to the next one. If the last destination becomes unavailable, the module will
fail over to the first destination. Add the destination port number to the end of a host using a colon as a
separator (host:port). For each destination with no port number defined here, the port number specified in
the Port directive will be used. Port numbers defined here take precedence over any port number defined in
the Port directive.

ListenAddr
The module will listen for connections on this IP address or DNS hostname. The default is localhost. Add
the port number to listen on to the end of a host using a colon as a separator (host:port).

Port
The module will connect to the port number on the destination host defined in this directive. This
configuration is only used for any destination that does not have a port number specified in the Host
directive. If no port is configured for a destination in either directive, the default port is used, which is port
514. Alternatively, if Listen is set to TRUE, the module will listen for connections on this port.

IMPORTANT The Port directive will become deprecated in this context from NXLog EE 6.0. Provide the port in Host.

Listen
If TRUE, this boolean directive specifies that om_tcp should listen for connections at the local address
specified by the Host and Port directives rather than opening a connection to the address. The default is
FALSE: om_tcp will connect to the specified address.

IMPORTANT The Listen directive will become deprecated in this context from NXLog EE 6.0. Use either Host for connect (FALSE) mode or ListenAddr for listen (TRUE) mode.

LocalPort
This optional directive specifies the local port number of the connection. This directive only applies if Listen is
set to FALSE. If this is not specified a random high port number will be used, which is not always ideal in
firewalled network environments.

OutputType
See the OutputType directive in the list of common module directives. The default is LineBased_LF.

QueueInListenMode
If set to TRUE, this boolean directive specifies that events should be queued if no client is connected. If this
module’s buffer becomes full, the preceding module in the route will be paused or events will be dropped,
depending on whether FlowControl is enabled. This directive only applies if Listen is set to TRUE. The default
is FALSE: om_tcp will discard events if no client is connected.

Reconnect
This optional directive sets the reconnect interval in seconds. If it is set, the module attempts to reconnect in
every defined second. If it is not set, the reconnect interval will start at 1 second and doubles on every
attempt. If the duration of the successful connection is greater than the current reconnect interval, then the
reconnect interval will be reset to 1 sec.

TCPNoDelay
This boolean directive is used to turn off the network optimization performed by Nagle’s algorithm. Nagle’s
algorithm is a network optimization tweak that tries to reduce the number of small packets sent out to the
network, by merging them into bigger frames, and by not sending them to the other side of the session
before receiving the ACK. If this directive is unset, the TCP_NODELAY socket option will not be set.
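
As a sketch of the listen mode described above, the following instance accepts client connections and queues events while no client is connected; the address and port are arbitrary examples.

```
<Output tcp_listen>
    Module             om_tcp
    ListenAddr         0.0.0.0:1514
    QueueInListenMode  TRUE
</Output>
```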

123.22.2. Procedures
The following procedures are exported by om_tcp.

reconnect();
Force a reconnection. This can be used from a Schedule block to periodically reconnect to the server.

123.22.3. Examples
Old syntax examples are included; they will become invalid with NXLog EE 6.0.

Example 698. Transferring Raw Logs over TCP

With this configuration, NXLog will read log messages from a socket and forward them via TCP.

nxlog.conf
 1 <Input uds>
 2 Module im_uds
 3 UDS /dev/log
 4 </Input>
 5
 6 <Output tcp>
 7 Module om_tcp
 8 Host 192.168.1.1:1514
 9 </Output>
10
11 # old syntax
12 #<Output tcp>
13 # Module om_tcp
14 # Host 192.168.1.1
15 # Port 1514
16 #</Output>
17
18 <Route uds_to_tcp>
19 Path uds => tcp
20 </Route>

Example 699. Sending Logs over TCP with Failover

This configuration sends logs via TCP in a failover configuration (multiple Hosts defined). The actual
destinations used in this case are localhost:1514, 192.168.1.1:1514, and example.com:1234.

nxlog.conf
 1 <Output tcp>
 2 Module om_tcp
 3 Host localhost:1514
 4 Host 192.168.1.1:1514
 5 Host example.com:1234
 6 </Output>

123.23. UDP (om_udp)


This module sends log messages as UDP datagrams to the address and port specified. UDP is the transport
protocol of the legacy BSD Syslog standard as described in RFC 3164, so this module can be particularly useful to
send messages to devices or Syslog daemons which do not support other transports.

See the list of installer packages that provide the om_udp module in the Available Modules chapter of the NXLog
User Guide.

123.23.1. Configuration
The om_udp module accepts the following directives in addition to the common module directives. The Host
directive is required.

Host
The module will connect to the IP address or hostname defined in this directive. If additional hosts are
specified on new lines, the module works in a failover configuration. If a destination becomes unavailable, the
module automatically fails over to the next one. If the last destination becomes unavailable, the module will
fail over to the first destination. Add the destination port number to the end of a host using a colon as a
separator (host:port). For each destination with no port number defined here, the port number specified in
the Port directive will be used. Port numbers defined here take precedence over any port number defined in
the Port directive.

WARNING Because of the nature of the UDP protocol and how ICMP messages are handled by various
network devices, the failover functionality in this module is considered "best effort". Detecting hosts going
offline is not supported. Detecting the receiving service being stopped while the host stays up is supported.

Port
The module will connect to the port number on the destination host defined in this directive. This
configuration is only used for any destination in the Host directive which does not have a port specified. If no
port is configured for a destination in either directive, the default port is used, which is port 514.

IMPORTANT The Port directive will become deprecated in this context from NXLog EE 6.0. Provide the port in Host.

LocalPort
This optional directive specifies the local port number of the connection. If this is not specified a random high
port number will be used, which is not always ideal in firewalled network environments.

OutputType
See the OutputType directive in the list of common module directives. If this directive is not specified, the
default is Dgram.

Reconnect
This optional directive sets the reconnect interval in seconds. If it is set, the module attempts to reconnect in
every defined second. If it is not set, the reconnect interval will start at 1 second and doubles on every
attempt. If the duration of the successful connection is greater than the current reconnect interval, then the
reconnect interval will be reset to 1 sec.

SockBufSize
This optional directive sets the socket buffer size (SO_SNDBUF) to the value specified. If this is not set, the
operating system default is used.

123.23.2. Examples
Old syntax examples are included; they will become invalid with NXLog EE 6.0.

Example 700. Sending Raw Syslog over UDP

This configuration reads log messages from a socket and forwards them via UDP.

nxlog.conf
 1 <Input uds>
 2 Module im_uds
 3 UDS /dev/log
 4 </Input>
 5
 6 <Output udp>
 7 Module om_udp
 8 Host 192.168.1.1:1514
 9 LocalPort 1555
10 </Output>
11 # old syntax
12 #<Output udp>
13 # Module om_udp
14 # Host 192.168.1.1
15 # Port 1514
16 #</Output>
17
18 <Route uds_to_udp>
19 Path uds => udp
20 </Route>

Example 701. Sending Logs over UDP with Failover

This configuration sends logs via UDP in a failover configuration (multiple Hosts defined). The actual
destinations used in this case are localhost:1514, 192.168.1.1:1514, and example.com:1234.

nxlog.conf
1 <Output udp>
2 Module om_udp
3 Host localhost:1514
4 Host 192.168.1.1:1514
5 Host example.com:1234
6 </Output>

123.24. UDP with IP Spoofing (om_udpspoof)


This module sends log messages as UDP datagrams to the address and port specified and allows the source
address in the UDP packet to be spoofed in order to make the packets appear as if they were sent from another
host. This is particularly useful in situations where log data needs to be forwarded to another server and the
server uses the client address to identify the data source. With IP spoofing the UDP packets will contain the IP
address of the originating client that produced the message instead of the forwarding server.

This module is very similar to the om_udp module and can be used as a drop-in replacement. The SpoofAddress
configuration directive can be used to set the address if necessary. The UDP datagram will be sent with the local
IP address if the IP address to be spoofed is invalid. The source port in the UDP datagram will be set to the port
number of the local connection (the port number is not spoofed).

The network input modules (im_udp, im_tcp, and im_ssl) all set the $MessageSourceAddress field, and this value
will be used when sending the UDP datagrams (unless SpoofAddress is explicitly set to something else). This
allows logs to be collected over reliable and secure transports (like SSL), while the om_udpspoof module is only
used for forwarding to the destination server that requires spoofed UDP input.

See the list of installer packages that provide the om_udpspoof module in the Available Modules chapter of the
NXLog User Guide.

123.24.1. Configuration
The om_udpspoof module accepts the following directives in addition to the common module directives. The Host
directive is required.

Host
The module will send UDP datagrams to this IP address or DNS hostname. Add the destination port number
to the end of a host using a colon as a separator (host:port).

Port
The module will send UDP packets to this port. The default port is 514 if this directive is not specified.

IMPORTANT The Port directive will become deprecated in this context from NXLog EE 6.0. Provide the port in Host.

LocalPort
This optional directive specifies the local port number of the connection. If this is not specified a random high
port number will be used which is not always ideal in firewalled network environments.

MTU
This directive can be used to specify the maximum transfer size of the IP data fragments. If this value exceeds
the MTU size of the sending interface, an error may occur and the packet be dropped. The default MTU value
is 1500.

OutputType
See the OutputType directive in the list of common module directives. If this directive is not specified, the
default is Dgram.

SockBufSize
This optional directive sets the socket buffer size (SO_SNDBUF) to the value specified. If this is not set, the
operating system default is used.

SpoofAddress
This directive is optional. The IP address rewrite takes place depending on how this directive is specified.

Directive not specified


The IP address stored in the $MessageSourceAddress field is used so the module should work
automatically when the SpoofAddress directive is not specified.

Constant literal value


The literal value may be a string or an ipaddr type. For example, SpoofAddress '10.0.0.42' and
SpoofAddress 10.0.0.42 are equivalent.

Expression
The expression specified here will be evaluated for each message to be sent. Normally this can be a field
name, but anything is accepted which evaluates to a string or an ipaddr type. For example, SpoofAddress
$MessageSourceAddress has the same effect as when SpoofAddress is not set.

123.24.2. Examples
Old syntax examples are included; they will become invalid with NXLog EE 6.0.

Example 702. Simple Forwarding with IP Address Spoofing

The im_tcp module will accept log messages via TCP and will set the $MessageSourceAddress field for each
event. This value will be used by om_udpspoof to set the UDP source address when sending the data to
logserver via UDP.

nxlog.conf
 1 <Input tcp>
 2 Module im_tcp
 3 ListenAddr 0.0.0.0:1514
 4 </Input>
 5
 6 <Output udpspoof>
 7 Module om_udpspoof
 8 Host logserver.example.com:1514
 9 </Output>
10
11 # old syntax
12 #<Output udpspoof>
13 # Module om_udpspoof
14 # Host logserver.example.com
15 # Port 1514
16 #</Output>
17
18 <Route tcp_to_udpspoof>
19 Path tcp => udpspoof
20 </Route>

Example 703. Forwarding Log Messages with Spoofed IP Address from Multiple Sources

This configuration accepts log messages via TCP and UDP, and also reads messages from a file. Both im_tcp
and im_udp set the $MessageSourceAddress field for incoming messages, and in both cases this is used to
set $sourceaddr. The im_file module instance is configured to set the $sourceaddr field to 10.1.2.3 for
all log messages. Finally, the om_udpspoof output module instance is configured to read the value of the
$sourceaddr field for spoofing the UDP source address.

nxlog.conf (truncated)
 1 <Input tcp>
 2 Module im_tcp
 3 Host 0.0.0.0:1514
 4 Exec $sourceaddr = $MessageSourceAddress;
 5 </Input>
 6
 7 <Input udp>
 8 Module im_udp
 9 Host 0.0.0.0:1514
10 Exec $sourceaddr = $MessageSourceAddress;
11 </Input>
12
13 <Input file>
14 Module im_file
15 File '/var/log/myapp.log'
16 Exec $sourceaddr = 10.1.2.3;
17 </Input>
18
19 <Output udpspoof>
20 Module om_udpspoof
21 # destination port: 1514
22 Host 10.0.0.1:1514
23 # originating port: 15000
24 LocalPort 15000
25 SpoofAddress $sourceaddr
26 </Output>
27
28 # old syntax
29 [...]

123.25. Unix Domain Sockets (om_uds)


This module allows log messages to be sent to a Unix domain socket. Unix systems traditionally have a /dev/log
or similar socket used by the system logger to accept messages. Applications use the syslog(3) system call to
send messages to the system logger. NXLog can use this module to send log messages to another Syslog
daemon via the socket.

NOTE This module supports SOCK_DGRAM type sockets only. SOCK_STREAM type sockets may be
supported in the future.

See the list of installer packages that provide the om_uds module in the Available Modules chapter of the NXLog
User Guide.

123.25.1. Configuration
The om_uds module accepts the following directives in addition to the common module directives.

UDS
This specifies the path of the Unix domain socket. The default is /dev/log.

UDSType
This directive specifies the domain socket type. Supported values are dgram, stream, and auto. The default is
auto.

OutputType
See the OutputType directive in the list of common module directives. If UDSType is set to dgram, or is set to
auto and a SOCK_DGRAM type socket is detected, this defaults to Dgram. If UDSType is set to stream, or is set
to auto and a SOCK_STREAM type socket is detected, this defaults to LineBased.

123.25.2. Examples
Example 704. Using the om_uds Module

This configuration reads log messages from a file, adds BSD Syslog headers with default fields, and writes
the messages to a socket.

nxlog.conf
 1 <Extension syslog>
 2 Module xm_syslog
 3 </Extension>
 4
 5 <Input file>
 6 Module im_file
 7 File "/var/log/custom_app.log"
 8 </Input>
 9
10 <Output uds>
11 Module om_uds
12 # Defaulting Syslog fields and creating Syslog output
13 Exec parse_syslog_bsd(); to_syslog_bsd();
14 UDS /dev/log
15 </Output>
16
17 <Route file_to_uds>
18 Path file => uds
19 </Route>

123.26. WebHDFS (om_webhdfs)


This module allows logs to be stored in Hadoop HDFS using the WebHDFS protocol.

See the list of installer packages that provide the om_webhdfs module in the Available Modules chapter of the
NXLog User Guide.

123.26.1. Configuration
The om_webhdfs module accepts the following directives in addition to the common module directives. The File
and URL directives are required.

File
This mandatory directive specifies the name of the destination file. It must be a string type expression. If the
expression in the File directive is not a constant string (it contains functions, field names, or operators), it will
be evaluated before each request is dispatched to the WebHDFS REST endpoint (and after the Exec is
evaluated). Note that the filename must be quoted to be a valid string literal, unlike in other directives which
take a filename argument.

URL
This mandatory directive specifies the URL of the WebHDFS REST endpoint where the module should POST
the event data. The module operates in plain HTTP or HTTPS mode depending on the URL provided, and
connects to the hostname specified in the URL. If the port number is not explicitly indicated in the URL, it
defaults to port 80 for HTTP and port 443 for HTTPS.

FlushInterval
The module will send the data to the endpoint defined in URL after this amount of time in seconds, unless
FlushLimit is reached first. This defaults to 5 seconds.

FlushLimit
When the number of events in the output buffer reaches the value specified by this directive, the module will
send the data to the endpoint defined in URL. This defaults to 500 events. The FlushInterval may trigger
sending the write request before this limit is reached if the log volume is low to ensure that data is sent
promptly.

HTTPSAllowUntrusted
This boolean directive specifies that the connection should be allowed without certificate verification. If set to
TRUE, the connection will be allowed even if the remote HTTPS server presents an unknown or self-signed
certificate. The default value is FALSE: the remote must present a trusted certificate.

HTTPSCADir
This specifies the path to a directory containing certificate authority (CA) certificates, which will be used to
check the certificate of the remote HTTPS server. The certificate filenames in this directory must be in the
OpenSSL hashed format. A remote’s self-signed certificate (which is not signed by a CA) can also be trusted by
including a copy of the certificate in this directory.

HTTPSCAFile
This specifies the path of the certificate authority (CA) certificate, which will be used to check the certificate of
the remote HTTPS server. To trust a self-signed certificate presented by the remote (which is not signed by a
CA), provide that certificate instead.

HTTPSCAThumbprint
This optional directive specifies the certificate thumbprint of the certificate authority (CA), which is used to
look up the CA certificate from the Windows certificate store. The hexadecimal fingerprint string can be
copied straight from Windows Certificate Manager (certmgr.msc); whitespace is automatically removed.
This directive is only supported on Windows. This directive and the HTTPSCADir and HTTPSCAFile directives
are mutually exclusive.

HTTPSCertFile
This specifies the path of the certificate file to be used for the HTTPS handshake.

HTTPSCertKeyFile
This specifies the path of the certificate key file to be used for the HTTPS handshake.

HTTPSCertThumbprint
This optional directive specifies the certificate thumbprint to be used for the SSL handshake. The hexadecimal
fingerprint string can be copied straight from Windows Certificate Manager (certmgr.msc); whitespace is
automatically removed. This directive is only supported on Windows. This directive and the HTTPSCertFile and
HTTPSCertKeyFile directives are mutually exclusive.

HTTPSCRLDir
This specifies the path to a directory containing certificate revocation lists (CRLs), which will be consulted
when checking the certificate of the remote HTTPS server. The certificate filenames in this directory must be
in the OpenSSL hashed format.

HTTPSCRLFile
This specifies the path of the certificate revocation list (CRL), which will be consulted when checking the
certificate of the remote HTTPS server.

HTTPSKeyPass
With this directive, a password can be supplied for the certificate key file defined in HTTPSCertKeyFile. This
directive is not needed for passwordless private keys.

HTTPSSSLCipher
This optional directive can be used to set the permitted SSL cipher list, overriding the default. Use the format
described in the ciphers(1ssl) man page.

HTTPSSSLCiphersuites
This optional directive can be used to define the permitted SSL cipher list in case the HTTPSSSLProtocol
directive is set to TLSv1.3. Use the same format as in the HTTPSSSLCipher directive.

HTTPSSSLCompression
This boolean directive allows you to enable data compression when sending data over the network. The
compression mechanism is based on the zlib compression library. If the directive is not specified, it defaults
to FALSE (the compression is disabled).

NOTE
Some Linux packages (for example, Debian) use the OpenSSL library provided by the OS and may not
support the zlib compression mechanism. The module will emit a warning on startup if compression
support is missing. The generic deb/rpm packages are bundled with a zlib-enabled libssl library.

HTTPSSSLProtocol
This directive can be used to set the allowed SSL/TLS protocol(s). It takes a comma-separated list of values
which can be any of the following: SSLv2, SSLv3, TLSv1, TLSv1.1, TLSv1.2 and TLSv1.3. By default, the
TLSv1.2 and TLSv1.3 protocols are allowed. Note that the OpenSSL library shipped by Linux distributions may
not support SSLv2 and SSLv3, and these will not work even if enabled with this directive.
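The HTTPS directives above might be combined as follows; this is a sketch in which the host name, file name, and certificate path are placeholders:

```
<Output hdfs_tls>
    Module           om_webhdfs
    URL              https://hdfsserver.domain.com/
    File             "myfile"
    # Trust only this CA and restrict the allowed protocol versions
    HTTPSCAFile      /opt/nxlog/cert/rootCA.pem
    HTTPSSSLProtocol TLSv1.2, TLSv1.3
</Output>
```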

QueryParam
This configuration option can be used to specify additional HTTP query parameters, such as blocksize. This
option may be used more than once to define multiple parameters:

QueryParam blocksize 42
QueryParam destination /foo

123.26.2. Examples

Example 705. Sending Logs to a WebHDFS Server

This example output module instance forwards messages to the specified URL and file using the WebHDFS
protocol.

nxlog.conf
1 <Output hdfs>
2 Module om_webhdfs
3 URL http://hdfsserver.domain.com/
4 File "myfile"
5 QueryParam blocksize 42
6 QueryParam destination /foo
7 </Output>

123.27. ZeroMQ (om_zmq)


This module provides message transport over ZeroMQ, a scalable high-throughput messaging library.

See im_zmq for the corresponding input module.

See the list of installer packages that provide the om_zmq module in the Available Modules chapter of the NXLog
User Guide.

123.27.1. Configuration
The om_zmq module accepts the following directives in addition to the common module directives. The Address,
ConnectionType, Port, and SocketType directives are required.

Address
This directive specifies the ZeroMQ socket address.

ConnectionType
This mandatory directive specifies the underlying transport protocol. It may be one of the following: TCP, PGM,
or EPGM.

Port
This directive specifies the ZeroMQ socket port.

SocketType
This mandatory directive defines the type of the socket to be used. It may be one of the following: REP,
ROUTER, PUB, XPUB, or PUSH. This must be set to PUB if ConnectionType is set to PGM or EPGM.

Interface
This directive specifies the ZeroMQ socket interface.

Listen
If this boolean directive is set to TRUE, om_zmq will bind to the Address specified and listen for connections. If
FALSE, om_zmq will connect to the Address. The default is FALSE.

OutputType
See the OutputType directive in the list of common module directives. The default value is Dgram.

SockOpt
This directive can be used to set ZeroMQ socket options. For example, SockOpt ZMQ_BACKLOG 2000. This
directive may be used more than once to set multiple options.

123.27.2. Examples
Example 706. Using the om_zmq Module

This example configuration reads log messages from a file and forwards them via a ZeroMQ PUSH socket over
TCP.

nxlog.conf
 1 <Input file>
 2 Module im_file
 3 File "/var/log/messages"
 4 </Input>
 5
 6 <Output zmq>
 7 Module om_zmq
 8 SocketType PUSH
 9 ConnectionType TCP
10 Address 10.0.0.1
11 Port 1514
12 </Output>
13
14 <Route file_to_zmq>
15 Path file => zmq
16 </Route>
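As noted in the directive reference above, the PGM and EPGM transports require a PUB socket. A hedged sketch of a multicast publisher, in which the interface name, multicast address, and port are assumptions:

```
<Output zmq_pub>
    Module         om_zmq
    SocketType     PUB        # mandatory when ConnectionType is PGM or EPGM
    ConnectionType EPGM
    Interface      eth0
    Address        239.192.1.1
    Port           5514
</Output>
```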

NXLog Manager

Chapter 124. Introduction
Managing a log collection system where agents are scattered around the whole network can be a daunting task,
especially if there are multiple teams in charge of each system.

NXLog Manager is a log management solution which provides a web-based administration interface to
configure all parameters for log collection and enables the log management administrator to efficiently
monitor and manage the NXLog agents securely from a central console. NXLog Manager can operate in
clustered mode if the network topology requires multiple manager nodes.

This document provides information about the following topics:

• Installation steps for the core NXLog Manager system.
• Installation steps for the NXLog agents to be deployed on the client machines.
• Details about each component of the NXLog Manager system accessible from the web interface.

124.1. Requirements
To use and administer NXLog Manager, the user is expected to be familiar with the following:

• Using Mozilla Firefox or a compatible web browser.
• Regular expressions.
• The concept of X.509 certificates and public key cryptography.
• Log management basics.
• Networking concepts.

The web interface supports the following browsers:

• Mozilla Firefox 3.5 or higher.
• Google Chrome 10 or higher.

Microsoft Internet Explorer is not supported due to known problems.

124.2. Architecture
NXLog Manager web application
NXLog Manager is a Java based web application which can communicate with the NXLog agents.

NXLog
NXLog is the log collector with no frontend. NXLog can be used in both server and client mode. When running
as a client (agent), NXLog will collect local log sources and will forward the data over the network. NXLog can
also operate as a server to store messages locally or as a relay to forward messages to another instance.

The architecture of NXLog Manager allows log collection to function even if NXLog Manager is not running or the
control channel is not functional, thus an NXLog Manager upgrade will not cause any interruption to the log
collection process.

Chapter 125. System Requirements
In order to function efficiently, NXLog Manager requires a certain amount of available system resources on the
host system. The table below provides general guidelines to use when planning an NXLog Manager installation.
Actual system requirements will vary based on the number of agents to be managed; therefore, both minimum
and recommended requirements are listed. Always thoroughly test a deployment to verify that the desired
performance can be achieved with the system resources available.

Table 68. NXLog Manager Requirements

                  Minimum    Recommended
Processor cores   2          2
Memory/RAM        2048 MB    4096 MB
Disk space        300 MB     1024 MB

NOTE
The NXLog Manager memory/RAM requirement increases by 2 MB for each managed agent. For
example, if an NXLog Manager instance monitors 100 agents, the recommended memory/RAM
requirement is 4296 MB. These requirements are in addition to the operating system’s
requirements, and they should be combined cumulatively with the NXLog Enterprise Edition’s
System Requirements for systems running both NXLog Enterprise Edition and NXLog Manager.
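The sizing rule in the note above can be expressed as a quick calculation; the agent count is an example value:

```shell
# Recommended RAM = 4096 MB base + 2 MB per managed agent.
AGENTS=100
echo "$((4096 + 2 * AGENTS)) MB"
```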

Chapter 126. Supported Platforms
NXLog Manager requires either OpenJDK 7 JRE or OpenJDK 8 JRE and runs on the following GNU/Linux operating
systems.

Table 69. Supported GNU/Linux Platforms

Operating System Virtual Machine Version


RedHat Enterprise Linux 6 java-1.7.0-openjdk-headless

RedHat Enterprise Linux 7 java-1.7.0-openjdk-headless

CentOS 6 java-1.8.0-openjdk-headless

CentOS 7 java-1.8.0-openjdk-headless

Debian 8 openjdk-7-jre

Debian 9 openjdk-8-jre

Ubuntu 16.04 openjdk-7-jre, openjdk-8-jre

Ubuntu 18.04 openjdk-8-jre

NOTE
NXLog Manager is only supported on the 64-bit version of Java.

Chapter 127. Installation
127.1. Installing on Debian Wheezy
Install the DEB package with the following commands:

# dpkg -i nxlog-manager_X.X.XXX_amd64.deb
# apt-get -f install

127.1.1. Requirements
• nxlog-manager-4.x requires openjdk-7-jre.
• nxlog-manager-5.x requires either openjdk-7-jre or openjdk-8-jre.

If Java is not installed or the correct version of Java is not selected, NXLog Manager will refuse to start. To select
the default version of Java on your system, use the command:

# update-alternatives --config java

WARNING
Make sure that your hostname and DNS settings are set up correctly to avoid problems
with NXLog Manager. Refer to host setup common issues for more information.

127.2. Installing on RHEL 6 & 7


NOTE
The .rpm package of NXLog Manager is signed with a PGP key. For details on how to verify your
package, see the Digital Signature Verification section in the NXLog User Guide.

Install the .rpm package with the following command:

# yum install nxlog-manager-X.X.XXXX-1.noarch.rpm

NOTE
nxlog-manager-4.x requires java-1.7.0-openjdk and nxlog-manager-5.x requires either
java-1.7.0-openjdk or java-1.8.0-openjdk. If Java is not installed or the correct version of Java
is not selected, NXLog Manager will refuse to start.

To select the default version of Java on your system, use the command:

# alternatives --config java

To access the web interface from another host, the firewall rules should allow access to port 9090 from the
external network:

# iptables -A INPUT -p tcp --dport 9090 -j ACCEPT

Or completely remove all firewall rules while testing:

# iptables -F

WARNING
Make sure that your hostname and DNS settings are set up correctly to avoid problems
with NXLog Manager. Refer to host setup common issues for more information.

127.3. Installing as Docker Application


To install NXLog Manager as a Docker application, Docker Engine and the Docker Compose tool are required. The
procedure is identical on all platforms supported by Docker (Linux, Windows, and macOS). Extract the files from
the compressed Docker archive.

$ tar zxf nxlog-manager-X.X.XXXX-docker.tar.gz

To build, (re)create, and start the container, execute the following command.

$ docker-compose up -d

By default the Dockerized NXLog Manager listens on port 9090. The port configuration is defined as
HOST:CONTAINER. To change this setting, edit the docker-compose.yml file by modifying the HOST port number
preceding the colon (9080 in the example below). The port number for the CONTAINER, following the colon,
should be left at 9090.

docker-compose.yml
ports:
  - "4041:4041"
  - "9080:9090"
restart: always

For the configuration change to take effect, the Docker container needs to be stopped and started with the
following commands.

$ sudo docker-compose down

$ sudo docker-compose up

NOTE
The NXLog Manager Docker container includes MySQL, so there is no need to install and
configure MySQL separately. After installing, you may proceed with the NXLog Manager
configuration.

127.4. Deploying on AWS


NXLog Manager can be deployed in a cloud environment such as Amazon Web Services. Cloud services can be
easily leveraged in order to provide high availability and disaster recovery capabilities. In such a scenario, NXLog
Manager will be deployed in a distributed setup across multiple availability zones.

127.4.1. Setting up NXLog Manager on AWS
1. To start with, the database needs to be prepared. The Amazon Relational Database Service (RDS) works well
with NXLog Manager. For data redundancy, create a database (MySQL or MariaDB) in Multi-AZ deployment
mode. This option creates a standby replica in a second availability zone.

2. Install NXLog Manager from the DEB or RPM package, depending on the operating system. At least the EC2
"t2.small" instance type is recommended.
3. Edit /opt/nxlog-manager/db_init/db.conf. Add the RDS hostname and the database master
username/password to the MYSQLOPTS variable.

MYSQLOPTS="-h RDS_INSTANCE.rds.amazonaws.com -P 3306 -u DB_MASTER_USER -pDB_PASSWORD"

4. Execute the database initialization script. This should only be done once for the cluster!

# cd /opt/nxlog-manager/db_init
# ./dbinit.sh

5. Configure NXLog Manager to run in a distributed manner by editing the INSTANCE_MODE in /opt/nxlog-
manager/conf/nxlog-manager.conf.

INSTANCE_MODE=distributed-manager

6. In /opt/nxlog-manager/conf/jetty-env.xml, provide details for Java Message Service (JMS)


communication.
a. Set jmsBrokerAddress to the instance private IP, used for communication inside the VPC.

<Set name="jmsBrokerAddress">172.31.9.100</Set>

b. Set database details in jdbcUrl by providing the RDS endpoint.

<Set name="jdbcUrl">jdbc:mysql://RDS_INSTANCE.rds.amazonaws.com:3306/nxlog-manager5?useUnicode=true&amp;characterEncoding=UTF-8&amp;characterSetResults=UTF-8&amp;autoReconnect=true</Set>

c. Update Log4ensicsDatabaseAccess.

<New class="co.nxlog.manager.data.bean.common.DatabaseAccessBean">
  <Set name="databaseName">nxlog-manager5</Set>
  <Set name="username">nxlog-manager5</Set>
  <Set name="password">nxlog-manager5</Set>
  <Set name="location">RDS_INSTANCE.rds.amazonaws.com:3306</Set>
</New>

7. From the EC2 service dashboard, go to Security Groups. Allow TCP traffic on ports 20000 and 34200-34300
to allow JMS communications inside the security group created for NXLog Manager EC2 instances. Please
note that the security group ID should be used in the Source field.

8. The nxlog-manager service can now be started.

# service nxlog-manager start

127.4.2. Configuring Load Balancing


In order to access NXLog Manager from a single URL, as well as benefit from application redundancy, a Load
Balancer is needed.

1. From the EC2 service dashboard, go to Load Balancing. Click Create Load Balancer. Select Application
Load Balancer and click Create.
2. Configure the load balancer. Set the listener to use port 9090 (the same as the backend application).

3. Choose availability zones and configure a security group in order to limit access to the load balancer.
Configure routing to forward requests to port 9090.

4. Configure the health check path to /nxlog-manager. In Advanced health check settings, set the Success
codes to 302, as it is the default reply from the nxlog-manager service.

5. Select instances for the target group and finish creation of the load balancer. From the EC2 dashboard, go to
Target Groups (in the LOAD BALANCING section). Select the target group and click Edit attributes. Enable
Stickiness to prevent breaking user sessions. This will create a cookie named AWSALB with encrypted
contents.

6. Edit security groups to allow traffic between the load balancer and its target group. After this step, the
solution is ready.

127.5. Configuring NXLog Manager for Standalone Mode
To operate in standalone mode, NXLog Manager requires MySQL or MariaDB v5.5.

127.5.1. Installing MySQL Server on Debian or Ubuntu


Install the mysql-server package:

# apt-get install mysql-server

NOTE MariaDB has replaced MySQL in more recent versions such as Debian (Stretch).

Start the MySQL service:

# service mysql start

NOTE systemctl has replaced service in more recent versions such as Debian (Stretch).

Now you may proceed with the Database Initialization step.

127.5.2. Installing MySQL Server on CentOS 6 or RHEL 6


Install the mysql-server package:

# yum install mysql-server

Start the mysql-server service:

# service mysqld start

Now you may proceed with the Database Initialization step.

127.5.3. Installing MariaDB Server on CentOS 7 or RHEL 7


MariaDB has replaced MySQL as the default package on CentOS7 and RHEL 7. MariaDB is a fork of MySQL and
should work seamlessly in place of MySQL. Install the mariadb-server package:

# yum install mariadb-server

Start the mariadb service:

# systemctl start mariadb

Now you may proceed with the Database Initialization step.

127.6. Configuring NXLog Manager for Cluster Mode
It is possible to run multiple instances of NXLog Manager so that a group of agents connects to a specific Manager
instance while all agents can be managed from any NXLog Manager instance, no matter which one they are
connected to. This mode is referred to as distributed mode or cluster mode.

The following needs to be set in the /opt/nxlog-manager/conf/nxlog-manager.conf configuration file on


each instance:

nxlog-manager.conf
INSTANCE_MODE=distributed-manager

The NXLog Manager instances communicate over the JMS (Java Message Service) API. Set the public IP
address of the interface in /opt/nxlog-manager/conf/jetty-env.xml by replacing the 127.0.0.1 value set for
jmsBrokerAddress with the public IP:

jetty-env.xml
<Set name="jmsBrokerAddress">10.0.0.42</Set>

To operate in clustered mode, NXLog Manager requires MariaDB Galera Cluster v5.5.

127.6.1. Installing MariaDB Galera Cluster on Debian or Ubuntu


There is a very good installation guide here. The MariaDB Galera Cluster installation and configuration steps are
summarized below.

Add the package repository:

# apt-get install python-software-properties
# apt-key adv --recv-keys --keyserver keyserver.ubuntu.com 0xcbcb082a1bb943db

For Debian Wheezy:

# add-apt-repository 'deb http://mirror3.layerjet.com/mariadb/repo/5.5/debian wheezy main'

For Ubuntu Precise:

# add-apt-repository 'deb http://mirror3.layerjet.com/mariadb/repo/5.5/ubuntu precise main'

Resynchronize the package index files:

# apt-get update

Install the packages:

# DEBIAN_FRONTEND=noninteractive apt-get install -y rsync galera mariadb-galera-server

Add the following to /etc/mysql/conf.d/galera.cnf:

galera.cnf
[mysqld]
#mysql settings
binlog_format=ROW
default-storage-engine=innodb
innodb_autoinc_lock_mode=2
query_cache_size=0
query_cache_type=0
bind-address=0.0.0.0
#galera settings
wsrep_provider=/usr/lib/galera/libgalera_smm.so
wsrep_cluster_name="my_wsrep_cluster"
wsrep_cluster_address="gcomm://<IP1>,<IP2>,...,<IPN>"
wsrep_sst_method=rsync

Here IP1,…,IPN are the addresses of all nodes in the Galera cluster. Distribute this file to all nodes.

Start the Galera cluster.

First stop all nodes:

On node1:

# service mysql stop

On node2:

# service mysql stop

On nodeN:

# service mysql stop

Start the central node:

On node1:

# service mysql start --wsrep-new-cluster

Then start on all other nodes:

On node2:

# service mysql start


 [ ok ] Starting MariaDB database server: mysqld . . . . . . . . . ..
 [info] Checking for corrupt, not cleanly closed and upgrade needing tables..

On nodeN:

# service mysql start


 [ ok ] Starting MariaDB database server: mysqld . . . . . . . . . ..
 [info] Checking for corrupt, not cleanly closed and upgrade needing tables..

Verify all nodes are running:

# mysql -u root -e 'SELECT VARIABLE_VALUE as "cluster size" FROM INFORMATION_SCHEMA.GLOBAL_STATUS WHERE VARIABLE_NAME="wsrep_cluster_size"'

This command should return N, i.e. the number of cluster nodes.

127.6.2. Installing MariaDB Galera Cluster on RHEL
There is an installation guide here. The MariaDB Galera Cluster installation and configuration steps are
summarized below.

To add the MariaDB repository create the file /etc/yum.repos.d/mariadb.repo with the following content:

mariadb.repo
[mariadb]
name = MariaDB
baseurl = http://yum.mariadb.org/5.5/centos6-amd64
gpgkey=https://yum.mariadb.org/RPM-GPG-KEY-MariaDB
gpgcheck=1

Now install MariaDB and Galera:

# yum install MariaDB-Galera-server MariaDB-client galera

You can download and install 'socat' here in case of the following error:

Error: Package: MariaDB-Galera-server-5.5.40-1.el6.x86_64 (mariadb)
  Requires: socat
 You could try using --skip-broken to work around the problem

To create an initial MariaDB configuration, execute these commands and follow the instructions:

# service mysql start
# mysql_secure_installation
# mysql -u root -p
 MariaDB [(none)]> GRANT ALL PRIVILEGES ON *.* TO 'root'@'%' IDENTIFIED BY 'password' WITH GRANT OPTION;
 MariaDB [(none)]> FLUSH PRIVILEGES;
 MariaDB [(none)]> exit
# service mysql stop

On each cluster node, edit /etc/my.cnf.d/server.cnf and make sure to add the following content:

server.cnf
[mysqld]
pid-file = /var/lib/mysql/mysqld.pid
port = 3306

[mariadb]
query_cache_size=0
binlog_format=ROW
default_storage_engine=innodb
innodb_autoinc_lock_mode=2
wsrep_provider=/usr/lib64/galera/libgalera_smm.so
wsrep_cluster_address=gcomm://IP2,...,IPN
wsrep_cluster_name='cluster1'
wsrep_node_address='IP1'
wsrep_node_name='db1'
wsrep_sst_method=rsync
wsrep_sst_auth=root:password

Make sure to set the values appropriately on each node.

Start the first cluster node with the following command:

# /etc/init.d/mysql bootstrap
Bootstrapping the cluster.. Starting MySQL.... SUCCESS!

Start the other nodes with the following command:

# service mysql start
Starting MySQL.... SUCCESS!

SELinux may block MariaDB from binding on the cluster port, in which case it will print the following error in the
MariaDB error log:

140805 7:56:00 [Note] WSREP: gcomm: bootstrapping new group 'cluster1'
140805 7:56:00 [ERROR] WSREP: Permission denied
140805 7:56:00 [ERROR] WSREP: failed to open gcomm backend connection: 13: error while trying to
listen 'tcp://0.0.0.0:4567?socket.non_blocking=1', asio error 'Permission denied': 13 (Permission
denied)

This can be solved by running setenforce 0 and setting SELINUX=permissive in /etc/selinux/config. Then
repeat the above installation steps on each node.

127.7. Database Initialization


The NXLog Manager needs its initial configuration data to be loaded into the configuration database.

NOTE
If you are installing NXLog Manager in clustered mode, this only needs to be executed once for
the DB cluster, i.e. only on the first node.

If a root password is set for the MySQL/MariaDB database, edit /opt/nxlog-manager/db_init/my.cnf and
provide the password:

my.cnf
[client]
password=

Execute the database initialization script (only once for the Galera cluster):

# cd /opt/nxlog-manager/db_init
# ./dbinit.sh

To ensure that the MySQL/MariaDB database is started on boot on CentOS/RHEL distributions, execute the
following command:

# chkconfig mysqld on

or

# chkconfig mariadb on

The size of the maximum packet allowed by MySQL/MariaDB can be raised by adding the following to the global
configuration options, typically /etc/my.cnf or /etc/mysql/my.cnf. Raising the size of the maximum allowed
packet will eliminate any max_allowed_packet exceeded error messages from the log files.

my.cnf
[mysqld]
max_allowed_packet = 256M

127.8. Starting NXLog Manager
1. Start NXLog Manager with the following command:
◦ Starting NXLog Manager on Debian Wheezy or RHEL6/CentOS6

# service nxlog-manager start

◦ Starting NXLog Manager on Debian Stretch or RHEL7/CentOS7

# systemctl start nxlog-manager

2. Connect to the web interface. Launch a web browser and navigate to http://x.x.x.x:9090/nxlog-manager in
order to make sure the start was successful.

NOTE
Check the logs under /opt/nxlog-manager/logs if you are having trouble accessing the web
interface.

NOTE
Running NXLog Manager directly can provide additional information if the NXLog Manager
service fails to start. Run cd /opt/nxlog-manager/bin/, then ./jetty.sh.

127.9. NXLog Agent Installation


127.9.1. Installing on Debian Wheezy
# dpkg -i nxlog_X.X.XXXX_amd64.deb
# apt-get -f install

127.9.2. Installing on RHEL


To install the NXLog agent on RHEL, issue the following command:

# yum install nxlog-X.X.XXXX-1.x86_64.rpm

Depending on the package, additional dependencies may need to be installed:

# yum install dialog apr perl perl-DBI perl-JSON openssl pcre zlib expat libcap libdbi

127.9.3. Installing on Windows


On Windows, run the MSI installer.

Simply click Next, accept the license agreement, then finish the installation.

It is possible to automate the installation on Windows using msiexec:

> msiexec /i nxlog-xxx.msi /quiet

The MSI can also be installed via Group Policy.

127.10. NXLog Manager Configuration


Once the nxlog-manager service is running, you should be able to access the web interface at
http://x.x.x.x:9090/nxlog-manager.

The default access credentials are as follows:

User ID: admin
Password: nxlog123

127.10.1. Managing Encryption Key for Administrators


Understanding the encryption mechanism in NXLog Manager is crucial for its stable operation. This section
explains what the encryption key is and what practices should be followed in this regard.

During the first login of the admin user, NXLog Manager generates the encryption key. The key is shared among
all administrative accounts with ROLE_ADMINISTRATOR and/or ROLE_CERTIFICATE roles within the same
application session.

The application session is an instance of the started service which is always new at each start of NXLog Manager.

The encryption key encrypts the private keys of user accounts. The application session cannot function without
the key, which is why it should always be available in the system database and/or the application session.

After each administrator login, this key is decrypted and stored in the application session.

For each new administrative user, as well as the admin user, the key is copied during the first login, encrypted
with the user’s password, and stored in the NXAuthSettings table. After a user changes their password, the key is
re-encrypted with the new password. Additional details about encryption of certificates are provided in the
Certificates Encryption section.

WARNING
If required, the encryption of private keys can be disabled. See the content about the
Don’t encrypt agent manager’s private key checkbox in the Agent Manager Configuration
section. If this checkbox is unchecked, encryption is applied, and it is recommended to follow
the suggestions in the Best Practices for Managing Encryption Keys section below.

127.10.1.1. Best Practices for Managing Encryption Keys


Follow the rules below when dealing with encryption keys in order to ensure the smooth and trouble-free
operation of NXLog Manager.

• The NXLog Manager instance should always have at least one account with the assigned
ROLE_ADMINISTRATOR and/or ROLE_CERTIFICATE roles. Otherwise it does not function.

• It is recommended to keep the default admin account in the system. Instead of deleting it, the account can be
kept in a disabled state and used only in specific situations. This account can provide the application session
with the encryption key after NXLog Manager is restarted.

If keeping the admin account is not possible, another account with the ROLE_ADMINISTRATOR and/or
ROLE_CERTIFICATE roles should be created and logged into the same application session as the admin.
This action shares the encryption key from the admin with the new account and makes it available for all
future accounts. After this action is taken, the admin account can be deleted, because the new administrative
account can now log in and share the encryption key within the application session.

• It is strongly recommended to have a backup for the NXLog Manager database while taking any actions with
administrative accounts.
• After each restart of NXLog Manager, at least one administrative account with the ROLE_ADMINISTRATOR
and/or ROLE_CERTIFICATE roles should always be available for login, decryption of the encryption key, and
sharing it within the application session. If the only administrative account is deleted or unassigned from its
roles, the decryption key is also deleted from the application session and the database.

WARNING This situation immediately leads to an unrecoverable loss of system and data control.

• Accounts with the ROLE_ADMINISTRATOR and/or ROLE_CERTIFICATE roles should be assigned only to trusted
system administrators. Other users should employ other roles which are available in NXLog Manager.
• Each new account with the ROLE_ADMINISTRATOR and/or ROLE_CERTIFICATE roles should share the same
application session with the other administrative accounts. There is no point in logging into another session
if it has no encryption key from a logged-in user.
• The same encryption key should be available for all administrative accounts on the running instance of
NXLog Manager.

WARNING
In case all administrative accounts are deleted, the decryption key is also destroyed and
NXLog Manager terminates. This immediately leads to a complete data and system loss.

127.10.2. Set Session and Screen Lock (User Interface) Timeouts


It is sometimes desirable to limit the time an NXLog Manager session remains active, or how long the screen
remains unlocked, after authentication.

To set how long the NXLog Manager UI waits before requiring user re-authentication, find the
screenLockIdleTimeB block in conf/jetty-env.xml and then set screenLockIdleTime to the desired value.
To control the active session length, find the applicationSettingsB block and then set sessionTimeout to the
desired value. Note that sessionTimeout must be larger than screenLockIdleTime for screen lock to work.
Values are in minutes.

The following example shows the directives in context. Note that other directives have been omitted from the
example to aid readability.

jetty-env.xml
<New id="applicationSettingsB" class="org.eclipse.jetty.plus.jndi.Resource">
  <Arg/>
  <Arg>bean/applicationSettingsBean</Arg>
  <Arg>
  <New class="com.nxsec.log4ensics.data.bean.common.ApplicationSettingsBean">
  <!--Session Timeout set to 15 minutes-->
  <Set name="sessionTimeout">15</Set>
  </New>
  </Arg>
</New>
<New id="screenLockIdleTimeB" class="org.eclipse.jetty.plus.jndi.Resource">
  <Arg/>
  <Arg>bean/screenLockIdleTimeBean</Arg>
  <Arg>
  <New class="com.nxsec.log4ensics.data.bean.common.ScreenLockIdleTimeBean">
  <!--Screen Lock Idle Time set to 10 minutes-->
  <Set name="screenLockIdleTime">10</Set>
  </New>
  </Arg>
</New>

127.10.3. Installation Wizard


When the administrator logs in for the first time, a dialog window is displayed to help with the initial
configuration: 'You don’t have a default CA and AgentManager certificate. Do you want to create a CA and a
CERT?' Click Yes to proceed with the CA setup.

Fill in the form and then click Create.

The next dialog window will ask to create a certificate for the Agent manager.

Fill in the form and then click Create.

Finally the NXLog Manager settings need to be provided.

The default values should be sufficient for most users; click Finish.

The initial settings can be changed any time later under the menu items Admin > Settings > Agent Manager
and Admin > Settings > Certificates.

127.10.4. Agent Manager Configuration


Navigate to Admin > Settings > Agent Manager and fill out the following form accordingly.

NOTE
If you have already configured the Agent Manager with the Wizard as described in the previous section, you
will not need to modify anything here. Just make sure your settings are correct.

Select whether you would like the agents or the agent manager to initiate the connection. This can be useful
when special firewall and zone rules apply. Make sure that the agent manager certificate is properly set. Click
Save & Restart to apply settings.

If activated, the Don’t encrypt agent manager’s private key checkbox disables encryption of the agent manager’s private key.
For more information, see the NXLog Manager Configuration section.

127.10.5. Connecting Agents
To ensure that NXLog agents can only be controlled by the NXLog Manager, agents are managed over a trusted
SSL connection, and each NXLog agent needs its own private key and certificate.

127.10.5.1. Automated Deployment


The requirement of a private key and certificate pair for each NXLog agent would prevent automated installation.
Fortunately, it is possible to install NXLog agents with only the CA certificate and an initial configuration
containing only the details on how to establish the control connection with the NXLog Manager.

The installation steps for automated agent deployment consist of the following:

1. Install the NXLog package.


2. Copy the initial configuration file.
3. Copy the CA certificate file.
4. Start the NXLog service and verify that it is connected.

These steps are discussed below.

To export the CA certificate, navigate to Admin > Certificates and select the CA with the checkbox as shown in
the screenshot below.

Click Export. The CA certificate should be exported using the 'Certificate in PEM format' option. Save the file as
agent-ca.pem.
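Before distributing the exported file, it can be sanity-checked with OpenSSL. The following is only a sketch, assuming the openssl command-line tool is available; it generates a throwaway self-signed certificate to stand in for the real agent-ca.pem exported from the Manager:

```shell
# Generate a throwaway self-signed certificate standing in for agent-ca.pem
# (in practice, inspect the file actually exported from NXLog Manager).
openssl req -x509 -newkey rsa:2048 -nodes -subj "/CN=demo-ca" \
    -keyout /tmp/demo-key.pem -out /tmp/agent-ca.pem -days 1 2>/dev/null

# A valid PEM export prints its subject and validity window; a parse error
# here usually means a different export format was chosen.
openssl x509 -in /tmp/agent-ca.pem -noout -subject -dates
```

If the second command reports an error such as 'unable to load certificate', re-export the CA using the 'Certificate in PEM format' option.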

The default installation of NXLog will create a file that is to be updated by NXLog Manager. On Windows systems
this is C:\Program Files (x86)\nxlog\conf\log4ensics.conf, on GNU/Linux systems this configuration file
is /opt/nxlog/var/lib/nxlog/log4ensics.conf. When doing an automated deployment, this file should be
replaced with the following default configuration.

log4ensics.conf
# Please set the following values to suit your environment and make
# sure agent-ca.pem is copied to %CERTDIR% with the proper ownership

define NXLOG_MANAGER_ADDRESS X.X.X.X
define NXLOG_MANAGER_PORT 4041

LogLevel INFO
LogFile %MYLOGFILE%

<Extension agent_managment>
  Module xm_soapadmin
  Connect %NXLOG_MANAGER_ADDRESS%
  Port %NXLOG_MANAGER_PORT%
  SocketType SSL
  CAFile %CERTDIR%/agent-ca.pem
  # CertFile %CERTDIR%/agent-cert.pem
  # CertKeyFile %CERTDIR%/agent-key.pem
  AllowUntrusted TRUE
  RequireCert FALSE
  <ACL conf>
    Directory %CONFDIR%
    AllowRead TRUE
    AllowWrite TRUE
  </ACL>
  <ACL cert>
    Directory %CERTDIR%
    AllowRead TRUE
    AllowWrite TRUE
  </ACL>
</Extension>

Please make sure to replace X.X.X.X with the proper IP address of the NXLog Manager instance that the NXLog
agent needs to be connected to.

The CA certificate file agent-ca.pem must also be copied to the proper location referenced in the above
configuration, which is normally C:\Program Files (x86)\nxlog\cert\ on Windows systems and
/opt/nxlog/var/lib/nxlog/cert/ on GNU/Linux systems.
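The steps above can be scripted for unattended deployment on GNU/Linux. The following transcript is only a sketch; the package file name, service name, and file locations are assumptions to be adjusted for your environment:

```
# dpkg -i nxlog_*.deb
# cp log4ensics.conf /opt/nxlog/var/lib/nxlog/log4ensics.conf
# mkdir -p /opt/nxlog/var/lib/nxlog/cert
# cp agent-ca.pem /opt/nxlog/var/lib/nxlog/cert/agent-ca.pem
# chown -R nxlog:nxlog /opt/nxlog/var/lib/nxlog
# service nxlog start
```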

NOTE
When the configuration and certificate files are updated remotely, NXLog must have permissions to overwrite
these files when it is running as a regular (i.e. nxlog) user. Please make sure that the ownership is correct:

# chown -R nxlog:nxlog /opt/nxlog/var/lib/nxlog

Now start the NXLog service. The nxlog.log file should contain the following if the NXLog agent has successfully
connected.

nxlog.log
2014-10-24 17:24:46 WARNING no functional input modules!↵
2014-10-24 17:24:46 WARNING no routes defined!↵
2014-10-24 17:24:46 INFO nxlog-2.8.1281 started↵
2014-10-24 17:24:46 INFO connecting to agent manager at X.X.X.X:4041↵
2014-10-24 17:24:46 INFO successfully connected to agent manager at X.X.X.X:4041 in SSL mode↵

Click the AGENTS menu to see the list of agents. You should see the newly connected agent with an UNTRUSTED
(yellow) status. If you don’t see the agent there, check the logs for error diagnostics.

The name of the untrusted agent should be the reverse DNS of its IP address.

In order to establish a mutually trusted connection between the NXLog agent and NXLog Manager, a certificate
and private key pair needs to be issued and transferred to the agent. Select the untrusted agent in the list and
click Issue certificate. When Update connected agents is enabled, the newly issued certificate and the
configuration will be pushed to the agent. The agent will need to reload the configuration in order to reconnect
with the certificate; select the agent and click Reload.

After the agent has successfully reconnected and the agent list is refreshed, the agent status should be 'online',
shown with a green sphere.

At this stage the NXLog agent should be operational and can now be managed and configured from the NXLog
Manager interface.

127.10.5.2. Manual Deployment


Manual deployment requires adding an agent using 'Add' on the interface. After the agent is configured and has
its certificate issued, select its checkbox in the agent list and click Download config.

On GNU/Linux systems, extract the agents-config.zip and put the files under /opt/nxlog/var/lib/nxlog.
Make sure the files have the proper ownership:

# chown -R nxlog:nxlog /opt/nxlog/var/lib/nxlog

On Windows systems, place the certificates in C:\Program Files (x86)\nxlog\cert. After restarting the
NXLog service you should now see your agent as Online under AGENTS.

127.10.6. Configuring Agents


Once the agent is connected and is shown as Online, it can be remotely configured from the NXLog Manager
web interface.

1. To configure the log collection, click on your agent in the agent list and then select the Configure tab.
2. Click 'Routes' and add a route. Add a TCP input module for testing purposes:

Name: tcptest
Module: TCP Input (im_tcp)
Listen On: 0.0.0.0
Port: 1514
Input Format: line based

3. Add an output module. For test purposes we will use a null output that discards the data.

Name: out
Module: Null Output (om_null)

4. Now click Update config on the Info tab, then click Reload.

After the agent is restarted, the newly configured modules are visible on the Modules tab.

5. Test the data collection:

telnet x.x.x.x 1514


type something

6. On the Modules tab, check all modules and click Refresh status. The count under the Received column should
be 1 (or more).

The system is now ready to be further configured as per your requirements.

127.10.7. Configuring Logger Settings


By default, NXLog Manager keeps log files in the /opt/nxlog-manager/log directory. Log priorities, levels, and
log rotation can be configured as per your requirements in the /opt/nxlog-manager/conf/log4j.xml file. The
default configuration creates two separate files, nxlog-manager.log and nxlog-manager.err: only
information-level messages are logged to the first file, and error-level messages are logged to the second. By
default, both files are rotated at the beginning of each month. The frequency of log rotation can be controlled
by the DatePattern parameter, as shown below.

log4j.xml
<appender name="internalAppender" class="org.apache.log4j.DailyRollingFileAppender">
  <param name="File" value="${logs.root}.log"/>
  <param name="Threshold" value="INFO"/>
  <param name="DatePattern" value="'.'yyyy-MM"/>
  <layout class="co.nxlog.manager.common.logging.ContextPatternLayout">
  <param name="ConversionPattern" value="%d %p $host $user $component [%c] - %m %n"/>
  </layout>
</appender>

The following table summarizes different DatePattern options.

Table 70. DatePattern options

DatePattern            Rollover schedule

'.'yyyy-MM             Rollover at the beginning of each month.

'.'yyyy-ww             Rollover on the first day of each week. The first day of
                       the week depends on the locale.

'.'yyyy-MM-dd          Rollover at midnight each day.

'.'yyyy-MM-dd-a        Rollover at midnight and midday of each day.

'.'yyyy-MM-dd-HH       Rollover at the top of every hour.

'.'yyyy-MM-dd-HH-mm    Rollover at the beginning of every minute.

NOTE
The /opt/nxlog-manager/conf/log4j.xml file defines three different files: the two mentioned above, as well
as a debug file that needs to be enabled separately. Log rotation can be controlled individually for each log file
by altering the DatePattern parameter in each of the three appender sections.

To enable debug logging:

• Change the priority level from INFO to DEBUG,

• Change WARN level to DEBUG in the loggers you require,

• Remove the comment from the debugAppender reference, as shown below.

log4j.xml
<root>
  <priority value="DEBUG"/>
  <appender-ref ref="internalAppender"/>
  <appender-ref ref="errorAppender"/>
  <appender-ref ref="debugAppender"/>
</root>

127.11. Enabling HTTPS for NXLog Manager


127.11.1. Obtaining Certificate and Private Key
To enable HTTPS, NXLog Manager requires either a certificate issued by a certificate authority (CA) or a self-
signed certificate. The self-signed certificate and private key are already contained in each NXLog Manager
installation and stored under the following paths:

Table 71. Paths for the Certificate and Private Key

Version of Manager   Path                        Private Key       Certificate

5.x                  <NXLogManager_HOME>/conf/   jetty9-key.pem    jetty9-cert.pem

6.x                  <NXLogManager_HOME>/etc/    keystore.p12 (contains both the key and the certificate)

These files are good for testing purposes; however, they remain the same across all NXLog Manager
installations and should be replaced with valid versions.

The examples below explain how to obtain certificates and private keys.

Example 707. Obtaining a CA Certificate for Versions 5.x

For NXLog Manager versions 5.x, a private key and certificate signing request (CSR) can be generated on a
server with the following command:

$ openssl req -out request.csr -new -newkey rsa:2048 -nodes -keyout privatekey.key

This command will create two files:

• The request.csr file containing the certificate signing request.

• The privatekey.key file containing the 2048-bit RSA private key.

The request.csr file can be verified with the following command:

openssl req -in request.csr -noout -text

This command will output the information that was entered when creating the CSR.

The request.csr file can be submitted to a corporate CA and a proper certificate can then be obtained.

After the certificate is obtained, the existing jetty9-cert.pem and jetty9-key.pem files in the NXLog
Manager directory need to be replaced with the new certificate and private key. For more information, see the
NXLog Manager SSL Keys for Versions 5.x section.

Example 708. Obtaining a CA Certificate for Versions 6.x

For NXLog Manager versions 6.x, a package with a private key and self-signed certificate can be generated
using the following command:

keytool -genkeypair -keyalg RSA -keystore keystore.p12 -validity 365 -keysize 3072

This command will create a keystore.p12 package with the 3072-bit RSA private key and self-signed
certificate with the 365-day validity. The password from the package will be used later in NXLog Manager
settings. See the NXLog Manager SSL keys for Versions 6.x section.

This package can be verified with the following command:

keytool -list -keystore keystore.p12 -v

Using the created package, the certificate signing request can be generated with the following command:

keytool -certreq -file request.csr -keystore keystore.p12

This command will create a separate request.csr file which can be submitted to a corporate CA and a
proper certificate can then be obtained.

After the certificate.cer file is obtained, it can be imported into the existing keystore.p12 file with the
following command:

keytool -import -trustcacerts -file certificate.cer -keystore keystore.p12

The existing keystore.p12 package in the NXLog Manager directory can now be replaced with the new
one. For more information about using the password from the package, see the NXLog Manager SSL Keys
for Versions 6.x section.

Using a self-signed certificate is insecure. Nevertheless, such a certificate can be generated and utilized for
HTTPS connections as well.

Example 709. Generating a Self-Signed Certificate for Versions 5.x and 6.x

For NXLog Manager versions 5.x, the command below will generate the key.pem private key file and the
cert.pem certificate with 365-day validity.

openssl req -newkey rsa:2048 -nodes -keyout key.pem -x509 -days 365 -out cert.pem

The existing jetty9-cert.pem and jetty9-key.pem files in the NXLog Manager directory can now be
replaced with the created certificate and private key files. For more information, see the NXLog Manager
SSL Keys for Versions 5.x section.

For NXLog Manager versions 6.x, the command below will generate the keystore.p12 package file with a
3072-bit RSA private key and self-signed certificate with a 365-day validity:

keytool -genkeypair -keyalg RSA -keystore keystore.p12 -validity 365 -keysize 3072

The password from the package will be used later in NXLog Manager settings.

The created package can be verified with the following command:

keytool -list -keystore keystore.p12 -v

The existing keystore.p12 package in the NXLog Manager directory can now be replaced with the new
one. For more information about using the password from the package, see the NXLog Manager SSL Keys
for Versions 6.x section.

The below sections explain how to enable HTTPS after the proper certificate and private key have been obtained.

127.11.2. NXLog Manager Version 5.x


To enable HTTPS for secure connections, you need to uncomment three sections in
<NXLogManager_HOME>/conf/jetty-config.xml, which are shown below:

jetty-config.xml
  <New id="sslContextFactory"
class="com.nxsec.log4ensics.web.common.server.util.ssl.SslContextFactory">
  <Set name="ServerCertificate"><Property name="jetty.home" default=".." />/conf/jetty9-
cert.pem</Set>
  <Set name="ServerKey"><Property name="jetty.home" default=".." />/conf/jetty9-key.pem</Set>
  <Set name="ServerKeyPassword"></Set>
  <Set name="EndpointIdentificationAlgorithm"></Set>
  <Set name="NeedClientAuth"><Property name="jetty.ssl.needClientAuth" default="false"/></Set>
  <Set name="WantClientAuth"><Property name="jetty.ssl.wantClientAuth" default="false"/></Set>
  <Set name="ExcludeCipherSuites">
  <Array type="String">
  <Item>SSL_RSA_WITH_DES_CBC_SHA</Item>
  <Item>SSL_DHE_RSA_WITH_DES_CBC_SHA</Item>
  <Item>SSL_DHE_DSS_WITH_DES_CBC_SHA</Item>
  <Item>SSL_RSA_EXPORT_WITH_RC4_40_MD5</Item>
  <Item>SSL_RSA_EXPORT_WITH_DES40_CBC_SHA</Item>
  <Item>SSL_DHE_RSA_EXPORT_WITH_DES40_CBC_SHA</Item>
  <Item>SSL_DHE_DSS_EXPORT_WITH_DES40_CBC_SHA</Item>
  <Item>SSL_DHE_RSA_WITH_3DES_EDE_CBC_SHA</Item>
  <Item>SSL_DHE_DSS_WITH_3DES_EDE_CBC_SHA</Item>
  <Item>TLS_DHE_RSA_WITH_AES_256_CBC_SHA256</Item>
  <Item>TLS_DHE_DSS_WITH_AES_256_CBC_SHA256</Item>
  <Item>TLS_DHE_RSA_WITH_AES_256_CBC_SHA</Item>
  <Item>TLS_DHE_DSS_WITH_AES_256_CBC_SHA</Item>
  <Item>TLS_DHE_RSA_WITH_AES_128_CBC_SHA256</Item>
  <Item>TLS_DHE_DSS_WITH_AES_128_CBC_SHA256</Item>
  <Item>TLS_DHE_RSA_WITH_AES_128_CBC_SHA</Item>
  <Item>TLS_DHE_DSS_WITH_AES_128_CBC_SHA</Item>
  </Array>
  </Set>
  </New>

jetty-config.xml
<New id="sslHttpConfig" class="org.eclipse.jetty.server.HttpConfiguration">
  <Arg><Ref refid="httpConfig"/></Arg>
  <Call name="addCustomizer">
  <Arg><New class="org.eclipse.jetty.server.SecureRequestCustomizer"/></Arg>
  </Call>
  </New>

jetty-config.xml
  <Call name="addConnector">
  <Arg>
  <New id="sslConnector" class="org.eclipse.jetty.server.ServerConnector">
  <Arg name="server"><Ref refid="Server" /></Arg>
  <Arg name="factories">
  <Array type="org.eclipse.jetty.server.ConnectionFactory">

  <!-- uncomment to support proxy protocol


  <Item>
  <New class="org.eclipse.jetty.server.ProxyConnectionFactory"/>
  </Item>-->

  <Item>
  <New class="org.eclipse.jetty.server.SslConnectionFactory">
  <Arg name="next">http/1.1</Arg>
  <Arg name="sslContextFactory"><Ref refid="sslContextFactory"/></Arg>
  </New>
  </Item>
  <Item>
  <New class="org.eclipse.jetty.server.HttpConnectionFactory">
  <Arg name="config"><Ref refid="sslHttpConfig" /></Arg>
  </New>
  </Item>
  </Array>
  </Arg>

  <Set name="host"><Property name="jetty.host" /></Set>


  <Set name="port"><Property name="jetty.https.port" default="9443" /></Set>
  <Set name="idleTimeout"><Property name="ssl.timeout" default="30000"/></Set>
  </New>
  </Arg>
  </Call>

  <Call class="java.lang.System" name="setProperty">


  <Arg>org.apache.jasper.compiler.disablejsr199</Arg>
  <Arg>true</Arg>
  </Call>

  <!-- Fix for java.lang.IllegalStateException: Form too large 207624>200000 -->


  <Call name="setAttribute">
  <Arg>org.eclipse.jetty.server.Request.maxFormContentSize</Arg>
  <Arg><Property name="jetty.maxFormContentSize" default="1000000"/></Arg>
  </Call>

127.11.3. NXLog Manager Version 6.x


To enable HTTPS for secure connections, you must enable Jetty’s ssl and https modules. To do this, uncomment
the following lines in <NXLogManager_HOME>/start.ini:

start.ini
#--module=ssl

start.ini
#--module=https

127.11.4. NXLog Manager SSL Keys for Versions 5.x
This version of NXLog Manager is bundled with a default key pair in PEM format to be used for the secure
connection in <NXLogManager_HOME>/conf/, namely jetty9-cert.pem and jetty9-key.pem. These can be
customized in jetty-config.xml by editing the ServerCertificate and ServerKey properties of
sslContextFactory. Provide the ServerKeyPassword if the private key is password protected.

Now NXLog Manager can be restarted with HTTPS enabled on the default port 9443. The port number can also
be customized in jetty-config.xml.

127.11.5. NXLog Manager SSL Keys for Versions 6.x


Version 6 of NXLog Manager is bundled with a default keystore in PKCS12 format which is password protected
and contains the keys to be used for the secure connection in <NXLogManager_HOME>/etc/, namely
keystore.p12. This can be customized in start.ini by editing the jetty.sslContext.keyStorePath and
jetty.sslContext.trustStorePath properties of the ssl module. You can set the keystore password with the
jetty.sslContext.keyStorePassword property if the keystore is password protected. Jetty 9 supports hashed
passwords in the configuration file, which can be generated using Jetty’s password utility. For example, enter
the following command to generate a secured version of the password blah for user myuser:

> java -cp <NXLogManager_HOME>/jetty-util-xxx.jar org.eclipse.jetty.util.security.Password myuser blah

where xxx signifies the version of Jetty installed in NXLog Manager. The following output will be generated:

blah
OBF:20771x1b206z
MD5:639bae9ac6b3e1a84cebb7b403297b79
CRYPT:me/ks90E221EY

The first line is the plain text password. Copy and paste only one of the secured versions of your choice, including
the prefix, as the value of the jetty.sslContext.keyStorePassword property in start.ini. Creating and
managing a keystore is out of the scope of this document.

Now NXLog Manager can be restarted with HTTPS enabled on the default port 9443. The port number can also
be customized in start.ini by editing jetty.ssl.port.
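Putting the pieces together, the HTTPS-related part of start.ini might look like the following sketch; the property values shown are examples (the obfuscated password is the sample from above), not defaults to rely on:

```
--module=ssl
--module=https
jetty.ssl.port=9443
jetty.sslContext.keyStorePath=etc/keystore.p12
jetty.sslContext.keyStorePassword=OBF:20771x1b206z
```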

127.12. Increasing the Open File Limit for NXLog Manager


Using systemd
This section explains how to adjust the limit for the number of files that the nxlog-manager service can open
in Linux. To achieve this, a configuration file needs to be created under the systemd directory, which will
permanently hold the required open file limit setting. This way, package upgrades in the Linux system will never
overwrite this configuration.

Procedure

1. Create the directory that will hold the changes for the nxlog-manager service.

# sudo mkdir -p /etc/systemd/system/nxlog-manager.service.d/

2. Create the /etc/systemd/system/nxlog-manager.service.d/limits.conf file with the following content:

[Service]
LimitNOFILE=10000

In this configuration, 10000 represents the number of files that NXLog Manager is allowed to open. Change this
number to suit the requirements of your environment.

3. For the configuration to take effect, reload the systemd daemon.

# sudo systemctl daemon-reload

4. To tell NXLog Manager about the changes, the nxlog-manager service needs to be restarted.

a. On Debian Wheezy or RHEL6/CentOS6

# service nxlog-manager restart

b. On Debian Stretch or RHEL7/CentOS7

# systemctl restart nxlog-manager
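To verify that the new limit is in effect, the service properties and the running process can be inspected; this sketch assumes a systemd version that supports `systemctl show --value`:

```
# systemctl show nxlog-manager --property=LimitNOFILE
# grep 'open files' /proc/$(systemctl show nxlog-manager --property=MainPID --value)/limits
```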

127.13. Upgrading NXLog Manager


Upgrading from earlier versions of NXLog Manager will require changes to the database structure. To complete
the upgrade, stop the NXLog Manager service before proceeding.

# service nxlog-manager stop

NOTE
It is always advisable and good practice to create a backup before upgrading. This enables the process to be
rolled back if something goes wrong. Use mysqldump or phpMyAdmin to back up MySQL/MariaDB.
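For example, a backup of the Manager database could be taken with mysqldump; the database name nxlog-manager4 (as used by the version 4.x upgrade command below) and the credentials are assumptions to be adjusted for your installation:

```
# mysqldump -u root -p nxlog-manager4 > nxlog-manager-backup.sql
```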

127.13.1. Upgrade to version 4.0.6223 or later


Follow this procedure if you are running a version of NXLog Manager earlier than 4.0.6223, and you are planning
to upgrade to version 4.0.6223 or later, but not to version 5.x.

After stopping the NXLog Manager service, upgrade NXLog Manager but do not start the service. Navigate to
/opt/nxlog-manager/db_init/upgrade/ and execute the command:

# mysql -u root nxlog-manager4 < upgrade_to_6223.sql

The upgraded NXLog Manager service can now be started.

127.13.2. Upgrade from Version 4.x to Version 5.x


Follow this procedure to upgrade from version 4.x of NXLog Manager to version 5.x.

After stopping the NXLog Manager service, upgrade NXLog Manager but do not start the service. The upgraded
NXLog Manager first requires a database initialization. Do not start the NXLog Manager service as part of the
initialization. After initializing the database, navigate to /opt/nxlog-manager/db_init/upgrade/ and execute
the command:

# mysql -u root -p < upgrade_v4_to_v5.sql

The command will copy all the relevant information from the earlier NXLog Manager database to the new
database without altering the old one. The upgraded NXLog Manager service can now be started.

127.13.3. Upgrade Version 5.x to a Later 5.x Version


Follow this procedure to upgrade a version 5.x installation of NXLog Manager.

After stopping the NXLog Manager service, upgrade the NXLog Manager packages through dpkg/apt or rpm/yum
and then start the service.

On SysV Init based systems


# service nxlog-manager start

On systemd based systems


# systemctl start nxlog-manager

127.13.4. Upgrade the Docker Application


Upgrading NXLog Manager as a Docker application is only supported between minor versions. For example,
version 5.4 can be upgraded to 5.5, but not to 6.0.

Upgrading NXLog Manager migrates existing settings to the new version. Nonetheless, it is highly recommended
to create a database backup before upgrading.

The following steps should be performed to upgrade NXLog Manager as a Docker application.

1. Docker containers should be stopped with the following command in the NXLog Manager directory:

$ docker-compose down

2. The archive with the new version of NXLog Manager should be unpacked with the following command:

$ tar zxf nxlog-manager-X.X.XXXX-docker.tar.gz

3. The .deb package from the unpacked archive should be placed in the NXLog Manager directory, and the
existing package file should be deleted.
4. Docker images and containers should be built and started with the following command in the NXLog
Manager directory:

$ docker-compose up --build -d

127.14. Host Setup Common Issues


This section describes some common issues that may prevent NXLog Manager from working correctly. These are
not related to the installation or configuration of the Manager itself.

127.14.1. Hostname Resolution Issues


For NXLog Manager to work correctly, the hostname of the host must resolve to the correct IP address. If that is
not the case, the following error will be present in /opt/nxlog-manager/log/nxlog-manager.err.

nxlog-manager.err
2016-11-21 16:14:31,015 ERROR manager-host unknown nxlog-manager [net.sf.ehcache.Cache] - Unable to
set manager-host. This prevents creation of a GUID. Cause was:↵
 java.net.UnknownHostException: nxlogmgr.domain.local↵

To set the hostname to myname, add a line containing the host IP address along with the FQDN and the name
aliases to the /etc/hosts file.

/etc/hosts
172.16.183.1 myname.example.com myname

Any of the locally bound IP addresses may be used as the Manager hostname.

Configuring the /etc/hosts file works for both Debian and RHEL versions of Linux.
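To check that the mapping is actually picked up by the resolver (and therefore by the Java runtime), getent can be queried with the name from the example above; the address shown is the one configured in /etc/hosts:

```
$ getent hosts myname.example.com
172.16.183.1    myname.example.com myname
```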

By default, a Docker container inherits DNS settings from the Docker daemon, including the contents of the
/etc/resolv.conf and /etc/hosts files.

These settings can be overridden on a per-container basis with the following flags to docker commands:

Table 72. Flags for Container Hostname and DNS Settings

Flag         Description

--dns        The IP address of a DNS server. Multiple --dns flags specify
             multiple DNS servers. If the container cannot reach any of the
             specified IP addresses, Google’s public DNS server 8.8.8.8 is
             added automatically, so that the container can resolve internet
             domains.

--dns-opt    A key-value pair representing a DNS option and its value. See the
             operating system’s documentation for the resolv.conf file for
             valid options.

--hostname   The hostname a container uses for itself. Defaults to the
             container’s ID if not specified.

127.14.2. DNS Lookup Issues


Make sure that your DNS is set up correctly and functioning properly. DNS timeouts and errors can cause
various issues, mainly because TLS certificate verification uses DNS lookups.

IMPORTANT
There is no single method of configuring DNS lookups on Linux.

The most common way is to edit the /etc/resolv.conf file. Usually up to three nameservers can be set up
using the nameserver keyword, followed by the IP address of the nameserver.

To test whether your configuration functions correctly, use the host or dig programs to perform both a DNS
lookup and a reverse DNS lookup (by querying with an IP address). Make sure that participating hosts on your
NXLog collection system are resolved correctly.
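For example, both lookup directions can be exercised for a host participating in log collection; the host name and address below are illustrative:

```
$ host agent1.example.com
$ host 172.16.183.1
$ dig +short agent1.example.com
```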

For more information, see the Manually configuring the /etc/resolv.conf file section in the Red Hat
documentation for RHEL-related distributions, or the Defining the (DNS) Nameservers section in the Debian Wiki
for Debian-based distributions.

There is also a comprehensive article on the Anatomy of a Linux DNS Lookup on the zwischenzugs website that
discusses the different methods and tools that can be used.

Chapter 128. Dashboard and Menu
128.1. Logging in
After installing and starting NXLog Manager, open a browser and go to the URL of the application
(http://localhost:9090, if the default values were used during installation). The login screen is displayed:

NXLog Manager ships with one built-in administrator account. The User ID is admin and the password is
nxlog123. This default password should be changed as soon as possible.

128.2. The Menu Bar


After logging in, the dashboard and main menu are displayed.

HOME
Displays the dashboard.

PATTERNS
CREATE PATTERN
Create a new pattern.

LIST PATTERNS
Display a list of all available patterns.

SEARCH PATTERN
Display a search page for patterns.

CREATE GROUP
Create a new pattern group.

LIST GROUPS
Display a list of all available pattern groups.

IMPORT PATTERN
Open the dialog to import a pattern database file.

CREATE FIELD
Create a new field.

LIST FIELDS
Display a list of all available fields.

CORRELATION
Open the correlation rules and rulesets management page.

LIST RULESETS
List available rulesets.

IMPORT RULESET
Import a ruleset file. See Exporting and Importing Correlation Rules.

AGENTS
Display the nxlog agents management page.

ADMIN
USERS
Load the user management interface.

ROLES
Load the roles management interface.

CERTIFICATES
Display a list of certificates available in the built-in PKI.

SETTINGS
Display system-wide settings and personal preferences page.

LOGOUT
Log out of the NXLog Manager web application and terminate your session.

NOTE
Menu items are shown or hidden based on the current user’s configured roles. See Roles for more information
about access control in NXLog Manager.

The following chapters cover each of these components, which can be accessed from the menu.

128.3. Dashboard
On the first login, the following screen appears.

The dashboard can be customized and displays content accessible to the logged-in user. After clicking the Add
button, an empty dashboard item will appear:

The following item types can be selected with the combo box:

Agent list
The number of agents is displayed for each category:

• Online
• Offline
• Error
• Unmanaged

See the Agents chapter for more information about agent statuses.

Jobs summary
Will display a summary of scheduled jobs.

Certificate summary
Will display a summary of certificates, grouping them by the following categories:

• Expired

• Expiring in the next 10 days
• Revoked
• Valid

See the Certificates chapter for more information.

Agent chart
Will display one of these agent charts for an agent:

• Load average and memory usage


• Overall event count
• Event count of separate modules
• Module variables and statistical counters, when these are configured on the agent statistics page

After the required parameters are filled in, click Save to add the item to the dashboard. Click Cancel if you wish
to discard the dashboard item.

The header bar of the dashboard item can be clicked and dragged to move the item around.

The following items are on the header bar, from left to right:

Up arrow (▲)
Click to maximize the dashboard item.

Edit
Click to edit the dashboard item.


Remove
Click to remove the dashboard item.

Title
The title provided in the last edit of the dashboard item.

Down arrow (▼)


Click the down arrow in the top right corner to minimize the dashboard item.

Chapter 129. Fields
Log messages commonly contain important data such as user names, IP addresses, application names, and
more. An event is represented as a list of key-value pairs, or "fields". The name of the field is the key, and the field
data is the value. This metadata is sometimes referred to as event properties or message tags.

NXLog Manager comes with a set of predefined fields which are suitable for typical cases. These fields can also
be extended, and new fields created, to suit custom requirements. Fields in NXLog Manager are typed (the kind
of data permitted in a field’s value is pre-defined), which allows complex operations and efficient storage of
event log data.

The field list is kept in the configuration database. All of the major components used throughout NXLog Manager
depend on fields, including Patterns, Correlation and Agent configuration.

To list the available fields, click on the LIST FIELDS menu item under the PATTERNS menu. A list similar to the
following should appear:

The field properties will be explained shortly as we look at creating and modifying fields. To do this, click on
Create or Edit under the field list.

The field properties are as follows:

Name
The name of the field will be used to refer to the field from various places in NXLog Manager and NXLog.

Type
The following types can be chosen for a field:

• STRING
• INTEGER
• BINARY
• DATETIME
• IPV4ADDR
• IPV6ADDR
• IPADDR
• BOOLEAN

NOTE
Starting from version 6.0, NXLog Manager provides the new IPADDR type, which covers both IPv4 and IPv6
addresses. The IPV4ADDR and IPV6ADDR types are still supported for backward compatibility.

Persist
If this option is not enabled, the field value is available to the NXLog agent only for correlation and pattern matching. Fields should be persisted if their values are needed elsewhere.

Lookup
This special property only takes effect when the field is persistent and is a string type. The lookup property should be enabled for fields whose values are highly repetitive, such as user names, enumerations, and host names. This enables the storage engine to map each value to an integer, which yields significant compression and a performance boost.

Description
The user can store additional information about the field in the description. It is not used by NXLog Manager.
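Once defined (and persisted where needed), a field is referenced in NXLog configuration and in pattern or correlation conditions by its name with a leading $ sign. A minimal, illustrative sketch (the field name AccountName and the condition are examples only, not a prescribed configuration):

```
# Illustrative only: reference a field named AccountName in an Exec directive.
# Any field defined in NXLog Manager is addressed the same way, with a $ prefix.
Exec    if $AccountName == 'root' log_warning("event for user root");
```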

Chapter 130. Patterns
Patterns provide a way to extract important information (e.g. user names, IP addresses, URLs, etc.) from free-
form log messages.

Many sources, Syslog for example, generate log messages in an unstructured but human readable format,
typically a short sentence or sentence fragment. Consider the following message generated by the SSH server
when an authentication failure occurs:

Failed password for john from 127.0.0.1 port 1542 ssh2

To create a report about authentication failures, the username (john in the above example) needs to be
extracted. Patterns support simple string matching and also allow the use of regular expressions for this purpose. Moreover, patterns can leverage regular expressions in ways that go beyond simple string extraction.

• The matching executed against the field(s) can be an exact match or a regular expression.
• Patterns contain match criteria to be executed against one or more fields, matching the pattern only if all
fields match. This technique allows patterns to be used with structured logs as well.
• Patterns can extract data from strings using captured substrings and store these in separate fields.
• Patterns can modify the log by setting additional fields. This is useful for message classification.
• Patterns can contain test cases for validation.
• Patterns can be collected into Pattern Groups, greatly simplifying their application to specific sources.

Patterns are used by the NXLog agent. This makes it possible to distribute pattern matching tasks to the agents,
and receive pre-processed, ready-to-store logs instead of parsing all logs at the central log server—which can
yield a significant reduction in CPU load on the server.

For more information about the patterns used by the NXLog agent, please refer to the pm_pattern module
documentation in the NXLog Reference Manual.
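As a sketch of what this looks like on the agent side, a pm_pattern instance loads the pattern database and sits in a route between an input and an output. The file and instance names below are illustrative, not prescribed by the text:

```
<Processor pattern>
    Module       pm_pattern
    # Path to the pattern database pushed or exported by NXLog Manager (assumed name)
    PatternFile  %CONFDIR%/patterndb.xml
</Processor>

<Route r>
    Path    in => pattern => out
</Route>
```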

130.1. Pattern Groups


Pattern groups are used to collect together those patterns which are used to match log messages generated by a
particular application or log source. Some pattern groups are not applicable to specific log sources. With pattern
groups it is easy to exclude (or include) patterns which cannot match at all because the source would never
generate such log messages. For example, when there is no SSH service on a system, there is no need to match
patterns in the SSHd group against the logs coming from this system.

Pattern groups also serve an optimization purpose. A group can have optional match criteria: one or more fields can be specified using either an EXACT or a REGEXP match. The log message is first checked against these criteria; only if it matches are the patterns belonging to the group matched against the log message.

To create a pattern group, the following form needs to be filled out.

After form submission, the pattern group can be viewed:

In the above example the ssh patterns will only be checked against the log if the field SourceName matches the
string sshd. The SourceName field must be extracted from the Syslog message with a syslog parser prior to
running the logs through the pattern matcher.

130.2. Creating a Pattern


Patterns can be created directly by clicking on the CREATE PATTERN menu item. In this case an empty form must
be filled out.

Here, enter the basic pattern information. Make sure the Pattern Group is set.

Next, define at least one field and value to match. For example, a message field:

This can be made more generic as needed so that, for example, the pattern can extract the user name and the
destination IP address from the message:

The non-static parts of the pattern are replaced with regular expression constructs, (\S+) in the above example. Captured substrings are stored in the selected fields. In the above example, AccountName and DestinationIPv4Address are used to store the values extracted with (\S+).
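For the SSH message shown earlier, the generalized match value might look like the following sketch, where the first capture is stored in AccountName and the second in DestinationIPv4Address:

```
^Failed password for (\S+) from (\S+) port \d+ ssh2$
```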

If necessary, add more than one field to match against. The match type can be either an EXACT or a REGEXP match. If this is toggled to REGEXP, NXLog Manager will offer to escape special characters:

If the regular expression does not start with the caret (^), the regular expression engine will try to find an
occurrence anywhere in the subject string. This is a costly operation. Typically, the regular expression is intended
to match the start of the string, and for this reason the interface shows a hint:

NOTE
The regular expressions are compiled and executed by the NXLog engine using the PCRE library. The regular expression must be PCRE compatible in order to work.

The last block is for optional test cases:

This built-in testing interface is extremely useful for verifying the functionality of pattern definitions, without the
costly overhead of loading the pattern into the agent and running it against a set of logs.

After clicking the Calculate Fields button, the captured field values appear. Field values are populated with the
content of the log message used when the pattern was created.

NOTE
If the field values are not appearing or if the values are unexpected, closely review the regular expression(s) in use. The syntax of regular expressions is very compact and oversights are not uncommon.

130.3. Message Classification with Patterns


Patterns load values from captured substrings into fields, but they can also be used to create additional fields and populate them with values. This feature can be used for message classification, and to tag log messages with special values which can then be used later in the processing chain.

Event taxonomy fields allow events to be handled in a uniform manner, regardless of their source.

NXLog Manager comes with five special fields for this purpose. Their names all begin with Taxonomy. A dictionary
of permissible values for these fields is provided.

These fields are optional, however it is strongly recommended they be used. Custom fields, with their own
permissible values, can also be created.

If there is no need to classify the event with a Taxonomy field, click Delete to remove it.

130.4. Searching Patterns
The pattern list has a simple search input box in the upper right corner. It filters the list, showing only rows which contain the specified keyword.

There is a more powerful search interface which allows searching in any of the patterns' properties (fields, test
cases, etc). Click on the SEARCH PATTERN menu item under the PATTERN menu.

130.5. Exporting and Importing Patterns
NXLog Manager can export and import patterns in an XML format. This is the same format used by the NXLog
agent. To export a pattern or a pattern group, check its checkbox in the list and click Export. Import a pattern
database file by clicking on the IMPORT PATTERN menu item or the Import button under the pattern list.
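The exported file follows the pm_pattern XML database format. A trimmed sketch of what such a file can look like (the group, IDs, names, and values here are illustrative examples only):

```xml
<?xml version="1.0" encoding="UTF-8"?>
<patterndb>
  <group>
    <name>ssh</name>
    <id>1</id>
    <pattern>
      <id>42</id>
      <name>ssh auth failure</name>
      <matchfield>
        <name>Message</name>
        <type>REGEXP</type>
        <value>^Failed password for (\S+) from (\S+) port \d+ ssh2</value>
        <capturedfield>
          <name>AccountName</name>
          <type>STRING</type>
        </capturedfield>
        <capturedfield>
          <name>DestinationIPv4Address</name>
          <type>STRING</type>
        </capturedfield>
      </matchfield>
    </pattern>
  </group>
</patterndb>
```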

130.6. Using Patterns


Patterns are used and executed by the NXLog engine. Unlike other log analysis solutions which utilize a single
pattern matcher in the central engine, the architecture of NXLog Manager allows patterns to be used on the
agents as well.

To use the patterns in an NXLog agent, add a pm_pattern processor module and select the appropriate pattern
groups:

The patterns will be pushed to the NXLog agent after clicking Update config and they will take effect after a
restart. See the Agents chapter for more information about agent configuration details.

NOTE
Some patterns work with a set of fields and this requires some preprocessing (e.g. syslog parsing) in some cases. Instead of writing a regular expression to match a full Syslog line which includes the header (priority, timestamp, hostname etc.), it is a lot more efficient to write the regular expression to match the Message field (instead of the raw_event field) and have a syslog parser store the header information in separate fields before the pattern matching. These patterns will be usable when the same message is collected over a different protocol.
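The preprocessing mentioned above can be done with the xm_syslog extension. A minimal sketch (the port and instance names are illustrative assumptions):

```
<Extension syslog>
    Module  xm_syslog
</Extension>

<Input in>
    Module  im_udp
    Port    514
    # Populate SourceName, Message, etc. from the raw Syslog line
    # before the logs reach the pm_pattern module in the route.
    Exec    parse_syslog();
</Input>
```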

Chapter 131. Correlation
Event correlation is an important concept in log analysis. Each log message contains information about an event
which occurred at some point. There are cases when the occurrence (or absence) of one or more events must be
treated specially.

A trivial example is detecting three failed login attempts on a system. When this happens, the user will likely be locked out. If the log analysis system is capable of detecting such a situation, many tasks can be automated: there is no need to wait for the user to come asking for a new password.

Event correlation in NXLog Manager is architected similarly to the Pattern system. It is performed in real time by the NXLog agents, making it possible to do local event correlation on the client side (at the agent). This not only reduces load on the central server, but also allows the system to send alerts over another channel (e.g. SMS) even if the network is down and log messages cannot reach the central log server.

For more information about the correlation capabilities of NXLog Manager, please consult the NXLog Reference
Manual and see the documentation about the pm_evcorr module.

131.1. Correlation Rulesets


Correlation rules are grouped into rulesets. Because different correlation rules may apply depending on the agent or log source, rulesets ease the management of correlation rules. To view the list of correlation rulesets, click the CORRELATION menu item. The list of existing correlation rulesets will appear, as in the following screenshot:

To create a new ruleset, click the Add button.

131.2. Correlation Rules


Correlation rules check whether the conditions specified in the rule are satisfied and execute an action.
Correlation rules are evaluated linearly.

Clicking the name of a correlation ruleset will show a list of correlation rules within the ruleset:

NOTE
The order of the rules within the ruleset matters because they are evaluated by NXLog’s pm_evcorr module in the order they appear. To change the order of the rules, use the Up and Down buttons.

131.2.1. Creating Correlation Rules


To create a new correlation rule, click the Add button on the bottom of the list. The following dialog window will
appear:

Each correlation rule has a mandatory Name, Type, and Action parameter, and one or more type-specific parameters where the conditions can be specified. The following correlation rule types are available:

• Simple
• Suppressed
• Pair
• Absence
• Thresholded

Please consult the NXLog Reference Manual and see the documentation about these rule types provided by the
pm_evcorr module. There are two modes available to specify a condition.

Matched pattern is
This will generate a simple test to check whether the specified pattern matched. The generated NXLog config
will contain a similar snippet:

if $PatternID == 42 {\
  ACTION \
}

NOTE
For this to work, the pm_pattern module must be configured and must be in front of the pm_evcorr module in the route. The pm_pattern module is responsible for setting the PatternID field.

Expert
This field expects a statement (a boolean condition) which evaluates to TRUE or FALSE. The above expressed
in Expert form would look like the following:

$PatternID == 42

Using the language constructs provided by NXLog, it is possible to specify more complex conditions here, for
example:

($EventTime > now() - 10000) and ($PatternID == 42 or $PatternID == 142)

Example 710. Correlation rule for ssh bruteforce attack detection

In this example, a correlation rule is created which will detect SSH brute force attempts. The rule defines
this attempt as 5 login failures within a 20 second interval. In this example, only an internal warning
message is generated, but it is possible to trigger any other action such as executing an external script to
block the IP or send an email alert.

This correlation rule depends on a pattern which matches the SSH authentication failure events. See the
Creating patterns section on how to do this. Once the pattern is available in the database, the correlation
rule should be configured as shown on the following screenshot:
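For reference, a rule like the one described above corresponds roughly to a Thresholded block in the generated pm_evcorr configuration. The following is a sketch only; the PatternID value and the warning text are examples, and the actual generated configuration may differ:

```
<Processor evcorr>
    Module  pm_evcorr
    <Thresholded>
        # Matches the ssh authentication failure pattern (example ID)
        Condition   $PatternID == 42
        Threshold   5
        Interval    20
        Exec        log_warning("Possible SSH brute force attempt");
    </Thresholded>
</Processor>
```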

131.3. Exporting and Importing Correlation Rules


NXLog Manager can export and import correlation rules and rulesets in an XML format. To export a correlation
rule or a ruleset, select its checkbox in the list and click Export. You can import a correlation rule XML file by
clicking on the Import button under the ruleset list.

NOTE
Unlike patterns, the correlation rules used by NXLog are not in XML. Correlation rules exported from NXLog Manager cannot be used directly by NXLog, because the NXLog agent uses Apache-style configuration for the rules, which is part of (or included in) the pm_evcorr module configuration block in nxlog.conf.

131.4. Using Correlation Rules


Similar to patterns, correlation rules are also used and executed by the NXLog engine. Unlike other log analysis
solutions which utilize a single pattern correlation engine in the central log server, the architecture of NXLog
Manager allows correlation rules to be evaluated on the agents as well.

To use the correlation rules in an NXLog agent, add a pm_evcorr processor module and select the appropriate
correlation ruleset:

It is recommended to use only one ruleset per agent. The correlation rules are pushed to the NXLog agent by
clicking Update config and they take effect after a restart. See the Agents chapter for more information about
agent configuration details.

NOTE
In many cases correlation rules depend on patterns (and the PatternID field). For this reason a pm_pattern module should be in the processor chain before the pm_evcorr module.

Chapter 132. Agents
NXLog agents are used to collect and store event log data. This chapter discusses the GUI configuration and
management frontend provided by NXLog Manager. For more information about the NXLog agent, please refer
to the Agent-Based Collection chapter in the User Guide.

NXLog agent instances can be managed, monitored, and configured remotely over a secure channel. The
management component in NXLog Manager is called the agent manager. There are two operation modes:

• The NXLog agent initiates the connection and connects to the agent manager.
• The agent manager initiates the connection and connects to the NXLog agent.

Mutual X.509 certificate-based authentication is used over a trusted, secure TLS/SSL channel in order to
guarantee that only an authorized NXLog agent can connect to the agent manager. The agent manager queries
the status information of each NXLog agent every 60 seconds.

Each agent instance is provided with a special configuration file coming from the agent manager. The file name is
log4ensics.conf and it is located under the path /opt/nxlog/var/lib/nxlog/ on Linux and C:\Program
Files\nxlog\conf on Windows platforms.

This file contains a BASE64-encoded blob in the header that stores the configuration of the agent. NXLog
Manager can restore the configuration of the agent in case the agent configuration gets lost from the manager’s
database.

This file must not be modified manually or deleted. If the manager cannot read the blob, the following error message is generated:

Failed to unmarshal agent configuration

This message is recorded under the path /opt/nxlog-manager/log/nxlog-manager.err.

132.1. Managing Agents


To list the available NXLog agents, click AGENTS. A list similar to the following appears:

This list displays the following information:

Status
A coloured sphere shows the agent’s status:

Green - Online
The agent is connected and its latest status response is successful. The agent is functioning normally.

Grey - Offline
The agent is not connected to the agent manager. Check the NXLog agent’s LogFile for error diagnostics.

Red - Error
One or more modules which the agent is configured to use are not running or not configured correctly.
For network output modules, there is likely a connection issue and the module is unable to send. Check
the NXLog agent’s LogFile for error diagnostics.

Yellow - Unmanaged
The agent is configured to be Unmanaged, and it is not possible to administer it remotely.

Yellow - Untrusted
The agent is connected to the manager without its own certificate. An agent must be issued a valid,
unique certificate if it was installed without one.

Yellow - Forged
The agent certificate has a CN (common name) that does not match the reverse DNS of this agent’s IP
address. Certificates must be issued to each agent—they must not be copied or configured from another
agent.

NOTE
Once the agent configuration is updated centrally (by the application) or locally (on the agent side), changes must be deployed via the Update config command in order to apply the central configuration. If the configuration has been changed locally, a confirmation will be requested.

Agent name
The agent name is taken from the certificate subject name, so the same name must be used for the agent name as in the certificate subject. Click the agent name to load the Agent information page.

Template
The identifier of the template assigned to this agent. Agents inherit all template configuration settings. For
more information about templates, see the chapter on Templates.

NOTE
This column is not displayed by default. To enable or disable column visibility, click the round, grey Configuration button on the top left of the table, then check (or uncheck) the box by the name of the column.

Tags
The tags assigned to this agent. Tags may also be assigned to a template. For more information about
configuring tags for templates, see Tags section.

NOTE
This column is not displayed by default. To enable or disable column visibility, click the round, grey Configuration button on the top left of the table, then check (or uncheck) the box by the name of the column.

Version
The NXLog agent version number.

Host
The IP address of the remote host from which the NXLog agent is connected. Not available when the agent is
Offline or Unmanaged.

Started
Shows the time the NXLog agent last started. Not available when the agent is Offline or Unmanaged. This value
is set when the NXLog service is started or restarted, but it is not set when using the Reload button.

Load
The system load as reported by the NXLog agent’s host operating system. If this is not implemented on a platform (e.g. Microsoft Windows), Unknown will be displayed. A small graph displays the last 10 average values. This information is not available when the agent is Offline or Unmanaged.

NOTE
This value represents the system load of the host operating system, not the NXLog agent. Due to other resource intensive processes, this can be high even if the NXLog agent is idle.

Mem. usage
The amount of memory used by the NXLog agent. On some platforms, Unknown is shown if the information is
not available. A small graph displays the last 10 average values. This information is not available when the
agent is Offline or Unmanaged.

Received
The sum of log messages received by all input modules since the agent has been started. A small graph
displays the last 10 average values. This information is not available when the agent is Offline or Unmanaged.

Received today
The sum of log messages received by all input modules in the last 24 hours. A small graph displays the last 10
average values. This information is not available when the agent is Offline or Unmanaged.

Processing
Each NXLog agent module has a separate queue. This number shows the sum of messages in all modules'
queues. A small graph displays the last 10 average values. This information is not available when the agent is
Offline or Unmanaged.

Sent
The sum of log messages written or sent by all output modules since the agent has been started. A small
graph displays the last 10 average values. This information is not available when the agent is Offline or
Unmanaged.

NOTE
If there are two output modules writing or sending logs from a single input, the number under Sent will be double the value under Received.

Sent today
The sum of log messages written or sent by all output modules in the last 24 hours. A small graph displays
the last 10 average values. This information is not available when the agent is Offline or Unmanaged.

NOTE
If there are two output modules writing or sending logs from a single input, the number under Sent will be double the value under Received.

The information shown in the agent list is refreshed every 60 seconds or when Refresh status is clicked.

On the top left, the Filter agents button and the Show n entries drop-down menu are used to reduce the
number of items displayed.

Click Filter agents to display the following dialog:

The agent list can be filtered by three criteria:

• Agent Status
• Agent Name
• Template(s) assigned

Click Apply filter to refresh the agent list with only agents which match the filtering criteria. For example,
selecting ONLINE status will show the following:

When a filter is applied, click Clear filter to discard the applied filter and show all agents.

On the Filter Agents dialog, there is an option to save the current filter in the configuration database as an
Agents View. Click Create View to enter the view name:

The view name must be unique, and not contain any special characters or spaces. Saved views appear as tabs
next to the Agent templates tab. A newly created view is applied to the current list immediately:

At the bottom of the agent list is a row of actions used to manage agents.

NOTE
The NXLog process cannot be stopped or started from the NXLog Manager management interface.

Refresh status
Send a query to the agent to retrieve latest status information. At least one Online agent must be selected to
use this action.

Start
Start all stopped modules. At least one Online agent must be selected to use this action. The NXLog process
cannot be stopped or started from the NXLog Manager management interface.

NOTE
The xm_soapadmin module responsible for the agent manager connection is always running and is not affected by this action.

Stop
Stop all modules. At least one Online agent must be selected to use this action. The NXLog process cannot be
stopped or started from the NXLog Manager management interface.

NOTE
The xm_soapadmin module responsible for the agent manager connection is always running and is not affected by this action.

Export
Export agent configuration in XML text format. When activated on the selected agent, the export dialog appears; in this dialog, the manager allows separate parts of the configuration to be exported. When the export is finished, the browser downloads it.

Import
Import an agent configuration, typically one previously exported.

NOTE
There is the option to override and define new global configuration such as the new manager address.

When triggered, the browser redirects to import options (if global config has been overridden, this section is
skipped):

Similar to the "Clone" agent function, choose the XML file to import the new agent(s) configuration. When this
is done, the manager also allows separate parts of the configuration to be imported:

Update config
After agent settings are changed, use this action to push the new configuration to the agent. All configuration
related files, including pattern database files and certificates, will be pushed to the agent. At least one Online
agent must be selected to enable this action.

Reload
Click to stop the agent, shut down all modules, reload the configuration, then reinitialize and start them all again. This should be used after a new configuration is pushed to the agent in order for the new settings to take effect. At least one Online agent must be selected to enable this action.

NOTE
This is not a process/service level restart but rather a reload. The xm_soapadmin module responsible for the agent manager connection must always be running, so this module is not affected by this action. The NXLog process cannot be stopped and/or started from the NXLog Manager management interface.

Configure
Load the Agent configuration page. One and only one Online agent must be selected to enable this action.

Add
Add a new agent.

NOTE
An agent will appear in the list without a configuration after successfully connecting to the agent manager even if it does not exist in the agent list. It is possible to add a new agent by creating a certificate, deploying the installer and starting the NXLog service. The new agent entry should appear automatically.

Delete
Delete the agent. There is no confirmation dialog for agent deletion. Be careful using this action.

NOTE
The agent will reappear if it has a valid certificate and can successfully authenticate to the agent manager. Make sure to revoke the certificate and stop the NXLog service before you delete the entry with this button. If the NXLog service is not stopped and removed, it will continue to execute based on its configuration settings, including reconnecting to the agent manager.

Clone
Clone the agent. The cloned agent(s) will have all the modules and routes of the original. One and only one
Online agent must be selected to enable this action.

Download config
Downloads the agent configuration in a zip file to ease local deployment of agents. For each agent, the archive contains a folder, named after the agent, with all the necessary configuration files and certificates.

View log
View the log of an agent. By default, this is limited to the last 100K of the log. One and only one Online agent must be selected to enable this action.

Assign template
Assign a template to the selected agent(s). The selected agents' configuration will be replaced with the
configuration of the assigned template.

NOTE This button is only visible if there are existing templates.

Issue certificate
Issue a certificate for the selected agents. If the checkbox Update connected agents remains checked, the manager will issue the Update config command. At least one Online agent (which doesn’t have a certificate assigned) must be selected to enable this action.

Renew certificate
Renew the certificate for the selected agent(s). If the checkbox Update connected agents remains checked, the manager will also issue the Update config command. At least one Online agent must be selected to enable this action.

NOTE If a selected agent already has a valid certificate, it will be revoked.

132.1.1. Agent Information


The following agent information page is loaded when an Online agent is selected by clicking the agent’s name.

The page will show less information if the agent is not connected to the agent manager. The action buttons on
this page function similarly to those on the agent list page, discussed above.

If the agent is Online and some of its modules have variables or statistical counters, they will appear on this page
in a table.

132.1.1.1. Modules
Click the Modules tab to show detailed information about each module as shown in the following image.

This information is only available when the agent is Online. The table contains the following information.

Name
The name of the module instance.

Module
The type of loadable module which was used to create the module instance.

Type
The type of module:

• INPUT
• PROCESSOR
• OUTPUT

Extension module instances are not shown.

Status
The status of the module:

• STOPPED
• RUNNING
• PAUSED
• UNINITIALIZED

NOTE
The module may become PAUSED if it cannot send or forward the output. This is caused by the built-in flow control and is perfectly normal, unless the module stays in this status for a long period and the number of sent messages does not increase. You do not need to start the module when it is PAUSED; it will resume operations automatically.

Received
The number of log messages received or read.

Processing
The number of log messages in the module’s queue waiting to be processed.

Sent
The number of log messages written or sent by the module.

Dropped
The number of log messages dropped by the module. This is calculated from the values reported under Received and Sent.

132.1.1.2. Statistics
Click the Statistics tab to display several fully interactive graphs. There is a graph for each of the following
parameters:

• System load & memory usage.
• Total event count.
• Event counts for each module.

Optionally, additional graphs can be added for module variables and statistical counters by clicking the Add
chart button.

Select a module and fill in the name of the variable. Regular expressions are also supported for the name.

Select the graph’s interval from the following values displayed in the drop-down menu:

• Six hours
• One day
• One week
• One year

132.1.2. Agent Configuration
To load the agent configuration form, click the Configure button on the agent list page or the Configure tab at
the top of the agent page. The global configuration tab appears.

The list of parameters are explained below.

Agent name
Set this to the certificate subject name. It is filled out automatically when the agent connects and is added automatically.

Connection type
Unmanaged
Set the connection type to Unmanaged if you do not want to administer the agent remotely over a secure
connection.

Listen (accept agent manager connection)
The NXLog agent will listen on the IP address and port for incoming TLS/SSL connections. You must also configure the agent manager to initiate connections to the agents.

Connect to agent manager
The NXLog agent will initiate the connection to the agent manager.

Address
Either the address to which the agent should connect or the address to which the agent is listening,
depending on the Connection type setting.

Port
Either the port number to which the agent should connect or the port to which the agent is listening,
depending on the Connection type setting.

Certificate
The certificate to be presented by the NXLog agent during the mutual authentication phase when the
connection is established with the agent manager. The agent manager will check whether the agent
certificate has been signed with the CA configured on the Agent Manager settings tab.

Log level
The level of detail to use when sending internal messages to the logfile and the im_internal input module.

Log to file
Enable this to use a local nxlog.log file where NXLog agent internal events are written. This method is more
efficient and error resistant than using the im_internal module, and it also works with the DEBUG log level.

Verbatim config
Verbatim configuration text for this agent. This configuration will be placed in the log4ensics.conf file as is.
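The settings above end up in the generated log4ensics.conf. As a rough, illustrative sketch of the connection-related part for the "Connect to agent manager" mode (the instance name, address, port, and certificate file names are assumptions; the actual generated file may differ):

```
<Extension _admin>
    Module       xm_soapadmin
    # Address and port of the agent manager, per the Connection type settings
    Connect      manager.example.com
    Port         4041
    SocketType   SSL
    # Certificate presented during the mutual authentication phase
    CertFile     %CERTDIR%/agent-cert.pem
    CertKeyFile  %CERTDIR%/agent-key.pem
    CAFile       %CERTDIR%/agent-ca.pem
</Extension>
```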

The list of modules can be managed independently regardless of the route they belong to. The following
screenshot shows an example list of modules.

Add
Click Add to add a new module. The module configuration dialog will pop up.

Remove
To remove a module, click the checkbox after the module’s name. Modules which are already part of a route
cannot be removed.

Routes
Go to the Routes tab to remove modules from, or add modules to, a route. Modules which are not part of any route can only be removed from this list. Configuration will not be generated for modules which are not part of a route.

Copy
Click Copy to copy this module configuration to other agents. A popup will appear to select them. Click the
module’s name to modify its configuration.

To configure the flow of log data in the NXLog agent, click the Routes tab. A freshly created agent does not have
any routes. Click Add route to add a route.

Enter the name and select the priority. Data is processed in priority order among routes; routes with lower
priority values are processed first. This is only useful if you have multiple routes with different input sources.
Select default if you do not wish to assign a priority value.
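
As a sketch of what the manager generates for such a route (the instance names in_syslog and out_file and the
priority value are hypothetical), the agent's configuration might contain:

```
<Route example_route>
    # Lower Priority values are processed first
    Priority    1
    Path        in_syslog => out_file
</Route>
```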

After the route is added, you can now add modules to it. A route requires at least one input and one output
module. The following screenshot shows an example of a route with one module for each type.

Click the Add button inside the input/processor/output block to add a module instance. The module
configuration dialog will pop up. If there is already an existing module instance, you will be able to select it as
well. It is possible to add more module instances to each block. To remove a module, uncheck the checkbox after
its name. The module instance is only removed from the route. To fully delete it, click the Modules tab and
remove the module.

As with modules, an entire route can be copied to other agents. Click the Copy link on the top right of a route to
select one or more agents to copy to.

The last tab contains the generated NXLog configuration which will be pushed to the NXLog agent when Update
config is clicked, as shown in the following screenshot.

132.1.3. Module Configuration
When a new module instance is created, the following dialog window is shown.

132.1.3.1. Parameters
The module configuration dialog Parameters tab consists of two blocks: Common parameters and Module
specific parameters. The Common parameters are as follows:

Name
The name of the module instance.

Module
The loadable module which is used to create the module instance.

132.1.3.2. Expert
Click the Expert tab for advanced configuration.

The module configuration dialog Expert tab consists of:

Actions
The Actions text area can be used to input statements in NXLog’s configuration language. It is possible to add
multiple Action input widgets. Add each action with the Add action button. Click Verify to verify the
statement(s). The contents of the Action block are copied into the module’s Exec directive. Newline characters
will be replaced with the backslash escape character.
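
As an illustration (the module instance, file path, and statement below are hypothetical), a statement entered in
the Actions area ends up in the generated configuration as an Exec directive like this:

```
<Input in_file_example>
    Module  im_file
    File    '/var/log/app.log'
    # Contents of the Actions text area, copied into Exec
    Exec    if $raw_event =~ /DEBUG/ drop();
</Input>
```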

The statement entered in the example screenshot above is highlighted below in the generated NXLog
configuration.

Verbatim config
The following is generated into nxlog.conf from the above:

Module specific parameters are not discussed in this user manual. Please consult the NXLog Enterprise Edition
Reference Manual for more information about each module and its capabilities.

Chapter 133. Templates
NXLog templates automate the creation, configuration, tagging, and deployment of agents. This chapter
discusses the configuration and management front-end for templates provided by NXLog Manager.

133.1. Templates
Go to the NXLog Manager main menu and click Agents, then on the Agents page, click the Agent templates tab.
A list of templates is displayed including the following data fields:

133.1.1. Template Data


Template name
The unique name of the template. Click the name to load the Template configuration page.

Description
A detailed description of the template.

Connection address
The IPv4 address of the device running the NXLog agents associated with the template, configured for
connection with NXLog Manager. Not available when the template is Unmanaged.

Connection port
The connection port of the device running the NXLog agents associated with the template, configured for
connection with NXLog Manager. Not available when the template is Unmanaged.

Created
The date and time the template was added.

Last modified
The date and time the template was last edited.

133.1.2. Template Actions


At the bottom of the list of templates there is a menu of actions used to manage the selected template(s).

Add
Create a new template and open the editing dialog.

Export
Save the template configuration to an external file. Similar to Export agent configuration, this action exports
template configurations.

Import
Read a template configuration from an external file. Similar to Import agent configuration, this action imports
template configurations.

Delete
Delete the selected template(s).

NOTE  Templates must be unassigned from any agents using them before they can be deleted. If any agents are
still assigned, a confirmation dialog will appear.

Clone
Create an exact copy of the selected template.

Create agents
Create agents and automatically assign the selected template to them.

133.2. Template Configuration


As templates are used to group NXLog agent configurations, configuring a template is almost the same as
configuring an agent. The only difference is that no certificate settings are needed, as those are specific to the
agents themselves.

133.2.1. Tags
NXLog templates and agents can be managed by tags. Tags have role and user access permissions. To list the
tags for a template, click the Tags tab on the Template configuration page:

This list contains the following information:

133.2.1.1. Tag Data


Name
The unique name of a tag.

Description
A detailed description of the tag.

Permissions by Role
Shows the access permissions of each role allowed to manage NXLog agents.

Permissions by User
Shows the access permissions of each user allowed to manage NXLog agents.

133.2.1.2. Tag Actions


At the bottom of the list there is a menu of actions which can be used to manage the selected tag(s).

Add
Add a new tag.

Edit
Edit a tag.

Assign
Assign (or unassign) tags to this template.

To add a new tag to the system (and assign it to this template at the same time), click the Add button. An Add
tag dialog will appear:

Fill in the Name and optional Description for this tag. Each new tag is created with default access permissions, is
assigned to this template, and will appear in the list:

A tag can then be edited by selecting it and clicking the Edit button. The Edit tag dialog has two tabs—Tag and
Permissions:

If permissions need to be changed, click the Permissions tab, then by User:

After editing permissions, click Update permissions, then Save.

If tags need to be assigned to or unassigned from the current template, click the Assign button on the tag list
page. The following dialog will appear:

Select the tags needed from the multi-select box and Assign them.

Chapter 134. Agent Groups
NXLog agent groups are used to manage agents collectively by grouping them with agent tags. This chapter
discusses the GUI configuration and management front-end provided by NXLog Manager.

134.1. Agent Groups


To list the available NXLog agent groups, click the Agent groups tab on the AGENTS page. A list of agent groups
similar to the following appears:

This list contains the following information:

Group name
The group name is the unique name of an NXLog agent group. Clicking the name loads the agents which are
tagged by it.

Description
A detailed description of the NXLog agent group.

At the bottom of the list there is a row of actions which can be used to manage the groups.

134.2. Agent List in a Group


On the Agent groups tab, click a group name in the Group name column. An agent list similar to the following
appears:

In the agent list there are additional actions which can be used to manage this group.

Delete group
Delete this group/tag.

Add agents
Add agents to this group.

To add agents to this group, click the Add agents button. An Add agents dialog will appear:

Select the desired agents and click Add. The selected agents will be added to the list in this group.

Chapter 135. Certificates
NXLog Manager uses X.509 certificates for various security purposes and has a built-in PKI system to manage
them.

135.1. Listing Certificates


To list the available certificates, click CERTIFICATES under the ADMIN menu. A list similar to the following
appears.

The table contains the following information.

Name
The certificate subject name.

Type
The type is either CA or CERT.

Activation
The time and date after which the certificate is valid.

Expiration
The time and date before which the certificate is valid.

Status
This status is either VALID, REVOKED, or EXPIRED.

Private Key
This field indicates whether the private key pair of the certificate is available or not.

The certificate list shows entries in a hierarchical (tree) structure. Certificates (and sub-CAs) are rooted under
the CA which was used to sign them.

If the PKI system does not have any certificates, you will need to create a CA first.

135.2. Creating a CA
The certificate authority is used to issue and sign certificates and, subsequently, to verify the associated trust
relationships. To be able to create certificates, a CA is required. To create a CA cert, click Add new CA on the
certificate list page. The certificate creator dialog is displayed.

Some field values are pre-filled if certificate settings are already configured. After clicking Create, the new CA
appears.

135.3. Creating a Certificate

Some field values are pre-filled if the certificate settings are already configured. Fill in the name (certificate
subject) and expiry and select the certificate purpose. It is possible to customize the certificate purpose flags, but
this is not required if the certificate is only used within NXLog Manager with NXLog. After clicking Create, the
new certificate appears, displaying information similar to the following screenshot.

135.4. Exporting
To export a certificate, click Export on a certificate’s general information page or below the certificate list after
selecting one certificate. The following options appear.

In order to support external certificate tools and PKI systems, certificates can be exported in different formats.
The NXLog agents use PEM formatted X.509 certificates.

To export selected certificates from the list in PKCS#12 key store format, click the Export PKCS#12 button on
the certificates page. It will ask for an optional password to protect the PKCS#12 key store:

135.5. Importing
In order to support external certificate tools and PKI systems, certificates can be imported in different formats.

135.6. Revoking and Deleting Certificates


It is not possible to delete a certificate unless it is revoked. If the PKI system does not contain the certificate of an
NXLog agent and the presented certificate is authenticated, the connection will be accepted.

If the PKI system does contain the certificate of an NXLog agent and the certificate is found to be revoked, the
connection will be refused.

NOTE Deleting certificates is not recommended.

135.7. Renewing a Certificate


This operation will issue a new certificate. It can be used to replace an existing certificate which has already
expired, will shortly expire, or is revoked.

WARNING  Generally, it is not a good idea to have multiple valid certificates with the same subject. If a
certificate has been superseded by a new one (e.g. already pushed to the agent), make sure to revoke the
former.

135.8. Certificates Encryption


By default, NXLog Manager encrypts the private keys of certificates in the database to prevent them from being
stolen if the database is compromised. This is a two-phase encryption scheme with predefined algorithms:

• The first time the 'admin' user logs in, NXLog Manager generates a random encryption key of a predefined
length. This key is kept only in application memory, and certificate keys are encrypted with it.
• NXLog Manager then encrypts this key with the authorized user's password and saves it in the user settings in
the database. When NXLog Manager is restarted, an authorized user with a key must log in so that the key can
be decrypted with their password, making it available for encryption/decryption of certificate keys.
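
The two-phase scheme can be sketched in Python. This is a toy illustration only: the XOR "wrap" stands in for
whatever cipher NXLog Manager actually uses (not documented here), and PBKDF2 stands in for its
password-based key derivation.

```python
import hashlib
import secrets

def derive_kek(password, salt):
    # Key-encryption key derived from the authorized user's password
    return hashlib.pbkdf2_hmac("sha256", password, salt, 200_000)

def wrap(master_key, password, salt):
    # Toy "wrap": XOR the master key with the password-derived key.
    # A real implementation would use an authenticated cipher.
    kek = derive_kek(password, salt)
    return bytes(a ^ b for a, b in zip(master_key, kek))

def unwrap(wrapped, password, salt):
    return wrap(wrapped, password, salt)  # XOR is its own inverse

# Phase 1: a random in-memory master key encrypts certificate private keys
master_key = secrets.token_bytes(32)
# Phase 2: the master key itself is stored wrapped with the admin's password
salt = secrets.token_bytes(16)
stored = wrap(master_key, b"admin-password", salt)
# After a restart, an authorized login unlocks the master key again
assert unwrap(stored, b"admin-password", salt) == master_key
```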

NOTE  An authorized user is eligible for this key when they have the ROLE_ADMINISTRATOR or
ROLE_CERTIFICATE role. By default, the 'admin' user has ROLE_ADMINISTRATOR.

NOTE  When a new authorized user is added, the encryption key must also be encrypted with the new user's
password and saved in the database. Currently, this can only happen if the user logs in to the same application
session for which this key is already available (an authorized user with a stored key has logged in to unlock the
key). In the future, there will be enhancements in NXLog Manager to skip the log-in step for new authorized
users.

The Agent Manager settings provide a Do not encrypt agent manager's private key option. With this option
active, the administrator does not need to log in and decrypt the keys after NXLog Manager is restarted, so the
manager can (re)start unattended.

135.9. Reset Certificates and Encryption Key


If the encryption key cannot be decrypted due to a configuration problem or defect, NXLog Manager offers a
recovery option that resets all certificates with encrypted keys and resets the encryption key for them. When the
encryption key is not available after an authorized user logs in, a dialog similar to this one is displayed:

Resetting the certificates and encryption key is a last resort; perform this action from this dialog only when
there is no other option. It will update the certificates for the connected agents with renewed certificates. For
the best outcome, as many agents as possible should be connected, and the manager should already be running
with non-encrypted keys.

Any agents that are offline during this operation must be updated locally with the new certificates; they will not
be able to connect to the manager once it requires the new certificates for authentication.

There will be notifications for each change/failure in the UI and also in the logs.

Chapter 136. Settings
To configure the system components, click on the SETTINGS menu item under the ADMIN menu. Each tab is
discussed below.

136.1. Agent Manager


The agent manager is responsible for connecting to the NXLog agents or accepting connections to establish a
secure trusted channel which is used to manage and administer the agents remotely. Each NXLog agent is
queried by the agent manager every 60 seconds for status information.

The above screenshot shows the Agent manager tab where its parameters can be configured.

The agent manager can accept and initiate connections to the agents. Enable the Accept agent connections
checkbox to let the agent manager accept incoming connections from agents. Enable the Initiate connection to
the agents checkbox to let the agent manager initiate the connection.

NOTE  For these settings to work, the agents must be configured accordingly. See the agent's Connection type
configuration parameter.

These options have the following configuration parameters:

Listen address
When Accept agent connections is enabled, the IP address of the interface to listen on must be specified. Use
0.0.0.0 to listen on all interfaces.

Port
When Accept agent connections is enabled, the port number must be specified. This is the port the agent
manager will listen on for incoming connections.

CA
The CA configured here is used to verify the certificate presented by the NXLog agent during the TLS/SSL
handshake.

Certificate
The certificate configured here will be used to authenticate to the NXLog agent during the TLS/SSL
handshake.

For security reasons, certificate private keys in the database are stored in encrypted form. These are encrypted
with a master key which is accessible to users with ROLE_ADMINISTRATOR and/or ROLE_CERTIFICATE access
rights. The agent manager’s private key is required to be able to establish the trusted control connection with the
agents. Enable the "Don’t encrypt agent manager’s private key" option for the system to be able to operate in an
unattended mode. Otherwise, after a reboot/restart, the agent manager connection will only work once an
admin has successfully logged in.

Another security option is Subject Name Authorization. This refers to the check that happens when the TLS
connection is established with an agent: the manager looks at the CN in the certificate subject and checks
whether it matches the agent's reverse DNS name. This prevents malicious agents from connecting with a stolen
certificate. The Agent name setting that follows selects what data to use as the agent name. These two options
are useful on networks with DHCP-assigned addresses, where an agent may have different IP addresses.

There are 3 options:

Warn if untrusted.
When this option is selected, agent manager will accept agents which try to authorize with Subject Name
other than their reverse DNS, and will mark them as Forged.

Reject agent.
When this option is selected, agent manager will reject agents which try to authorize with Subject Name other
than their reverse DNS.

Disable.
When this option is selected, agent manager will ignore the mismatch between Subject Name and reverse
DNS for connected agents.
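
The three policies can be sketched as follows. This is an illustrative model, not NXLog Manager code; the
function name and the resolver callback are hypothetical.

```python
import socket

def authorize_subject(cert_cn, agent_ip, resolver=None, policy="warn"):
    """Return 'accept', 'forged', or 'reject' for a connecting agent."""
    if resolver is None:
        # Default: look up the agent's reverse DNS name (requires network access)
        resolver = lambda ip: socket.gethostbyaddr(ip)[0]
    try:
        rdns = resolver(agent_ip)
    except OSError:
        rdns = ""
    if cert_cn.lower() == rdns.lower():
        return "accept"          # subject name matches reverse DNS
    if policy == "disable":
        return "accept"          # mismatch ignored
    if policy == "warn":
        return "forged"          # accepted, but marked as Forged
    return "reject"              # policy == "reject"

# Illustration with a fake resolver (no network access needed):
fake = lambda ip: "agent1.example.com"
print(authorize_subject("agent1.example.com", "10.0.0.5", fake))  # accept
print(authorize_subject("evil.example.com", "10.0.0.5", fake))    # forged
```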

Due to Subject Name Authorization and the specifics of some networks like NAT, agent manager must have a
policy for names of connected agents which will appear on the Agent list. Agent manager supports 3 options for
Agent name:

Use reverse DNS name, else IP address.
When this option is selected, agent manager will try to resolve the Fully Qualified Domain Name of connected
agents. If resolving fails, it will use the agent’s IP address.

Use reverse DNS name, else certificate subject name.


When this option is selected, agent manager will try to resolve the Fully Qualified Domain Name of connected
agents. If resolving fails, it will use the agent certificate’s Subject Name.

Use certificate subject name.


When this option is selected, agent manager will always use the agent certificate’s Subject Name. This option
is the only reasonable choice for NAT networks.

NOTE  If one of the last two options is selected and the NXLog agent does not authorize with a valid client
certificate (but the manager demands a Subject Name), the agent will be rejected.

Per-agent rules for Subject Name Authorization and Agent name can be defined by clicking the Add override
button. The following dialog will appear:

There are three types of host definitions: an exact name or IP address, a name or IP address regular expression,
and an IP address range. An option exists to verify the host definition against the real host. The override rules
will appear as a list under the global manager rules.

Later on, these specific rules can be modified and/or deleted.

Click Save & Restart to apply the changes. The Status field will display the status of the agent manager.

136.2. Certificates
This form is divided into two sections—Certificate defaults and Certificates provider.

Certificate defaults
This form can be used to set common parameters which are used during certificate creation. Most of these
attributes are standard, though there are some that deserve specific mention:

Encrypt private keys


If this is enabled, certificate keys will be stored encrypted in the database. See certificates encryption for
more information.

IMPORTANT  By default, on a new system with a blank database, this setting is disabled. If this setting is
enabled, you must always have an available administrator who can unlock the keys after logging in. Losing the
encryption key means losing access to the private keys, making the certificates unusable. This feature must be
taken very seriously; take special care when enabling it.

Keystore type
This is the default keystore type that NXLog Manager will use when dealing with certificates. BKS is
considered more secure than default Java keystore JKS. If BKS does not have enough support for
certificates such as Elliptic Curve Certificates, there is an option to change the keystore type to JCEKS
instead.

NOTE  NXLog Manager uses RSA encryption by default until another type of certificate is used. For example, if
an EC certificate is imported into the system, EC encryption is used.

Signature hash algorithm


SHA256 is the default. The hash algorithm can be changed.

Key size
2048 is the default key size. It is recommended that a longer key be used. Currently, 3072 is considered safe
until the year 2030 with existing hardware architectures. A length of 4096 is practically unbreakable.

Certificates provider
The Certificates provider option makes it possible to use a PKCS11 compliant backend to store certificates
and private keys instead of using the default configuration database. The PKCS11 API is implemented by most
smart cards and HSM devices, which can be used to securely store private keys.

136.3. Mail
To be able to send notification emails, an SMTP server is required. The Mail tab provides a form where the SMTP
server settings can be specified.

136.4. Config Backup


The full NXLog Manager configuration can be backed up to an encrypted file. The configuration can be restored
using a backup file on the same form. This configuration backup can be scheduled to make it run automatically
at a specific time. The system will send an email notification if an email address is provided.

136.5. License
The License tab provides a form to upload and show the license file and license details.

If the license is invalid or expired, a warning will be displayed as shown in the following image.

136.6. User Settings
This form is divided into two sections—Settings and Change password:

Settings
The logged-in user can change their name, email address, user interface language, and theme. The email
address will be used for system notifications.

Change password
This section allows the logged-in user to change their password.

Chapter 137. Users, Roles, and Access Control
NXLog Manager’s user and role management system allows administrators to grant access to functions and
resources in a flexible and customizable framework.

137.1. Users
To access user management, from the main menu go to ADMIN, then click USERS. The Manage Users page is
displayed.

The default installation creates an admin user.

To add a user, click Add User at the bottom left of the page, as shown below.

The Add User dialog appears. Enter the new user’s details and credentials. You can also enable the user, assign
roles to the user, or toggle the user’s LDAP status (see also LDAP and LDAPS below).

These settings can be edited after adding the user.

Click Add. The new user appears in the Users list on the Manage Users page.

NOTE  If the user will manage certificates, it is recommended that either ROLE_ADMINISTRATOR or
ROLE_CERTIFICATE be assigned to the user immediately. This ensures the new user will receive the encryption
key needed to encrypt and decrypt certificate private keys (certificates encryption), if this encryption is enabled
(encryption setting).

NOTE  If ROLE_ADMINISTRATOR or ROLE_CERTIFICATE is assigned, the application requires the user's
password in order to generate the corresponding certificates encryption key. This is not required for LDAP
users.

To change the user’s information and role assignments, select the user from the Users list and click Edit. The
User Edit dialog appears.

Edit the user information and assigned roles, then click Save.

NOTE  By default, all roles have read-write permissions. To restrict certain roles to read-only, click the selected
role name. Notice that the marker after the role name toggles between RW (read/write access) and RO
(read-only access).

137.2. Roles
To access role management, from the main menu go to ADMIN, then click ROLES. The Manage Roles page is
displayed.

The default installation creates a set of built-in (read-only) roles. They are listed in the Roles pane on the left of
the Manage Roles page.

137.2.1. Built-in User Roles
Built-in roles provide a solid basis for most user management scenarios. Each role grants the user access to the
functionality described. Built-in roles may not be modified or deleted.

ROLE_ADMINISTRATOR
The user can access and execute all administrative functions.

ROLE_FIELD
The user can access and execute all field administration functions.

ROLE_PATTERN
The user can access and execute all PATTERNS functions.

ROLE_CORRELATION
The user can access and execute all CORRELATION rule functions.

ROLE_AGENT
The user can access and execute all AGENTS functions.

ROLE_CERTIFICATE
The user can access and execute all CERTIFICATES functions.

ROLE_READONLY
This is a special role which denies any modification to the system by the user.

Additional roles for more sophisticated user management scenarios are easily created. Click Add Role at the
bottom left of the Manage Roles page. The Add Role dialog appears.

Enter the new role’s name, then click Submit. The new role’s name appears in the list on the Manage Roles
page.

137.2.2. LDAP and LDAPS


LDAP and LDAPS (LDAP over SSL) are supported.

137.2.2.1. LDAP Configuration


To use LDAP, update the jetty environment configuration found in /opt/nxlog-manager/conf/jetty-env.xml
and then restart nxlog-manager.

• Init-style services: service nxlog-manager restart

• Systemd-style services: systemctl restart nxlog-manager

• launchd on macOS: launchctl restart nxlog-manager

Review the file to locate these default LDAP attribute settings.

jetty-env.xml (default attributes for LDAP)


<Set name="ldapEnabled">false</Set>
<Set name="ldapServerURL">ldap://hostname.nxlog.org/dc=nxlog,dc=org</Set>
<Set name="ldapUserSearchBase">ou=nxlogAccounts</Set>
<Set name="ldapUserSearchFilter">(uid={0})</Set>
<Set name="ldapUserDn"></Set>
<Set name="ldapPassword"></Set>

Edit the values to match those in your environment.

Below is an example configuration from a working Active Directory setup.

jetty-env.xml (fragment 1)
<Set name="ldapEnabled">true</Set>
<Set name="ldapServerURL">ldap://192.168.1.10/dc=nxlog,dc=org</Set>
<Set name="ldapUserSearchBase">cn=users</Set>
<Set name="ldapUserSearchFilter">(sAMAccountName={0})</Set>
<Set name="ldapUserDn">nxlog</Set>
<Set name="ldapPassword">PASSWORD</Set>

Below is another example showing how to configure an additional filter for the search function, in this case using
nested groups. This example is also from a working Active Directory setup, which you can tell from the use of
sAMAccountName for user search settings in this and the previous example.

jetty-env.xml (fragment 2)
<Set name="ldapUserSearchFilter">
  (&(sAMAccountName={0})(memberOf:1.2.840.113556.1.4.1941:=CN=NXLog_Admins,OU=Admin
Groups,OU=Level1,dc=domain,dc=local))
</Set>

137.2.2.2. LDAPS Configuration
To use LDAP over SSL, your LDAPS certificate trust store must be imported into the JRE’s key store, using the
keytool command:

keytool -keystore <PATH_TO_JRE>/lib/security/cacerts -import -alias \
  certificate -file <PATH_TO_CERTIFICATE>/certificate.cer

NOTE  After updating the key store, ensure the protocol in jetty-env.xml is changed from ldap:// to ldaps://.

NOTE  For troubleshooting the LDAP configuration, review /opt/nxlog-manager/log/nxlog-manager.log.

137.3. Audit Trail


NXLog Manager retains a chronological record of all events (processed and internal) in what is known as an audit
trail.

To access the audit trail, from the main menu go to ADMIN, then click AUDIT TRAIL. The Audit Trail page is
displayed.

The page presents a table of events in chronological order. Each row is an event and each column is a field
corresponding to the event. The fields include the event date and type, username, manager address, user
address, and details about the event.

Click any of the column headers to sort the events by that field’s values, in ascending order. Click the column
header again to toggle the sort order between descending and ascending.

To toggle display of event details, click the plus (+) in the details field in the corresponding event row.

Typically, there are a large number of event entries. To filter the events list, go to the top of the Audit Trail page
and click Filter audit trail. The Filter Audit Trail dialog appears:

Audit events can be filtered by the following criteria: the event type, the time range in which the event occurred,
and the username associated with the event.

Click Apply filter to apply the filtering criteria to the event list. For example, selecting DELETE event type and
applying the filter criteria displays a list similar to the following:

To discard the applied filter criteria, click Clear filter (at the top of the page). The unfiltered list of all audit events
is displayed.

Chapter 138. RESTful Web Services
NXLog Manager provides a REpresentational State Transfer (REST) interface, or RESTful web service API, that
makes it possible to access agent information or configure the system without using the UI.

The base URL to access the REST API is http://hostname:port/nxlog-manager/mvc/restservice. Depending


on the service, either GET, POST, PUT or DELETE requests should be used. The API responses are in XML or JSON.
You will need to send REST_USER and REST_PASSWORD HTTP headers in order to authenticate the requests.

NXLog Manager is distributed with an embedded documentation of its REST API with detailed specification of all
supported RESTful services. Once the NXLog Manager instance is up and running, the documentation is available
at http://hostname:port/nxlog-manager/swagger-ui.html (for viewing in a web browser). If you want to get
this documentation programmatically, use the http://hostname:port/nxlog-manager/v2/api-docs endpoint
that returns information about NXLog Manager RESTful services as raw JSON.

Using wget to Pull appinfo with a GET Request


$ wget -q --header "REST_USER: $USER" --header "REST_PASSWORD: $PASS" -O - \
  "http://$HOST:$PORT/nxlog-manager/mvc/restservice/appinfo"
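
The same request can be sketched in Python using only the standard library. The host, port, and credentials
below are placeholders; note that urllib normalizes header names (REST_USER is stored as Rest_user), which
the server is assumed to treat case-insensitively.

```python
import urllib.request

def appinfo_request(host, port, user, password):
    # Build an authenticated GET request for the appinfo service
    url = f"http://{host}:{port}/nxlog-manager/mvc/restservice/appinfo"
    headers = {"REST_USER": user, "REST_PASSWORD": password}
    return urllib.request.Request(url, headers=headers)

req = appinfo_request("manager.example.com", 8080, "admin", "secret")
# urllib.request.urlopen(req) would perform the actual call
```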

NOTE Throughout this chapter the base URL is substituted with [B_URL].

The following services are available:

• agentmanager: Verify that NXLog Manager is up and running (GET).


• appinfo: Get information about the NXLog Manager (GET).
• agentinfo: Get information about the NXLog Agents (GET).
• addagent: Add a new Agent (POST).
• modifyagent: Modify an existing Agent (PUT).
• deleteagent: Delete an existing Agent (DELETE).
• certificateinfo: Retrieve certificate information (GET).
• createfield: Create NXLog fields (POST).

138.1. agentmanager
This service is useful to verify the NXLog Manager is up and running. This is a GET request with URL
[B_URL]/agentmanager and no additional parameters. This service can also be used if the Don’t encrypt agent
manager’s private key setting is not enabled on the Settings tab and the NXLog Manager service has been
restarted (or after a reboot). A REST call of a user who has ROLE_ADMINISTRATOR and/or ROLE_CERTIFICATE
access rights can decrypt the master key, enabling the agent manager to establish the trusted control connection
with the agents.

138.2. appinfo
This service provides information about a running NXLog Manager. This is a GET request with URL
[B_URL]/appinfo and no additional parameters. This service provides the uptime, license state and expiration
date, version, and revision.

Sample Output
<?xml version='1.0' encoding='UTF-8'?>
<result servicename="appinfo">
  <values>
  <applicationinfo>
  <nxmUptime>17143265</nxmUptime>
  <licenseState>LICENSED/Expired</licenseState>
  <licenseExpireDate>2011-12-30 22:00:00.0 UTC</licenseExpireDate>
  <appVersion>5.0</appVersion>
  <appRevisionNumber>4895</appRevisionNumber>
  </applicationinfo>
  </values>
</result>
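
A response like the one above can be consumed with Python's standard xml.etree module; the sketch below
embeds an abbreviated copy of the sample response for illustration:

```python
import xml.etree.ElementTree as ET

# Abbreviated copy of the appinfo response shown above
SAMPLE = """<result servicename="appinfo">
  <values>
    <applicationinfo>
      <nxmUptime>17143265</nxmUptime>
      <licenseState>LICENSED/Expired</licenseState>
      <appVersion>5.0</appVersion>
      <appRevisionNumber>4895</appRevisionNumber>
    </applicationinfo>
  </values>
</result>"""

root = ET.fromstring(SAMPLE)
info = root.find("./values/applicationinfo")
version = info.findtext("appVersion")
uptime_s = int(info.findtext("nxmUptime"))
print(version, uptime_s)  # -> 5.0 17143265
```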

138.3. agentinfo
This service provides information about NXLog Agents registered with the NXLog Manager. This is a GET request
with URL [B_URL]/agentinfo that can take additional parameters. The response can be filtered by the name or
the state of the agent with the agentname and agentstate options. These two parameters cannot be combined
with each other, unlike the third parameter, agentwithmodules, which also includes module information with the
agent information. For example, to get information for both the agents and the modules for all agents with state
ONLINE, the following REST call can be used:
[B_URL]/agentinfo?agentstate=ONLINE&agentwithmodules=true. Refer to the Agents chapter for more
information.

Sample Output
<?xml version='1.0' encoding='UTF-8'?>
<result servicename="agentinfo">
  <values>
  <agent>
  <name>192.168.122.1</name>
  <version>3.99.2866</version>
  <status>ONLINE</status>
  <load>0.16</load>
  <address>192.168.122.1</address>
  <started>2017-12-15 16:36:23.974 UTC</started>
  <memUsage>7442432.0</memUsage>
  <received>2</received>
  <processing>0</processing>
  <sent>2</sent>
  <sysinfo>OS: Linux, Hostname: voyager, Release: 4.4.0-103-generic, Version: #126-Ubuntu SMP
Mon Dec 4 16:23:28 UTC 2017, Arch: x86_64, 4 CPU(s), 15.7Gb memory</sysinfo>
  <modules>
  <module>
  <name>in_int</name>
  <module>im_internal</module>
  <type>INPUT</type>
  <isRunning>true</isRunning>
  <received>2</received> <processing>0</processing>
  <sent>2</sent>
  <dropped>0</dropped>
  <status>RUNNING</status>
  </module>
  <module>
  <name>null_out</name>
  <module>om_null</module>
  <type>OUTPUT</type>
  <isRunning>true</isRunning>
  <received>2</received>
  <processing>0</processing>
  <sent>2</sent>
  <dropped>0</dropped>
  <status>RUNNING</status>
  </module>
  </modules>
  </agent>
  </values>
</result>
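Building the filtered request URL can be sketched with Python's standard library (the base URL below is a hypothetical placeholder standing in for [B_URL], and agentinfo_url is an illustrative helper):

```python
from urllib.parse import urlencode

# Hypothetical base URL standing in for [B_URL]
BASE_URL = "https://manager.example.com/nxlog-manager/rest"

def agentinfo_url(base_url, **params):
    """Build an agentinfo request URL from optional filter parameters."""
    query = urlencode(params)
    return base_url + "/agentinfo" + ("?" + query if query else "")

url = agentinfo_url(BASE_URL, agentstate="ONLINE", agentwithmodules="true")
```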

138.4. addagent
This service adds a new NXLog Agent to the list of existing Agents. This is a POST request with URL
[B_URL]/addagent that can take several additional parameters. The only mandatory parameter is agentname,
which is the name for the new agent. The optional parameter connectionmode can be used to change the
connection type of the Agent, from the default CONNECT_TO, to either UNMANAGED or LISTEN_FROM. The
connectionport parameter can be used to change the default port of the manager from 4041; this parameter
can only be used for managed connection types. The connectionaddress parameter can be used to set the IP
address the manager will either CONNECT_TO or LISTEN_FROM; the default value is localhost. The loglevel
parameter can be used to set the log level; values can be DEBUG, INFO, WARNING, ERROR, or CRITICAL. The
logtofiled parameter is used to enable the agent to use the local nxlog.log file. To create agent clones, the
agentname parameter can be specified more than once with unique agent names. Refer to the Agents chapter
for more information.

To create an Agent template instead of an Agent, include the agenttemplate parameter as
agenttemplate=true. If multiple agent names are specified when creating a template, the first one will be the
name of the template and the rest will be agents based on this template. Refer to the Templates chapter for
more information.

Creating a new Agent, for example, can be done with this REST call:
[B_URL]/addagent?agentname=Justatest&connectionmode=LISTEN_FROM. This will return the following XML
message that includes the Agent configuration in base64 encoded format.

Sample Output
<?xml version='1.0' encoding='UTF-8'?>
<result servicename="addagent">
  <values>
  <addagent>
  <configuration># PD94bWwgdmVyc2lvbj0iMS4wIiBlbmNvZGluZz0iVVRGLTgiPz4KPGFnZW50PgogICAgPG5hbWU+
# SnVzdGF0ZXN0PC9uYW1lPgogICAgPG5zMTpnbG9iYWwtY29uZmlnIHhtbG5zOm5zMT0iaHR0cDov
# L2Nhc3Rvci5leG9sYWIub3JnLyI+CiAgICAgICAgPGNlcnQtaWQ+MTg8L2NlcnQtaWQ+CiAgICAg
# ICAgPGxvZy1sZXZlbCB4bWxuczp4c2k9Imh0dHA6Ly93d3cudzMub3JnLzIwMDEvWE1MU2NoZW1h
# LWluc3RhbmNlIgogICAgICAgICAgICB4bWxuczpqYXZhPSJodHRwOi8vamF2YS5zdW4uY29tIiB4
# c2k6dHlwZT0iamF2YTpqYXZhLmxhbmcuU3RyaW5nIj5JTkZPPC9sb2ctbGV2ZWw+CiAgICAgICAg
# PGlzLWxvZy10by1maWxlPnRydWU8L2lzLWxvZy10by1maWxlPgogICAgICAgIDxjb25uZWN0aW9u
# LW1vZGUKICAgICAgICAgICAgeG1sbnM6eHNpPSJodHRwOi8vd3d3LnczLm9yZy8yMDAxL1hNTFNj
# aGVtYS1pbnN0YW5jZSIKICAgICAgICAgICAgeG1sbnM6amF2YT0iaHR0cDovL2phdmEuc3VuLmNv
# bSIgeHNpOnR5cGU9ImphdmE6amF2YS5sYW5nLlN0cmluZyI+TElTVEVOX0ZST008L2Nvbm5lY3Rp
# b24tbW9kZT4KICAgICAgICA8Y29ubmVjdGlvbi1wb3J0PjA8L2Nvbm5lY3Rpb24tcG9ydD4KICAg
# IDwvbnMxOmdsb2JhbC1jb25maWc+CjwvYWdlbnQ+Cg==

</configuration>
  </addagent>
  </values>
</result>
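The base64 payload in the <configuration> element can be decoded with Python's standard library. The sketch below is illustrative (decode_agent_config is our name; it also drops the leading "# " continuation markers shown in the sample):

```python
import base64

def decode_agent_config(lines):
    """Join the <configuration> lines (dropping leading '# ' markers)
    and decode the base64 payload to the agent XML."""
    joined = "".join(line.lstrip("# ").strip() for line in lines)
    return base64.b64decode(joined).decode("utf-8")

# Short verifiable sample: "PD94bWw=" is the base64 form of "<?xml"
snippet = decode_agent_config(["PD94", "bWw="])   # "<?xml"
```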

138.5. modifyagent
This service modifies the configuration of an existing Agent. This is a PUT request with URL
[B_URL]/modifyagent. This service has the same parameters as the addagent service, except for the
agenttemplate parameter.

138.6. deleteagent
This service deletes an existing Agent. This is a DELETE request with URL [B_URL]/deleteagent. The only
parameter required for this service is the agentname parameter.

138.7. certificateinfo
This safe service can retrieve certificate information from the NXLog Manager. This is a GET request with URL
[B_URL]/certificateinfo. Without any parameters the service will list all certificate information. Parameter
expirein can be used to list only certificates that will expire in the given number of days.

As an example, this call will list certificates expiring in one month: [B_URL]/certificateinfo?expirein=30. If
no certificates are expiring in that time period, an empty result is returned.

Sample Output
<?xml version='1.0' encoding='UTF-8'?>
<result servicename="certificateinfo">
  <values>
  <ok>
  <message>The result is empty!</message>
  </ok>
  </values>
</result>
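The date arithmetic behind the expirein filter can be reproduced client-side. A minimal sketch (expires_within is an illustrative helper, not part of the service):

```python
from datetime import datetime, timedelta

def expires_within(expire_date, days, now=None):
    """True if the certificate expiry date falls within the next `days` days."""
    now = now or datetime.utcnow()
    return now <= expire_date <= now + timedelta(days=days)

# Mirrors [B_URL]/certificateinfo?expirein=30 for a fixed reference date
ref = datetime(2018, 1, 1)
soon = expires_within(datetime(2018, 1, 20), 30, now=ref)    # True
later = expires_within(datetime(2018, 3, 1), 30, now=ref)    # False
```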

138.8. createfield
This service will create fields in NXLog Manager. This is a POST request with URL [B_URL]/createfield and
there are several parameters. The parameter name is the name of the field and must be a unique identifier. The
parameter type is the field type and must be one of the following types: STRING, INTEGER, BINARY, IPADDR,
BOOLEAN, or DATETIME. The parameter description is a short description of the field. The parameters persist
and lookup can be TRUE or FALSE. For more information, see the Fields chapter.

The following REST call will create a TEST field of type STRING with both persist and lookup enabled.

[B_URL]/createfield?name=TEST&type=STRING&description=Just a string&persist=true&lookup=true

Sample Output
<?xml version='1.0' encoding='UTF-8'?>
<result servicename="createfield">
  <values>
  <ok>
  <message>OK</message>
  </ok>
  </values>
</result>
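Because the description value contains spaces, the query string must be URL-encoded when the call is made programmatically. A Python sketch (the [B_URL] placeholder is kept from the text above):

```python
from urllib.parse import urlencode

# Parameters for the example call in the text
params = {
    "name": "TEST",
    "type": "STRING",
    "description": "Just a string",
    "persist": "true",
    "lookup": "true",
}

query = urlencode(params)          # spaces become '+', e.g. Just+a+string
url = "[B_URL]/createfield?" + query
```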

NXLog Add-Ons
Various add-ons are available for NXLog, which provide specialized integration with various software and
services.

Chapter 139. Amazon S3
This add-on can be downloaded from the nxlog-public/contrib repository in accordance with the license and
terms specified there.

NXLog can both receive events from and send events to Amazon S3 cloud storage. The NXLog Python modules
for input and output (im_python and om_python) are used for this, as well as Boto3, the AWS SDK for Python. For
more information about Boto3, see AWS SDK for Python (Boto3) on Amazon AWS.

139.1. Setting Up Boto3


1. Boto3 can be installed with pip or the system package manager.
◦ pip: pip install boto3

◦ APT on a Debian-based distribution: apt-get install python-boto3

◦ Yum on a Red Hat-based distribution: yum install python2-boto3

NOTE The python2-boto3 package requires the installation of the EPEL repository.

2. Make sure an AWS service account has been created.


3. Set the default region and credentials in ~/.aws/. This can be done interactively, if the AWS CLI is installed.
Or, edit the files shown below. Credentials for the AWS account can be found in the IAM Console. A new user
can be created, or an existing user can be used. Go to "manage access keys" and generate a new set of keys.
More information about the initial setup and the credentials can be found in the Boto3 Quickstart and
Credentials documents.

~/.aws/config
[default]
region=eu-central-1

~/.aws/credentials
[default]
aws_access_key_id = YOUR_ACCESS_KEY
aws_secret_access_key = YOUR_SECRET_KEY

NOTE The region and credential configuration can also be hardcoded in the scripts, but this is not recommended.

139.2. AWS S3 Buckets, Objects, Keys, and Structure


Amazon S3 stores objects inside containers called buckets. There is a finite number of buckets available to the
user and an infinite number of objects can be stored. More general information about Amazon S3 can be found
at Getting Started with Amazon Simple Storage Service on Amazon AWS.

Both the input and output Python scripts interact with a single bucket on Amazon S3. The scripts will not create,
delete, or alter the bucket or any of its properties, permissions, or management options. It is the responsibility of
the user to create the bucket, provide the appropriate permissions (ACL), and further configure any lifecycle,
replication, encryption, or other options. Similarly, the scripts do not alter the storage class of the objects stored
or any other properties or permissions.

We selected a schema where we store events in a single bucket and each object has a key that references the
server (or service) name, the date, and the event received time. Though Amazon S3 uses a flat structure to store
objects, objects with similar key prefixes are grouped together, resembling the structure of a file system. The
following is a visual representation of the naming scheme used. Note that the key name at the deepest level
represents a time; however, Amazon S3 treats the colon (:) as a special character, so to avoid escaping we
substitute the dot (.) character for it.

• MYBUCKET/
◦ SERVER01/
▪ 2018-05-17/
▪ 12.36.34.1
▪ 12.36.35.1
▪ 2018-05-18/
▪ 10.46.34.1
▪ 10.46.35.1
▪ 10.46.35.2
▪ 10.46.36.1
◦ SERVER02/
▪ 2018-05-16/
▪ 14.23.12.1
▪ 2018-05-17/
▪ 17.03.52.1
▪ 17.03.52.2
▪ 17.03.52.3
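The key naming scheme above can be sketched in Python (the object_key helper is illustrative, not part of the add-on scripts):

```python
from datetime import datetime

def object_key(server, received, counter):
    """Build an S3 object key like SERVER01/2018-05-17/12.36.34.1
    (the time uses '.' instead of ':' because the colon is special in S3)."""
    return "{}/{}/{}.{}".format(
        server,
        received.strftime("%Y-%m-%d"),
        received.strftime("%H.%M.%S"),
        counter,
    )

key = object_key("SERVER01", datetime(2018, 5, 17, 12, 36, 34), 1)
# key == "SERVER01/2018-05-17/12.36.34.1"
```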

139.3. Sending Events to S3


Events can be sent to Amazon S3 cloud object storage as follows.

Events are stored in the Amazon S3 bucket with object key names composed of the server name, the date in
YYYY-MM-DD format, the time in HH.MM.SS format, and a counter (since multiple events can be received during
the same second).

1. Copy the s3_write.py script to a location that is accessible by NXLog.

2. Edit the BUCKET and SERVER variables in the code.

3. Configure NXLog with an om_python instance.

Example 711. Sending Events From File to S3

This configuration reads raw events from a file with im_file and uses om_python to forward them, without
any additional processing, to the configured S3 storage.

nxlog.conf
 1 <Input file>
 2 Module im_file
 3 File "input.log"
 4 # These may be helpful for testing
 5 SavePos FALSE
 6 ReadFromLast FALSE
 7 </Input>
 8
 9 <Output s3>
10 Module om_python
11 PythonCode s3_write.py
12 </Output>
13
14 <Route file_to_s3>
15 Path file => s3
16 </Route>

139.4. Retrieving Events From S3


Events can be retrieved from Amazon S3 cloud object storage as follows.

The script keeps track of the last object retrieved from Amazon S3 by means of a file called lastkey.log, which
is stored locally. Even in the event of an abnormal termination, the script will continue from where it stopped.
The lastkey.log file can be deleted to reset that behavior (or edited if necessary).
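The state handling can be sketched as follows (save_last_key and load_last_key are illustrative names under our assumptions, not the actual functions in s3_read.py):

```python
import os
import tempfile

def save_last_key(key, path):
    """Persist the key of the last object retrieved (what lastkey.log holds)."""
    with open(path, "w") as f:
        f.write(key)

def load_last_key(path):
    """Return the last processed key, or None on a fresh start
    (i.e. the state file was deleted or never created)."""
    if not os.path.exists(path):
        return None
    with open(path) as f:
        return f.read().strip()

state = os.path.join(tempfile.mkdtemp(), "lastkey.log")
resume_from = load_last_key(state)          # None: start from the beginning
save_last_key("SERVER01/2018-05-17/12.36.34.1", state)
```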

1. Copy the s3_read.py script to a location that is accessible by NXLog.

2. Edit the BUCKET, SERVER, and POLL_INTERVAL variables in the code. The POLL_INTERVAL is the time the script
will wait before checking again for new events. The MAXKEYS variable should be fine in all cases with the
default value of 1000 keys.
3. Configure NXLog with an im_python instance.

Example 712. Reading Events From S3 and Saving to File

This configuration collects events from the configured S3 storage with im_python and writes the raw events
to file with om_file (without performing any additional processing).

nxlog.conf
 1 <Input s3>
 2 Module im_python
 3 PythonCode s3_read.py
 4 </Input>
 5
 6 <Output file>
 7 Module om_file
 8 File "output.log"
 9 </Output>
10
11 <Route s3_to_file>
12 Path s3 => file
13 </Route>

139.4.1. Serialization and Compression


In the previous examples, only the $raw_event field was stored in the objects. An easy way to store more than
one field is to "pickle" (or "serialize" or "marshal") all the fields of an event.

Pickling Events
import pickle

all = {}
for field in event.get_names():
  all.update({field: event.get_field(field)})

newraw = pickle.dumps(all)

client.put_object(Body=newraw, Bucket=BUCKET, Key=key)

Compressing the events with gzip is also possible.

Compressing Events With gzip


import StringIO
import gzip

out = StringIO.StringIO()
with gzip.GzipFile(fileobj=out, mode="w") as f:
  f.write(newraw)

gzallraw = out.getvalue()

client.put_object(Body=gzallraw, Bucket=BUCKET, Key=key)

Chapter 140. Box
This add-on is available for purchase. For more information, please contact us.

The Box add-on can be used to pull events from Box using their REST API. Events will be passed to NXLog in
Syslog format with the JSON event in the message field.

To set up the add-on, follow these steps.

1. Copy the box-pull.pl script to a location that is accessible by NXLog.

2. Edit the configuration entries in the script as necessary, or use arguments to pass configuration to the script
as shown in the example below.
3. Configure NXLog to collect events with the im_exec module.

The script saves the current timestamp to a state file in order to properly resume when it is terminated. If the
state file does not exist, the script will collect logs beginning with the current time. To manually specify a starting
timestamp (in milliseconds since the epoch), pass it as an argument: ./box-pull.pl
--stream_position=1440492435762.
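The stream position is expressed in milliseconds since the epoch; computing a current value for it can be sketched in Python (current_stream_position is an illustrative helper, not part of the add-on):

```python
import time

def current_stream_position():
    """Milliseconds since the epoch, the unit box-pull.pl expects."""
    return int(time.time() * 1000)

pos = current_stream_position()   # e.g. 1440492435762 was such a value back in 2015
```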

Example 713. Collecting Events From Box

This configuration uses the im_exec module to run the script, which connects to Box and returns Syslog-
encapsulated JSON. The xm_syslog parse_syslog() and xm_json parse_json() procedures are used to parse
each event into internal NXLog fields. Additional modification to the fieldset can be added, as required, in
the Input instance Exec block.

For the sake of demonstration, all internal fields are then converted back to JSON and written to file.

nxlog.conf
 1 <Extension _json>
 2 Module xm_json
 3 </Extension>
 4
 5 <Extension _syslog>
 6 Module xm_syslog
 7 </Extension>
 8
 9 <Input box>
10 Module im_exec
11 Command /opt/nxlog/lib/nxlog/box-pull.pl
12 Arg --client_id=YEKigehUh0u4pXeKSgKzwTbfii2stCwU
13 Arg --client_secret=3VRiqMuPDuUYeTXA5Ds9R0B4TnL35WRy
14 Arg --enterprise_id=591376
15 Arg --oauthurl=https://api.box.com/oauth2/token
16 Arg --certkeyfile=privkey.pem
17 Arg --baseurl=https://api.box.com/2.0
18 Arg --pollinterval=5
19 Arg --statefile=/opt/nxlog/var/lib/nxlog/box-pull.dat
20 Arg --syslogpri=13
21 <Exec>
22 parse_syslog();
23 parse_json($Message);
24 </Exec>
25 </Input>
26
27 <Output file>
28 Module om_file
29 File '/tmp/output'
30 Exec to_json();
31 </Output>

Chapter 141. Cisco FireSIGHT eStreamer
This add-on is available for purchase. For more information, please contact us.

The eStreamer add-on can be used with NXLog to collect events from a Cisco FireSIGHT System. The Cisco Event
Streamer (eStreamer) API is used for communication between NXLog and the FireSIGHT System. This section
describes how to set up FireSIGHT and NXLog and start collecting events.

For more information about eStreamer, see FireSIGHT System eStreamer Integration Guide v5.4 on Cisco.com. To
download the full Firepower eStreamer SDK, see eStreamer SDK Version 6.1 on Cisco Community.

141.1. Configuring the Cisco Defense Center


To receive events from the Cisco Defense Center, a client must be added to the eStreamer Event Configuration.

1. Log in to the Management Center and navigate to System → Integration → eStreamer.

NOTE Depending on the Cisco system, the eStreamer configuration and client creation page location may differ. In other systems, the same page can be found under System → Local → Registration → eStreamer.

2. Select the event types that should be sent and then click [ Save ].

3. Enter an IP address or a resolvable name in the Hostname field and optionally a password. Click [ Save ].

4. Click on the download arrow to download the certificate for the client. Place the PKCS12 certificate in the
same directory as the Perl client.

141.2. Configuring the eStreamer Script
The estreamer.pl client is based on Cisco’s ssl_test.pl reference client which is included in the FireSIGHT
eStreamer SDK.

1. Make sure the following required Perl modules, which are part of the FireSIGHT eStreamer SDK, are present
in the same directory: SFStreamer.pm, SFPkcs12.pm, SFRecords.pm, and SFRNABlocks.pm.
2. Edit the script and set the configuration options. The available options include the following.
◦ The server address and port
◦ The file name and password used for the PKCS12 certificate
◦ Enable/disable verbose output
◦ The start time for receiving events: using bookmarks (by setting to bookmark) ensures that no events will
be lost or duplicated.
◦ The output mode: typically this is set to \&send_nxlog; however there is a \&send_stdout_raw output
where all data and meta-data is printed to standard output (for debugging purposes).
3. In the $OUTPUT_PLUGIN section of the script, the type of event request can be customised. Refer to the
FireSIGHT System eStreamer Integration Guide for more information.
4. Finally, the output mode subroutine \&send_nxlog might require modification if the presentation of the data
needs to be altered or alternative data or metadata need to be included or excluded. The \&send_stdout
subroutine can be used to show the output sent to NXLog and the \&send_stdout_raw can be used to show
the full contents of the data stream. Remember to set the $conf_opt->{output} variable to the appropriate
subroutine.

141.3. Configuring NXLog


The im_perl module is used to execute the Perl script, which in turn connects to the server and receives events.
The Perl script directly sets various NXLog internal fields from the event data collected from eStreamer.

Example 714. Collecting Events From eStreamer

This configuration uses the im_perl module to execute the Perl script. The resulting internal NXLog fields
are then converted to JSON format before being written to file with om_file.

nxlog.conf
 1 <Extension _json>
 2 Module xm_json
 3 </Extension>
 4
 5 <Input estreamer>
 6 Module im_perl
 7 PerlCode /opt/nxlog/bin/estreamer.pl
 8 </Input>
 9
10 <Output file>
11 Module om_file
12 File '/tmp/output.log'
13 Exec to_json();
14 </Output>
15
16 <Route estreamer_to_file>
17 Path estreamer => file
18 </Route>

The following is a sample output of intrusion detection events in JSON format.

Output Sample
{
  "EventTime": "2018-1-24 11:50:23.939847",
  "AlertPriority": 3,
  "SourceIp": "192.168.99.2",
  "SourcePort": 0,
  "DestinationIp": "192.168.98.2",
  "DestinationPort": 0,
  "EventMessage": "PROTOCOL-ICMP Echo Reply",
  "EventReceivedTime": "2018-01-24 11:50:29",
  "SourceModuleName": "perl",
  "SourceModuleType": "im_perl"
}
{
  "EventTime": "2018-1-24 11:50:34.499867",
  "AlertPriority": 3,
  "SourceIp": "192.168.98.2",
  "SourcePort": 0,
  "DestinationIp": "192.168.99.2",
  "DestinationPort": 0,
  "EventMessage": "PROTOCOL-ICMP Echo Reply",
  "EventReceivedTime": "2018-01-24 11:50:35",
  "SourceModuleName": "perl",
  "SourceModuleType": "im_perl"
}

NOTE An ICMP Echo Reply is not typically an intrusion detection event; however, it was a convenient way to simulate one.

Chapter 142. Cisco Intrusion Prevention Systems
(CIDEE)
This add-on is available for purchase. For more information, please contact us.

The Cisco IPS add-on supports collection of alerts from an IPS-enabled device. The Security Device Event
Exchange (SDEE) API is used for communication between NXLog and the IPS.

142.1. Setup
1. Install the add-on.
2. Set the correct connection details in the script by editing the sdee("cisco","cisco","192.168.100.254",
"http","cgi-bin/sdee-server/","yes"); line in the read_data() subroutine. Set the appropriate
username, password, hostname or IP address, protocol, path, and force subscription.
◦ For username and password, a suitable user with the appropriate privilege level must be selected.
◦ The protocol can be http or https; however, HTTPS requires that the appropriate SSL options are
enabled further down in the sdee() subroutine.
◦ The default path for the SDEE service can be changed if necessary.
◦ We recommend using force subscription, but the default of yes can be changed to no if required.

3. Upon start-up, the script will open a connection to the device and request a subscription ID. It will then
periodically ask for new alerts. The interval that the device is queried for new alerts can be set by changing
the set_read_timer() NXLog function in the script.

Once alerts are available on the device the script will parse the XML source, format the alert, and pass it to
NXLog.

The script only collects alerts, but it can be modified to collect status and error messages too.

NOTE The primary subroutine that sorts out the information received is idsmxml_parse_alerts(). If the device uses a different CIDEE version, or to filter or modify information, modify the code there.

The final format of the alert messages is specified in the generate_raw_event() subroutine.
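The kind of namespace-aware extraction performed by idsmxml_parse_alerts() can be illustrated with a Python sketch over a trimmed evIdsAlert element (the namespaces follow the input sample below; parse_alert is our name, not part of the add-on):

```python
import xml.etree.ElementTree as ET

NS = {
    "sd": "http://example.org/2003/08/sdee",
    "cid": "http://www.cisco.com/cids/2003/08/cidee",
}

# Trimmed evIdsAlert element, following the input sample in the next section
ALERT = """<sd:evIdsAlert eventId="15117815226791" vendor="Cisco" severity="medium"
    xmlns:sd="http://example.org/2003/08/sdee"
    xmlns:cid="http://www.cisco.com/cids/2003/08/cidee">
  <sd:signature description="SYN Flood DOS" id="6009" version="S593"/>
  <sd:participants>
    <sd:attacker><sd:addr>192.168.100.1</sd:addr><sd:port>53760</sd:port></sd:attacker>
    <sd:target><sd:addr>192.168.99.10</sd:addr><sd:port>2717</sd:port></sd:target>
  </sd:participants>
</sd:evIdsAlert>"""

def parse_alert(xml_text):
    """Pull a few fields out of one alert, resolving the SDEE namespaces."""
    alert = ET.fromstring(xml_text)
    return {
        "severity": alert.get("severity"),
        "signature": alert.find("sd:signature", NS).get("id"),
        "attacker": alert.find("sd:participants/sd:attacker/sd:addr", NS).text,
        "target": alert.find("sd:participants/sd:target/sd:addr", NS).text,
    }

fields = parse_alert(ALERT)   # fields["attacker"] == "192.168.100.1"
```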

142.2. NXLog Configuration


The im_perl module is used to execute the Perl script, which in turn connects to the device, requests a new
subscription, and periodically collects any new alerts.

Example 715. Collecting Cisco IPS Alerts

The configuration below collects IPS alerts from the configured Cisco IPS device. For simplicity, the output is
saved to a file in this example.

nxlog.conf
 1 <Input perl>
 2 Module im_perl
 3 PerlCode /opt/nxlog/bin/cisco-ips.pl
 4 </Input>
 5
 6 <Output file>
 7 Module om_file
 8 File '/tmp/output.log'
 9 </Output>
10
11 <Route perl_to_file>
12 Path perl => file
13 </Route>

Input Sample
<?xml version="1.0" encoding="UTF-8" standalone="yes" ?>
<env:Envelope xmlns:env="http://www.w3.org/2003/05/soap-envelope">
  <env:Body>
  <sd:events
  xmlns:cid="http://www.cisco.com/cids/2003/08/cidee"
  xmlns:sd="http://example.org/2003/08/sdee"
  xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
  xsi:schemaLocation="http://example.org/2003/08/sdee sdee.xsd
http://www.cisco.com/cids/2003/08/cidee cidee.xsd">
  <sd:evIdsAlert eventId="15117815226791" vendor="Cisco" severity="medium">
  <sd:originator>
  <sd:hostId>R1</sd:hostId>
  </sd:originator>
  <sd:time offset="0" timeZone="UTC">1511781522011779176</sd:time>
  <sd:signature description="SYN Flood DOS" id="6009" version="S593">
  <cid:subsigId>0</cid:subsigId>
  <cid:sigDetails>SYN Flood DOS</cid:sigDetails>
  </sd:signature>
  <cid:protocol>tcp</cid:protocol>
  <cid:riskRatingValue>63</cid:riskRatingValue>
  <sd:participants>
  <sd:attacker>
  <sd:addr>192.168.100.1</sd:addr>
  <sd:port>53760</sd:port>
  </sd:attacker>
  <sd:target>
  <sd:addr>192.168.99.10</sd:addr>
  <sd:port>2717</sd:port>
  </sd:target>
  <sd:vrf_name>NONE</sd:vrf_name>
  </sd:participants>
  <sd:actions></sd:actions>
  <cid:interface>Fa0/0</cid:interface>
  <cid:vrf_name>NONE</cid:vrf_name>
  </sd:evIdsAlert>
  <sd:evIdsAlert eventId="15117815236793" vendor="Cisco" severity="informational">
  <sd:originator>
  <sd:hostId>R1</sd:hostId>
  </sd:originator>
  <sd:time offset="0" timeZone="UTC">1511781523475744440</sd:time>
  <sd:signature description="Back Door Probe (TCP 1234)" id="9007" version="S256">
  <cid:subsigId>0</cid:subsigId>
  <cid:sigDetails>SYN to TCP 1234</cid:sigDetails>
  </sd:signature>

  <cid:protocol>tcp</cid:protocol>
  <cid:riskRatingValue>18</cid:riskRatingValue>
  <sd:participants>
  <sd:attacker>
  <sd:addr>192.168.100.1</sd:addr>
  <sd:port>57422</sd:port>
  </sd:attacker>
  <sd:target>
  <sd:addr>192.168.99.10</sd:addr>
  <sd:port>1234</sd:port>
  </sd:target>
  <sd:vrf_name>NONE</sd:vrf_name>
  </sd:participants>
  <sd:actions></sd:actions>
  <cid:interface>Fa0/0</cid:interface>
  <cid:vrf_name>NONE</cid:vrf_name>
  </sd:evIdsAlert>
  </sd:events>
  </env:Body>
</env:Envelope>

Output Sample
2017-11-28 22:29:41 UTC+0; eventid="15119009816528; hostId="R1"; severity="medium";
app_name=""; appInstanceId=""; signature="6009"; subSigid="0"; description="SYN Flood DOS";
attacker="192.168.100.1"; attacker_port="40784""; target="192.168.99.10"; target_port="4003;
protocol="tcp"; risk_rating="63"; target_value_rating=""; interface="Fa0/0";
interface_group=""; vlan=""↵
2017-11-28 22:29:44 UTC+0; eventid="15119009846539; hostId="R1"; severity="informational";
app_name=""; appInstanceId=""; signature="9007"; subSigid="0"; description="SYN to TCP 1234";
attacker="192.168.100.1"; attacker_port="43242""; target="192.168.99.10"; target_port="1234;
protocol="tcp"; risk_rating="18"; target_value_rating=""; interface="Fa0/0";
interface_group=""; vlan=""↵

NOTE The two samples are from different but similar alerts.

Chapter 143. Exchange (nxlog-xchg)
This add-on is available for purchase. For more information, please contact us.

Microsoft Exchange provides two types of audit logs: administrator audit logging and mailbox audit logging.

The nxlog-xchg add-on can be used to retrieve administrator audit logs and mailbox audit logs. These logs include
actions taken by users or administrators who make changes in the organization, mailbox actions, and mailbox
logins including access by users other than the mailbox owner. For more information, see Administrator audit
logging in Exchange 2016 and Mailbox audit logging in Exchange 2016 on TechNet.

nxlog-xchg periodically queries an Exchange server via Windows Remoting (WinRM) and writes the result to
standard output in JSON format for further processing by NXLog. The add-on is executed by NXLog via the
im_exec module, and may be configured on either the Exchange server itself or another system.

NOTE The required steps may vary from those provided below based on the organization and domain topology and configuration.

143.1. Requirements
Server side requirements include:

• Microsoft Exchange Server 2010 SP1+, 2013, 2016 or 2019;


• Windows Remoting (WinRM) with HTTPS listener;
• an Active Directory user that can log in, through WinRM, to the Windows server running Exchange; and
• an Active Directory user with the Audit Logs role.

Client side requirements are:

• Windows 2008 or later and


• a user with permission to install software.

NOTE The server and client can reside on the same machine.

143.2. Exchange Server Setup


1. Create the Active Directory users specified in Requirements above. See Exchange Server permissions and
View-Only Audit Logs role on Microsoft Docs.

NOTE WinRM remote login is only allowed for users in the local Administrator group or the Domain Administrator group. The user created for login via WinRM must be a member of one of these groups.

2. Windows Remoting (WinRM) accepts the connections from nxlog-xchg. By default, WinRM listens on TCP
port 5985 for HTTP (insecure) requests. WinRM should be configured to listen for secure connections on
TCP/5986. Check whether it is already configured:

PS> Get-ChildItem -Path WSMAN:\Localhost\listener | Where-Object { (Get-Item "$($_.PSPath)\Transport").Value -eq "HTTPS" -and (Get-Item "$($_.PSPath)\Address").Value -eq "*" }

If the command above does not return any results, then on the Exchange server, from an elevated command
line (cmd), run the following command to enable WinRM HTTPS transport.

> winrm quickconfig -transport:https

3. If there is an error message about the system not having an appropriate (server authentication) certificate,
issue one for the server or create a self-signed one. To create a self-signed certificate, open a PowerShell
window and run this command.

PS> New-SelfSignedCertificate -CertStoreLocation Cert:\LocalMachine\My -DnsName "hostname-of-my-server"

NOTE If you are having trouble creating a self-signed certificate (getting inaccessible private keys in Windows 10 or Windows 2016), try using the Self-signed certificate generator from Microsoft Script Center.

4. After the certificate has been prepared, open a PowerShell window and run:

PS> Get-ChildItem -Path cert:\LocalMachine\My

Choose the certificate and run:

PS> $cert=Get-ChildItem -Path cert:\LocalMachine\My\YOURCERTIFICATE_THUMBPRINT
PS> New-Item -Path WSMAN:\Localhost\listener -Transport HTTPS -Address * -CertificateThumbPrint $cert.ThumbPrint -Force
PS> Enable-PSRemoting -SkipNetworkProfileCheck -Force

5. After this it should not be necessary to run the quick config for WinRM and the HTTP listener can be removed
(assuming it is no longer needed otherwise).

PS> Get-ChildItem WSMan:\Localhost\listener | Where -Property Keys -eq "Transport=HTTP" | Remove-Item -Recurse

6. The "Audit Logs" role most be added to the Active Directory user to access the "Search-AdminAuditLog" and
"Search-MailboxAuditLog" Exchange cmdlets.

PS> New-ManagementRoleAssignment -Name nxlog-xchg-mr -Role "Audit Logs" -User "Active Directory
User Name"

7. Administrator audit logging is enabled by default. Verify by running Get-AdminAuditLogConfig | FL
AdminAuditLogEnabled. See Manage administrator audit logging for more details.

8. Mailbox audit logging can be enabled on a per user basis, using the Exchange Management shell. nxlog-xchg
respects the options configured in the Exchange server. To enable mailbox audit logging for a single user,
open an Exchange Management Shell and run:

PS> Set-Mailbox -Identity "Ben Smith" -AuditEnabled $true

To enable audit logging for all user mailboxes in the organization, run:

PS> Get-Mailbox -ResultSize Unlimited -Filter {RecipientTypeDetails -eq "UserMailbox"} | Set-Mailbox -AuditEnabled $true

For more information about mailbox audit logging (including more logging options), see Enable or disable
mailbox audit logging for a mailbox on Microsoft Docs.

143.3. nxlog-xchg (Client) Setup


The nxlog-xchg utility can be configured either by arguments on the command line or by a configuration file. The
command line arguments use the same names as in the configuration file. Three arguments are offered by
nxlog-xchg in addition to those in the configuration file:

• --debug: set debug verbosity, 0-3 (0 = none/default, 3 = verbose)

• -c, --config: set the configuration file path

• --version: show the version of the nxlog-xchg utility

Sample Command Line Arguments


nxlog-xchg.exe --Url https://exchange01.corp.local:5986/wsman --User winrmuser
  --Password winrmuser_password --HostURI http://exchange01.corp.local/powershell
  --ExchangeUser exuser@local --ExchangePassword exuser_password

Sample nxlog-xchg Configuration


[Nxlog]
SavePos=TRUE
PollInterval=30

[WinRM]
Url=https://host.yourdomain.com:5986/wsman
User=winrmuser@yourdomain.com
Password=winrmuser_password
CheckCertificate=TRUE

[Exchange]
HostFQDN=exchange.yourdomain.com
ExchangeUser=ex_user@yourdomain.com
ExchangePassword=exuser_password
ExchangeAuth=KERBEROS

[Options]
SearchAdminLog=TRUE
SearchMailboxLog=TRUE
ResultSize=5000
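Since the configuration file uses INI syntax, it can be checked with Python's configparser before deployment; the sketch below parses a fragment of the sample configuration (this is a convenience for validation, not something nxlog-xchg itself requires):

```python
import configparser

# Fragment of the sample nxlog-xchg configuration
SAMPLE = """
[Nxlog]
SavePos=TRUE
PollInterval=30

[WinRM]
Url=https://host.yourdomain.com:5986/wsman
User=winrmuser@yourdomain.com
Password=winrmuser_password
CheckCertificate=TRUE
"""

cfg = configparser.ConfigParser()
cfg.read_string(SAMPLE)

interval = cfg.getint("Nxlog", "PollInterval")            # 30
check_cert = cfg.getboolean("WinRM", "CheckCertificate")  # True
```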

The following directives are available for configuring nxlog-xchg.

Nxlog section:

SavePos
This optional boolean directive specifies whether the last record number should be saved when nxlog-xchg
exits. The default is TRUE.

PollInterval
This optional directive specifies the time (in seconds) between polls. Valid values are 3-3600; the default is 30
seconds.

WinRM section:

Url
This specifies the URL of the WinRM listener (for example,
https://exchangeserver.mydomain.com:5986/wsman).

User
This specifies the user that has permission to log on to the Exchange Server system.

Password
This should be set to the password of the user defined in User above.

Auth
The authentication method to use when establishing a WinRM connection (KERBEROS or NTLM). NTLM is the
default authentication method used if this is not set.

CheckCertificate
This optional boolean directive specifies whether the server certificate should be verified. The default is TRUE
(the certificate is validated).

Exchange section:

HostURI
This sets the full URI to use for the remote PowerShell connection (for example,
http://name.domain.tld/PowerShell/).

ExchangeUser
This specifies the user that has permission to query the Exchange Server.

ExchangePassword
This should be set to the password of the user defined in ExchangeUser above.

ExchangeAuth
The authentication method to use when establishing a connection to PowerShell on the Exchange server
(KERBEROS or NTLM). Kerberos is the default authentication method used if this is not set.

Options section:

QueryAdminLog
This optional boolean directive specifies whether the administrator audit log should be queried. The default is
TRUE (the administrator audit log is queried).

QueryMailboxLog
This optional boolean directive specifies whether the mailbox audit log should be queried. The default is
TRUE (the mailbox audit log is queried).

ResultSize
This optional directive specifies the maximum number of log entries to retrieve. The default is 5000 entries.

143.3.1. Using nxlog-xchg in an NXLog Configuration


Including nxlog-xchg in a working NXLog installation is quite simple: just configure an Input block using the
im_exec module and specify nxlog-xchg.exe as the external program. With this Input block defined, the
received logs can be routed as necessary.

Example 716. Writing Exchange Logs to a File

This configuration uses the im_exec module to receive logs from nxlog-xchg, and writes them to file locally
with om_file.

nxlog.conf
 1 <Input in>
 2 Module im_exec
 3 Command 'C:\Program Files (x86)\nxlog-exchange\nxlog-xchg.exe'
 4 Arg -c
 5 Arg C:\Program Files (x86)\nxlog-xchg\nxlog-xchg.cfg
 6 </Input>
 7
 8 <Output out>
 9 Module om_file
10 File "C:\\logs\\exchange_audit_log.txt"
11 </Output>
12
13 <Route ex>
14 Path in => out
15 </Route>

143.4. Performance
It is important to configure nxlog-xchg so that the server is polled neither too frequently (running nxlog-xchg too often) nor too infrequently (requiring the collection of a very large result set). If PollInterval is properly adjusted, there should not be any performance issues.
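As a rough illustration of the trade-off (the event rate below is hypothetical, not derived from NXLog documentation), a poll interval can be sanity-checked against the configured ResultSize:

```python
# Hypothetical sizing check: with a given ResultSize and an estimated
# average event rate, how long can the poll interval be before a single
# poll would need to return more entries than ResultSize allows?

def max_poll_interval(result_size, events_per_second):
    """Longest interval (seconds) for which one poll stays under result_size."""
    return result_size / events_per_second

# Example: ~20 Exchange audit events per second on average.
limit = max_poll_interval(result_size=5000, events_per_second=20)
print(limit)  # 250.0 -> a PollInterval of 30-60 seconds leaves ample headroom
```

Any PollInterval comfortably below this limit avoids both failure modes described above.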

143.5. Troubleshooting
Nxlog-xchg does not launch from NXLog
Make sure the quoting in the im_exec block is correct. This can be tested by placing a simple batch script
(containing echo "Hello world", for example) into the same directory as nxlog-xchg.exe and calling that
batch file from im_exec.
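Such a test might look like the following sketch, where hello.bat is a hypothetical batch file containing only echo "Hello world" and placed next to nxlog-xchg.exe:

```
<Input quoting_test>
    Module   im_exec
    Command  'C:\Program Files (x86)\nxlog-exchange\hello.bat'
</Input>
```

If the "Hello world" line shows up in the route's output, the quoting is correct and the problem lies elsewhere.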

No events received
If no events are being received, make sure the relevant logging is enabled in Exchange. For admin audit
logging run:

PS> Get-AdminAuditLogConfig | fl *log*

Chapter 144. Microsoft Azure and Office 365
This add-on is available for purchase. For more information, please contact us.

This NXLog add-on can retrieve information about various user, admin, system, and policy actions and events
from Microsoft Azure and Office 365. Once configured, the add-on prints Syslog events, each with a JSON
payload, to standard output for processing by NXLog.

The add-on supports getting logs from the following reports corresponding to the supported Microsoft REST-based APIs:

• Azure Active Directory reports (based on Microsoft Graph API) – Sign-In events and directory audit log events.
• Office 365 Management Activity API – Azure Active Directory Audit events, Exchange Audit events, and SharePoint Audit events using the Audit.AzureActiveDirectory, Audit.Exchange, and Audit.SharePoint parameters.
• Office 365 Service Communications API – Status, service, and message related events, using the CurrentStatus, HistoricalStatus, Messages, and Services parameters.

For more information about the log sources, see the links below:

• Announcing the preview of Graph Reports and Events API


• Office 365 Management APIs overview
• Office 365 Management Activity API reference
• Office 365 Service Communications API reference (preview)

144.1. Prerequisites
In order to complete the steps in this section and collect logs from the above-mentioned APIs, the prerequisites laid out below will need to be met.

During the steps explained in this section you need to make a note of the following data:

• client_id
• tenant_domain <domainname>.onmicrosoft.com

• tenant_id
• certthumbprint
• certkeyfile <certkey.pem>

144.1.1. Azure Requirements and Permissions


• Access to the Azure Management Portal with a tenant user having the necessary permissions to make the
configuration changes
• An Azure Active Directory application with appropriate permissions and licenses
• Audit log search to be turned on, for audit logging to work

Some of the add-on arguments (parameters) require certain permissions set in MS Azure. They are listed in the
table below with a reference to the Microsoft documentation. Their configuration is detailed in the Parameters
section below.

Table 73. Required Permissions

--enable-azure-ad-reports
API used: Microsoft Graph API v1.0. Azure AD permissions required: AuditLog.Read.All and Directory.Read.All. See reference links.

--service_communication_operations
API used: Office 365 Management APIs. Azure AD permissions required: ServiceHealth.Read. See reference link.

--management_activity_sources
API used: Office 365 Management APIs. Azure AD permissions required: ActivityFeed.Read. See reference link.

--license-details
API used: Microsoft Graph API v1.0. No special permissions required. This switch requires the use of the following switches: client_id, tenant_id, tenant_domain, certthumbprint, enable-azure-ad-reports, and certkeyfile.

144.1.2. Required Microsoft Licenses


Depending on the arguments in use, certain Microsoft licenses or service plans need to be active.

--enable-azure-ad-reports
License required: an Azure Active Directory Premium license (either AAD_PREMIUM or AAD_PREMIUM_P2), or a license that includes it. See reference link.

--service_communication_operations
License required: an Office 365 license, or a license that includes it. See reference link.

--management_activity_sources
License required: an Office 365 license, or a license that includes it. See reference link.

For troubleshooting/debugging purposes, the list of active license SKUs can be retrieved through the --license-details switch.

IMPORTANT As Microsoft’s licensing information can be subject to change at any time, always double-check your current requirements with the licensing/service plan documentation. The required licenses can be managed in the Microsoft 365 admin center.

NOTE The above table with the licensing requirements is for informational purposes only.

144.2. Setup Procedure


The complete procedure includes installing the NXLog Microsoft Azure and Office 365 add-on, setting up a MS
Azure AD application with its required permissions, and generating a certificate.

144.2.1. Installing the Microsoft Azure and Office 365 NXLog Add-On
1. Install the add-on with dpkg:

# dpkg -i nxlog-msazure-<version>.deb

2. If the previous command exits non-zero, resolve any missing dependencies:

# apt-get -f install

The installation can be found under /opt/nxlog-msazure.

NOTE The nxlog-msazure add-on depends on nxlog or nxlog-trial.

144.2.2. Create an Azure AD Application to Access the APIs


To access information from your directory, you must create an application in your Azure Active Directory tenant
and grant the appropriate read permissions to access the data.

Carry out the steps described in the Register an application section.

Once the new application has been registered, make note of the Application (client) ID (this will be the
client_id), as well as the Directory (tenant) ID (this will be tenant_id) on the Overview page for the new
application.

144.2.3. Grant Permissions to the Application


Grant the required permissions and grant admin consent to the above created application by following the steps
in the Grant permissions section of the Microsoft documentation.

Grant the following permissions as described previously:

For the Microsoft Graph API:

• AuditLog.Read.All

• Directory.Read.All

For the Office 365 Management APIs:

• ActivityFeed.Read

• ServiceHealth.Read

Once your permissions are set up and the Admin consent is granted, your permission list should include all of the permissions granted above.

144.2.4. Generate and Set Up an X.509 Certificate


The log collection process uses service-to-service calls via the Microsoft REST-based APIs, so it is important to
generate and set up an X.509 certificate for authenticating to the service. A gencertkey.sh script is provided for
Linux that can be used to simplify the process. It creates the private key in a certkey.pem file in the working
directory. The script is located in the /opt/nxlog-msazure/bin/ directory.

The gencertkey.sh script depends on the openssl toolkit and the uuidgen program. Install the corresponding
packages if necessary.

On Debian-based platforms:

# apt install openssl uuid-runtime

On Centos/RedHat platforms:

# yum install openssl util-linux

Follow the steps below to generate the X.509 certificate and insert the relevant portion into the manifest file in
MS Azure:

1. Generate the certificate with the gencertkey.sh script on the computer where the add-on is installed.

$ ./gencertkey.sh
Generating a RSA private key
............+++++
................................................+++++
writing new private key to 'certkey.pem'
-----
You are about to be asked to enter information that will be incorporated
into your certificate request.
What you are about to enter is what is called a Distinguished Name or a DN.
There are quite a few fields but you can leave some blank
For some fields there will be a default value,
If you enter '.', the field will be left blank.
-----
Country Name (2 letter code) [AU]:
State or Province Name (full name) [Some-State]:
Locality Name (eg, city) []:
Organization Name (eg, company) [Internet Widgits Pty Ltd]:
Organizational Unit Name (eg, section) []:
Common Name (e.g. server FQDN or YOUR name) []:
Email Address []:
ThumbPrint:0nFt3fB0JP7zuSmHaRQtmsFNYqo=
"keyCredentials": [
{
  "customKeyIdentifier":"0nFt3fB0JP7zuSmHaRQtmsFNYqo=",
  "keyId":"629ab88d-1059-454b-b258-4ca05b46dee4",
  "type":"AsymmetricX509Cert",
  "usage":"Verify",
  "value":"MIIDXTCCAkWgAwIBAgIJAP+XrnwhAxjOMA0GCSqGSIb3DQEBCwUAMEUxCzAJB..."
}
],

Make note of the base64-encoded certificate fingerprint value after ThumbPrint: (certthumbprint), and
the keyCredentials portion (which will be used in the following steps).

2. In the App registration page in MS Azure, select Manifest on the left side and click Download.

3. Edit the downloaded manifest file and replace the "empty" keyCredentials section with the previously generated output.

From
"keyCredentials": [],

To
"keyCredentials": [
{
  "customKeyIdentifier":"0nFt3fB0JP7zuSmHaRQtmsFNYqo=",
  "keyId":"629ab88d-1059-454b-b258-4ca05b46dee4",
  "type":"AsymmetricX509Cert",
  "usage":"Verify",
  "value":"MIIDXTCCAkWgAwIBAgIJAP+XrnwhAxjOMA0GCSqGSIb3DQEBCwUAMEUxCzAJB..."
}
],

4. Save the modified manifest and upload it.
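The ThumbPrint value above looks like a base64-encoded SHA-1 digest of the certificate. Assuming that is how gencertkey.sh derives it (an assumption for illustration, not documented behavior of the script), the computation can be sketched as:

```python
import base64
import hashlib

def thumbprint(der_bytes):
    """Base64-encoded SHA-1 digest of the DER-encoded certificate (assumed scheme)."""
    return base64.b64encode(hashlib.sha1(der_bytes).digest()).decode("ascii")

# A 20-byte SHA-1 digest always base64-encodes to a 28-character string
# ending in "=", matching the padded ThumbPrint values shown above.
print(len(thumbprint(b"dummy certificate bytes")))  # 28
```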

Follow the steps below to move the generated certificate files to their intended directory as well as make the
required permission changes:

1. Move the certificates you generated into the /opt/nxlog-msazure/conf directory. This directory is used
later on as a value for the --working_directory parameter.

$ mv cert* /opt/nxlog-msazure/conf/

2. Set the file ownership and permissions to be in agreement with the User and Group directives (NXLog runs
under the nxlog user and nxlog group by default).

$ chown nxlog:nxlog /opt/nxlog-msazure/conf/*


$ chmod 750 /opt/nxlog-msazure/conf/cert*

144.3. Parameters
Certain parameters need to be passed to the NXLog Microsoft Azure and Office 365 add-on as arguments in order to achieve the desired outcome. These parameters can be passed to the add-on by using the Arg directive.

144.3.1. Mandatory Parameters


The add-on requires the following mandatory parameters. Details about these parameters and their values are
listed in the Prerequisites section.

--client_id=
The Azure App registration Application (client) ID

--tenant_id=
The Azure App registration Directory (tenant) ID

--certthumbprint=
The certificate fingerprint value

--tenant_domain=
The domain name created in MS Azure AD <domainname>.onmicrosoft.com

--certkeyfile=
The certificate key file certkey.pem

--working_directory=
The path where the add-on is run, which is /opt/nxlog-msazure/conf by default

IMPORTANT The --certkeyfile path is always relative to the --working_directory.
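In other words, the effective key file location can be thought of as a simple path join (an illustrative sketch using the default values above):

```python
import os

working_directory = "/opt/nxlog-msazure/conf"  # value of --working_directory
certkeyfile = "certkey.pem"                    # value of --certkeyfile

# The key file is resolved relative to the working directory:
resolved = os.path.join(working_directory, certkeyfile)
print(resolved)  # /opt/nxlog-msazure/conf/certkey.pem
```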

144.3.2. Source Parameters


To specify the data sources, use the following parameters.

--enable-azure-ad-reports
Active Directory sign-in events and directory audit logs (based on Microsoft Graph API). This parameter does not require any value to be passed to it.

--management_activity_sources=
Office 365 Management Activity API

The available values are: Audit.Exchange, Audit.SharePoint, Audit.AzureActiveDirectory

--service_communication_operations=
Office 365 Service Communications API

The available values are: Services, CurrentStatus, HistoricalStatus, Messages

144.3.3. Optional Parameters


These parameters are already defined in the built-in configuration file (/opt/nxlog-msazure/conf/msazure-
pull.bic) of the add-on, therefore they are not mandatory. However, the default parameters can be overridden
by defining any parameters that might require non-default values.

--top=n
The top parameter works only with Azure Active Directory reports and events. It returns a subset of the
entries for the given report, consisting of the first n entries, where n is a positive integer. For example, top=5
returns the 5 most recent audit report events. top is overridden wherever start_date and end_date can be
used; it has lower priority.

--start_date=YYYY-MM-DDTHH:MM:SSZ|amonthago|aweekago|yesterday
--end_date=YYYY-MM-DDTHH:MM:SSZ|amonthago|aweekago|yesterday|now
The start_date and end_date parameters specify the time range of content to return. These parameters
work with all Office 365 reports and most of the Azure Active Directory reports. Where start/end ranges are
not supported, the add-on uses top. The amonthago, aweekago, yesterday, and now values are dynamic and
calculated in every loop.

To pull reports from the last 24 hours, use: --start_date=yesterday --end_date=now
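One way to picture these dynamic values (an illustrative sketch only; the add-on's actual implementation, including its exact definition of amonthago, is not documented here) is as offsets recomputed from the current time on each loop:

```python
from datetime import datetime, timedelta

def resolve(value, now=None):
    """Resolve a dynamic date keyword to the YYYY-MM-DDTHH:MM:SSZ format."""
    now = now or datetime.utcnow()
    offsets = {
        "now": timedelta(0),
        "yesterday": timedelta(days=1),
        "aweekago": timedelta(weeks=1),
        "amonthago": timedelta(days=30),  # assumed 30-day month
    }
    return (now - offsets[value]).strftime("%Y-%m-%dT%H:%M:%SZ")

ref = datetime(2020, 6, 15, 12, 0, 0)
print(resolve("yesterday", ref))  # 2020-06-14T12:00:00Z
print(resolve("now", ref))        # 2020-06-15T12:00:00Z
```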

--log_errors=path
For troubleshooting purposes, the --log_errors argument is available. The value of this parameter is a path
to a file where the add-on will write all its error messages.

NOTE The Microsoft documentation lists API errors and responses in the Office 365 API errors and Microsoft Graph error responses pages, respectively.

--add_syslog_header=true|false|yes|no
Enable or disable the Syslog header.

--infinity=true|false|yes|no
Indicates that the script should never stop and should pull logs in an endless loop. The default is true. This
can be set to false for special cases or debugging, when the script should run once and then exit.

--skip_state_file=true|false|yes|no
If true, the script will neither read from nor write to the state files. The default is false.

--sleep=n
The script will sleep n seconds between loops.

--verbose=true|false|yes|no
For debugging; if true, provides as much detail as possible about what the script is doing. The default is
false. In normal mode, the script should print only events, logs, and reports (data it retrieves from the APIs).
The script emits all diagnostic messages to standard error.

--license-details=true|false|yes|no
For troubleshooting/debugging purposes; if true, a list of active license SKUs will be retrieved. The default is
false.

144.4. NXLog Configuration Examples


Once all the details have been collected, the NXLog configuration file /opt/nxlog/etc/nxlog.conf needs to be
edited and augmented with the relevant details.

Example 717. Azure Active Directory Events

This configuration collects all the Azure Active Directory report events, such as user creation, group
membership, permission changes and so on. The output provided by Microsoft is in JSON format.

nxlog.conf
 1 <Input msazurepull>
 2 Module im_exec
 3 Command /opt/nxlog-msazure/bin/msazure-pull.sh
 4 Arg --client_id=912497ba-9780-46bc-a6a6-3a56a4c14278
 5 Arg --tenant_id=e681b493-14a8-438b-8bbf-d65abdc826c2
 6 Arg --certthumbprint=D64Rm2IkRQxp26XK4Da7Bcbqu2o=
 7 Arg --tenant_domain=contoso.onmicrosoft.com
 8 Arg --certkeyfile=certkey.pem
 9 Arg --working_directory=/opt/nxlog-msazure/conf
10 Arg --enable-azure-ad-reports
11 <Exec>
12 parse_syslog();
13 </Exec>
14 </Input>

Output of a Delete User Event in JSON Format


{
  "activityDateTime": "2020-05-21T10:27:24.7742514Z",
  "activityDisplayName": "Delete user",
  "additionalDetails": [],
  "category": "UserManagement",
  "correlationId": "3fc2e655-491b-4edd-a450-a7d60ec3aff2",
  "id": "Directory_3fc2e655-491b-4edd-a450-a7d60ec3aff2_S3OF7_28513191",
  "initiatedBy": {
  "app": null,
  "user": {
  "displayName": null,
  "id": "6a304e04-3ebd-4190-b128-efe4d5c7e664",
  "ipAddress": "51.105.112.41",
  "userPrincipalName": "nxlogadmin@testnxlog.onmicrosoft.com"
  }
  },
  "loggedByService": "Core Directory",
  "operationType": "Delete",
  "result": "success",
  "resultReason": "",
  "targetResources": [
  {
  "displayName": null,
  "groupType": null,
  "id": "de80979d-026b-4282-91ac-eb1925b94718",
  "modifiedProperties": [
  {
  "displayName": "Is Hard Deleted",
  "newValue": "\"False\"",
  "oldValue": null
  }
  ],
  "type": "User",
  "userPrincipalName": "de80979d026b428291aceb1925b94718johndoe@testnxlog.onmicrosoft.com"
  }
  ]
}

Example 718. Office 365 Events

This configuration collects Office 365 related events, such as document creation, deletion, permission
changes and so on. The output provided by Microsoft is in JSON format.

nxlog.conf
 1 <Input msazurepull>
 2 Module im_exec
 3 Command /opt/nxlog-msazure/bin/msazure-pull.sh
 4 Arg --client_id=912497ba-9780-46bc-a6a6-3a56a4c14278
 5 Arg --tenant_id=e681b493-14a8-438b-8bbf-d65abdc826c2
 6 Arg --certthumbprint=D64Rm2IkRQxp26XK4Da7Bcbqu2o=
 7 Arg --tenant_domain=contoso.onmicrosoft.com
 8 Arg --certkeyfile=certkey.pem
 9 Arg --working_directory=/opt/nxlog-msazure/conf
10 Arg --service_communication_operations=Services,CurrentStatus,HistoricalStatus,Messages
11 Arg --management_activity_sources=Audit.Exchange,Audit.SharePoint,Audit.AzureActiveDirectory
12 <Exec>
13 parse_syslog();
14 </Exec>
15 </Input>

Output of a Modified Document Event in JSON Format


{
  "ClientIP": "20.40.136.153",
  "CorrelationId": "0b98549f-0056-2000-baa9-211499d2b0e1",
  "CreationTime": "2020-05-21T13:02:22",
  "EventSource": "SharePoint",
  "Id": "1afa6393-6d9f-44dd-72a6-08d7fd8733d5",
  "ItemType": "File",
  "ListId": "d1df9d8a-25ad-4173-a9e3-c0dce2675f9a",
  "ListItemUniqueId": "ebcd6f01-564a-467f-a24a-b1a20c44b907",
  "ObjectId": "https://testnxlog.sharepoint.com/sites/nxlogtest/Shared Documents/Secret.xlsx",
  "Operation": "FileModified",
  "OrganizationId": "a78f0974-05ea-44c8-9ba3-3edaee870793",
  "RecordType": 6,
  "Site": "0ba16f09-d3b9-4827-b8bb-77f00694d6af",
  "SiteUrl": "https://testnxlog.sharepoint.com/sites/nxlogtest/",
  "SourceFileExtension": "xlsx",
  "SourceFileName": "Secret.xlsx",
  "SourceRelativeUrl": "Shared Documents",
  "UserAgent": "MSWAC",
  "UserId": "nxlogadmin@testnxlog.onmicrosoft.com",
  "UserKey": "i:0h.f|membership|10032000ba9b0c07@live.com",
  "UserType": 0,
  "Version": 1,
  "WebId": "c7b9cd18-c3a0-437e-99be-eba97cf33f09",
  "Workload": "SharePoint"
}

144.5. Running in Standalone Mode


Although the Microsoft Azure and Office 365 add-on is designed to work and collect logs as part of NXLog, it can
be run in standalone mode from a Linux terminal.

The first NXLog configuration example above would look like the one below if it were invoked from a terminal console. In this case, the received events would be continuously printed to the terminal.

Example 719. Azure Active Directory Events in Standalone Mode

$ /opt/nxlog-msazure/bin/msazure-pull.sh \
  --client_id=912497ba-9780-46bc-a6a6-3a56a4c14278 \
  --tenant_id=e681b493-14a8-438b-8bbf-d65abdc826c2 \
  --certthumbprint=D64Rm2IkRQxp26XK4Da7Bcbqu2o= \
  --tenant_domain=contoso.onmicrosoft.com \
  --certkeyfile=certkey.pem \
  --working_directory=/opt/nxlog-msazure/conf \
  --enable-azure-ad-reports

Chapter 145. MSI for NXLog Agent Setup
This add-on can be downloaded from the nxlog-public/contrib repository according to the license and terms
specified there.

This add-on provides an example for building an MSI package which can be used to bootstrap an NXLog agent on
a Windows system. Normally this would be used to set up the agent for management by NXLog Manager—it
installs a custom configuration and a CA certificate. The package can be installed alongside the NXLog MSI.

1. The Windows Installer XML Toolset (WiX) is required to build the custom MSI. WiX is free software available
for download from wixtoolset.org.
2. Install WiX. Make a note of where the binary folder of WiX is located (containing the candle.exe and light.exe
executables, typically C:\Program Files (x86)\WiX Toolset v3.11\bin).
3. Save the add-on files in a folder of your choosing and make sure the path to the binary folder is correct in
the pkgmsi32.bat (or pkgmsi64.bat) script by editing the WIX_BUILD_LOCATION variable.
4. Add the custom agent-ca.pem and log4ensics.conf files in the folder.

5. The files to be deployed can be customized by editing nxlog-conf.wxs.

6. Finally, execute either the pkgmsi32.bat or the pkgmsi64.bat script, depending on the targeted
architecture. While both of the resulting MSIs include platform-independent files, we strongly advise building
and installing the custom configuration MSI that matches the NXLog installation.
7. The script will proceed to build the MSI. Depending on the architecture selected, the result will be either
nxlog-conf_x86.msi or nxlog-conf_x64.msi.

8. The custom configuration MSI can now be deployed alongside the NXLog installer, using one of the same
methods (interactively, with Msiexec, or via Group Policy).

Chapter 146. Okta
This add-on is available for purchase. For more information, please contact us.

The Okta add-on can be used to pull events from Okta using their REST API. Events will be passed to NXLog in
Syslog format with the JSON event in the message field.

To set up the add-on, follow these steps.

1. Install the add-on.


2. Edit the configuration entries in the nxlog-okta.cfg file (in /opt/nxlog-okta/conf/) as necessary.

3. Configure NXLog to collect events with the im_exec module.

The script saves the current timestamp to a state file in order to properly resume when it is terminated. If the
state file does not exist, the script will collect logs beginning with the current time. To manually specify a starting
timestamp, pass it as an argument: ./okta-pull.pl --startdate="2014-10-29T17:13:24.000Z".
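The resume behavior described above can be sketched as follows (the state file name, format, and helper names here are hypothetical, not taken from okta-pull.pl):

```python
import os

def starting_timestamp(state_file, now_iso):
    """Return the saved timestamp, or the current time if no state file exists."""
    if os.path.exists(state_file):
        with open(state_file) as f:
            return f.read().strip()
    return now_iso

def save_timestamp(state_file, timestamp):
    """Persist the last-seen timestamp so the next run resumes from it."""
    with open(state_file, "w") as f:
        f.write(timestamp)

save_timestamp("okta.state", "2014-10-29T17:13:24.000Z")
print(starting_timestamp("okta.state", "2020-06-15T00:00:00.000Z"))
# 2014-10-29T17:13:24.000Z
os.remove("okta.state")
```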

Example 720. Collecting Events From Okta

This configuration uses the im_exec module to run the script, which connects to Okta and returns Syslog-
encapsulated JSON. The xm_syslog parse_syslog() and xm_json parse_json() procedures are used to parse
each event into internal NXLog fields. Additional modification to the fieldset can be added, as required, in
the Input instance Exec block.

For the sake of demonstration, all internal fields are then converted back to JSON and written to file.

nxlog.conf
 1 <Extension _json>
 2 Module xm_json
 3 </Extension>
 4
 5 <Extension _syslog>
 6 Module xm_syslog
 7 </Extension>
 8
 9 <Input okta>
10 Module im_exec
11 Command /opt/nxlog-okta/bin/okta-pull.pl
12 <Exec>
13 parse_syslog();
14 parse_json($Message);
15 </Exec>
16 </Input>
17
18 <Output file>
19 Module om_file
20 File '/tmp/output'
21 Exec to_json();
22 </Output>

Chapter 147. Perlfcount
This add-on is available for purchase. For more information, please contact us.

The perlfcount add-on is a Perl script that can be used with NXLog to collect system information and statistics on
Linux platforms.

Chapter 148. Salesforce
This add-on is available for purchase. For more information, please contact us.

The Salesforce add-on provides support for fetching Event Log Files from Salesforce with NXLog. The script
collects Event Log Files from a Salesforce instance by periodically running SOQL queries via the REST API. The
events can then be passed to NXLog by different means, depending on how the data collection is configured.
For more information about the Event Log File API, see EventLogFile in the Salesforce SOAP API Developer Guide.

NOTE The Event Logs feature of Salesforce is a paid add-on feature. Make sure this feature is enabled on the Salesforce instance before continuing.

148.1. General Usage


The salesforce.py script can be configured both from the command line and from a configuration file. The
configuration file collect.conf.json must be located in the same directory as salesforce.py, so that the
script can load the configuration parameters automatically. Passing arguments from the command line overrides
the corresponding parameter read from the configuration file. The following is a sample configuration file:

collect.conf.json
{
  "log_level": "DEBUG",
  "log_file": "var/collector.log",
  "user": "user@example.com",
  "password": "UxQqx847sQ",
  "token": "ZsQO0k5gAgJch3mLUtEqt0K",
  "url": "https://login.salesforce.com/services/Soap/u/39.0/",
  "checkpoint": "var/checkpoint/",
  "keep_csv": "True",
  "output": "structured",
  "header": "none",
  "mode": "across",
  "transport": "stdout",
  "target": "file",
  "limit": "5",
  "delay": "3",
  "request_delay": "3600"
}

A compact view of the command line options is shown below. Use salesforce.py -h to get help, including a
short explanation of the options.

salesforce.py usage
usage: salesforce.py [-h] [--config CONFIG] [--user USER]
  [--password PASSWORD] [--token TOKEN] [--url URL]
  [--checkpoint CHECKPOINT] [--keep_csv {True,False}]
  [--output {json,structured}] [--header {none,syslog}]
  [--mode {loop,across}] [--target TARGET] [--delay DELAY]
  [--limit LIMIT] [--request_delay REQUEST_DELAY]
  [--transport {file,socket,pipe,stdout}]
  [--log_level {CRITICAL,ERROR,WARNING,INFO,DEBUG,NOTSET}]
  [--log_file LOG_FILE]

148.2. Authentication and Data Retrieval
The user needs to set the authentication parameters (username, password, and token) so that the script can
connect to Salesforce and retrieve the Event Logs. The url parameter supplied with the sample configuration file
is correct at the time of writing but it may change in the future. The log_level and log_file parameters can be
used as an aid during the initial setup, as well as to identify problems during operation.

NOTE It is not possible to find the security token of an existing profile. The solution is to reset it as described in Reset Your Security Token on Salesforce Help.

Depending on your setup, the mode parameter can be set to loop, so that the script looks for new events
continuously, or to across, so that the script terminates once all the available events are retrieved. When in
loop mode, the request_delay parameter can be configured for the script to wait the specified number of
seconds before requesting more events.
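The difference between the two modes amounts to a control loop like the following sketch (hypothetical helper names, not the script's actual code; max_loops exists only to make this demo terminate):

```python
import time

def collect(fetch, mode, request_delay=3600, max_loops=None):
    """Fetch events once ('across') or repeatedly ('loop')."""
    loops = 0
    while True:
        yield from fetch()
        loops += 1
        if mode == "across":                   # one pass, then terminate
            break
        if max_loops and loops >= max_loops:   # demo-only safety guard
            break
        time.sleep(request_delay)              # 'loop': wait before polling again

batches = iter([["e1", "e2"], ["e3"]])
events = list(collect(lambda: next(batches), mode="across"))
print(events)  # ['e1', 'e2']
```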

148.3. Local Storage and Processing


The script will temporarily store the Event Log Files in a directory structure under the directory name given by the
checkpoint parameter. The events are stored in CSV format. Files with the same name but with a .state
extension hold the current state, so that no events will be lost or duplicated even if the script terminates
unexpectedly. The directory structure is shown below.

NOTE Directories and files are created automatically when an event of that type is logged by Salesforce.

var/checkpoint/ApexExecution:
  2018-02-08T00:00:00.000+0000.csv
  2018-02-08T00:00:00.000+0000.state
var/checkpoint/LightningError:
  2018-02-08T00:00:00.000+0000.csv
  2018-02-08T00:00:00.000+0000.state
var/checkpoint/Login:
  2018-02-08T00:00:00.000+0000.csv
  2018-02-08T00:00:00.000+0000.state
var/checkpoint/Logout:
  2018-02-08T00:00:00.000+0000.csv
  2018-02-08T00:00:00.000+0000.state
var/checkpoint/PackageInstall:
  2018-03-01T00:00:00.000+0000.csv
  2018-03-01T00:00:00.000+0000.state

WARNING If this directory structure is removed, the script will be unable to determine the state, and all available events stored in your Salesforce instance will be retrieved and passed to NXLog again. However, after testing and determining that everything is configured correctly, remember to delete the directory structure to reset the state.

Once all the available events have been downloaded and the script determines that no other events have been
added, it will proceed to process them and produce the final output. The limit and delay parameters can be set
to throttle the processing by limiting the number of records per block and setting the delay (in seconds)
between blocks of records.

The script will delete the CSV files once they are processed. However, the keep_csv parameter can be set to
True to preserve them.

148.4. Data Format and Transport
The processed events can be presented in two different formats: either as structured output or as JSON. This can
be selected by setting the output parameter accordingly. Furthermore, a Syslog-style header can be added
before the event data by means of the header parameter. The output types are shown below.

Structured Output
CLIENT_IP="46.198.211.113" OS_NAME="LINUX"
DEVICE_SESSION_ID="33ddcf5f751fdaf4b6a010d73014710ed2f13e33" BROWSER_NAME="CHROME"
BROWSER_VERSION="64" USER_AGENT=""Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like
Gecko) Chrome/64.0.3282.186 Safari/537.36"" CLIENT_ID="" REQUEST_ID=""
SESSION_KEY="qomr/wgmbMU73iG6" DEVICE_ID="" CONNECTION_TYPE="" EVENT_TYPE="LightningError"
SDK_APP_VERSION="" SDK_APP_TYPE="" UI_EVENT_SOURCE="storage" SDK_VERSION="" UI_EVENT_SEQUENCE_NUM=""
LOGIN_KEY="5ujU+09kPSKatTxR" UI_EVENT_TYPE="error" PAGE_START_TIME="1519928816975" DEVICE_MODEL=""
USER_TYPE="Standard" ORGANIZATION_ID="00D1r000000rH0F" OS_VERSION=""
USER_ID_DERIVED="0051r000007NyeqAAC" UI_EVENT_ID="ltng:error" APP_NAME="one:one"
UI_EVENT_TIMESTAMP="1519928819334" USER_ID="0051r000007Nyeq" TIMESTAMP="20180301182702.187"
TIMESTAMP_DERIVED="2018-03-01T18:27:02.187Z" DEVICE_PLATFORM="SFX:BROWSER:DESKTOP"↵

JSON Output
{"CLIENT_IP": "Salesforce.com IP", "REQUEST_ID": "4GVCi4pxSjCESP-qby__7-", "SESSION_KEY": "",
"API_TYPE": "", "EVENT_TYPE": "Login", "SOURCE_IP": "46.198.211.113", "RUN_TIME": "143",
"LOGIN_KEY": "", "USER_NAME": "user@example.com", "CPU_TIME": "57", "BROWSER_TYPE": "Mozilla/5.0
(X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/64.0.3282.186 Safari/537.36",
"URI": "/index.jsp", "ORGANIZATION_ID": "00D1r000000rH0F", "USER_ID_DERIVED": "0051r000007NyeqAAC",
"DB_TOTAL_TIME": "47093446", "LOGIN_STATUS": "LOGIN_NO_ERROR", "USER_ID": "0051r000007Nyeq",
"TIMESTAMP": "20180302083919.878", "TLS_PROTOCOL": "TLSv1.2", "REQUEST_STATUS": "", "CIPHER_SUITE":
"ECDHE-RSA-AES256-GCM-SHA384", "TIMESTAMP_DERIVED": "2018-03-02T08:39:19.878Z", "URI_ID_DERIVED":
"", "API_VERSION": "9998.0"}

Structured Output With Syslog Header


<14>1 2018-03-05T18:37:56.157860 eu12.salesforce.com - - - - NUMBER_FIELDS="2"
CLIENT_IP="46.198.211.113" ENTITY_NAME="EventLogFile" DB_CPU_TIME="0" USER_AGENT="5238"
REQUEST_ID="4GUW0E969JxN49-qbzCo8-" SESSION_KEY="mmOUNLlL4HlSzrSq" EVENT_TYPE="RestApi" RUN_TIME="8"
RESPONSE_SIZE="706" METHOD="GET" CPU_TIME="4" LOGIN_KEY="szBoBvcp+3dHeuff" STATUS_CODE="200"
URI="/services/data/v37.0/sobjects/EventLogFile/0AT1r000000NWSKGA4/LogFile"
ORGANIZATION_ID="00D1r000000rH0F" REQUEST_STATUS="S" DB_TOTAL_TIME="3319055" ROWS_PROCESSED="1"
MEDIA_TYPE="text/csv" DB_BLOCKS="15" USER_ID="0051r000007Nyeq" TIMESTAMP="20180301190010.634"
URI_ID_DERIVED="0AT1r000000NWSKGA4" REQUEST_SIZE="0" USER_ID_DERIVED="0051r000007NyeqAAC"
TIMESTAMP_DERIVED="2018-03-01T19:00:10.634Z"↵

NOTE The samples above are not from the same event.

The formatted output can then be written to standard output, passed to another program through a named pipe,
saved to a file, or sent to another program over a Unix domain socket (UDS). This is controlled by setting the
transport parameter to stdout, pipe, file, or socket, respectively. When the transport is pipe, file, or socket,
the target parameter sets the name of the pipe, file, or socket.
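For example, to send events over a Unix domain socket, the two parameters in collect.conf.json would be set as
in the following fragment (the socket name uds_socket is an assumption; any path writable by the script works):

  "transport": "socket",
  "target": "uds_socket"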

148.5. Configuring NXLog


The versatility of the salesforce.py script, combined with NXLog, allows for several different ways to collect the
Event Log Files from Salesforce.

In the first scenario, NXLog runs the script directly and consumes its output. For this, the script should run
in loop mode, so that events are fetched periodically from Salesforce.

Example 721. Loop Mode

NXLog executes salesforce.py, which in turn collects events every hour, processes them, formats them as
JSON with a Syslog header, and forwards them to NXLog.

collect.conf.json
{
  "log_level": "DEBUG",
  "log_file": "var/collector.log",
  "user": "user@example.com",
  "password": "UxQqx847sQ",
  "token": "ZsQO0k5gAgJch3mLUtEqt0K",
  "url": "https://login.salesforce.com/services/Soap/u/39.0/",
  "checkpoint": "var/checkpoint/",
  "keep_csv": "True",
  "output": "json",
  "header": "syslog",
  "mode": "loop",
  "transport": "stdout",
  "target": "file",
  "limit": "100",
  "delay": "3",
  "request_delay": "3600"
}

nxlog.conf
<Extension _syslog>
    Module      xm_syslog
</Extension>

<Extension _json>
    Module      xm_json
</Extension>

<Input messages>
    Module      im_exec
    Command     ./salesforce.py
    <Exec>
        parse_syslog();
        parse_json($Message);
    </Exec>
</Input>

<Output out>
    Module      om_file
    File        "output.log"
</Output>

<Route messages_to_file>
    Path        messages => out
</Route>

In the second scenario, NXLog listens on a UDS for events while either NXLog or an external scheduler runs
salesforce.py. In this case, salesforce.py should run in across mode.

WARNING Be sure to provide ample time for the script to finish executing before the scheduler starts a new
execution, or use a wrapper script that prevents running multiple instances simultaneously.
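One minimal way to implement such a guard is a small wrapper that takes an exclusive lock on a file before
starting the collector. The sketch below is not part of salesforce.py; the lock path and wrapped command are
assumptions for illustration, and it relies on POSIX flock semantics.

```python
# Hypothetical lock-guard wrapper: run the collector only when no previous
# invocation still holds the lock file. LOCK_PATH is an assumed location.
import fcntl
import subprocess

LOCK_PATH = "/tmp/salesforce.lock"

def run_once(cmd, lock_path=LOCK_PATH):
    """Run cmd only if the lock is free; return True if it actually ran."""
    with open(lock_path, "w") as lock:
        try:
            fcntl.flock(lock, fcntl.LOCK_EX | fcntl.LOCK_NB)
        except BlockingIOError:
            return False  # a previous run is still active; skip this one
        subprocess.run(cmd, check=False)
        return True  # the lock is released when the file is closed

# A scheduler (cron or xm_exec) would then invoke, for example:
#   run_once(["./salesforce.py"])
```

Because the lock is released automatically when the process exits, a crashed run cannot leave the guard stuck,
which is the usual pitfall of PID-file approaches.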

Example 722. Across Mode With NXLog as Scheduler

collect.conf.json
{
  "log_level": "DEBUG",
  "log_file": "var/collector.log",
  "user": "user@example.com",
  "password": "UxQqx847sQ",
  "token": "ZsQO0k5gAgJch3mLUtEqt0K",
  "url": "https://login.salesforce.com/services/Soap/u/39.0/",
  "checkpoint": "var/checkpoint/",
  "keep_csv": "True",
  "output": "structured",
  "header": "none",
  "mode": "across",
  "transport": "socket",
  "target": "uds_socket",
  "limit": "100",
  "delay": "3",
  "request_delay": "3600"
}

nxlog.conf
<Extension exec>
    Module      xm_exec
    <Schedule>
        Every   1 hour
        <Exec>
            log_info("Scheduled execution at " + now());
            exec_async("./salesforce.py");
        </Exec>
    </Schedule>
</Extension>

<Input messages>
    Module      im_uds
    UDS         ./uds_socket
    UDSType     stream
</Input>

<Output out>
    Module      om_file
    File        "output.log"
</Output>

<Route messages_to_file>
    Path        messages => out
</Route>

It is even possible to start salesforce.py manually in loop mode with a large request_delay and collect events via
UDS (as shown above) without the xm_exec instance. Alternatively, set the transport to file and configure NXLog to
read events with im_file.
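For the file transport, a minimal input instance might look like the following fragment (the file path is an
assumption and must match the file name given in the script's target parameter):

nxlog.conf (fragment)
<Input messages>
    Module      im_file
    File        "/var/log/salesforce/events.log"
</Input>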

NOTE Though events are captured in real time, Salesforce generates the Event Log Files during non-peak hours.

