
24/11/2560 Count Upon Security | Increase security awareness. Promote, reinforce and learn security skills.

Count Upon Security

Increase security awareness. Promote, reinforce and learn security skills.

NOV 22 2017
DIGITAL FORENSICS AND INCIDENT RESPONSE

Digital Forensics – Artifacts of interactive sessions

(https://countuponsecurity.files.wordpress.com/2017/11/artifacts.png)In this article I would like to go over some of the digital forensic artifacts that are likely to be useful on your quest to find answers to investigative questions, especially when conducting digital forensics and incident response on security incidents where you know the attacker performed his actions while logged in interactively to a Microsoft Windows system. Normally, one of the first things I look at is the Windows Event Logs. When properly configured they are a treasure trove of information, but in this article I want to focus on artifacts that can be useful even if an attacker attempts to cover his tracks by deleting the Event Logs.

Let’s start with ShellBags!

To improve the user experience, Microsoft operating systems store folder settings in the registry. If you open a folder, resize its dimensions, close it and open it again, did you notice that Windows restored the view you had?
Yep, that’s ShellBags (http://www.dfrws.org/sites/default/files/session-files/paper-using_shellbag_information_to_reconstruct_user_activities.pdf) in action. This information is stored in the user profile hive “NTUSER.dat”
within the directory “C:\Users\%Username%\” and in the hive “UsrClass.dat”, which is stored at
“%LocalAppData%\Microsoft\Windows”. When a profile is loaded into the registry, both hives are mounted under HKEY_USERS and
then linked to the root keys HKEY_CURRENT_USER and HKEY_CURRENT_USER\Software\Classes respectively. If you are
curious, you can see where the different files are loaded by looking at the registry key
“HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\hivelist”. On Windows XP and 2003 the ShellBags registry keys are
stored at HKEY_USERS\{SID}\Software\Microsoft\Windows\Shell\ and HKEY_USERS\{SID}\Software\Microsoft\Windows\ShellNoRoam\. On Windows 7 and beyond the ShellBags registry keys are stored at “HKEY_USERS\{SID}_Classes\Local Settings\Software\Microsoft\Windows\Shell\”.

Why are ShellBags relevant?

Well, this (http://windowsir.blogspot.com/2012/08/shellbag-analysis.html) particular artifact (http://www.4n6k.com/2013/12/shellbags-forensics-addressing.html) allows (https://volatility-labs.blogspot.de/2012/09/movp-32-shellbags-in-memory-setregtime.html) us
(https://digital-forensics.sans.org/blog/2011/07/05/shellbags) to get visibility into the intent or knowledge that a user or an attacker had
when accessing or browsing directories, and when, even if the directory no longer exists. For example, an attacker that connects to a
system using Remote Desktop and accesses a directory where his toolkit is stored. Or an unhappy employee that accesses a network share
containing business data or intellectual property weeks before his last day and places this information on a USB drive. ShellBags artifacts
can help us understand if such actions were performed. So, when you obtain the NTUSER.dat and UsrClass.dat hives you can parse them
and then place the events into a timeline. When corroborated with other artifacts, the incident response team can reconstruct user activities
that were performed interactively and understand what happened and when.
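To make the corroboration step concrete, here is a minimal sketch of merging records from several parsed artifacts into one chronological timeline. The event tuples below are hypothetical placeholders for whatever your ShellBags or JumpList parser emits, normalised to UTC:

```python
from datetime import datetime, timezone

# Hypothetical records as (timestamp, artifact, detail) tuples, e.g. as
# produced by a ShellBags or JumpList parser after normalising to UTC.
shellbag_events = [
    (datetime(2017, 11, 20, 9, 15, tzinfo=timezone.utc), "ShellBags", r"C:\Windows\Temp"),
]
jumplist_events = [
    (datetime(2017, 11, 20, 9, 17, tzinfo=timezone.utc), "JumpList", r"C:\Windows\Temp\tmp.txt"),
]

def build_timeline(*event_sources):
    """Merge per-artifact event lists into one chronological timeline."""
    return sorted((e for src in event_sources for e in src), key=lambda e: e[0])

for ts, artifact, detail in build_timeline(shellbag_events, jumplist_events):
    print(ts.isoformat(), artifact, detail)
```

Real timelining tools like log2timeline do this at scale, but the principle is the same: normalise every artifact to a common timestamped record and sort.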

Which tools can we use to parse ShellBags?

I like to use RegRipper from Harlan Carvey (http://windowsir.blogspot.ch/), ShellBags Explorer from Eric Zimmerman
(https://binaryforay.blogspot.com/) or Sbags (http://www.williballenthin.com/forensics/shellbags/index.html) from Willi Ballenthin
(http://www.williballenthin.com/). The picture below shows an example of using Willi’s tool to parse the ShellBags information from the
NTUSER.dat and UsrClass.dat hives. As an example, this illustration shows that the attacker accessed several network folders within
SYSVOL and also accessed the “c:\Windows\Temp” folder.

https://countuponsecurity.com/ 1/19

(https://countuponsecurity.files.wordpress.com/2017/11/shellbags.png)

To give you context, I’m showing this particular illustration of accessing the SYSVOL folder because it contains Active
Directory Group Policy preference files that in some circumstances
(https://github.com/PowerShellMafia/PowerSploit/blob/master/Exfiltration/Get-GPPPassword.ps1) might contain valid domain credentials
that can be easily decrypted. This is a known technique used by attackers to obtain credentials and is likely to occur at the beginning of an
incident. Searching for passwords in files such as these is a simple way for attackers to get credentials for service or administrative accounts
without executing credential harvesting tools.

Next artifact on our list, JumpLists!

Once again, to improve the user experience and accelerate the workflow, Microsoft introduced in Windows 7 the possibility for a
user to access a list of recently used applications and files. This is done by enabling the feature to store and display recently opened
programs and items in the Start Menu and the taskbar. There are two types of files that store JumpLists
(https://articles.forensicfocus.com/2012/10/30/forensic-analysis-of-windows-7-jump-lists/) information
(https://webcache.googleusercontent.com/search?q=cache:J ME9OB26cJ:https://www.champlain.edu/Documents/LCDI/Jump%2520List%2520Forensics.pdf+&cd=3&hl=en&ct=clnk&gl=us).
One is {AppId}.automaticDestinations-ms and the other is {AppId}.customDestinations-ms, where {AppId} corresponds to a 16-character hex string
(http://www.4n6k.com/2011/09/jump-list-forensics-appids-part-1.html) that uniquely identifies the application
(http://www.hexacorn.com/blog/2013/04/30/jumplists-file-names-and-appid-calculator/) and is calculated based on a CRC64 of the application path,
with a few oddities. These files are stored in the folders “C:\Users\%USERNAME%\AppData\Roaming\Microsoft\Windows\Recent\AutomaticDestinations” and “C:\Users\%USERNAME%\AppData\Roaming\Microsoft\Windows\Recent\CustomDestinations”. The AutomaticDestinations folder contains {16hexchars}.automaticDestinations-ms files; these are
generated by common operating system applications and stored in the Shell Link Binary File Format, known as [MS-SHLLINK],
encapsulated inside a Compound File Binary File Format, known as MS-CFB or OLE. The CustomDestinations folder contains
{16hexchars}.customDestinations-ms files; these are generated by applications installed by the user, or scripts that were executed, and are
stored in the Shell Link Binary File Format [MS-SHLLINK].
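To illustrate the CRC64 idea behind the AppID, here is a generic bit-by-bit CRC-64 in Python. Note the hedging: this uses the reflected ECMA-182 polynomial with CRC-64/XZ parameters purely to show the mechanics; the actual AppID computation uses its own CRC64 variant and input normalisation ("the oddities"), documented in the Hexacorn post linked above, so do not expect this to reproduce real AppIDs:

```python
def crc64(data: bytes) -> int:
    """Bit-by-bit CRC-64 (reflected ECMA-182 polynomial, CRC-64/XZ
    parameters). Illustrative only: the real AppID algorithm is a CRC64
    variant with its own quirks and input normalisation."""
    poly = 0xC96C5795D7870F42  # reflected form of 0x42F0E1EBA9EA3693
    crc = 0xFFFFFFFFFFFFFFFF
    for byte in data:
        crc ^= byte
        for _ in range(8):
            crc = (crc >> 1) ^ poly if crc & 1 else crc >> 1
    return crc ^ 0xFFFFFFFFFFFFFFFF

# A path would typically be normalised (e.g. upper-cased and encoded)
# before hashing; the exact normalisation is part of the "oddities".
path = "C:\\WINDOWS\\SYSTEM32\\NOTEPAD.EXE".encode("utf-16le")
print(f"{crc64(path):016x}")
```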

Why are JumpLists relevant?

Just like like ShellBags, this artifact allows us to get visibility about the intent or knowledge an a acker had when opening a particular file,
launching a particular application or browsing a specific directory during the course of an interactive session. For example, consider an
a acker that is operating on a compromised system using Remote Desktop and launches a browser, the JumpList associated with it will
contains the most visited or the recently closed website. If the a acker is pivoting between system using the Terminal Services client, the
JumpList shows the system that was specified as an argument. If an a acker dumped credentials from memory and saved into a text file
and opened it with Notepad, the JumpList will show evidence about it. Essentially, the metadata stored on these JumpList files can be
parsed and will show you a chronological list of Most Recently Used (MRU) or Most Frequently Used (MFU) files opened by the
user/application. Among other things, the information contains the Standard Information timestamps from the list entry and the time
stamps from the file at the time of opening. Furthermore, it shows the original file path and sizes. This information, when placed into a
timeline and corroborated with another artifact can give us a clear picture of the actions performed.

Which tools can we use to parse JumpLists?

JumpListsView from NirSoft, JumpLister from Mark Woan or JumpList Explorer from Eric Zimmerman. Below is an example of using
Eric’s tool to parse the JumpLists files (https://tzworks.net/prototypes/lp/lp.users.guide.pdf). More specifically, the JumpList file that is
associated with Notepad. As an example, this illustration shows that an attacker opened the file “C:\Windows\Temp\tmp.txt” with
Notepad. It shows when the file was created and the MFT entry. Very useful.


(https://countuponsecurity.files.wordpress.com/2017/11/jumplists.png)

Next artifact, LNK files!

Again, consider an attacker operating on a compromised system using a Remote Desktop session, where he dumped the credentials to a text
file and then double clicked on the file. This action will result in the creation of the corresponding Windows shortcut file (LNK file). LNK
files (http://computerforensics.parsonage.co.uk/downloads/themeaningoflife.pdf) are Windows Shortcuts. Everyone that has used Windows
has created a shortcut to a favorite folder or program. However, behind the scenes the Windows operating system also keeps track
of recently opened files by creating LNK files within the directory “C:\Documents and Settings\%USERNAME%\Recent\”. LNK
files, like JumpLists, are stored in the Shell Link Binary File Format, known as [MS-SHLLINK]. When parsed, the LNK file contains metadata
that, among other things, shows the target file’s Standard Information timestamps, path, size and MFT entry number. This information is
maintained even if the target file no longer exists on the file system. The MFT entry number can be valuable in case the file was
recently deleted and you would like to attempt to recover it by carving it from the file system.
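To see how little is needed to pull the timestamps out, here is a minimal sketch that parses only the fixed 76-byte ShellLinkHeader defined in [MS-SHLLINK]. The target path lives in variable-length structures after the header, which real tools handle; this sketch deliberately stops at the header:

```python
import struct
from datetime import datetime, timedelta, timezone

def filetime_to_dt(ft: int) -> datetime:
    """Convert a Windows FILETIME (100 ns ticks since 1601-01-01) to UTC."""
    return datetime(1601, 1, 1, tzinfo=timezone.utc) + timedelta(microseconds=ft // 10)

def parse_lnk_header(data: bytes) -> dict:
    """Parse the fixed 76-byte ShellLinkHeader of a .lnk file ([MS-SHLLINK]).
    Only the target timestamps and size are extracted; the link target path
    is stored in variable-length structures that follow the header."""
    header_size, = struct.unpack_from("<I", data, 0)
    assert header_size == 0x4C, "not a valid shell link header"
    created, accessed, written = struct.unpack_from("<3Q", data, 28)
    file_size, = struct.unpack_from("<I", data, 52)
    return {
        "created": filetime_to_dt(created),
        "accessed": filetime_to_dt(accessed),
        "written": filetime_to_dt(written),
        "target_size": file_size,
    }
```

Usage would be something like `parse_lnk_header(open("tmp.lnk", "rb").read())` against a shortcut recovered from the Recent folder.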

Which tools can we use to parse .LNK files?

Joachim Metz (https://github.com/joachimmetz) has a utility to parse the information from Windows Shortcut files. The utility is
installed by default on the SIFT workstation. In the illustration below, while analyzing a disk image, we can see that there are several .LNK
files created under a particular profile. Knowing that this profile has been used by an attacker, you can parse the files. In this case,
parsing the “tmp.lnk” file shows the target file “C:\Windows\Temp\tmp.txt”, its size and when it was created.


(https://countuponsecurity.files.wordpress.com/2017/11/lnkinfo.png)

Next artifact, UserAssist!

The UserAssist registry key (http://www.4n6k.com/2013/05/userassist-forensics-timelines.html) keeps track of the applications that were
executed by a particular user. The data is encoded using the ROT13 substitution cipher (https://blog.didierstevens.com/2006/07/24/rot13-is-used-in-windows-you%E2%80%99re-joking/) and maintained in the registry key HKEY_USERS\{SID}\Software\Microsoft\Windows\CurrentVersion\Explorer\UserAssist.
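Decoding the ROT13-encoded value names is trivial with the Python standard library; the value name below is a hypothetical example in the XP-era UEME_ naming style:

```python
import codecs

# Value names under the UserAssist\{GUID}\Count keys are ROT13-encoded;
# digits and punctuation pass through unchanged.
encoded = "HRZR_EHACNGU:pzq.rkr"   # hypothetical value name
print(codecs.decode(encoded, "rot13"))  # -> UEME_RUNPATH:cmd.exe
```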

Why is UserAssist relevant?

Consider an attacker operating on a compromised system who launched “cmd.exe” in order to run other Windows built-in commands, or
opened the Active Directory Domains and Trusts snap-in “domain.msc” to gather information about a particular domain, or launched a
credential dumper from an odd directory. These actions will be tracked by the UserAssist registry key. The registry key will show information
about which programs have been executed by a specific user and how frequently. Due to the nature of how timestamps are maintained in the
registry, i.e., only the last modified timestamp is kept, this artifact will show when a particular application was last launched.

Which tools can we use to parse the UserAssist registry keys?

Once again, RegRipper from Harlan Carvey is a great choice. Another tool is UserAssist from Didier Stevens
(https://blog.didierstevens.com/). Another method that I often use is log2timeline (https://countuponsecurity.com/2015/11/23/digital-forensics-supertimeline-event-logs-part-i/) with the Windows Registry plugin, then grepping for the UserAssist parser output. In this example, we
can see that an attacker, while operating under a compromised account, executed “cmd.exe”, “notepad.exe” and “mmc.exe”. Now,
combining these artifacts with the ShellBags, JumpLists and .LNK files, I can start to interpret the results.

(https://countuponsecurity.files.wordpress.com/2017/11/userassist.png)

Next artifact, RDP Bitmap Cache!

With the release of RDP 5.0 in Windows 2000, Microsoft introduced a persistent bitmap caching mechanism that augmented the bitmap
RAM cache. With this mechanism, when you make a Remote Desktop connection, the bitmaps can get stored on disk and are available to the
RDP client, allowing it to load them from disk instead of waiting on the latency of the network connection. Of course, this was developed
with low-bandwidth network connections in mind. On Windows 7 and beyond the cache folder is located at
“%USERPROFILE%\AppData\Local\Microsoft\Terminal Server Client\Cache\” and there are two types of cache files
(https://www.cert.ssi.gouv.fr/actualite/CERTFR-2016-ACT-017/). One has a .bmc extension, and a newer format introduced in Windows 7
follows the naming convention “cache{4-digits}.bin”. Both files hold tiles of 64×64 pixels. The .bmc files
support different bit depths ranging from 8 bits to 32 bits per pixel. The .bin files are always 32 bits, have more capacity, and a file can store
up to 100 MB of data.
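The tile arithmetic is simple: a full 64×64 tile at 32 bits per pixel is 16384 bytes. The naive split below assumes tiles are stored back-to-back with no per-tile metadata, which real cache files do not guarantee (they carry per-tile bookkeeping that tools like ANSSI's bmc-tools handle), so treat this strictly as an illustration of the sizes involved:

```python
TILE_W, TILE_H, BYTES_PER_PIXEL = 64, 64, 4    # .bin tiles are 32-bit
TILE_SIZE = TILE_W * TILE_H * BYTES_PER_PIXEL  # 16384 bytes per full tile

def split_tiles(data: bytes):
    """Naive split of cache data into fixed-size 64x64 32-bpp tiles.
    Assumes tiles are stored back-to-back with no per-tile metadata,
    which real cache files violate -- use bmc-tools for actual parsing."""
    return [data[i:i + TILE_SIZE] for i in range(0, len(data) - TILE_SIZE + 1, TILE_SIZE)]
```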

Why are RDP Bitmap cache files relevant?

If an attacker is pivoting between systems in a particular environment and is leveraging Remote Desktop then, on the system where the
connection is initiated, you could find the bitmap cache that was stored during the attacker’s Remote Desktop session. After reconstructing
the bitmaps, which represent what was displayed on the attacker’s screen, it might be possible to piece the bitmap puzzle together and observe
what the attacker saw while performing the Remote Desktop connections to the compromised systems. A great exercise for people
who like puzzles!

Which tools can we use to parse RDP Bitmap Cache files?

Unfortunately, there aren’t many tools available. ANSSI-FR released an RDP Bitmap Cache parser (https://github.com/ANSSI-FR/bmc-tools) that you can use to extract the bitmaps from the cache files. There was a tool called BmcViewer that was available on a now defunct
website and is a great tool to parse the .bmc files; it doesn’t support the .bin files, however. If you know how to code, an interesting project might
be to develop a parser that allows you to puzzle the tiles together.

Finally, combining these artifacts with our traditional file system metadata timeline and other artifacts such as ShimCache allows us
to uncover further details. Below is an illustration of parsing the ShimCache from a memory image using Volatility and the
ShimCacheMem plugin written by Fred House. We can see that there are some interesting files, for example “m64.exe”, and looking at
the adjacent entries we can see the execution of “notepad.exe”, “p64.exe” and “reg.exe”. Searching for those binaries on our
file system timeline uncovers that, for example, m64.exe is Mimikatz.

(https://countuponsecurity.files.wordpress.com/2017/11/shimcachemem.png)

That’s it for today! As I wrote in the beginning, the Windows Event Logs are a treasure trove of information when properly configured, but if
an attacker attempts to cover his tracks by deleting the Event Logs, there are many other artifacts to look for. Combine the artifacts
outlined in this article with file system metadata (https://countuponsecurity.com/2017/05/25/digital-forensics-ntfs-change-journal/),
ShimCache (https://countuponsecurity.com/2016/05/18/digital-forensics-shimcache-artifacts/), AMCache, RecentDocs, browser history,
Prefetch (https://countuponsecurity.com/2016/05/16/digital-forensics-prefetch-artifacts/), WordWheelQuery, ComDlg32, RunMRU, and
many others, and you will likely end up with a good understanding of what happened and when. Happy hunting!

References:
PS: Due to the extensive list of references I decided to attach a text file with the links: references
(https://countuponsecurity.files.wordpress.com/2017/11/references.pdf). Without them, this article would not have been possible.

Luttgens, J., Pepe, M., Mandia, K. (2014) Incident Response & Computer Forensics, 3rd Edition
Carvey, H. (2011) Windows Registry Forensics: Advanced Digital Forensic Analysis of the Windows Registry, Second Edition
SANS 508 – Advanced Computer Forensics and Incident Response


Tagged Digital Forensics, JumpLists, LNK files, RDP Bitmap Cache, RecentDocs, ShellBags, ShimCache, UserAssist, WordWheelQuery
JUL 02 2017
DIGITAL FORENSICS AND INCIDENT RESPONSE, INCIDENT HANDLING AND HACKER TECHNIQUES, MALWARE ANALYSIS

Analysis of a Master Boot Record – EternalPetya

NotPetya, or EternalPetya, kept the security community pretty busy last week. A malware specimen that uses a combined arms
approach and maximizes its capabilities by using different techniques to sabotage business operations. One aspect of the malware that
raised my interest was the ability to overwrite the Master Boot Record (MBR) and launch a custom bootloader. This article shows my
approach to extracting the MBR using digital forensic techniques and then analyzing it using Bochs. Before we roll up our sleeves, let’s do
a quick review of how the MBR is used by today’s computers during the boot process.

On computers that rely on BIOS firmware instead of the newer EFI standard, when they boot, the BIOS code is executed and, among
other things, performs a series of routines that carry out hardware checks, i.e., the Power-On Self-Test (POST). Then, the BIOS
attempts to find a bootable device. If the bootable device is a hard drive, the BIOS reads sector 1, track 0, head 0 and, if it contains a valid
MBR (valid meaning that the sector ends with the bytes 0xAA55), it loads that sector into a fixed memory location. By convention the code will
be loaded into the real-mode address 0000:7c00. Then, the instruction pointer register is set to that memory location and the CPU
starts executing the MBR code. What happens next depends on the MBR implementation, i.e., different operating systems
have different MBR code (http://thestarman.narod.ru/asm/mbr/W7MBR.htm). Nonetheless, the code needs to fit in the 512 bytes available
in the disk sector. The MBR follows a standard and its structure contains executable code, the partition table (64 bytes) with the locations of the
primary partitions and, finally, 2 bytes with the 0xAA55 signature. This means only 446 bytes are available to implement an MBR. In the
Microsoft world, when the MBR code is executed, its role is to find an active partition, read its first sector, which contains the VBR code, load
it into memory and transfer execution to it. The VBR (http://thestarman.narod.ru/asm/mbr/W7VBR.htm) is a bootloader program that
will find the Windows BootMgr program and execute it. All this happens in 16-bit real mode.
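The layout just described (446 bytes of code, a 64-byte partition table at offset 446, and the 0xAA55 signature at offset 510) is easy to sanity-check in a few lines of Python. This sketch only decodes the structure; it says nothing about what the code portion does:

```python
import struct

def parse_mbr(sector: bytes) -> list:
    """Parse a 512-byte MBR: verify the 0xAA55 signature at offset 510 and
    decode the four 16-byte partition table entries starting at offset 446."""
    assert len(sector) == 512
    signature, = struct.unpack_from("<H", sector, 510)
    assert signature == 0xAA55, "missing boot signature"
    partitions = []
    for i in range(4):
        entry = sector[446 + i * 16: 446 + (i + 1) * 16]
        status, ptype = entry[0], entry[4]           # bootable flag, partition type
        lba_start, num_sectors = struct.unpack_from("<II", entry, 8)
        if ptype != 0:                               # type 0 means an empty slot
            partitions.append({
                "bootable": status == 0x80,
                "type": ptype,
                "lba_start": lba_start,
                "sectors": num_sectors,
            })
    return partitions
```

Feeding it the first 512 bytes of a raw disk image gives a quick view of the partition table, which is handy when diffing the clean and infected MBRs below.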

Now that we have a brief overview of the boot process, how can we extract and analyze the MBR? More specifically, the MBR that is
used by EternalPetya? Well, we infect a victim machine in a controlled and isolated environment
(https://countuponsecurity.com/2015/01/13/dynamic-malware-analysis-with-remnux-v5-part-1/). We know that EternalPetya’s main
component is a DLL, and we can launch it and infect a Windows machine by running “rundll32.exe petya.dll, #1 10
(https://www.crowdstrike.com/blog/fast-spreading-petrwrap-ransomware-attack-combines-eternalblue-exploit-credential-stealing/)”. Our
setup consisted of two virtual machines, one running Windows 7 and another running REMnux. We created a snapshot of the victim
machine before the infection, then executed the malware. Following that, we waited 10 minutes for the infection to complete. The scheduled
task created by the malware restarted the operating system and a ransom note appeared. Then, I shut down the Windows 7 virtual machine
and used the vmware-vdiskmanager.exe utility to create a single VMDK file (http://www.4n6k.com/2014/04/forensics-quickie-merging-vmdks.html) from the disk state before and after the infection. Next, I moved the VMDK files to a Linux machine where I used QEMU to
convert the VMDK images to RAW format.

(https://countuponsecurity.files.wordpress.com/2017/07/eternalpetya-vmdk.png)

Following that, I could start the analysis and look at the MBR differences. The picture below illustrates the difference between the original
MBR and the EternalPetya MBR. On the right side you have the EternalPetya MBR; the first 147 bytes (0x00 through 0x92) contain
executable code. The last 66 bytes (0x1be through 0x1ff) contain the partition table and boot signature and are equal to the original MBR.

(https://countuponsecurity.files.wordpress.com/2017/06/eternalpetya-mbr.png)

So, we are interested in the code execution instructions. We can start by extracting the MBR into a binary file and converting it to assembly
instructions. This can be done using radare, objdump or ndisasm. Now comes the hard part of the analysis: reading the assembly instructions
and understanding what the code does. We can look at the instructions and perform static analysis, but we can also perform dynamic analysis by
running the MBR code; combining both worlds we will have a better understanding – or at least we can try.

To perform dynamic analysis of the MBR code we will use Bochs (http://bochs.sourceforge.net/). Bochs is an open source, fully fledged x86
emulator. Originally written by Kevin Lawton in 1994, it is still actively maintained today; last April, version 2.6.9 was released.
Bochs includes a CLI and a GUI debugger and is very useful for debugging our MBR code. In addition, Bochs can be integrated with IDA PRO and
Radare. You can download Bochs from here (https://sourceforge.net/projects/bochs/files/bochs/2.6.9/). In our case, we want to use Bochs to
dynamically debug our MBR code. For that we need a configuration file called bochsrc
(http://bochs.sourceforge.net/doc/docbook/user/bochsrc.html), which is used to specify different options such as the disk image and the
CHS parameters for the disk. This (http://www.hexblog.com/?p=103) article from Hex-Rays contains a how-to on integrating Bochs
with IDA PRO. At the end of the article there is the mbr_Bochs.zip (http://hexblog.com/ida_pro/files/mbr_Bochs.zip) file that you can
download. We will use these files to run Bochs standalone, or combined with IDA PRO in case you have it. The bochsrc file that comes with
the ZIP file contains options that are deprecated in newer Bochs versions. The picture below shows the bochsrc options I used. The Bochs
user guide documents this file well.
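For reference, a minimal bochsrc along the lines of what the picture shows might look like the sketch below. The image path and CHS geometry are placeholders for your own RAW image, and the exact options available depend on your Bochs build and platform (for example, the display library is typically `x` on Linux and `win32` on Windows), so check the Bochs user guide against your version:

```text
# Minimal bochsrc sketch -- adjust path and CHS geometry to your image
megs: 512
ata0-master: type=disk, path="petya.raw", mode=flat, cylinders=2088, heads=255, spt=63
boot: disk
mouse: enabled=0
log: bochsout.txt
# Uncomment to use the Enhanced Debugger GUI:
# display_library: x, options="gui_debug"
```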

(https://countuponsecurity.files.wordpress.com/2017/07/eternalpetya-bochsconfig.png)

Then you can try your configuration setup and launch Bochs. If you have IDA PRO, you can use this guide
(https://phocean.net/2011/01/20/debugging-the-mbr-with-ida-pro-and-Bochs.html), starting from step 6, in order to integrate it with IDA
PRO. If all is set up, the debugging session will open and stop at the first instruction of the BIOS code at memory address
F000:FFF0. Why this address? You can read about this and many other low-level things in the outstanding work of Daniel B. Sedory
(http://starman.vertcomp.com/index.html).

(https://countuponsecurity.files.wordpress.com/2017/06/eternalpetya-bochs.png)

Uncomment the last line of the bochsrc configuration file to tell Bochs to use the Enhanced Debugger. For further reference, you can
read the “How to DEBUG System Code using The Bochs Emulator on a Windows™ PC
(http://thestarman.pcministry.com/asm/bochs/bochsdbg.html)” article. Start Bochs again and the GUI will show up. You can load the
stack view by pressing F2. Then set a breakpoint where the MBR code will be loaded by issuing the command “lb 0x7c00”, and then “c”
to continue the debugging session.

(https://countuponsecurity.files.wordpress.com/2017/06/eternalpetya-bochsgui.png)

Now we can look at the code, step into the instructions, inspect the registers and stack. After some back and forth with the debugger I
created the following listing with my interpretation of some of the instructions.


(https://countuponsecurity.files.wordpress.com/2017/06/eternalpetya-mbrcode.png)

Bottom line: the MBR code performs a loop that uses BIOS interrupt 0x13, function 0x42, to read data starting at sector 1 of the hard
drive. The loop copies 8880 (0x22af) bytes of data into memory location 0x8000. When the copy is done, execution is transferred to
address 0x8000 by performing a far jump, and the malicious bootloader is executed. The malicious bootloader code has been uploaded by
Matthieu Suiche (https://twitter.com/msuiche/status/880041005638180864) to VirusTotal here
(https://www.virustotal.com/en/file/41f75e5f527a3307b246cadf344d2e07f50508cf75c9c2ef8dc3bae763d18ccf/analysis/1498652953/). You
can also extract it from the hard drive by extracting sectors 1 through 18, or better, using the commands from the following picture.
Then you can perform static and dynamic analysis.
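The sector extraction is just offset arithmetic over the raw image. A dd-style helper in Python, assuming 512-byte sectors (the image filename here is a placeholder for your converted RAW image):

```python
SECTOR = 512

def extract_sectors(image_path: str, first: int, last: int) -> bytes:
    """Read sectors [first, last] (inclusive, 0-based) from a raw disk image."""
    with open(image_path, "rb") as img:
        img.seek(first * SECTOR)
        return img.read((last - first + 1) * SECTOR)

# e.g. the bootloader copied by the MBR loop lives in sectors 1 through 18:
# bootloader = extract_sectors("petya.raw", 1, 18)   # "petya.raw" is a placeholder
```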

(https://countuponsecurity.files.wordpress.com/2017/06/eternalpetya-bootloaderextract.png)

The 16-bit bootloader code is harder to analyze than the MBR code, but it is based on the Petya ransomware code from 2016
(https://blog.malwarebytes.com/threat-analysis/2016/04/petya-ransomware/). In this great article (https://blog.malwarebytes.com/threat-analysis/2017/06/eternalpetya-yet-another-stolen-piece-package/), Hasherezade (https://twitter.com/hasherezade) analyzes both the
Petya and EternalPetya bootloaders using IDA PRO. When you use Bochs integrated with the IDA PRO disassembler and
debugger, the analysis is more accessible due to the powerful combination.

That’s it for today – entering the world of real-mode execution on x86 is quite interesting. Analyzing code that relies on BIOS services, such
as software interrupts, to perform different operations like reading from disk, writing to the screen or accessing memory through
segments is revealing. What we learned today might be a starting point for looking at bootkits (http://williamshowalter.com/a-universal-windows-bootkit/) that live beneath the operating system and subvert the MBR boot sequence (https://www.symantec.com/connect/blogs/are-mbr-infections-back-fashion).

Have fun!

References:

Windows Internals, Sixth Edition, Part 2 by Mark E. Russinovich, David A. Solomon, and Alex Ionescu

Various articles written by Daniel B. Sedory: http://starman.vertcomp.com/index.html

http://standa-note.blogspot.ch/2014/11/debugging-early-boot-stages-of-windows.html
Tagged Bochs x86, EternalPetya MBR, Master Boot Record, Petya MBR
JUN 07 2017
DIGITAL FORENSICS AND INCIDENT RESPONSE, INCIDENT HANDLING AND HACKER TECHNIQUES, INTRUSION
ANALYSIS

Threat Hunting in the Enterprise with AppCompatProcessor

Last April, at the SANS Threat Hunting and IR Summit (h ps://www.sans.org/event/threat-hunting-and-incident-response-summit-


2017/summit-overview/), among other things, there was a new tool and technique released by Matias Bevilacqua. Matias’s presentation was
titled “ShimCache and AmCache enterprise-wide hunting, evolving beyond grep” (h ps://www.sans.org/summit-archives/file/summit-
archive-1492713813.pdf) and he released the tool AppCompatProcessor (h ps://github.com/mbevilacqua/appcompatprocessor). Matias
also wrote a blog post “Evolving Analytics for Execution Trace Data (h ps://www.fireeye.com/blog/threat-
research/2017/04/appcompatprocessor.html)” with more details.

In this article, I want to go over a quick exercise on how to use this tool and expand the existing signatures. First, let me say that, in case
you have a security incident and you are doing enterprise incident response, or you are performing threat hunting as part of your security
operations duties, this is a fantastic tool that you should become familiar with and have in your toolkit. Why? Because it allows security
teams to digest, parse and analyze, at scale, two forensic artifacts that are very useful. These artifacts are part of the Windows
Application Experience and Compatibility features and are known as the ShimCache and the AMCache.

To give you more context, the ShimCache can be obtained from the registry, and from it we can obtain information about all executable
binaries that have been executed on the system since it was rebooted. Furthermore, it tracks their size and last modified date. In addition,
the ShimCache tracks executables that have not been executed but were browsed, for example through explorer.exe. This makes it a valuable
source of evidence, for example to track executables that were on the system but weren’t executed – consider an attacker that used a
directory on a system to move his toolkit around. The AMCache is stored in a file, and from it we can retrieve information for every
executable that ran on the system, such as the path, last modification and creation times, SHA1 and PE properties. You can read more about
those two artifacts in the article I wrote last year (https://countuponsecurity.com/2016/05/18/digital-forensics-shimcache-artifacts/).

So, I won’t go over how to acquire this data at scale – feel free to share your technique in the comments – but AppCompatProcessor
digests data that has been acquired by ShimCacheParser.py (https://github.com/mandiant/ShimCacheParser), Redline and MIR, and it also
consumes raw ShimCache and AMCache registry hives. I will go directly to the features. At the time of this writing the tool version is 0.8, and
one of the features I would like to focus on today is the search module. This module allows us to search for known bad using regex expressions.
The search module was coded with performance in mind, which means the regex searches are quite fast. By default, the tool includes more
than 70 regex signatures for all kinds of interesting things an analyst will look for when performing threat hunting. Signatures include
searching for dual-usage tools like psexec, looking for binaries in places where they shouldn’t normally be, commonly named credential
dumpers, etc. The great thing is that you can easily include your own signatures. Just add a regex line with your signature!

For this exercise, I want to use the search module to search for binaries that are commonly used by the PlugX backdoor family and friends.
This backdoor is commonly used by different threat groups in targeted attacks. PlugX, also referred to as KORPLUG, SOGU or DestroyRAT,
is a modular backdoor that is designed to rely on the execution of signed and legitimate executables to load malicious code. PlugX
normally has three main components: a DLL, an encrypted binary file and a legitimate executable that is used to load the malware using
a technique known as DLL search order hijacking (https://countuponsecurity.com/2016/05/24/digital-forensics-dll-search-order/). I won’t discuss
the details of PlugX in this article, but you can read the white paper “PlugX – Payload Extraction
(https://www.contextis.com/documents/11/PlugX_-_Payload_Extraction_March_2013_1.pdf)” written by Kevin O’Reilly from Context, the
presentation about PlugX at Black Hat Asia in 2014 (https://www.blackhat.com/docs/asia-14/materials/Haruyama/Asia-14-Haruyama-I-Know-You-Want-Me-Unplugging-PlugX.pdf) given by Takahiro Haruyama and Hiroshi Suzuki, the analysis done by the Computer
Incident Response Center Luxembourg (https://www.circl.lu/files/tr-12/tr-12-circl-plugx-analysis-v1.pdf) and the Ahnlab threat report
(http://image.ahnlab.com/global/upload/download/documents/1401223631603288.pdf). With these and other reports you could start
compiling information about different PlugX payloads. However, Adam Blaszczyk (https://twitter.com/Hexacorn) from Hexacorn already
did that job and wrote an article (http://www.hexacorn.com/blog/2016/03/10/beyond-good-ol-run-key-part-36/) where he outlines different
PlugX payloads seen in the wild.

Ok, with this information, we can start creating the PlugX regex signatures. Essentially, we will be looking for the signed and legitimate
executables in places where they wouldn't normally be. The syntax to create a new regex signature is simple, and you can add your own
signatures to the existing AppCompatSearch.txt file or just create a new file called AppCompatSearch-PlugX.txt, which will be consumed
automatically by the tool. The figure below shows the different signatures that I produced. At the time of this writing, this is still a work in
progress, but it is a starting point.

(https://countuponsecurity.files.wordpress.com/2017/06/acp-search.png)

Next step: launch AppCompatProcessor against our data set using the newly created signatures. The following picture shows what the
output of the search module looks like. In this particular case the search produced 25 hits, and a nicely presented summary of the hits is
displayed in a histogram. The raw dumps of the hits are saved in a file called Output.txt. As an analyst or investigator, you would look
at the results and verify which ones are worth investigating further and which ones are false positives. For this exercise, there was a hit
that triggered on the file "c:\Temp\MsMpEng.exe". This file is part of the Windows Defender suite but could be used by PlugX as part of
a DLL search order hijacking technique. Basically, the attacker will craft a malicious DLL named MpSvc.dll, place it in the same
directory as the MsMpEng.exe file and execute MsMpEng.exe. The DLL needs to be crafted in a special way, but that is what PlugX
specializes in. This will load the attacker's code.

(https://countuponsecurity.files.wordpress.com/2017/06/acp-hits.png)
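A simple way to hunt for this pattern outside of AppCompatProcessor is to look for a known loader and its companion DLL sitting together in a non-standard directory. The sketch below does just that; the loader/DLL pairs and the "expected" directory prefixes are an illustrative subset I put together from public reporting, not a complete list.

```python
from collections import defaultdict

# Illustrative subset of signed-loader / sideloaded-DLL pairs reported for
# PlugX (real hunting lists are longer; see the Hexacorn article above).
SIDELOAD_PAIRS = {
    "msmpeng.exe": "mpsvc.dll",
    "nvsmart.exe": "nvsmartmax.dll",
}

# Simplified notion of where the signed loaders legitimately live.
EXPECTED_PREFIXES = (r"c:\windows\system32", r"c:\program files")

def find_sideload_candidates(file_paths):
    """Flag directories holding a known loader EXE plus its companion DLL
    outside of the loader's expected location."""
    by_dir = defaultdict(set)
    for p in file_paths:
        directory, _, filename = p.lower().rpartition("\\")
        by_dir[directory].add(filename)
    hits = []
    for directory, files in by_dir.items():
        if directory.startswith(EXPECTED_PREFIXES):
            continue
        for exe, dll in SIDELOAD_PAIRS.items():
            if exe in files and dll in files:
                hits.append(directory + "\\" + exe)
    return hits

# Invented file listing for the demo.
paths = [
    r"C:\Temp\MsMpEng.exe",
    r"C:\Temp\MpSvc.dll",
    r"C:\Program Files\Windows Defender\MsMpEng.exe",
]
print(find_sideload_candidates(paths))
```

Note that this only flags colocation; you would still confirm the DLL is actually malicious, since a copied-but-legitimate install would trigger the same check.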

Following these findings, we would want to look at the system that triggered the signature and view all the entries. The picture below shows
this step, where we use the dump module. The output shows all the ShimCache entries for this particular system. The entries are normally
sorted in order of execution from bottom to top and, in this case, adjacent to the "c:\Temp\MsMpEng.exe" file there are several Windows
built-in commands that were executed and a file named "c:\Temp\m64.exe". This is what Matias calls a strong temporal execution
correlation. It is indicative that an attacker obtained access to the system, executed several Windows built-in commands and then executed
a file called "m64.exe", which is likely Mimikatz or a cousin.

(https://countuponsecurity.files.wordpress.com/2017/06/acp-dump1.png)

Following those leads, you might want to obtain those binaries from the system and perform malware analysis in order to extract indicators
of compromise such as the C&C address, look at other artifacts such as Windows Event Logs
(https://countuponsecurity.com/2015/11/25/digital-forensics-supertimeline-event-logs-part-ii/), the UsnJournal
(https://countuponsecurity.com/2017/05/25/digital-forensics-ntfs-change-journal/), memory, etc., and develop additional leads. In addition,
you might want to use AppCompatProcessor to search for the "m64.exe" file and also use the tstack module to search across the whole
data set for binaries that match the dates of those two binaries. With these findings, among other things, you would need to scope the
incident by understanding which systems the attacker accessed, find new investigation leads and pivot on the findings.
AppCompatProcessor is a tool that helps with that. This kind of finding would definitely trigger your incident response processes and
procedures.

That’s it, hopefully, AppCompatProcessor will reduce the entry barrier for your security operations center or incident response teams to
start performing threat hunting in your environment and produce actionable results. If you find this useful, contribute with your threat
hunting signatures in AppCompatProcessor GitHub repo and Happy Hunting!

Tagged AmCache, AppCompatProcessor, Enterprise Incident Response, PlugX, ShimCache, SOGU, Threat Hunting
MAY 25 2017
LEAVE A COMMENT
DIGITAL FORENSICS AND INCIDENT RESPONSE

Digital Forensics – NTFS Change Journal

Last year, I wrote a series of articles (https://countuponsecurity.com/2016/05/30/digital-forensics-ntfs-indx-and-journaling/) about digital
forensics, covering different artifacts that can be useful when conducting incident response. In the last article of that series, I covered the
NTFS INDX attribute, which is used to store metadata about files inside directories, and the $LogFile metadata file, which keeps a record of all
operations that occurred in the NTFS volume. Both of these artifacts can give the investigator a great amount of information about attacker
activity. However, there is another special file inside NTFS that also contains a wealth of historical information about operations that
occurred on the NTFS volume: the Update Sequence Number (USN) journal file (https://msdn.microsoft.com/en-
us/library/windows/desktop/aa363798%28v=vs.85%29.aspx), named $UsnJrnl.

As the different file operations occur on disk in an NTFS volume, the change journal keeps a record of the reason
(https://msdn.microsoft.com/en-us/library/aa365722.aspx) behind each operation, such as file creation, deletion, encryption, directory
creation and so on. There is one USN change journal per volume; it is turned on by default since Windows Vista and is used by applications
such as the Indexing Service, the File Replication Service (FRS), Remote Installation Services (RIS) and Remote Storage. Nonetheless,
applications and administrators can create, delete, and re-create change journals. The change journal is stored in the hidden system file
$Extend\$UsnJrnl. The $UsnJrnl (http://forensicinsight.org/wp-content/uploads/2013/07/F-INSIGHT-Advanced-UsnJrnl-Forensics-
English.pdf) file contains two alternate data streams (ADS): $Max and $J. The $Max data stream contains information about the
change journal, such as its maximum size. The $J data stream contains the contents of the change journal and includes information such as
the date and time of the change, the reason for the change, the MFT entry, the MFT parent entry and others. This information can be useful
for an investigation, for example in a scenario where the attacker deletes files and directories while he moves inside an organization in
order to hide his tracks. To obtain the change journal file you need raw access to the file system.
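To make the record layout concrete, here is a minimal Python sketch that parses the fixed 60-byte portion of a USN_RECORD_V2 structure. The sample record is hand-built for the demo, and only a small subset of the reason flags is decoded; version detection, record padding and the full flag table are exactly what the dedicated tools mentioned below handle for you.

```python
import struct
from datetime import datetime, timedelta

# Fixed part of USN_RECORD_V2 (60 bytes): RecordLength, Major/MinorVersion,
# FileReferenceNumber, ParentFileReferenceNumber, Usn, TimeStamp, Reason,
# SourceInfo, SecurityId, FileAttributes, FileNameLength, FileNameOffset.
HEADER = struct.Struct("<LHHQQqqLLLLHH")

REASONS = {  # small subset of the reason flags, for the demo
    0x00000100: "FILE_CREATE",
    0x00000200: "FILE_DELETE",
    0x80000000: "CLOSE",
}

def parse_usn_v2(buf, offset=0):
    (_, _, _, frn, _, usn, filetime, reason,
     _, _, _, name_len, name_off) = HEADER.unpack_from(buf, offset)
    name = buf[offset + name_off:offset + name_off + name_len].decode("utf-16-le")
    # FILETIME is 100-nanosecond ticks since 1601-01-01
    when = datetime(1601, 1, 1) + timedelta(microseconds=filetime // 10)
    return {
        "name": name,
        "usn": usn,
        "time": when,
        "reasons": [label for bit, label in REASONS.items() if reason & bit],
        "mft_entry": frn & 0xFFFFFFFFFFFF,  # low 48 bits of the file reference
    }

# Hand-crafted sample record: "m64.exe" deleted and closed (real records are
# padded to 8-byte boundaries; that detail is skipped here).
name = "m64.exe".encode("utf-16-le")
sample = HEADER.pack(60 + len(name), 2, 0, (1 << 48) | 84621, 5, 4096,
                     131387328000000000, 0x80000200, 0, 0, 0x20,
                     len(name), 60) + name
print(parse_usn_v2(sample))
```

Notice how the low 48 bits of the file reference give you the MFT entry number, which is what lets you pivot from a journal record back to the file's NTFS metadata.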

So, on a live system, you can check the size and status of the change journal by running the command "fsutil usn queryjournal C:" in a
Windows command prompt with administrator privileges. The "fsutil" (https://technet.microsoft.com/en-us/library/cc788042(WS.10).aspx)
command can also be used to change the size of the journal (http://faq.attix5.com/index.php?View=entry&EntryID=24). From a live system,
you can also obtain the change journal file using a tool like RawCopy (https://github.com/jschicht/RawCopy) or ExtractUsnJrnl
(https://github.com/jschicht/ExtractUsnJrnl) from Joakim Schicht. On this particular system the maximum size of the change journal is
0x2000000 bytes.

(https://countuponsecurity.files.wordpress.com/2017/05/usnjrnl-fsutil.png)

Now, let's perform a quick exercise in obtaining the change journal file from a disk image. First, we use the "mmls" utility to see the
partition table from the disk image. Then, we use "fls" from The Sleuth Kit to obtain a file and directory listing and grep for the UsnJrnl
string. As you can see in the picture below, the output of "fls" shows that the file system contains the $UsnJrnl:$Max and $UsnJrnl:$J files.
We are interested in the MFT entry number, which is 84621.

(https://countuponsecurity.files.wordpress.com/2017/05/usnjrnl-mmls.png)

Next, let’s review MFT record properties for the entry number 84621 with the command “istat” from The Sleuth Kit. This MFT entry stores
the NTFS metadata about the $UsnJrnl. We are interested in the a ributes section, more specifically, we are looking for the identifier 128
which points to the $DATA a ribute. The identifier 128-37 points to the $Max data stream which is of size 32 bytes and is resident
(h ps://countuponsecurity.com/2015/11/10/digital-forensics-ntfs-metadata-timeline-creation/). The identifier 128-38 points to the $J data
stream, which is 40 GB in size and sparse. Then we use the "icat" command to view the contents of the $Max data stream, which gives us
the maximum size of the change journal, and we use "icat" again to export the $J data stream into a file. It is noteworthy that the
change journal is sparse, which means parts of the data are just zeros. However, icat from The Sleuth Kit will extract the full size of the data
stream. A more efficient and faster tool would be ExtractUsnJrnl, because it only extracts the actual data. The picture below illustrates the
steps necessary to extract the change journal file.

(https://countuponsecurity.files.wordpress.com/2017/05/usnjrnl-istat.png)

Now that we have exported the change journal into a file, we will use the UsnJrnl2Csv (https://github.com/jschicht/UsnJrnl2Csv) utility,
once again another brilliant tool from Joakim Schicht. The tool supports USN_RECORD_V2 and USN_RECORD_V3, and makes it very easy to
parse and extract information from the change journal. The output will be a CSV file. The picture below shows the tool in action. You just
need to browse to the change journal file you obtained and start parsing it.

(https://countuponsecurity.files.wordpress.com/2017/05/usnjrnl2csv.png)

This process might take some time; when it finishes, you will have a CSV file containing the journal records. This file can easily be imported
into Excel and then filtered based on the reason and timestamp fields. Normally, when you do such analysis you already have some sort of
lead and a starting point that will help uncover more leads and findings. After analyzing the change journal records we can start
building a timeline of events about attacker activity. The picture below shows a timeline of events from the change journal about malicious files
that were created and deleted. These findings can then be used as indicators of compromise in order to find more compromised systems in
the environment. In addition, for each file you have the MFT entry number, which could be used to attempt to recover deleted files. You might
have a chance of recovering data from deleted files if the gap between the time the file was deleted and the time the image was obtained
is short.

(https://countuponsecurity.files.wordpress.com/2017/05/usnjrnl-timeline.png)
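If you prefer to stay on the command line instead of Excel, the same reason-based filtering takes only a few lines of Python. The column names below are assumptions for the demo, so adjust them to whatever header your UsnJrnl2Csv version actually writes.

```python
import csv
import io

# Stand-in for the parsed journal CSV (invented rows; real files are much
# larger and the exact header names may differ between tool versions).
sample = io.StringIO(
    "TimeStamp,FileName,Reason\n"
    "2017-05-20 10:01:02,report.docx,FILE_CREATE+CLOSE\n"
    "2017-05-20 10:05:10,m64.exe,FILE_DELETE+CLOSE\n"
    "2017-05-20 10:05:11,MsMpEng.exe,FILE_DELETE+CLOSE\n"
)

# Keep only records whose reason field includes a deletion.
deletions = [row for row in csv.DictReader(sample)
             if "FILE_DELETE" in row["Reason"]]
for row in deletions:
    print(row["TimeStamp"], row["FileName"])
```

For a real case you would open the CSV file with `open(path, newline="")` instead of the in-memory sample, and combine the reason filter with a timestamp window around your lead.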

The change journal contains a wealth of information that shouldn't be overlooked. Another interesting aspect of the change journal is that
it allocates and deallocates space as it grows, and records are not overwritten, unlike the $LogFile
(https://countuponsecurity.com/2016/05/30/digital-forensics-ntfs-indx-and-journaling/). This means we can find old journal records in
unallocated space on an NTFS volume. How do we obtain those? Luckily, the USN Record Carver (https://github.com/PoorBillionaire/USN-
Record-Carver) tool written by PoorBillionaire can carve journal records from binary data and thus recover these records.

That's it! In this article we reviewed some introductory concepts about the NTFS change journal and how to obtain it, parse it and create a
timeline of events. The techniques and tools are not new. However, they are relevant and used in today's digital forensic analysis. Have fun!

References:

Windows Internals, Sixth Edition, Part 2 By: Mark E. Russinovich, David A. Solomon, and Alex Ionescu
File System Forensic Analysis By: Brian Carrier

Tagged NTFS Change Journal, NTFS Metadata, USN Records, UsnJrnl


APR 12 2017
10 COMMENTS
DIGITAL FORENSICS AND INCIDENT RESPONSE

Intro to Linux Forensics

This article is a quick exercise and a small introduction to the world of Linux forensics. Below, I perform a series of steps in order to analyze
a disk obtained from a compromised system running a Red Hat operating system. I start by recognizing the file system,
mounting the different partitions, creating a super timeline (https://countuponsecurity.com/2015/11/23/digital-forensics-supertimeline-
event-logs-part-i/) and a file system timeline. I also take a quick look at the artifacts and then unmount the different partitions. The process
of obtaining the disk is skipped, but here (https://digital-forensics.sans.org/blog/2010/09/28/digital-forensics-copy-vmdk-vmware-
virtual-environment/) are some old but good notes on how to obtain a disk image from a VMware ESX host.

When obtaining the different disk files from the ESX host, you will need the VMDK files. Then you move them to your lab, which could be
as simple as your laptop running a VM with the SIFT workstation. To analyze the VMDK files you could use the "libvmdk-utils" package,
which contains tools to access data stored in VMDK files. However, another approach is to convert the VMDK file format into RAW format.
This way, it will be easier to run the different tools, such as the tools from The Sleuth Kit (https://www.sleuthkit.org/), which will be
heavily used, against the image. To perform the conversion, you could use the QEMU disk image utility. The picture below shows this
step.

(https://countuponsecurity.files.wordpress.com/2017/04/dfir-linux1.png)

Following that, you can list the partition table from the disk image and obtain information about where each partition starts (sectors)
using the "mmls" utility. Then, use the starting sector to query the details associated with the file system using the "fsstat" utility. As you
can see in the image, the "mmls" and "fsstat" utilities are able to identify the first partition, "/boot", which is of type 0x83 (ext4). However,
"fsstat" does not recognize the second partition, which starts at sector 1050624.


(https://countuponsecurity.files.wordpress.com/2017/04/dfir-linux2.png)

This is due to the fact that this partition is of type 0x8e (Logical Volume Manager)
(https://www.centos.org/docs/5/html/Deployment_Guide-en-US/ch-lvm.html). Nowadays, many Linux distributions use the LVM (Logical
Volume Manager) scheme by default. LVM uses an abstraction layer that allows a hard drive, or a set of hard drives, to be allocated to a
physical volume. The physical volumes are combined into volume groups, which in turn can be divided into logical volumes,
which have mount points and a file system type such as ext4.

With the "dd" utility you can easily see that you are in the presence of LVM2 volumes. To make them usable for our different forensic
tools, we need to create device maps from the LVM partition table. To perform this operation, we start with "kpartx", which
automates the creation of the partition devices by creating loopback devices and mapping them. Then, we use the different utilities
(http://lpic2.unix.nl/ch05s03.html) that manage LVM volumes, such as "pvs", "vgscan" and "vgchange". The figure below illustrates the
necessary steps to perform this operation.

(https://countuponsecurity.files.wordpress.com/2017/04/dfir-linux3.png)

After activating the LVM volume group, we have six devices that map to six mount points that make up the file system structure for this disk.
The next step is to mount the different volumes read-only, as we would mount any device for forensic analysis. For this, it is important to
create a folder structure that matches the partition scheme.

(https://countuponsecurity.files.wordpress.com/2017/04/dfir-linux4.png)

After mounting the disk, you normally start your forensic analysis and investigation by creating a timeline
(https://countuponsecurity.com/2015/11/23/digital-forensics-supertimeline-event-logs-part-i/). This is a crucial and very useful step because
it includes information about files that were modified, accessed, changed and created, in a human-readable format, known as MAC time
evidence (Modified, Accessed, Changed). This activity helps pinpoint the particular time an event took place and in which order events
happened.

Before we create our timeline, it is noteworthy that on Linux file systems such as ext2 and ext3 there is no timestamp for the creation/birth
time of a file; there are only three timestamps. The creation timestamp was introduced in ext4. The book "Forensic Discovery" by
Dan Farmer and Wietse Venema outlines the different timestamps:

Last Modification time. For directories, the last time an entry was added, renamed or removed. For other file types, the last time the file
was written to.
Last Access (read) time. For directories, the last time it was searched. For other file types, the last time the file was read.
Last Status Change. Examples of status change are: change of owner, change of access permission, change of hard link count, or an
explicit change of any of the MAC times.
Deletion time. Ext2fs and Ext3fs record the time a file was deleted in the dtime stamp at the file system layer, but not all tools support it.
Creation time. Ext4fs records the time the file was created in the crtime stamp, but not all tools support it.

The different timestamps are stored in the metadata contained in the inodes. Inodes are the equivalent of the MFT entry number in the
Windows world. One way to read file metadata on a Linux system is to first get the inode number using, for example, the command "ls -i file",
and then run "istat" against the partition device, specifying the inode number. This will show you the different metadata attributes,
which include the timestamps, the file size, the owner's group and user IDs, the permissions and the blocks that contain the actual data.
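You can see three of these timestamps for yourself with nothing but the standard library; here is a quick sketch against a throwaway file (any scratch file would do):

```python
import os
import tempfile
from datetime import datetime, timezone

# Create a scratch file and read its inode metadata back.
with tempfile.NamedTemporaryFile(delete=False) as f:
    f.write(b"demo")
    path = f.name

st = os.stat(path)
print("inode:", st.st_ino)  # the rough equivalent of an MFT entry number
for label, ts in (("mtime", st.st_mtime),   # last content modification
                  ("atime", st.st_atime),   # last access
                  ("ctime", st.st_ctime)):  # last inode/status change
    print(label, datetime.fromtimestamp(ts, timezone.utc).isoformat())

# Note: os.stat() does not expose the ext4 crtime; debugfs or the statx()
# system call is needed for that.
os.unlink(path)
```

Keep in mind that `st_ctime` on Linux is the status-change time, not a creation time, which trips up many analysts coming from the Windows world.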

Ok, so, let’s start by creating a super timeline. We will use Plaso to create it. For contextualization Plaso is a Python-based rewrite of the
Perl-based log2timeline (h p://www.forensicswiki.org/wiki/Log2timeline) initially created by Kristinn Gudjonsson
(h p://www.forensicswiki.org/wiki/Kristinn_Gudjonsson) and enhanced by others. The creation of a super timeline is an easy process and it
applies to different operating systems. However, the interpretation is hard. The last version of Plaso engine is able to parse the EXT version 4
and also parse different type of artifacts such as syslog messages, audit, utmp and others. To create the Super timeline we will launch
log2timeline against the mounted disk folder and use the Linux parsers. This process will take some time and when its finished you will
have a timeline with the different artifacts in plaso database format. Then you can convert them to CSV format using “psort.py” utility. The
figure below outlines the steps necessary to perform this operation.

(https://countuponsecurity.files.wordpress.com/2017/04/dfir-linux5.png)

Before you start looking at the super timeline, which combines different artifacts, you can also create a traditional timeline for the ext file
system layer, containing data about allocated and deleted files and unallocated inodes. This is done in two steps. First you generate a body
file using the "fls" tool from TSK. Then you use "mactime" to sort its contents and present the results in a human-readable format. You can
perform this operation against each one of the device maps that were created with "kpartx". For the sake of brevity, the image below only
shows this step for the "/" partition. You will need to do it for each one of the other mapped devices.

(https://countuponsecurity.files.wordpress.com/2017/04/dfir-linux6.png)
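The body file that "fls" emits is plain pipe-separated text (TSK 3.x body format: MD5|name|inode|mode|UID|GID|size|atime|mtime|ctime|crtime, with epoch timestamps), so it is easy to post-process yourself. Below is a small sketch, with invented entries, that orders records by modification time much like "mactime" does:

```python
from datetime import datetime, timezone

# Two invented body-file lines in TSK 3.x format.
body_lines = [
    "0|/usr/bin/ls|130567|r/rrwxr-xr-x|0|0|117048|1491669310|1491669290|1491669310|1483228800",
    "0|/tmp/k/ping (deleted)|131101|r/rrwxr-xr-x|0|0|44896|1491669250|1491669240|1491669260|1491669240",
]

def parse_body(line):
    """Split one body-file line into the fields we care about."""
    md5, name, inode, mode, uid, gid, size, atime, mtime, ctime, crtime = line.split("|")
    return {"name": name, "inode": int(inode),
            "mtime": datetime.fromtimestamp(int(mtime), timezone.utc)}

# Sort by modification time, oldest first, as mactime would.
timeline = sorted((parse_body(l) for l in body_lines), key=lambda e: e["mtime"])
for entry in timeline:
    print(entry["mtime"].isoformat(), entry["name"])
```

This kind of ad hoc parsing is handy when you want to diff timelines across the mapped devices or feed the records into your own tooling instead of a spreadsheet.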

Before we start the analysis, it is important to mention that on Linux systems there is a wide range of files and logs
(https://community.rackspace.com/products/f/25/t/531) that would be relevant for an investigation. The amount of data available to collect
and investigate might vary depending on the configured settings and also on the function/role performed by the system. Also, the different
flavors of Linux operating systems follow a file system structure that arranges the different files and directories in a common standard. This
is known as the Filesystem Hierarchy Standard (FHS) and is maintained here (http://www.pathname.com/fhs/). It is beneficial to be familiar
with this structure in order to spot anomalies. There is too much to cover in terms of what to look for, but one thing you might

want to run is the "chkrootkit" (http://www.chkrootkit.org/) tool against the mounted disk. Chkrootkit is a collection of scripts created by
Nelson Murilo and Klaus Steding-Jessen that allow you to check the disk for the presence of any known kernel-mode and user-mode rootkits.
The latest version is 0.52 and contains an extensive list of known bad files.

Now, with the super timeline and the file system timeline produced, we can start the analysis. During the analysis it helps to be meticulous
and patient, and it is easier if you have comprehensive knowledge of file system and operating system artifacts. It also facilitates the
analysis of a (super)timeline to have some kind of lead about when the event happened. In this case, we got a hint that something might
have happened in the beginning of April. With this information, we start to reduce the time frame of the (super)timeline and narrow it
down. Essentially, we will be looking for artifacts of interest that have temporal proximity with that date. The goal is to be able to recreate
what happened based on the different artifacts.

After some back and forth with the timelines, we found suspicious activity. The figure below illustrates the timeline output that was
produced using "fls" and "mactime". Someone deleted a folder named "/tmp/k" and renamed common binaries such as "ping" and "ls",
and files with the same names were placed in the "/usr/bin" folder.

(https://countuponsecurity.files.wordpress.com/2017/04/dfir-linux7.png)

This needs to be looked at further. Looking at the timeline, we can see that the output of "fls" shows that the entry has been deleted. Because
the inode wasn't reallocated, we can try to see if a backup of the file still resides in the journal. The journaling concept was introduced in the
ext3 file system. On ext4, journaling is enabled by default and uses the mode "data=ordered". You can see the different modes here
(http://www.kernel.org/doc/Documentation/filesystems/ext4.txt). In this case, we could also check the options used to mount the file system.
To do this, just look at "/etc/fstab". In this case we could see that the defaults were used. This means we might have a chance of recovering
data from deleted files if the gap between the time the directory was deleted and the time the image was obtained is short. File system
journaling is a complex topic, but it is well explained in books like "File System Forensic Analysis" by Brian Carrier. This SANS GCFA
paper (https://www.sans.org/reading-room/whitepapers/forensics/advantage-ext3-journaling-file-system-forensic-investigation-2011)
from Gregorio Narváez also covers it well. One way you could attempt to recover deleted data is using the tool "extundelete". The
image below shows this step.

(https://countuponsecurity.files.wordpress.com/2017/04/dfir-linux111.png)

The recovered files would be very useful to understand more about what happened and to further help the investigation. We can compute the
files' MD5 hashes, verify their contents and check whether they are known to the NSRL database or VirusTotal. If it's a binary, we can run
strings against it and deduce functionality with tools like "objdump" and "readelf". Moving on, we also obtain and look at the different files
that were created in "/usr/sbin", as seen in the timeline. Checking their MD5 hashes, we found that they are legitimate operating system files
distributed with Red Hat. However, the files in the "/bin" folder such as "ping" and "ls" are not, and they match the MD5 hashes of the files
recovered from "/tmp/k". Because some of the files are ELF binaries, we copy them to an isolated system in order to perform a quick
analysis. The topic of Linux ELF binary analysis will be for another time, but we can easily launch the binary using "ltrace -i" and
"strace -i", which will intercept and record the different function/system calls. Looking at the output, we can easily spot that something is
wrong. This binary doesn't look like the normal "ping" command: it calls the fopen() function to read the file "/usr/include/a.h" and writes
to a file in the /tmp folder whose name is generated with tmpnam(). Finally, it generates a segmentation fault. The figure below shows this
behavior.


(https://countuponsecurity.files.wordpress.com/2017/04/dfir-linux9.png)

Provided with this information, we go back and see that this file "/usr/include/a.h" was modified moments before the "ping" file was
moved/deleted. So, we can check when this "a.h" file was created, using the crtime timestamp of the ext4 file system, with the "stat"
command. By default, "stat" doesn't show the crtime timestamp, but you can use it in conjunction with "debugfs" to get it. We also checked
that the contents of this strange file are gibberish.

(https://countuponsecurity.files.wordpress.com/2017/04/dfir-linux10.png)

So, now we know that someone created this "a.h" file on April 8, 2017 at 16:34 and we were able to recover several other files that were
deleted. In addition, we found that some system binaries seem to be misplaced and that at least the "ping" command expects to read
something from this "a.h" file. With this information, we go back and look at the super timeline in order to find other events that might
have happened around this time. As I mentioned, the super timeline is able to parse different artifacts from the Linux operating system. In this
case, after some cleanup, we can see that we have artifacts from audit.log and WTMP at the time of interest. The Linux audit.log tracks
security-relevant information on Red Hat systems. Based on pre-configured rules, the audit daemon generates log entries
(https://access.redhat.com/documentation/en-US/Red_Hat_Enterprise_Linux/6/html/Security_Guide/sec-Audit_Record_Types.html) to
record as much information as possible about the events that are happening on your system. WTMP records information about logins and
logouts to the system.

The logs show that someone logged into the system from the IP 213.30.114.42 (fake IP) using root credentials moments before the file
"a.h" was created and the "ping" and "ls" binaries were misplaced.

(https://countuponsecurity.files.wordpress.com/2017/04/dfir-linux12.png)
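Extracting that kind of indicator from audit records is easy to script. The sketch below pulls the timestamp and source address out of a login record; the log line itself is invented for the demo, and real records vary with the distribution and the configured audit rules:

```python
import re
from datetime import datetime, timezone

# Invented USER_LOGIN record, shaped like a Red Hat audit.log entry.
line = ('type=USER_LOGIN msg=audit(1491669240.375:4242): pid=2211 uid=0 '
        'auid=0 ses=3 msg=\'op=login id=0 exe="/usr/sbin/sshd" '
        'hostname=? addr=213.30.114.42 terminal=/dev/pts/0 res=success\'')

# audit(<epoch>.<ms>:<serial>) carries the timestamp; addr= the source IP.
m = re.search(r"audit\((?P<epoch>\d+)\.\d+:\d+\).*?addr=(?P<addr>[\d.]+)", line)
when = datetime.fromtimestamp(int(m.group("epoch")), timezone.utc)
print(when.isoformat(), m.group("addr"))
```

Looping this over the whole audit.log gives you a quick list of (time, source IP) pairs to correlate against the rest of the timeline; on a live Red Hat system the `ausearch` utility does this kind of querying natively.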

And now we have a network indicator. The next step would be to start looking at our proxy and firewall logs for traces of that IP
address. In parallel, we could continue our timeline analysis to find additional artifacts of interest, and also perform in-depth binary analysis
of the files found and create IOCs and, for example, Yara signatures, which will help find more compromised systems in the environment.

That’s it for today. After you finish the analysis and forensic work you can umount the partitions, deactivate the volume group and delete
the device mappers. The below picture shows this steps.

(https://countuponsecurity.files.wordpress.com/2017/04/dfir-linux8.png)

Linux forensics is a different and fascinating world compared with Microsoft Windows forensics. The interesting part of the investigation is
getting familiar with Linux system artifacts. Install a pristine Linux system, obtain the disk and look at the different artifacts. Then
compromise the machine using some tool/exploit, obtain the disk and analyze it again. This allows you to get practice. Practice this kind of
skill, share your experiences, get feedback, repeat the practice and improve until you are satisfied with your performance. If you want to look
further into this topic, you can read "The Law Enforcement and Forensic Examiner's Introduction to Linux (http://linuxleo.com/)" written by
Barry J. Grundy. This is no longer being updated but is a good overview. In addition, Hal Pomeranz (https://twitter.com/hal_pomeranz) has
several presentations here (http://www.deer-run.com/~hal/) and a series of articles written on the SANS blog (https://digital-
forensics.sans.org/blog/), especially the five articles written about EXT4: part 1 (https://digital-forensics.sans.org/blog/2010/12/20/digital-
forensics-understanding-ext4-part-1-extents), part 2 (http://computer-forensics.sans.org/blog/2011/03/14/digital-forensics-understanding-
ext4-part-2-timestamps), part 3 (http://computer-forensics.sans.org/blog/2011/03/28/digital-forensics-understanding-ext4-part-3-extent-
trees), part 4 (http://computer-forensics.sans.org/blog/2011/04/08/understanding-ext4-part-4-demolition-derby) and part 5
(http://computer-forensics.sans.org/blog/2011/08/22/understanding-ext4-part5-large-extents).

References:
Carrier, Brian (2005) File System Forensic Analysis
Nikkel, Bruce (2016) Practical Forensic Imaging

Tagged Linux LVM forensics, Linux timeline, log2timeline, SuperTimeline, the sleuth kit