
TruSTAR App and TA for Splunk

By

Kartik Dass (ID No. 15CEUOG023)


&
Parth Panchal (ID No. 15CEUBG002)

A project submitted
In
partial fulfillment of the requirements
for the degree of
BACHELOR OF TECHNOLOGY
in
Computer Engineering

Internal Guide:
Dr. C. K. Bhensdadia
Head of Department
Dept. of Computer Engg.

External Guide:
Mr. Jay Shah
Software Developer
Crest Data Systems Pvt. Ltd.

Faculty of Technology
Department of Computer Engineering
Dharmsinh Desai University
April 2019
CERTIFICATE

This is to certify that the project work titled


TruSTAR App and TA for Splunk
is the bonafide work of

Kartik Dass (ID No. 15CEUOG023)


&
Parth Panchal (ID No. 15CEUBG002)

carried out in partial fulfillment of the degree of Bachelor of Technology in Computer


Engineering at Dharmsinh Desai University in the academic session
December 2018 to April 2019.

Dr. C. K. Bhensdadia
Head of Department
Dept. of Computer Engg.

Faculty of Technology
Department of Computer Engineering
Dharmsinh Desai University
April 2019
COMPANY CERTIFICATE
ACKNOWLEDGEMENT

We heartily thank Dr. C. K. Bhensdadia, Head of the Department of Computer Engineering, for providing us the opportunity and exposure to pursue our final semester internship at Crest Data Systems and gain practical working experience from the industry.

Theoretical knowledge is of little value if one does not know how to apply it. We are thankful to our institute for providing us an opportunity to apply our theoretical knowledge through this project, and we feel obliged to submit this project as part of our curriculum.

We would like to take this opportunity to express our humble gratitude to our guide, Mr. Jay Shah, Software Engineer at Crest Data Systems, under whom we undertook this project. His constant guidance and willingness to share his vast knowledge enhanced our understanding, and he helped us through the project with words of encouragement and full confidence in our abilities. Without his effort and full support, this project may not have succeeded.

Although there may be many who are unacknowledged in this humble vote of thanks, there are
none who remain unappreciated.

With sincere regards,


Kartik Dass (15CEUOG023)
Parth Panchal (15CEUBG002)
TABLE OF CONTENTS

ABSTRACT I

TABLE II

LIST OF FIGURES II
LIST OF TABLES III

1.0 INTRODUCTION 11

1.1 PROJECT DETAILS: 12
1.2 PROJECT DEFINITION: 12
1.3 PURPOSE: 13
1.4 SCOPE: 13
1.5 TECHNOLOGY REVIEW: 13
2.0 PROJECT MANAGEMENT 15
2.1 FEASIBILITY STUDY 16
2.1.1 Technical Feasibility 16
2.1.2 Time Schedule Feasibility 16
2.1.3 Operational Feasibility 17
2.1.4 Implementation Feasibility 17
2.2 PROJECT PLANNING 17
2.2.1 Project Development Approach: 17
2.2.2 Project Plan: 18
2.2.3 Milestones And Deliverables: 19
2.2.4 Roles And Responsibilities: 20
2.2.5 Group Dependencies 20
2.2.6 Project Scheduling Chart: 21
2.3 SYSTEM REQUIREMENT STUDY 22
2.3.1 STUDY OF CURRENT SYSTEM 22
2.3.2 PROBLEMS AND WEAKNESS OF CURRENT SYSTEM 22
2.3.3 USER CHARACTERISTICS 22
2.3.4 HARDWARE AND SOFTWARE REQUIREMENTS 22
2.3.5 CONSTRAINTS 22
2.3.5.1 Regular Policies 23
2.3.5.2 Hardware Limitations 23
2.3.5.3 Interface to Other Application 23
2.3.5.4 Parallel Operation 23
2.3.5.5 Criticality Of The Application 23
2.3.6 ASSUMPTIONS AND DEPENDENCIES 24
3.0 SYSTEM ANALYSIS 25
3.1 FUNCTIONAL REQUIREMENT 26
3.2 NON-FUNCTIONAL REQUIREMENTS 28
3.3 USE CASE DIAGRAM 29
3.4 DATA FLOW DIAGRAM 30
3.5 SEQUENCE DIAGRAM 31
3.6 ACTIVITY DIAGRAM 32
4.0 SYSTEM DESIGN 33
4.1 AUTOMATION STEPS 34
4.2 SYSTEM ARCHITECTURE DESIGN: 35
4.3 SYSTEM TOPOLOGY 36
5.0 IMPLEMENTATION PLANNING 37
5.1 IMPLEMENTATION ENVIRONMENT 38
5.2 KEY IMPLEMENTATION OBJECTIVES 38
5.3 MODULE SPECIFICATION 38
5.4 IMPLEMENTATION STEPS FOR AUTOMATION 39
5.5 CODING STANDARDS 39
6.0 TESTING 40
6.1 TESTING PLAN: 25
6.2 TESTING STRATEGY 25
6.3 TESTING METHODS 26
6.4 TEST CASES: 27
6.5 USER MANUAL 32
6.5.1 ABOUT USER MANUAL DOCUMENT 32
6.5.2 HOW TO USE APPLICATION 32
6.5.3 SCREENSHOTS 33
6.5.3.1 CONFIGURE DATA INPUT 35
6.5.3.2 DASHBOARDS 40
7.0 CONCLUSION AND FUTURE EXTENSION 58
7.1 CONCLUSION 59
7.2 LIMITATIONS 59
7.3 FUTURE EXTENSION 59

APPENDICES 60
BIBLIOGRAPHY 61
REFERENCES 62
ABSTRACT

Today we live in a world that runs almost entirely on computers and smart devices, and one major source of this tech evolution is the Internet. Right now the Internet is a necessity for everyone, much like food, clothing, and shelter: in every field we use it for social media, shopping, bookings, entertainment, and employment. As a result, the Internet is also prone to various attacks, and every day thousands of new attacks emerge from different sources. To counter them we have threat intelligence systems.

TruSTAR is one such threat intelligence platform that helps us detect threats. It has reports and indicators as its primary resources. By implementing the App and Add-on for TruSTAR, we can learn about possible attacks that may arise in the future. We collect the reports and indicators using the Add-on, and we have created custom visualizations using the App, which help in visualizing the reports, the indicators, and the indicators matched against data in Splunk.

TABLE

LIST OF FIGURES
Fig 3.1 Use Case Diagram 29
Fig 3.2 DataFlow Diagram 30
Fig 3.3 Sequence Diagram 31
Fig 3.4 Activity Diagram 32
Fig 4.1 Automation Step 34
Fig 4.2 System Architecture 35
Fig 4.3 TA and App in a distributed environment 36
Fig 4.4 TA and App in a standalone environment 36
Fig 6.1 Splunk Instance Screen 48
Fig 6.2 TruStar Add-On Inputs Page 48
Fig 6.3 Add Account 49
Fig 6.4 Add Trustar Report 50
Fig 6.5 Add Trustar Indicator 52
Fig 6.6 Add Trustar Enclave 54
Fig 6.7 Final View of Addon 55
Fig 6.8 View of App 56
Fig 6.9 View Of Reported Data 57
LIST OF TABLES

Table 2.1 Milestones and Deliverables 19


Table 2.2 Roles and Responsibilities 20
Table 2.3 Project Scheduling Chart 21
Table 6.1 Test Cases for Splunk Handler 43
Table 6.2 Test cases for App Installation 44
Table 6.3 Test cases for account creation. 44
Table 6.4 Test Cases For Trustar Inputs 45
Table 6.5 Test Cases for Dashboard Visualization 46
Table 6.6 Significance of each field 49
Table 6.7 Configure Data input For Reports 50
Table 6.8 Significance of Each Field For Reports 51
Table 6.9 Configure Data input For Indicators 52
Table 6.10 Significance of Each Field For Indicators 53
Table 6.11 Configure Data input For Enclave 53
Table 6.12 Significance of Each Field For Enclave 54
Chapter 1
Introduction
1.1 PROJECT DETAILS :

Overview Of Splunk:

Splunk is a software platform for searching, analyzing, and visualizing the machine-generated data gathered from websites, applications, sensors, devices, etc. that make up your IT infrastructure and business.

It does not require complicated databases, connectors, custom parsers, or controls, as it can work efficiently with the help of a web browser. Splunk Enterprise can also be used as a cloud application that is highly scalable and reliable.

What are Splunk Apps and Add-Ons?


In Splunk, you can customize visualizations and add several configurations, and export those visualizations and configurations as a Splunk App. The main difference between an App and an Add-on is that an App usually contains visualizations, while an Add-on mostly contains configuration describing how to gather data and extract different fields.

Apps deliver a user experience designed to make Splunk immediately useful and relevant for
typical tasks and roles. Apps simplify and optimize user tasks, yet allow access to the data and
functions of the full platform.

Add-ons typically import and enrich data from any source, creating a rich data set ready for
direct analysis or use in an App. But add-ons can also be used to extend the Splunk platform to
meet your specific needs.

1.2 PROJECT DEFINITION:

The project is the TruSTAR App and Add-on. The Add-on is responsible for collecting threat indicators and reports from TruSTAR's database. Reports and indicators are resources in TruSTAR's data model which are required for the detection of threats.
1.3 PURPOSE:

This App will help the Client's customers identify potential threats (IOCs) in their network and provide them with an ever-growing supply of Threat Intelligence which is shared across the community offered by the Client to their customers.

This App will also be beneficial for the Client, as it will enrich their database with new IOCs collected from their customers' networks (with the consent of the customer), which will help them provide even richer Threat Intelligence and enable them to correlate their data with the incoming threats.

1.4 SCOPE:

The scope of this project can be briefly divided into two main portions: the App and the Add-on.

1) App
● To display the data collected from TruSTAR's data models.
● To display the matching threats on the customer's machine.

2) Add-on
● To collect TruSTAR's threat resources and integrate them into the Splunk instance.
● To obtain correlated threat resources.

1.5 TECHNOLOGY REVIEW:

FRONT END​: Splunk

Splunk is a software platform for searching, analyzing, and visualizing the machine-generated data gathered from websites, applications, sensors, devices, etc. that make up your IT infrastructure and business.

It does not require complicated databases, connectors, custom parsers, or controls, as it can work efficiently with the help of a web browser. Splunk Enterprise can also be used as a cloud application that is highly scalable and reliable.

BACK END:​ Python


We have chosen Python for the automation implementation because it is open source, easy to use, and already mature. Also, Splunk internally supports Python 2.

DIAGRAM TOOL​: Draw.io


Draw.io is free online diagram software for making flowcharts, process diagrams, org charts, UML diagrams, class diagrams, and network diagrams.
Chapter 2
Project Management
Project management is the process and activity of planning, organizing, motivating, and
controlling resources, procedures, and protocols to achieve specific goals in scientific or daily
problems. The primary challenge of project management is to achieve all of the project goals and
objectives while honoring the preconceived constraints. We have taken into consideration the
necessary steps for project management and worked with the resources in an organized manner
to achieve the goal of our project.

2.1 FEASIBILITY STUDY


The preliminary investigation examines project feasibility: the likelihood that the system will be useful to the organization. The main objective of the feasibility study is to test the technical, operational, and economic feasibility of adding new modules to and debugging the old running system. Any system is feasible given unlimited resources and infinite time.

2.1.1 Technical Feasibility


The project is built entirely on Splunk Enterprise and Python, both of which are mature, well-documented technologies already used within the company. Splunk exposes REST endpoints for all the operations the Add-on and App require, and no special hardware is needed beyond a machine that can run Splunk Enterprise. Hence, the system is technically feasible.

2.1.2​ ​Time Schedule Feasibility


We checked whether our system could be ready in time without any errors. We planned all its phases keeping in mind that if we find any bug or error after the testing phase, we can move our deadline by 2-4 days, since we set our deadline before the actual submission date to the client. A minimum of 3 months is required for the implementation of the complete project with all the features implemented; this also includes the testing and debugging phase.
2.1.3 Operational Feasibility

How the project will work and who will use it are the concerns that arise in this phase. We have to study what the existing system's problems are and whether they are worth solving.

By using this App and Add-on, TruSTAR users can find vulnerabilities, and we can visualize them in the form of dashboards.

2.1.4 Implementation Feasibility


We identified that Splunk uses REST endpoints to handle its entire web interface, so we can also use them in our project. We learned that by using dashboards and their features we can represent our events in meaningful ways. By these means we can achieve the scope of the project.

2.2 PROJECT PLANNING

In the development of this project, we will first check to see if our project is feasible
functionally, technically and economically. Then we collect the requirements. Hence, we gather
all the requirements which we need to develop our system. Then, after thoroughly understanding
the requirements, we will start development.

Our development process basically divides into two parts: Python files for the backend and the Splunk App for the frontend, i.e., visualization.

2.2.1 Project Development Approach:


The Agile model is used for project development. We selected the Agile model because of its speed without compromising product quality, and because Agile makes the team much more productive.

Agile Model

Agile SDLC model is a combination of iterative and incremental process models with a focus on
process adaptability and customer satisfaction by rapid delivery of working software product.
Agile Methods break the product into small incremental builds. These builds are provided in
iterations. Each iteration typically lasts from about one to three weeks. Every iteration involves
cross-functional teams working simultaneously on various areas like planning, requirements
analysis, design, coding, unit testing, and acceptance testing. At the end of the iteration, a
working product is displayed to the customer and important stakeholders.

What is Agile?

Agile model believes that every project needs to be handled differently and the existing methods
need to be tailored to best suit the project requirements. In agile the tasks are divided into time
boxes (small time frames) to deliver specific features for a release. An iterative approach is taken
and working software build is delivered after each iteration. Each build is incremental in terms of
features; the final build holds all the features required by the customer.

Advantages of using Agile Model:

● Customer satisfaction by rapid, continuous delivery of useful software.


● People and interactions are emphasized rather than process and tools. Customers,
developers, and testers constantly interact with each other.
● Working software is delivered frequently (weeks rather than months).
● Face-to-face conversation is the best form of communication.
● Close, daily cooperation between business people and developers.
● Continuous attention to technical excellence and good design.

Disadvantages of using Agile Model:

● In case of some software deliverables, especially the large ones, it is difficult to assess the
effort required at the beginning of the software development life cycle.
● There is a lack of emphasis on necessary designing and documentation.
● The project can easily get taken off track if the customer representative is not clear about the final outcome they want.
● Only senior programmers are capable of taking the kind of decisions required during the
development process. Hence it has no place for newbie programmers unless combined
with experienced resources.

2.2.2 Project Plan:

After the feasibility study, the functional requirements were almost clear; they were decided by our project lead. After analyzing and thoroughly understanding the requirements of the application, we planned the project. A 3-tier architecture is used for this system. Here we have decomposed the system into modules, and the internals of the individual modules are designed in greater detail. The coding and unit testing phase is required to translate the software design into source code; during this phase, each module is unit tested to verify the correct working of all the individual modules. The integration and system testing phase consists of the integration of the modules in a planned manner; during each integration step, we tested the partially integrated system. Finally, when all the modules were successfully integrated and tested, system testing was carried out successfully.

2.2.3 Milestone And Deliverables:


Milestones are identified in order to complete the entire project in the time duration.
Milestones are identified for every sprint of Crest Data Systems.

Table 2.1 Milestones and Deliverables

| Phase | Deliverables | Purpose |
| System Requirements and Analysis | Requirement gathering and analysis; Functional specifications; Non-functional specifications | It gives an exact understanding of the user requirements. |
| System Design | Use-case diagram; Class diagram; ER diagram; Activity diagram; Component diagram | It gives the logical structure that describes the system. |
| Implementation and Testing | The output obtained for the required functionality, implementing and doing various types of testing. | It gives the required module. |
2.2.4 Roles And Responsibilities:

Table 2.2 Roles and Responsibilities

| Name | Analysis | Designing | Coding | Testing | Documentation |
| Ronish Jariwala | | | | | |
| Kartik Dass | | | | | |
| Parth Panchal | | | | | |

2.2.5 Group Dependencies


The members of the project should be dedicated to the project and should in turn help each other
in whatever problems concerning the project. They are expected not to have any internal or
external communication gaps. They are also expected to share the challenges faced by them
during design or development so that the team and mentors can brainstorm over every possible
dimension. They should report periodically to the concerned faculty and keep them updated
regarding the Project.
2.2.6 Project Scheduling Chart:

Table 2.3 Project Scheduling Chart


2.3 ​SYSTEM REQUIREMENT STUDY

2.3.1 STUDY OF CURRENT SYSTEM

The TruSTAR platform holds details of malicious and harmful threats, internally known as indicators. However, currently no system is available to process that data in Splunk to obtain interactive statistical information and attractive visualizations from it. Our product therefore enables TruSTAR users to ingest a huge amount of data in real time and analyze it via interactive visualizations.

2.3.2 PROBLEMS AND WEAKNESS OF CURRENT SYSTEM

In the current scenario, there is no product that fulfils the customer's requirement of processing and monitoring such a huge amount of threat information in real time. And without the TruSTAR Splunk App, it is difficult for a user to take security-related decisions regarding their threat library.

2.3.3 USER​ ​CHARACTERISTICS

This product is mainly focused on TruSTAR users who use Splunk.

2.3.4 HARDWARE AND SOFTWARE REQUIREMENTS

Hardware specification for User


● Processor: Intel Core i5 (sixth generation), 12 CPU cores at 8 GHz
● RAM​: 12 GB
● Hard Disk​: Minimum 5 GB free anytime
Software specification
● OS: ​Windows/Linux/Mac
● Web Browser: ​Chrome, Firefox
● Splunk Enterprise (v7.0.X - v7.2.X)
● Trustar (4.15 or Later)

2.3.5 CONSTRAINTS

2.3.5.1 Regular Policies

As per the company's policy, every developer has to maintain the coding standards and follow Splunk best practices. Also, each and every developer should maintain the Subversion repository and commit modifications with appropriate comments so as to keep track of the work and of the code modifications. From the client's perspective, developers should use well-known coding standards.

2​.​3.5.2 Hardware limitations

There are almost no hardware limitations: if Splunk Enterprise is supported on a system, then the TruSTAR App is supported as well.

2.3.5.3 Interface to Other Application

The Splunk App and Add-on for TruSTAR are tightly interfaced with TruSTAR. They synchronize with the TruSTAR platform in order to collect indicators and various indicator metadata such as type, score, and status; they also update the sightings count of indicators on the TruSTAR platform and provide workflow actions to mark indicators as true/false positive on TruSTAR.

2.3.5.4 Parallel Operation

There are primarily three parallel operations in our Trustar Splunk App and Add-on:
● Data Collection: Based on configured interval in AddOn input, Splunk will run a
python script to collect indicators from specified export on Trustar platform
periodically and index those in configured Splunk Index.
● Sightings: A savedsearch will run periodically according to the configured interval,
it will execute a custom command to match the indicators to raw events in Splunk
and save the sightings metadata in Splunk.
● Update Sighting count on Trustar: A scheduled savedsearch will periodically
update the sighting count of indicators on the Trustar platform via REST API Calls
written in a custom command's script​.
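The sightings matching described in the list above is driven by a scheduled savedsearch that invokes a custom search command. Below is only an illustrative sketch of that pattern using Splunk's splunklib.searchcommands interface; the command class, field name, and the hard-coded indicator set are assumptions for the example (the real command matches against the indicators indexed by the Add-on).

```python
import sys
from splunklib.searchcommands import dispatch, StreamingCommand, Configuration, Option

@Configuration()
class MatchIndicatorsCommand(StreamingCommand):
    """Flags events whose indicator field matches a known TruSTAR indicator (sketch)."""
    indicator_field = Option(require=False, default="indicator")

    def stream(self, records):
        # Placeholder set; in practice the indicators come from the data indexed by the Add-on.
        known_indicators = {"1.2.3.4", "evil.example.com"}
        for record in records:
            value = record.get(self.indicator_field, "")
            record["trustar_match"] = "true" if value in known_indicators else "false"
            yield record

dispatch(MatchIndicatorsCommand, sys.argv, sys.stdin, sys.stdout, __name__)
```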

2.3.5.5 Criticality Of The Application

Criticality refers to any mis-operation of the system or any accidental event in the software that can damage software as well as hardware resources. To the best of our knowledge, there is no such criticality in our application.

2.3.6 ASSUMPTIONS AND DEPENDENCIES


We assume that the end user has knowledge of basic operations in Splunk, so that the end user can get the best out of our product. The application depends only on the working of Python and Splunk Enterprise, and not on any other applications.
Chapter 3
System Analysis
3.1 FUNCTIONAL REQUIREMENT
Functional requirements define the internal workings of the software: that is, the calculations,
technical details, data manipulation and processing and other specific functionality that show
how the use cases are to be satisfied.

The functional requirements of the application are mentioned as follows:-

R1: Splunk Handler.


This functionality allows installing Splunk as well as starting and restarting Splunk; it is useful for automation.

R1.1 Splunk Installation


Input :​ Splunk installation file in compressed format (tar, tar.gz, gzip or msi)
Output :​ Splunk installed on default directory based on operating system
Processing :​ Uncompress installation file, put uncompressed directory in default
location based on operating system.

R1.2 Splunk Start


Input :​ Splunk Home path
Output :​ Splunk started
Processing :​ Apply Splunk start command $SPLUNK_HOME/bin/Splunk start.

R1.3 Splunk Restart


Input :​ Splunk connection session key
Output :​ Splunk restarted
Processing :​ Use rest api EndPoint to restart Splunk.
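A minimal sketch of how requirements R1.1-R1.3 could be scripted in Python is shown below. The package path, install directory, and management port are assumptions, and the restart call uses Splunk's server-control REST endpoint; this is an outline, not the project's exact handler code.

```python
import subprocess
import tarfile
import requests

SPLUNK_HOME = "/opt/splunk"  # assumed default install location on Linux

def install_splunk(package_path, install_dir="/opt"):
    # R1.1: uncompress the installation file into the default location.
    with tarfile.open(package_path, "r:gz") as archive:
        archive.extractall(path=install_dir)

def start_splunk():
    # R1.2: run "$SPLUNK_HOME/bin/splunk start" non-interactively.
    subprocess.check_call([SPLUNK_HOME + "/bin/splunk", "start",
                           "--accept-license", "--answer-yes", "--no-prompt"])

def restart_splunk(session_key, base_uri="https://localhost:8089"):
    # R1.3: restart splunkd through the management REST endpoint.
    response = requests.post(base_uri + "/services/server/control/restart",
                             headers={"Authorization": "Splunk " + session_key},
                             verify=False)
    response.raise_for_status()
```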

R2: Connection
This functionality allows connecting to Splunk Enterprise via a REST API call.
Input :​ Splunk installation base uri, username and password for Splunk
Output :​ Session Key for connection with specified Splunk instance
Processing :​ Use rest api EndPoint to connect with specific Splunk instance.
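A hedged example of R2 using the standard /services/auth/login endpoint is shown below; the host, port, and credentials in the commented call are assumptions for a local test instance.

```python
import requests
import xml.etree.ElementTree as ET

def get_session_key(base_uri, username, password):
    # R2: log in over the management port and return the session key.
    response = requests.post(base_uri + "/services/auth/login",
                             data={"username": username, "password": password},
                             verify=False)  # self-signed certificate on a local test instance
    response.raise_for_status()
    return ET.fromstring(response.text).findtext("sessionKey")

# Example (assumed local instance and credentials):
# session_key = get_session_key("https://localhost:8089", "admin", "changeme")
```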

R3: App Handler


This functionality allows installing and upgrading a Splunk App on a Splunk instance.

R3.1 App Installation


Input :​ Splunk connection session key, Splunk App installation file(format: .spl or .tar)
Output :​ App installed in Splunk instance
Processing :​ Use rest api EndPoint and App installation file to install App on specific
Splunk instance.
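One way R3.1 can be realized is a POST to the apps/local endpoint with the path of the .spl or .tar package. The parameters below reflect our understanding of that endpoint and should be read as a sketch rather than the exact handler code.

```python
import requests

def install_app(base_uri, session_key, app_package):
    # R3.1: install (or upgrade, via update=true) an app from a local package file.
    response = requests.post(base_uri + "/services/apps/local",
                             headers={"Authorization": "Splunk " + session_key},
                             data={"name": app_package, "filename": "true", "update": "true"},
                             verify=False)
    response.raise_for_status()  # typically returns 200 on upgrade, 201 on a fresh install
    return response.status_code
```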
R4: Inputs and Configuration Management

R4.1: Creation of Account for Trustar Add-on
Input: User provides account name, address, secret key, API key.
Output: Successful creation of the account.
Processing: Validates the credentials provided by the user.

R4.2: Creation of Input

R4.2.1: Create Input for Trustar Reports
Input: User provides input_name, interval, index, start time, enclaves, global account.
Output: Successful creation of the input.
Processing: Validates the inputs provided by the user.

R4.2.2: Create Input for Trustar Indicators
Input: User provides input_name, interval, index, start time, tags, global account.
Output: Successful creation of the input.
Processing: Validates the inputs provided by the user.

R4.2.3: Create Input for Trustar Enclaves
Input: User provides input_name, interval, index, global account, query_name.
Output: Successful creation of the input.
Processing: Validates the inputs provided by the user.

R5: Job Management

R5.1: Index the Events
Input: User provides valid input credentials for a specific index.
Output: Events are generated for the specific index within the time range.
Processing: The specific REST endpoints are called and the events are indexed in the specified index.

R5.2: Provide Checkpoint for Event Time Generation
Input: User provides specific filters for checkpointing.
Output: Events are generated for the specified filters (removes duplication).
Processing: A checkpoint is implemented as and when data is indexed for the first time on the unique index.
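A minimal sketch of the checkpointing idea in R5.2 is shown below, assuming a JSON checkpoint file kept in the checkpoint directory that Splunk provides to modular inputs; the file name and field names are illustrative, not the Add-on's actual format.

```python
import json
import os

def load_checkpoint(checkpoint_dir, input_name):
    # Return the last indexed event time for this input, or None on the first run.
    path = os.path.join(checkpoint_dir, input_name + ".json")
    if not os.path.exists(path):
        return None
    with open(path) as handle:
        return json.load(handle).get("last_event_time")

def save_checkpoint(checkpoint_dir, input_name, last_event_time):
    # Persist the newest event time so the next run fetches only newer records (no duplicates).
    path = os.path.join(checkpoint_dir, input_name + ".json")
    with open(path, "w") as handle:
        json.dump({"last_event_time": last_event_time}, handle)
```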

R6: Dashboard Visualizations

R6.1: Creation of Different Panels
Input: It takes data from the inputs created in the Add-on.
Output: Panels show the imported data and the matched data.
Processing: Indexed data gets populated according to the defined SPL.

R6.2: Performing Drilldowns in the Dashboard
Input: User provides specific drilldowns in the specified panels.
Output: The desired drilldown operation is performed.
Processing: Based on the defined SPL commands, the drilldown operation is performed.
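Each panel is backed by an SPL search, so the same query can also be exercised programmatically (for example, while verifying that a panel populates) through Splunk's search export endpoint. The snippet below is a hedged sketch; the index and sourcetype in the commented example are assumptions.

```python
import requests

def run_panel_query(base_uri, session_key, spl):
    # Run a panel's SPL query over REST; the export endpoint streams one JSON object per result.
    response = requests.post(base_uri + "/services/search/jobs/export",
                             headers={"Authorization": "Splunk " + session_key},
                             data={"search": "search " + spl, "output_mode": "json"},
                             verify=False)
    response.raise_for_status()
    return response.text

# Example (index and sourcetype are assumptions):
# run_panel_query(uri, key, 'index=main sourcetype="trustar:indicators" | stats count')
```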

3.2 NON-FUNCTIONAL REQUIREMENTS

The Non Functional Requirements are as follows: -

Usability
The UI of the Splunk App should be user-friendly so that user can navigate easily through the
app.

Accuracy
While developing the application, we had to make the system very accurate in its functions. The system should keep working properly, accept correct input, process it accurately, and produce correct output. Accuracy is the most important non-functional characteristic or requirement of the system.

Reliability
The error-handling mechanism must be robust to avoid failure of an operation, and in case of failure the App reports it to the user without any undue harm.

Performance
This App will match events across millions of log entries containing raw data from the
customer’s network with the IOCs received from the Client’s site via HTTP calls made by the
Add-on. Performance of the App is crucial as it will affect the delivery of Reports, Alerts and
possibly cause data loss.

Security Requirements
The data being collected from the Client’s site and the data with which it is being matched, both
are highly confidential and need to be secured. For the customer’s on-premises data, Splunk can
ensure that the data doesn’t leave the network as it has features like user authentication and user
role management. The Client’s API returns encrypted data which can be decrypted once
received. This ensures the protection of the transmitted data

Software Quality Attributes


The App and Add-on have to follow the quality standards like Availability, Correctness,
Maintainability, Reliability, Reusability, Robustness, Testability, and Usability​.

3.3 USE CASE DIAGRAM

Fig 3.1 Use Case Diagram


3.4 DATA FLOW DIAGRAM

Fig 3.2 DataFlow Diagram


3.5 SEQUENCE DIAGRAM

Fig 3.3 Sequence Diagram


3.6 ACTIVITY DIAGRAM

Fig 3.4 Activity Diagram


Chapter 4
System Design
4.1 AUTOMATION STEPS

Fig 4.1 Automation Step


4.2 SYSTEM ARCHITECTURE DESIGN:

Fig 4.2 System Architecture

Description​:

The Splunk App allows users to use context from TruSTAR’s IOCs and incident reports within
their Splunk analysis workflow. TruSTAR arms security teams with high-signal intelligence
from sources such as internal historical data, open and closed intelligence feeds, and anonymized
incident reports from TruSTAR’s vetted community of enterprise members
4.3 SYSTEM TOPOLOGY
Trustar add-on and app for Splunk are intended to do data collection, data normalization and
visualization of data through API calls.
Below is the topology of TA and App in a distributed and standalone environment.

Fig 4.3 TA and App in a distributed environment

Fig 4.4 TA and App in a standalone environment


Chapter 5

IMPLEMENTATION PLANNING
5.1 IMPLEMENTATION ENVIRONMENT

In this project, our implementation environment mainly consists of Python and the Splunk Enterprise software. We have used Python 2 for the implementation of the Python library modules, choosing Python because it is open source, easy to use, and already mature. We are using the Splunk Enterprise software solution from Splunk Inc., San Francisco, CA (www.splunk.com), which collects and analyzes machine-generated data in real time to derive operational intelligence. Splunk Enterprise is the on-premises version, and Splunk Cloud is the software-as-a-service (SaaS) offering. Apart from that, we also use the Eventgen app, a Splunk app that is useful for generating dummy events in the environment. Eventgen does not require any specific configuration or code; we just need to install it in the Splunk environment.

5.2 KEY IMPLEMENTATION OBJECTIVES


● Whole application should be automated for testing purpose.
● Simple and User-friendly UI should be developed.
● Ensure the smooth working of all functionalities of the application.
● Ensure that all the Users are familiar with System.

5.3 MODULE SPECIFICATION


Main modules of the system are:
● Python functions for automation for testing purpose
● Splunk App for easy visualization of the performance metrics
5.4 IMPLEMENTATION STEPS FOR AUTOMATION

1. Read the configuration and location of the machine.


2. Install Splunk with a license on the machine if not already installed.
3. Install the SPL Query Execution Performance app, Eventgen, and any additional apps and add-ons into Splunk.
4. Get the different queries from the Splunk apps and add-ons, execute them, and collect the performance metrics for each.
5. Store the performance metrics back into Splunk.

6. Populate the dashboard to visualize the performance metrics.
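The numbered steps above can be tied together in a small driver script. The sketch below assumes a JSON configuration file and reuses the hypothetical helper functions from the earlier sketches (install_splunk, start_splunk, get_session_key, install_app, run_panel_query); it is an outline of the flow, not the project's actual automation code.

```python
import json

def main(config_path="automation_config.json"):
    with open(config_path) as handle:            # step 1: read machine and configuration details
        config = json.load(handle)

    install_splunk(config["splunk_package"])      # step 2: install Splunk if not already installed
    start_splunk()
    session_key = get_session_key(config["uri"], config["username"], config["password"])

    for package in config["apps"]:                # step 3: install the performance app, Eventgen, and add-ons
        install_app(config["uri"], session_key, package)

    for query in config["queries"]:               # steps 4-5: execute the queries and collect metrics
        run_panel_query(config["uri"], session_key, query)
    # step 6: a dashboard then visualizes the performance metrics indexed in Splunk

if __name__ == "__main__":
    main()
```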

5.5 CODING STANDARDS

● We have followed standard Python indentation and used the SonarLint extension for Python coding.
● autopep8 and Python best practices are followed in our code, along with Splunk XML writing and configuration writing standards for Splunk app development.
● A SonarQube report is generated for each sprint in our project; it detects code smells, bugs, and the complexity of each function.
● Code complexity must be less than 15 for successful testing.
Chapter 6
Testing
6.1 TESTING PLAN:

Application testing is the critical element of the Application quality assurance and represents the
ultimate review of specification, design, and code generation. Once the source code has been
generated, Application must be tested to uncover as many errors as
possible before delivery to the users. This chapter describes some of the testing techniques for
designing tests that,

1. Exercise the internal Logic of the Application Components.


2. Exercise the input and output domain of the program to uncover an error in program
function, behavior and performance.
3. We carried out the testing process in four stages as unit testing, module testing,
subsystem testing, and system testing.

The Testing Process

The application process activities such as design, implementation, and requirements engineering were tested. Design errors are very costly to repair once the system has started to operate; therefore, it is important to repair them at an early stage of the system. Hence, analysis is the most important process of any project.

6.2 TESTING STRATEGY


Requirements Traceability:
Since the main concern is whether the system meets its requirements or not, testing should be planned so that all requirements are individually tested. We checked whether the output of a certain combination of inputs gives the desired results or not. Strictly sticking to the requirements specification gives the path to getting desirable results from the system.

Tested Items:
Tested items include Splunk installation and start/restart handling, App installation, account creation for the TruSTAR Add-on, creation of report, indicator, and enclave inputs, and the dashboard visualizations (see the test cases in Section 6.4).

Testing Schedule:
Testing has been done for each procedure back-to-back so that errors and omissions can be found as early as possible. Once the system has been fully developed, the testing procedure is followed on other machines, which differ in configuration.
Software Testing involves executing an implementation of the software with test data and
examining the outputs of the software and its operational behavior to check that it is performing
as required.

6.3 TESTING METHODS


Different testing techniques are as described below:

Black-box Testing:
In Black-Box Testing or Functional Testing, the output of the module and software is taken into
consideration, i.e. whether the software gives proper output as per the requirements or not. In
other words, this testing aims to test a program's behavior against its specification without
making any reference to the internal structure of the program or the algorithms used. Therefore
the source code is not needed, and so even purchased modules can be tested.

The program just gets a certain input and its functionality is examined by observing the output.

This can be done in the following way:

● Input Interface
● Output Interface
● Processing

The tested program gets certain inputs. Then the program does its job and generates a certain
output, which is collected by a second interface. This result is then compared to the expected
output, which has been determined before the test.

White-box Testing:
In white-box testing, tests are designed to exercise the code. The code is tested using test scripts, drivers, etc. White-box testing is used as an important primary testing approach: here the code is inspected, and scripts and drivers are employed to directly interface with and drive the code. The tester can analyze the code and use the knowledge about the structure of a component to derive the test data. White-box testing methods like control testing and loop testing have been used to make the software more reliable.

Integration Testing:
After the individual modules were tested out, the integration procedure is done to create a
complete system. This integration process involves building the system and testing the resultant
system for problems that arise from component interactions. The top-down strategy is applied to
validate high-level components of a system before design and implementations have been
completed. Because the development process is started with high-level components and work is
done down the component hierarchy.
6.4 TEST CASES:

Table 6.1 Test Cases for Splunk Handler

| Test Case ID | Test Scenario | Expected Results | Actual Results | Test Status |
| T01 | Splunk installation when Splunk is not installed | Installation of Splunk at the default location | Successful installation of Splunk | Pass |
| T02 | Splunk installation when Splunk is already installed | Returns with a success message of installation | Success message of installation | Pass |
| T03 | Start splunkd when Splunk is not running | splunkd started | splunkd started | Pass |
| T04 | Start splunkd when Splunk is already running | splunkd is already running | splunkd is already running | Pass |
| T05 | Restart Splunk when splunkd is running | Splunk restarted | Splunk restarted | Pass |
| T06 | Restart Splunk when splunkd is not running | Error message: splunkd is not running | Error message: splunkd is not running | Pass |
Table 6.2 Test cases for App Installation

| Test Case ID | Test Scenario | Expected Results | Actual Results | Test Status |
| T01 | App installation when it is already installed | App upgraded with the new installation file | Successful installation of App | Pass |
| T02 | App installation when the app is not available | New App is added to the Splunk instance | App is installed on the Splunk instance | Pass |
Table 6.3 Test cases for account creation.

| Test Case ID | Test Scenario | Expected Results | Actual Results | Test Status |
| T01 | Create account for TruStar without enabling proxy | Successful creation of account | Account gets created with a unique name | Pass |
| T03 | Create account for the TruStar modular input with invalid credentials | Error message: Enter valid credentials | Error message is displayed | Pass |

Table 6.4 Test Cases For Trustar Inputs

| Test Case ID | Test Scenario | Expected Results | Actual Results | Test Status |
| T01 | Create input for TruStar Reports | Successful creation of input | Input is validated and created | Pass |
| T02 | Create input for TruStar Indicators | Successful creation of input | Input is validated and created | Pass |
| T03 | Create input for TruStar Enclaves | Successful creation of input | Input is validated and created | Pass |
| T04 | Create input for TruStar inputs with specified parameters | Successful creation of input | Input is validated and created | Pass |
| T05 | Create input for TruStar inputs with specific filters | Successful creation of input | Input is validated and created | Pass |
Table 6.5 Test Cases for Dashboard Visualization

| Test Case ID | Test Scenario | Expected Results | Actual Results | Test Status |
| T01 | Check whether events are indexed | Events are generated for the specific index | Events are generated | Pass |
| T02 | Check whether search is performed | Events are not generated | Events are not generated | Pass |
| T03 | Check whether all the panels in the report dashboard are loaded properly or not | Panels get populated | Panels get populated | Pass |
| T05 | Check whether filters are selected or not | Panels are populated for the specific filters | Panels are populated for the specific filters | Pass |
| T06 | Check whether filters are selected or not | Panels are not populated for the specific filters | Panels are not populated for the specific filters | Pass |
| T07 | Perform drilldowns on panels | Specific panel event is displayed | Specific panel event is displayed | Pass |
| T08 | Perform drilldowns on panels | Specific events are not displayed | Specific events are not displayed | Pass |
| T09 | Check whether all panels are loaded properly in the app dashboard | Panels get populated | Panels get populated | Pass |
6.5 USER MANUAL

6.5.1 ABOUT USER MANUAL DOCUMENT

The user manual is a document that explains to users how to use or operate something, such as a software program or some other component or application. The user manual guides the user, through written descriptions or pictures, on how to use the application. It also describes the steps to follow for particular functionality to work. In our application we provide different functionality to the user, and the following is a stepwise description of how to use it.

6.5.2 HOW TO USE APPLICATION


The following are the steps to use this project.

● Make the necessary configuration changes and place the required installation files before running the code. The following is the list of required files:
○ Splunk installation file
○ TA and App for TruSTAR

● Steps:-
1. Installation of Splunk with a license.
2. Installation of App.
3. Installation of TA.
4. The configuration of Inputs in TA.
6.5.3 SCREENSHOTS

Fig 6.1 Splunk Instance Screen

TRUSTAR ADD-ON FOR SPLUNK

Fig 6.2 TruStar Add-On Inputs Page


STEPS TO CONFIGURE AN ACCOUNT
Before configuring a modular input, the user needs to configure an account. To configure the account, follow the steps mentioned below.
● Log in to your data collection node.
● Click on Splunk Add-on for TruStar from the left bar.
● Click on the Configuration tab.
● Click on Add.
● Add all the necessary details.
● After configuring the account, click on Add.

Fig 6.3 Add Account

The significance of each field is explained below:

| Input Parameters | Required | Description |
| Account Name | Yes | The unique name to identify an account. |
| Address | Yes | Server address of TruStar. |
| API Key | Yes | API key of TruStar. |
| Secret Key | Yes | Secret key of TruStar. |

Table 6.6 Significance of each field
6.5.3.1 CONFIGURE DATA INPUT

6.5.3.1.1 FOR REPORTS

| Source type | Description |
| trustar:reports | This will contain all the reports fetched from TruStar Station. |

Table 6.7 Configure Data Input For Reports

Fig 6.4 Add Trustar Report


The significance of each field is explained below:

| Input Parameters | Required | Description |
| Name | Yes | The unique name for the Trustar Reports data input. |
| Interval | Yes | Interval time of the input in seconds. |
| Index | Yes | Name of the index in which data will be indexed in Splunk. |
| Global Account | Yes | Select the TruStar account. |
| Start Time | No | Start time from which to fetch data. |
| Enclaves | No | This field decides which reports to fetch, based on Enclave ID. |

Table 6.8 Significance of Each Field For Reports


6.5.3.1.2 FOR INDICATORS

| Source type | Description |
| trustar:indicators | This will contain all indicators fetched from TruStar Station. |

Table 6.9 Configure Data Input For Indicators

Fig 6.5 Add Trustar Indicator

The significance of each field is explained below:


| Input Parameters | Required | Description |
| Name | Yes | The unique name for the TruStar Indicators data input. |
| Interval | Yes | Interval time of the input in seconds. |
| Index | Yes | Name of the index in which data will be indexed in Splunk. |
| Global Account | Yes | Select the TruStar account. |
| Start Time | No | Start time from which to fetch data. |
| Tags | No | Fetch indicators with specific tags. |

Table 6.10 Significance of Each Field For Indicators

6.5.3.1.3 FOR ENCLAVE

| Source type | Description |
| trustar:enclaves | This will contain all enclaves from TruStar Station. |

Table 6.11 Configure Data Input For Enclave


Fig 6.6 Add Trustar Enclave

The significance of each field is explained below:

| Input Parameters | Required | Description |
| Name | Yes | The unique name for the TruStar Enclaves data input. |
| Interval | Yes | Interval time of the input in seconds. |
| Index | Yes | Name of the index in which data will be indexed in Splunk. |
| Global Account | Yes | Select the TruStar account. |

Table 6.12 Significance of Each Field For Enclave


This is how the inputs for Report, Indicator, and Enclave are shown.

Fig 6.7 Final View of Addon


TRUSTAR APP FOR SPLUNK

6.5.3.2 DASHBOARDS

6.5.3.2.1 Overview Data


This dashboard has the following 8 panels.
1. Matched Indicators in Last 4 Hours.
2. Matched Reports in Last 4 Hours.
3. Total Matched Indicators.
4. Total Matched Reports.
5. Total Indicators in Last 4 hours.
6. Total Reports in Last 4 hours.
7. Total Indicators.
8. Total Reports.

Fig 6.8 View of App


6.5.3.2.2 Report Dashboard

This dashboard has the following panel.

1. Reports Data.

Fig 6.9 View Of Reported Data


Chapter 7

CONCLUSION AND FUTURE EXTENSION


7.1 CONCLUSION

As computers and electronic devices have become central to our lives, monitoring and analyzing machine data and logs has become a very popular field. The task of accessing machine data and taking the necessary action when an undesirable situation occurs is critical, and here Splunk proves useful. Splunk stores machine data or logs in indexes, extracts fields from those events, and lets us write queries to visualize those log events.

The TruSTAR App and Add-on for Splunk help us identify threats that could otherwise be harmful to the customer. With TruSTAR's ever-growing customer base, we can stay aware of new and upcoming threats. Its panels help the user identify the state of the system and whether there may be harmful threats in it. The App also has added features like Adaptive Response actions and workflow actions, which help upload reports to TruSTAR Station. Furthermore, storing data in lookups helps the customized panels load faster.

7.2 LIMITATIONS

● The time taken to match the indicators and reports increases as the number of
events indexed in Splunk increases.
● Installing the Splunk environment on a remote machine is not supported; to do that, the user needs to log in to the remote machine and run the code.

7.3 FUTURE EXTENSION

● Indicators dashboard can be created.


● Getting a paginated list of the indicators that have been whitelisted by the user’s
company.
● Getting the trending indicators that have recently appeared in most community
reports.
● Searching for all reports that contain the given search term. Also allowing
filtering by date, enclave, and tags.
APPENDICES

● SPL- Search Processing Language


o https://wiki.splunk.com/images/a/a3/Splunk_4.x_cheatsheet.pdf
● Splunk Workflow Actions
o https://docs.splunk.com/Documentation/Splunk/7.2.5/Knowledge/Setupasearchw
orkflowaction
● Trustar Documentation
o https://docs.threatconnect.com/en/latest/rest_api/rest_api.html
● Trustar API Help Center
o https://helpcenter.trustar.com/Content/Developer_Resources/Integrations/Splunk
_Integration/About_Splunk_Integrations.htm
● Trustar Splunk App
o ​https://splunkbase.splunk.com/app/4418/
● ThreatQ Splunk Add-on
o ​https://splunkbase.splunk.com/app/4419/
BIBLIOGRAPHY

● Splunk Docs:​ ​https://docs.splunk.com/Documentation/Splunk


● Splunk Answers: ​https://answers.splunk.com/index.html
● Python : ​https://docs.python.org/2/index.html
● Splunk Developers:​ ​http://dev.splunk.com/
REFERENCES

● https://www.splunk.com/pdfs/solution-guides/splunk-quick-reference-guide.pdf
● https://docs.splunk.com/Documentation/Splunk/7.2.5/SearchTutorial/Aboutdashboards
● http://docs.python-requests.org/en/master/
● https://docs.splunk.com/Splexicon:Configurationfile
