CHAPTER 1
INTRODUCTION
A NETWORK INTRUSION DETECTION SYSTEM (NIDS) is a system that
analyzes the traffic crossing the network, classifies packets according to header, content, or
pattern matching, and further inspects payload information with respect to content/regular-
expression matching rules for detecting the occurrence of anomalies or attacks. The demand
for network security and protection against network threats and attacks is ever increasing, due
to the widespread diffusion of network connectivity and the higher risks brought about by a
new generation of Internet threats.
NIDS rely on exact string matching of packet payloads to detect hostile packets, and this string matching operation is the most computationally expensive step in a NIDS. Accordingly, NIDS typically apply string matching only to those packets that are most suspect, and only to those sections of the packet most likely to contain the offending data. For example, Snort (a popular NIDS found at www.snort.org) [4] checks port numbers, packet headers, flags, etc., to ensure a given packet has a high likelihood of containing hostile data before performing string matching on the packet data.
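The two-stage filtering described above, with cheap header checks gating the expensive payload matching, can be sketched as follows; the ports, flags, and signature here are hypothetical illustrations, not actual Snort rules.

```python
# Sketch of the two-stage filtering described above: cheap header checks
# first, expensive payload string matching only for suspect packets.
# The constants below are illustrative assumptions.

SUSPECT_PORTS = {80, 8080}          # assumed ports worth deep inspection
HOSTILE_PATTERN = b"/etc/passwd"    # example payload signature

def needs_inspection(dst_port: int, flags: str) -> bool:
    """Cheap header-level pre-filter."""
    return dst_port in SUSPECT_PORTS and "A" in flags   # e.g. ACK flag set

def is_hostile(dst_port: int, flags: str, payload: bytes) -> bool:
    # String matching (the expensive step) runs only after the pre-filter.
    return needs_inspection(dst_port, flags) and HOSTILE_PATTERN in payload

print(is_hostile(80, "PA", b"GET /etc/passwd HTTP/1.1"))  # True
print(is_hostile(22, "PA", b"GET /etc/passwd HTTP/1.1"))  # False: filtered by port
```

The point of the sketch is ordering: the byte-level search never runs on traffic the header filter has already cleared.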
Intrusion detection systems (IDS) collect data from the network interface and analyze it to identify ongoing attacks. Such a system helps minimize network threats and preserve uninterrupted operation by collecting datagrams from the network and matching them against a predefined set of rules.
Various IDS types have been proposed in the past two decades, and commercial off-the-shelf (COTS) IDS products have found their way into the Security Operations Centers (SOC)
of many large organizations. Nonetheless, the usefulness of single-source IDS has remained
relatively limited due to two main factors: their inability to detect new types of attacks (for
which new detection rules or training data are unavailable) and the often very high rate of
false positives.
Network intrusion detection systems (NIDS) monitor network traffic for predefined suspicious activity or data patterns and notify system administrators when malicious traffic is detected, so that appropriate action may be taken. Such a NIDS monitor was used to implement the module generator.
Snort NIDS is a software-based implementation of NIDS; it cannot sustain the multi-Gbit/s traffic rates typical of network backbones and is thus confined to relatively small-scale (edge) networks [4]. For high-speed network links, hardware-based NIDS solutions appear to be a more realistic choice, but the hardware implementation must permit frequent updates of the supported rule set, so as to cope with the continuous emergence of new types of network intrusion threats and attacks.
Field Programmable Gate Arrays (FPGAs) are thus appealing candidates. Indeed, an FPGA-based NIDS can be easily and dynamically reprogrammed when the content-matching rules change. Moreover, current FPGA devices are capable of providing very high processing capability and support high-speed interfaces.
FPGAs for 100 Gbit/s processing are available, and devices for 400 Gbit/s are forthcoming [8]. However, such an increase in traffic collection ability is not matched by a comparable scaling of device frequency. Indeed, logic resources still operate at frequencies in the order of “just” hundreds of MHz; for instance, a frequency of 500 MHz, achievable only by last-generation FPGA devices, can process 8-bit characters at “only” 4 Gbit/s.
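The arithmetic behind these figures can be checked with a short sketch; the helper names are ours, and the 8-bit character width matches the example above.

```python
# Back-of-the-envelope check of the figures above: an engine consuming one
# 8-bit character per clock at f MHz sustains f * 8 Mbit/s, so several
# parallel lanes are needed to reach backbone link rates.
from math import ceil

def throughput_gbps(freq_mhz: float, bits_per_cycle: int = 8) -> float:
    return freq_mhz * bits_per_cycle / 1000.0

def lanes_needed(link_gbps: float, freq_mhz: float) -> int:
    return ceil(link_gbps / throughput_gbps(freq_mhz))

print(throughput_gbps(500))    # 4.0 Gbit/s, as stated in the text
print(lanes_needed(100, 500))  # 25 parallel engines for a 100 Gbit/s link
```

This gap between link rate and per-engine rate is exactly what motivates the parallel architecture discussed next.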
It is thus natural to conclude that, similar to the multi-core parallelization trend in microprocessors, parallelization in FPGA-based NIDS traffic analysis is a mandatory approach to sustain the increased network throughput.
1.2 Objectives
The study focused on these specific objectives:
i. To conduct experimental studies on sustaining uninterrupted traffic under different types of network intrusion.
ii. To defend against specific types of malicious threats while retaining flexible operation.
iii. To validate the developed models against the simulation results obtained while observing traffic data.
iv. To implement the design with the least resource utilisation and the maximum throughput and operating frequency.
1.3 Organization of the Report
Chapter 1: Gives the introduction to the traffic awareness caused by network intrusion, the NIDS implementation using FPGA for network security against network threats, and the motivation for the project.
Chapter 2: Gives the literature survey done at the early stages of the project to collect the requirements and product use cases.
Chapter 3: Gives the approach towards the project, giving an idea about Snort rules and the relevant classification policies for the detection of network threats, its hardware module and the relevant trade-offs.
Chapter 4: Describes the network attacks considered, such as the Teardrop attack.
Chapter 5: Is based on the overall system architecture, which is the major part of this project, depicting the system architecture and explaining its main blocks and their working.
Chapter 6: Explains the implementation of the string matching circuits, namely the basic string matching engine and the discrete finite automaton, and their scalability issues.
Chapter 7: Gives the overall system implementation, explaining the Snort rule set subdivision, the string matching engine synthesis, and its dimensioning.
Chapter 8: Software and Hardware Requirement Specification, which explains the overall description, the functional and non-functional requirements of the system, and the interface description.
Chapter 9: Implementation and results, which shows the FPGA implementation, giving a brief explanation of the FPGA kit being used and the experimental simulation results showing the outcome of the overall design.
Chapter 10: Gives some of the advantages and disadvantages of the proposed design.
Chapter 11: Conclusion, which gives the outcome of the work carried out and also brings out
the future enhancements.
CHAPTER 2
LITERATURE REVIEW
Security against malicious threats that intrude into the network is provided by detecting them with a Network Intrusion Detection System. The relevant information was collected and the design implemented by studying the research carried out by various authors.
A first major observation is that packet and protocol characteristics remain stable over the years; on the application side, changes in the applications used on the Internet do not seem to have a major impact on those characteristics. The statistics of aggregated packet or byte count time series at the TCP/IP layer are then analyzed, with a focus on the evolution over time of their marginal distributions (MDs) and long-range dependence (LRD).
One key difficulty in performing statistical longitudinal analysis is to disentangle smooth long-term evolution features from day-to-day fluctuations, as there is no single day without anomalies or specific events. Therefore, the first contribution consists of a robust estimation method based on sketches (random projections), which enables long-term analyses without being affected by specific traffic conditions or anomalies. Applied to the 7-year-long datasets, this robust estimation procedure brings new insights into the ongoing debate on whether bandwidth increase and statistical multiplexing cause the disappearance of long-range dependence.
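As an illustration of the idea (not the authors' exact estimator), a sketch-based robust statistic can be modeled as follows: flows are hashed into sub-streams, a statistic is computed per sub-stream, and the median across sub-streams resists contamination by any single anomalous flow.

```python
# Illustrative sketch (random projection) based robust estimation: a flow
# key is hashed to a bucket, the statistic is computed per bucket, and the
# median over buckets is robust to one polluted bucket.
import statistics
import zlib

def sketch_mean(flows, n_buckets=8):
    """flows: iterable of (flow_key, packet_count) pairs."""
    buckets = [[] for _ in range(n_buckets)]
    for key, count in flows:
        h = zlib.crc32(key.encode()) % n_buckets   # "random projection" by hashing
        buckets[h].append(count)
    per_bucket = [statistics.mean(b) for b in buckets if b]
    return statistics.median(per_bucket)           # robust against one bad bucket

normal = [(f"flow{i}", 10) for i in range(100)]
with_anomaly = normal + [("attack", 100000)]       # one massive anomalous flow
print(sketch_mean(normal))        # ~10
print(sketch_mean(with_anomaly))  # still ~10: the anomaly pollutes one bucket only
```

A plain mean over `with_anomaly` would be dragged far above 10, which is precisely the distortion the sketch-and-median construction avoids.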
The second contribution lies in finding that, once the impacts of local events such as anomalies and congestion are filtered out, the traffic statistics remain stable over the years, with persistent LRD and MDs well modeled by Gamma laws. A concern with this longitudinal study is that each trace lasts only 15 min, starting systematically at 2:00 pm. One may question its representativeness with respect to both the natural intra-day variability and the short observation duration. To address this, 24-hour traces were analyzed; results are reported for data collected on March 19th, 2008.
Early network intrusion detection systems consisted of application software running on commodity servers equipped with high-speed network-interface (NI) cards. While economical and easy to program, this approach is limited in terms of performance because:
(1) there is a significant amount of I/O overhead associated with data transfers between the NI and the host CPU, and
(2) general-purpose CPUs are not optimized for the large pattern-matching operations that are commonly used in ID operations.
As such, software-based NIDS are often unable to keep up with the data rates of modern high-speed networks.
It’s been argued that there are multiple advantages to consolidating NIDS components
into a single FPGA. From a physical design perspective, integration simplifies board layout
and reduces the overall chip count for a NIDS. Integration also removes the need for off-chip
data transfers, which can be a performance bottleneck in multi-chip systems. However, the
main benefit for an integrated system is the increased level of customization that is made
available to system designers. FPGAs are flexible architectures that can be reconfigured after fabrication to meet new application demands. To maximize this opportunity, it is
desirable to place as much of the NIDS architecture as possible in reconfigurable logic.
Additional circuitry monitors the physical layer's status in the Rocket I/O module and automatically resets the unit as needed.
The input stream is distributed to the DFA pipelines created for each pattern. Each stage of the pipeline outputs
true if the output of the previous stage was true and the current stage’s match indicator wire is
true. If there is a character sequence in the input that matches one of the stored patterns, a true
value will propagate through the pipeline and the match output bit will be asserted for the
corresponding pattern.
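The pipeline behaviour just described can be modeled in software; this is a minimal behavioural sketch of the propagation logic, not the actual hardware description.

```python
# Software model of the pipeline described above: stage i asserts true only
# if stage i-1 was true on the previous character and the current character
# matches the i-th character of the stored pattern; a true value reaching
# the last stage asserts the match output for that pattern.

def pipeline_match(pattern: str, stream: str) -> list[bool]:
    n = len(pattern)
    stages = [False] * n              # one pipeline register per pattern character
    matches = []
    for ch in stream:                 # one character per clock cycle
        new = [False] * n
        new[0] = (ch == pattern[0])                        # first stage: match wire only
        for i in range(1, n):
            new[i] = stages[i - 1] and (ch == pattern[i])  # previous stage AND match wire
        stages = new
        matches.append(stages[-1])                         # match output per cycle
    return matches

out = pipeline_match("def", "xxdefx")
print(out)  # True only at the cycle where 'f' completes the pattern
```

Running one such pipeline per stored pattern reproduces the per-pattern match bits described in the text.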
CHAPTER 3
Even if the basic idea is very simple, turning it into practice is not straightforward, for several reasons. First, the traffic classification rules used by the dispatcher must be extremely simple, and in any case they must be purely based on header information. This restricts the type of classification that can be enforced. Second, such classification approaches yield categories of uneven size in terms of traffic volume, so that the dimensioning of the content matching modules can no longer be based on the nominal link speed, but must rely on the actual per-category traffic load. Third, and most important, the classification enforced should attempt to group traffic so that the NIDS rules to be enforced in the dedicated hardware modules are as disjoint as possible, thus minimizing the usage of logic resources. The application of the traffic-aware approach to the hardware domain therefore requires a detailed analysis of aspects not covered by previous works.
It is important to note that the Snort Engine and Snort Rules are distributed separately, but should be used together.
CHAPTER 4
4.1.5 Teardrop
Teardrop is an old attack that exploits poor TCP/IP implementations still in use. It works by interfering with how stacks reassemble IP packet fragments. Because IP packets are sometimes broken into smaller chunks, each fragment still carries the original IP packet's header, plus fields that tell the TCP/IP stack which bytes it contains. When everything works correctly, this information is used to put the packet back together again. In a Teardrop attack, the stack is flooded with IP fragments that have overlapping fields. When the stack tries to reassemble them it cannot, and if it does not know to discard these garbage fragments, it can quickly fail. Most systems now know how to deal with Teardrop, and a firewall can block Teardrop packets at the cost of a bit more latency on network connections, since it must disregard all broken packets. Of course, if a ton of Teardrop-mangled packets is thrown at a system, it can still crash.
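A minimal sketch of the reassembly check implied above, assuming each fragment is reduced to an (offset, length) pair; real IP reassembly involves more header fields than this.

```python
# Hypothetical sketch of the defensive check discussed above: Teardrop
# supplies fragments whose byte ranges overlap, so a robust stack detects
# the overlap and discards the fragment set instead of reassembling it.

def fragments_overlap(fragments: list[tuple[int, int]]) -> bool:
    """fragments: (offset, length) pairs from one IP datagram."""
    frags = sorted(fragments)
    for (off_a, len_a), (off_b, _) in zip(frags, frags[1:]):
        if off_a + len_a > off_b:      # previous fragment runs past the next one
            return True
    return False

print(fragments_overlap([(0, 100), (100, 100)]))  # False: clean fragmentation
print(fragments_overlap([(0, 120), (100, 100)]))  # True: Teardrop-style overlap
```

A stack that drops any datagram for which this predicate is true never attempts the confused reassembly that the attack relies on.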
CHAPTER 5
Since multi-byte string matching engines complicate the internal design, the queue output uses 8 bits. Conversely, the interfaces between the remaining modules can be implemented to handle multiple characters at a time. For example, if the network interface is the Xilinx 10 Gigabit Ethernet core, which provides a 64-bit interface working at f0 = 156.25 MHz, the data width will be N = 64 bits and the operating frequency of the dispatcher will be f0 [8]. In the proposed architecture, a FIFO is used instead of a queue manager.
The resulting operation in fact depends on a configuration setting which includes the
following decisions and parameters:
• Dispatcher classification policy;
• String matching rules loaded over each cluster of engines;
• Operating frequency of each cluster;
• Number of string matching engines deployed in every cluster.
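The configuration setting listed above can be captured as a small record; all names and values here are illustrative assumptions, not taken from the actual design.

```python
# Sketch of the configuration setting enumerated above; field names and the
# sample values are hypothetical illustrations.
from dataclasses import dataclass, field

@dataclass
class ClusterConfig:
    rules: list          # string matching rules loaded on this cluster of engines
    freq_mhz: float      # operating frequency of the cluster
    n_engines: int       # number of string matching engines deployed

@dataclass
class SystemConfig:
    dispatch_policy: str                       # dispatcher classification policy
    clusters: list = field(default_factory=list)

cfg = SystemConfig(dispatch_policy="by-dst-port")
cfg.clusters.append(ClusterConfig(rules=["web"], freq_mhz=156.25, n_engines=4))
print(cfg.clusters[0].n_engines)  # 4
```

Treating the four decisions as one explicit record makes it clear that each cluster can be tuned (rules, frequency, replication) independently of the others.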
5.2 Working
Input to the overall system is fed from the network interface in the form of packets. Standard I/O controllers provide a raw byte stream interface to the network. The packets are received by the receiver, which de-frames and extracts Ethernet packets from the incoming network stream; these are stored in ROM. Packet classification is done based on the header format as shown in Fig. 5.2: homogeneous packets are grouped and sent to the respective string matching engines.
Phase I: The first phase is defining tools; here, values are assigned to the respective tools.
Phase II: The second phase is header-based packet classification; here, packets are classified based on the Ethernet and IP header formats.
Phase III: The third phase is output; here, packets are transmitted to the respective string matching engines based on the pre-defined packet classification policy.
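The three phases can be modeled with a minimal software sketch; the header fields, ports, and category names below are hypothetical, not the actual classification policy.

```python
# Minimal model of the three phases above: tool definition, header-based
# classification, and output to per-category engine queues.

# Phase I: define "tools" -- the constants the classifier uses (assumed values).
WEB_PORTS = {80, 443}

# Phase II: classify a packet from its transport/IP header fields.
def classify(proto: str, dst_port: int) -> str:
    if proto == "TCP" and dst_port in WEB_PORTS:
        return "WEB"
    if proto == "UDP":
        return "OTHER_UDP"
    return "NONWEB"

# Phase III: route homogeneous packets to the matching engine's queue.
queues = {"WEB": [], "NONWEB": [], "OTHER_UDP": []}
for pkt in [("TCP", 80), ("TCP", 22), ("UDP", 53)]:
    queues[classify(*pkt)].append(pkt)
print({k: len(v) for k, v in queues.items()})
```

Grouping homogeneous packets this way is what lets each string matching engine carry only the rules relevant to its traffic category.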
Next are the transport and network layers, where host-to-host communication takes place. The transport layer provides end-to-end communication. The type of communication is chosen between two kinds of connection:
1. Connectionless, e.g. UDP
2. Connection-oriented, e.g. TCP
TCP is reliable and delivery is guaranteed, since acknowledgements are received. UDP is not reliable and uses fewer resources, and no acknowledgements are received. The application developer decides whether to use UDP or TCP; TCP needs a larger header than UDP, i.e. more resources. The transport layer hands two pieces of information to the network layer: the destination IP address and the datagram itself. In the network layer, routing of the packet takes place, directing the packet to its destination.
The bottom layers are concerned with the local and physical network. In the data link layer, the packets of data are fragmented into frames and linked to the destination. Finally the frames are passed to the physical layer, where the raw bits are transmitted over the medium.
CHAPTER 6
Fig. 6.1 further illustrates a toy example which consists in the search for two strings, “abc” or “def”. If the character d is stored in the register labeled x(n−3), e is stored in x(n−2) and f is stored in x(n−1), all the inputs of the uppermost AND gate are equal to 1 and the circuit signals the matching via the “Match” output.
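A software equivalent of the toy circuit helps make the timing concrete: a 3-character shift register feeds per-string comparators (the AND gates), whose outputs are ORed into the Match signal. This is a behavioural sketch, not the hardware description.

```python
# Behavioural model of the Fig. 6.1 circuit: the last three input characters
# sit in registers x(n-3), x(n-2), x(n-1); one AND gate per stored string
# compares them, and an OR gate raises the Match output.
from collections import deque

def match_circuit(stream: str, patterns=("abc", "def")) -> list[bool]:
    window = deque(maxlen=3)          # the 3-character shift register
    out = []
    for ch in stream:
        window.append(ch)             # shift in one character per clock
        word = "".join(window)
        out.append(any(word == p for p in patterns))   # AND gates -> OR gate
    return out

print(match_circuit("xxdefabc"))
# Match is asserted on the cycles where 'f' and then 'c' enter the register
```

One output bit per clock cycle mirrors the hardware, where Match is combinational over the current register contents.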
If the system is deployed as a Snort offloader, devised to forward the malicious packets to a software IDS implementation, a matching signal is all that is needed to drive simple pass/drop packet logic. Indeed, note that the deployment of a full-fledged hardware IDS requires supplementary features (e.g. alert generation, packet logging and so on) that can be better performed in software. Besides, if the goal is to further detect which rule has been matched, a quite straightforward implementation consists in substituting the OR gate with a priority encoding circuit that takes as input the output of each AND gate and provides as output the binary representation of the highest active input. A rough estimation of the resource occupation of the encoder is around 15K LUTs.
Table 6.1 summarizes the most used modifiers.
Table 6.1: Modifier description
offset: N - the search for the content begins after N characters
depth: N - the search for the content ends after N characters
distance: N - the distance between two contents is at least N characters
within: N - the distance between two contents is less than N characters
• Content: fixed pattern to be searched for in the packet payload of a flow. If the pattern is contained anywhere within the packet payload, the test is successful and the other rule option tests are performed. A content keyword pattern may be composed of a mix of text and binary data. Most rules have multiple contents.
• Modifiers: they identify the location in which a content is searched for inside the payload. This location can be absolute (defined with respect to the start of the flow) or relative to the match of a previous content.
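The modifier semantics summarized in Table 6.1 can be given an executable reading; this is a simplified sketch of the offset/depth/distance/within behaviour, not Snort's actual matching engine.

```python
# Simplified reading of Table 6.1: offset/depth constrain where a content
# may be found, distance/within constrain the gap to the next content.
# The payload and contents below are hypothetical examples.

def find_content(payload: bytes, content: bytes, offset=0, depth=None):
    """Return start index of content honouring offset/depth, or -1."""
    end = len(payload) if depth is None else offset + depth
    return payload.find(content, offset, end)   # search only within [offset, end)

payload = b"GET /admin/login HTTP/1.0"
i = find_content(payload, b"/admin", offset=4, depth=8)   # must lie in bytes 4..11
print(i)   # 4
# A relative modifier: the next content starts at least 1 byte (distance: 1)
# after the end of the previous match.
j = find_content(payload, b"HTTP", offset=i + len(b"/admin") + 1)
print(j)   # 17
```

Chaining searches this way, each anchored by the previous match, is how absolute and relative modifiers compose in a multi-content rule.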
Classical solutions to these problems are widely known, and span from limiting the fan-out and replicating the logic to increasing pipelining and performing retiming. In all these cases, when the requested throughput increases to the Gbit/s range, these approaches become unfeasible. Scalability therefore becomes a very important issue for multi-Gbps networks.
A solution proposed at the architectural level consists in using a wide data bus that operates on multiple characters each clock cycle. With this approach, the data processing rate can be improved without increasing the operating frequency. For example, a 1-character string matching circuit operating at 125 MHz provides a throughput of 1 Gbit/s, while a 4-character circuit is able to sustain a rate of 4 Gbit/s. The area required for implementing a multi-character circuit increases at least linearly with the number of characters. This approach has several drawbacks that limit its usability. First of all, it is not easily scalable, since modifying the number of characters that a content matching engine checks per clock cycle requires the complete redesign of the engine itself. Moreover, the extension of this method to regular expressions can be difficult. Basically, the problems of a multi-character approach derive from the nature of the NIDS rules (and of the data being inspected), which are strongly character dependent. Since the rules are often based on matching strings, counting numbers of characters and so on, it is not trivial to extend these analyses to a multi-character implementation. Finally, the effort required of the logic optimization algorithms for these architectures is greater than for a single-character architecture, and the quality of the results is also in doubt. Similar considerations about the limitations of the multi-character approach have been presented elsewhere, showing that when the number of characters grows beyond 4, the performance of this approach, computed as throughput/area (Gbps/#LUTs), decreases. The approach proposed in what follows alleviates these issues, since the area/speed optimization constraints are applied to smaller hardware blocks running at lower frequencies.
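The example above, and the throughput/area figure of merit that the text reports as decreasing beyond 4 characters per cycle, can be restated numerically; the LUT counts used here are assumed for illustration only.

```python
# Numeric restatement of the multi-character example; the LUT counts are
# hypothetical, chosen only to illustrate at-least-linear area growth.

def throughput_gbps(freq_mhz: float, chars_per_cycle: int) -> float:
    # one 8-bit character per lane per clock
    return freq_mhz * chars_per_cycle * 8 / 1000.0

print(throughput_gbps(125, 1))   # 1.0 Gbit/s: the 1-character engine
print(throughput_gbps(125, 4))   # 4.0 Gbit/s: the 4-character engine

# With super-linear area growth (assumed #LUTs), the figure of merit drops:
areas = {1: 10_000, 4: 45_000, 8: 110_000}
merit = {c: throughput_gbps(125, c) / a for c, a in areas.items()}
print(merit[1] > merit[4] > merit[8])   # True: Gbps per LUT decreases with width
```

Throughput scales linearly with width, so any faster-than-linear area growth is enough to make the Gbps/#LUTs ratio fall, as the text observes beyond 4 characters.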
CHAPTER 7
The overall system implementation mainly deals with the performance gains that can be achieved by dispatching different traffic types to different clusters, consistently distributing different content matching rules over different engines, independently optimizing the area-frequency tradeoff for each deployed engine, and dimensioning each engine depending on the traffic-load conditions.
despite the fact that the latter contains a significantly lower number of rules (“only” 1682 versus the 2394 of SME-WEB UPLINK).
Figure 7.1: Area and Area/frequency synthesis results for the SME-NONWEB
PORTLOW [8]
Conversely, for small circuits the implementation with the lowest area-delay product is typically the fastest one. As an example, Fig. 7.2 documents results for the SME-WEB UPLINK [8] case; the remaining SMEs are qualitatively similar and are not reported to save space.
Figure 7.2: Area and Area/frequency synthesis results for the SME-WEB UPLINK [8]
When circuit complexity increases, limitations in the synthesis process largely impact the results. Indeed, with small SMEs, speed optimization is carried out at the cost of a limited increase in logic resource consumption (the number of LUTs for the various implementations differs by just about 5%), thus making the fastest implementation also the one that provides the best throughput/area trade-off. Instead, when the circuit size grows, not only is the best implementation in terms of throughput/area trade-off found at an intermediate speed, but the variability in the number of LUTs reaches as much as 50%, with 32408 LUTs for the smallest implementation and 47516 for the fastest. This is believed to be caused by limitations in the heuristics exploited by the synthesis algorithms. All data presented in this section refer to results obtained without enabling the retiming available in the Xilinx synthesis tools. Enabling retiming yields results that are quantitatively different, but very similar to those presented here.
Table 7.3 summarizes the best throughput/area results achieved for all five SMEs synthesized. For the sake of comparison, the table further reports results obtained by the synthesis of a single SME supporting all the rules. Note that such a single synthesis uses more than 8% extra LUTs with respect to the multiple-SME case, and that in all cases but one the resulting single-SME implementation has a lower speed. This is quite remarkable since, as discussed in the section on Snort rule set subdivision, a disjoint partition of rules was not technically achievable (rules belonging to sets A and B had to be reimplemented in most of the SMEs, see Table 7.2, and the total number of rules implemented is thus 7176).
TABLE 7.3: SYNTHESIS RESULTS FOR THE BEST IMPLEMENTATIONS OF
THE FIVE SME
Consider a total peak load of 10 Gbps. From Table 7.3, it can be seen that the best throughput/area implementation for the case of a single SME supporting all the IDS ruleset operates at 241 MHz, i.e. it can support up to about 1.9 Gbps. Therefore, a cluster of 6 replicated engines is needed to sustain a 10 Gbps peak load, which in turn implies that a total of 388206 LUTs is required.
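The dimensioning arithmetic above can be verified directly; the per-engine LUT count of 64701 is implied by the totals quoted in the text.

```python
# Checking the dimensioning figures above: a single-character engine at
# 241 MHz processes 241 * 8 = 1928 Mbit/s, i.e. about 1.9 Gbit/s, so
# ceil(10 / 1.928) = 6 replicated engines cover a 10 Gbit/s peak load.
from math import ceil

freq_mhz = 241
engine_gbps = freq_mhz * 8 / 1000       # 1.928 Gbit/s per engine
engines = ceil(10 / engine_gbps)        # 6 engines
print(engine_gbps, engines)
print(engines * 64701)                  # 388206 LUTs total, 64701 per engine
```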
In order to dimension a traffic-aware system, information about the actual traffic
composition is needed. Of course, the overall sizing depends on such a traffic mix, and hence
on the specific deployment case, so that no universally valid dimensioning rules are possible.
Nevertheless, some insight into the extent of such resource savings may be gathered by analyzing specific (and realistic) use-case deployments.
CHAPTER 8
8.1 ModelSim
ModelSim is a verification and simulation tool for VHDL, Verilog, SystemVerilog,
SystemC, and mixed-language designs.
After creating the working library, you compile your design units into it. The ModelSim library format is compatible across all supported platforms, so you can simulate your design on any platform without recompiling it.
Loading the Simulator with Your Design and Running the Simulation
With the design compiled, load the simulator with the design by invoking the simulator on a top-level module (Verilog) or on a configuration or entity/architecture pair (VHDL). Assuming the design loads successfully, the simulation time is set to zero, and you enter a run command to begin simulation.
Debugging Your Results
If the results expected are not obtained, use ModelSim’s robust debugging
environment to track down the cause of the problem.
8.2 XILINX
Xilinx, Inc. is the world's largest supplier of programmable logic devices, the inventor
of the field programmable gate array (FPGA) and the first semiconductor company with a
fabless manufacturing model. The programmable logic device market has been led by Xilinx
since the late 1990s. Over the years, Xilinx has fuelled an aggressive expansion to India, Asia
and Europe – regions Xilinx representatives have described as high-growth areas for the
business. Xilinx's sales rose from $560 million in 1996 to almost $2 billion by 2007.
The relatively new President and CEO Moshe Gavrielov – an EDA and ASIC
industry veteran appointed in early 2008 – aims to bolster the company's revenue
substantially during his tenure by providing more complete solutions that align FPGAs with
software, IP cores, boards and kits to address focused target applications. The company aims
to use this approach to capture greater market share from application-specific integrated
circuits (ASICs) and application-specific standard products (ASSPs).
8.2.1 Technology
The Spartan-3 platform was the industry’s first 90nm FPGA, delivering more functionality
and bandwidth per dollar than was previously possible, setting new standards in the
programmable logic industry.
Xilinx designs, develops and markets programmable logic products, including integrated
circuits (ICs), software design tools, predefined system functions delivered as intellectual
property (IP) cores, design services, customer training, field engineering and technical
support. Xilinx sells both FPGAs and CPLDs for electronic equipment manufacturers in end
markets such as communications, industrial, consumer, automotive and data processing.
8.3 VERILOG
Verilog HDL is a hardware description language used to design and document
electronic systems. Verilog HDL allows designers to design at various levels of abstraction. It
is the most widely used HDL with a user community of more than 50,000 active designers.
Verilog was invented as a simulation language; use of Verilog for synthesis was a complete afterthought. Rumors abound that there were merger discussions between Gateway and Synopsys in the early days, in which neither gave the other much chance of success.
In the late 1980s it seemed evident that designers were going to move away from proprietary languages like n dot, HiLo and Verilog towards the US Department of Defense standard HDL, known as the VHSIC Hardware Description Language (VHSIC itself stands for "Very High Speed Integrated Circuit").
Perhaps due to such market pressure, Cadence Design Systems decided to open the Verilog
language to the public in 1990, and thus OVI (Open Verilog International) was born. Until
that time, Verilog HDL was a proprietary language, being the property of Cadence Design
Systems. When OVI was formed in 1991, a number of small companies began working on
Verilog simulators, including Chronologic Simulation, Frontline Design Automation, and
others. The first of these came to market in 1992, and now there are mature Verilog
simulators available from many sources.
As a result, the Verilog market has grown substantially. The market for Verilog related tools
in 1994 was well over $75,000,000, making it the most commercially significant hardware
description language on the market.
An IEEE working group was established in 1993 under the Design Automation Sub-
Committee to produce the IEEE Verilog standard 1364. Verilog became IEEE Standard 1364
in 1995.
As an international standard, the Verilog market continued to grow. In 1998 the market for
Verilog simulators alone was well over $150,000,000; continuing its dominance.
The IEEE working group released a revised standard in March of 2002, known as IEEE
1364-2001. Significant publication errors marred this release, and a revised version was
released in 2003, known as IEEE 1364-2001 Revision C.
8.4 Tools
VHDL descriptions of hardware design and test benches are portable between design
tools, and portable between design centres and project partners. You can safely invest in
VHDL modelling effort and training, knowing that you will not be tied in to a single tool
vendor, but will be free to preserve your investment across tools and platforms. Also, the
design automation tool vendors are themselves making a large investment in VHDL, ensuring
a continuing supply of state-of-the-art VHDL tools.
8.5 Technology
VHDL permits technology independent design through support for top down design
and logic synthesis. To move a design to a new technology you need not start from scratch or
reverse-engineer a specification - instead you go back up the design tree to a behavioural
VHDL description, then implement that in the new technology knowing that the correct
functionality will be preserved.
8.6 Benefits
• Executable specification
• Validate spec in system context (subcontract)
• Functionality separated from implementation
• Simulate early and fast (manage complexity)
• Explore design alternatives
• Get feedback (produce better designs)
• Automatic synthesis and test generation (ATPG for ASICs)
• Increase productivity (shorten time-to-market)
• Technology and tool independence (though FPGA features may be unexploited)
The second design input component is the choice of FPGA device. Each FPGA
vendor typically provides a wide range of FPGA devices, with different performance, cost,
and power tradeoffs. The designer may start with a small (low capacity) device with a
nominal speed-grade. But, if synthesis effort fails to map the design into the target device, the
designer has to upgrade to a high-capacity device. Similarly, if the synthesis result fails to
meet the operating frequency, he has to upgrade to a device with higher speed-grade. In both
the cases, the cost of the FPGA device will increase by 50% or even by 100%. Thus better
synthesis tools are required, since their quality directly impacts the performance and cost of
FPGA designs.
3. HDL Synthesis: The process which translates VHDL or Verilog code into a device netlist format, i.e. a complete circuit with logical elements such as multiplexers, adders/subtractors, counters, registers, flip-flops, latches, comparators, XORs, tristate buffers, decoders, etc.
4. Advanced HDL Synthesis (Low-Level Synthesis): The blocks synthesized in HDL synthesis and advanced HDL synthesis are further defined in terms of low-level blocks such as buffers and lookup tables. This step also optimizes the logic entities in the design by eliminating redundant logic, if any. The tool then generates a netlist file (NGC file) and optimizes it.
2. Mapping: The Map process is run after the Translate process is complete. Mapping maps
the logical design described in the NGD file to the components/primitives (slices/CLBs)
present on the target device. The Map process creates an NCD file.
3. Place and Route: The place and route (PAR) process is run after the design has been mapped. PAR uses the NCD file created by the Map process to place and route the design on the target FPGA device.
4. Bit stream Generation: The collection of binary data used to program reconfigurable
logic device is most commonly referred to as a “bit stream,” although this is somewhat
misleading because the data are no more bit oriented than that of an instruction set processor
and there is generally no “streaming”.
5. Functional Simulation: Post-Translate (functional) simulation can be performed prior to
mapping of the design. This simulation process allows you to verify that your design has been synthesized correctly, and any differences due to the lower level of abstraction can be identified.
Figure 8.4: (a) Design Flow during FPGA Implementation (b) Procedure Followed for
Implementation
6. Static timing analysis: Three types of static timing analysis are performed:
(a) Post-fit Static timing analysis: The Analyze Post-Fit Static timing process opens the
timing Analyzer window, which lets you interactively select timing paths in your design for
tracing the timing results.
(b) Post-Map Static Timing Analysis: You can analyze the timing results of the Map process.
Post-Map timing reports can be very useful in evaluating timing performance (logic delay +
route delay).
(c) Post Place and Route Static Timing Analysis: Post-PAR timing reports incorporate all
delays to provide a comprehensive timing summary. If a placed and routed design has met all
of your timing constraints, then you can proceed by creating configuration data and downloading it to the device.
7. Timing Simulation: After your design has been placed and routed, a timing simulation
netlist can be created. This simulation lets you see how your design will behave in the
circuit.
CHAPTER 9
IMPLEMENTATION AND RESULTS
operations is done. The OR operation is then applied to the outputs of the packet-classification
stage to obtain the final output.
9.2.1 Receiver
Figure 9.2 shows the RTL schematic of the receiver module, and Figure 9.3 shows the
simulation result of the receiver of the NIDS system. Once the clock cycles begin, reset is
brought from high to low and start-of-packet (SOP) is driven high. The output is the set of
values held in temp, which are the values received on Data in. The input to the system
consists of 8-bit packets; 76 packets are received in total, hence 608 bits, and the input is
given as data-in=8’d608.
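The receiver's byte-accumulation behavior can be modeled in software. The following Python sketch is an illustrative model only (not the actual Verilog module): it collects 8-bit packets into a `temp` list after SOP is asserted and confirms the 76 × 8 = 608 bit count stated above.

```python
class ReceiverModel:
    """Illustrative software model of the NIDS receiver: accumulates
    8-bit packets into 'temp' after start-of-packet (SOP) is asserted."""

    def __init__(self):
        self.temp = []      # mirrors the 'temp' register bank in the RTL
        self.sop = False    # start-of-packet flag

    def assert_sop(self):
        self.sop = True

    def receive(self, data_in):
        # Only accept data after SOP, and only values that fit in 8 bits
        if self.sop and 0 <= data_in < 256:
            self.temp.append(data_in)

    def bits_received(self):
        return len(self.temp) * 8

rx = ReceiverModel()
rx.assert_sop()
for byte in range(76):          # 76 packets of 8 bits each
    rx.receive(byte % 256)
print(rx.bits_received())       # 76 * 8 = 608 bits
```

The class and method names here are hypothetical; the model only captures the counting behavior visible in the simulation waveform.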
9.2.2 RAM
Figure 9.4 shows the RTL schematic of the RAM module; its input is the output of the
receiver. Figure 9.5 shows the simulation result of the RAM. With reset held low, once
Write Enable (WE) is driven high the output follows the input values that are stored in
temp.
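The RAM stage's behavior under reset and write-enable can likewise be sketched in software. This is a hedged, simplified model (names and the fixed depth of 76 are assumptions, not the actual RTL): with reset low and WE high, the incoming byte is stored in `temp` and driven on the output.

```python
class RamModel:
    """Illustrative model of the RAM stage: when reset is low and
    write-enable (WE) is high, the input byte is stored in 'temp'
    and the output follows the stored value."""

    def __init__(self, depth=76):   # depth chosen to match 76 packets (assumption)
        self.temp = [0] * depth
        self.out = 0

    def clock(self, reset, we, addr, data_in):
        if reset:
            self.out = 0            # reset clears the output
        elif we:
            self.temp[addr] = data_in   # store the receiver's output byte
            self.out = data_in          # output mirrors the stored input

ram = RamModel()
ram.clock(reset=False, we=True, addr=0, data_in=0xAB)
print(hex(ram.out))   # 0xab
```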
CHAPTER 10
10.1 Advantages
A Network Intrusion Prevention System (IPS) can use signatures designed to detect
and defend against specific types of attacks, such as denial-of-service attacks,
among others.
The system automatically identifies and responds to intrusion activity.
Intrusion prevention drops detected bad traffic in real time, not allowing it to
continue to its destination; this is useful against denial-of-service floods, brute-force
attacks, vulnerability exploitation, and protocol anomalies, and offers protection
against unknown exploits. Intrusion Prevention Systems protect information systems
from unauthorized access, damage, or disruption.
The system supports both static and dynamic patterns and can handle multi-pattern
signature matching.
Even when the number of multi-pattern signatures increases significantly, the resource
cost of the encoders, counters, and comparators remains modest.
The approach remains feasible when the number of signatures reaches the thousands.
Pattern matching on a reconfigurable device is not restricted to a single pattern type;
it also handles combinations of multiple patterns.
Evaluation shows that the system maintains gigabit line-rate throughput without
dropping packets, improving on previous approaches.
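The multi-pattern signature matching described above can be illustrated with a short software sketch. The hardware engine performs the comparisons in parallel; the Python below uses a naive sequential scan purely for illustration (the signatures and payload are made-up examples, and production software engines would use Aho-Corasick [2] instead).

```python
def match_signatures(payload: bytes, signatures: list) -> list:
    """Naive multi-pattern scan over one packet payload.
    Hardware NIDS engines implement this with parallel comparators;
    software NIDS typically use Aho-Corasick [2] for a single-pass scan."""
    return [sig for sig in signatures if sig in payload]

# Hypothetical signatures and packet payload, for illustration only
sigs = [b"/etc/passwd", b"cmd.exe", b"SELECT * FROM"]
pkt = b"GET /cgi-bin/../etc/passwd HTTP/1.0"
print(match_signatures(pkt, sigs))   # [b'/etc/passwd']
```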
10.2 Disadvantages
Placing an IDS inline introduces the risk that a failure of the system brings the
link down.
System dimensioning is a major issue; several trade-offs need to be considered.
The string-matching engine is costly to implement.
CHAPTER 11
CONCLUSION AND FUTURE WORK
11.1 Conclusion
Integrating the network interface hardware and packet analysis hardware into a
single FPGA chip offers an effective defense against attacks caused by malicious
intrusions into the network. A NIDS provides a flexible foundation for network
security operations, and it can be concluded that the security of today’s systems
relies on network intrusion detection.
Overall resource utilization and throughput are the major concerns; the design achieves
low resource utilization and high throughput while respecting the various trade-offs
inherent in the design.
Another avenue of investigation for this work involves exploring how high-capacity
FPGAs can be better leveraged in a large-scale NIDS. Experiments indicate that
current FPGAs are large enough to house very large rule sets. However, compiling these
circuits is very time-consuming on state-of-the-art workstations. These compilation times
and clock speeds can be improved with better floor-planning. Therefore, the next step in
this work will involve refining how placement information is added to the ID cores when
the pattern matching circuitry is generated.
Finally, this work can be improved by considering methods by which an FPGA-based
NIDS is incrementally updated with new patterns over time. In the current approach, the
NIDS must be taken offline briefly and reconfigured whenever rule updates need to be
applied. Because updates are infrequent and require only a few seconds of downtime, this
approach is acceptable for many applications. However, if high availability is required,
partial reconfiguration techniques are potential solutions. With partial reconfiguration,
the NI units would buffer incoming messages while the ID unit is updated with new circuitry.
REFERENCES
[1] P. Borgnat, G. Dewaele, K. Fukuda, P. Abry, K. Cho, “Seven Years and One Day:
Sketching the Evolution of Internet Traffic”, in Proc. of the 28th Annual Joint
Conference of the IEEE Computer and Communications Societies (INFOCOM 2009), pp. 711-719.
[2] A. V. Aho, M. J. Corasick, “Efficient String Matching: An Aid to Bibliographic
Search”, Communications of the ACM, vol. 18, no. 6, June 1975.
[4] Sourcefire, “Snort: The Open Source Network Intrusion Detection System”, available at
http://www.snort.org.
[5] S. Teofili, E. Nobile, S. Pontarelli, G. Bianchi, “IDS Rules Adaptation for Packets Pre-
filtering in Gbps Line Rates”, in Trustworthy Internet, pp. 303–316, Springer, 2011.
[7] Christopher R. Clark, Craig D. Ulmer, “Network Intrusion Detection Systems on FPGAs
with On-Chip Network Interfaces”, in Proc. of the International Workshop on Applied
Reconfigurable Computing (ARC), Algarve, Portugal, February 2005.
[8] Salvatore Pontarelli, Giuseppe Bianchi, Simone Teofili, “Traffic-Aware Network
Intrusion Detection System”, Consorzio Nazionale InterUniversitario per le
Telecomunicazioni (CNIT), University of Rome “Tor Vergata”, Via del Politecnico 1,
00133, Rome, Italy.
PAPER PUBLISHED
SANDEEP NAIK R and PRASHANTH BARLA, “TRAFFIC AWARE DESIGN OF A
HIGH SPEED FPGA NETWORK INTRUSION DETECTION SYSTEM”, International
Journal of Research in Information Technology (IJRIT), ISSN: 2001-5569, June 2014.