
Chapter 1: Introduction. 1. What do you mean by a security policy? Explain the characteristics of a good security policy (5m).

[A-05,09, N-09] A security policy is a document that states in writing how a company plans to protect its physical and information technology (IT) assets. A security policy is often considered a "living document", meaning that the document is never finished but is continuously updated as technology and employee requirements change. A good security policy takes care of four key aspects:
Affordability: the cost and effort that implementing the policy demands.
Functionality: the mechanism of providing security.
Cultural issues: whether the policy fits well with people's expectations, working style and beliefs.
Legality: whether the policy meets legal requirements.

2. Explain the concept of incident handling (4m). [N-03,04,07,08,10, A-08]
Incident handling is a generalized term that refers to the response by a person or organization to an attack. An organized and careful reaction to an incident can mean the difference between complete recovery and total disaster. The following sequence of steps should be followed in the case of an attack:
1) Preparation: Comprehensively addressing the issue of security includes methods to prevent an attack as well as ways to respond to a successful one. In order to minimize the potential damage from an attack, some level of preparation is needed. These practices include making backup copies of all key data on a regular basis, monitoring and updating software regularly, and creating and implementing a documented security policy. Regularly scheduled backups minimize the potential loss of data should an attack occur. Monitoring vendors' and security web sites and mailing lists

is a good way to keep up to date with the state of the software and its patches. It is necessary to update software in order to patch vulnerabilities that are discovered. It is also vital to update anti-virus software in order to keep system protection up to date. A documented security policy that outlines the responses to incidents will prove helpful in the event of an attack, as a reliable set of instructions.
2) Identification of the attack: While preparation is vital for minimizing the effects of an attack, the first post-attack step in incident handling is the identification of the incident. Identification becomes more difficult as the complexity of the attack grows. One needs to identify several characteristics of an attack before it can be properly contained: the fact that an attack is occurring, its effects on local and remote networks and systems, and where it originates.
3) Containment of the attack: Once an attack has been identified, steps must be taken to minimize its effects. Containment allows the user or administrator to protect other systems and networks from the attack and limit the damage. The response phase details the methods used to stop the attack or virus outbreak. Once the attack has been contained, the final phases are recovery and analysis.
4) Recovery and analysis: The recovery phase allows users to assess what damage has been incurred, what information has been lost, and what the post-attack status of the system is. Once the user can be assured that the attack has been contained, it is helpful to conduct an analysis of the attack. Why did it happen? Was it handled promptly and properly? Could it have been handled better? The analysis phase allows users and administrators to determine why the attack succeeded and the best course of action to protect against future attacks.
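The preparation step above calls for regular backups of key data. A minimal sketch of a date-stamped backup copy in Python (the paths and file names are purely illustrative):

```python
import shutil
import datetime
import pathlib

def backup_file(path, backup_dir):
    """Copy `path` into `backup_dir` under a date-stamped name,
    so each scheduled run keeps an independent snapshot."""
    src = pathlib.Path(path)
    dest_dir = pathlib.Path(backup_dir)
    dest_dir.mkdir(parents=True, exist_ok=True)
    stamp = datetime.date.today().isoformat()
    dest = dest_dir / f"{src.stem}-{stamp}{src.suffix}"
    shutil.copy2(src, dest)  # copy2 also preserves file metadata
    return dest
```

In practice such a script would be run from a scheduler (cron, Task Scheduler) and the backup directory would live on separate media.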

3. What are the components of a security policy? Explain (8m). [A-08, N-08,09]
Components of a security policy are:
Computer technology purchasing guidelines, which specify required or preferred security features. These should supplement existing purchasing policies and guidelines.
A privacy policy, which defines reasonable expectations of privacy regarding such issues as monitoring of e-mail, logging of keystrokes, and access to users' files.
An access policy, which defines rights and privileges to protect assets from loss or disclosure by specifying acceptable-use guidelines for users, operating staff, and management. It should provide guidelines for external connections, data communications, connecting devices to a network, and adding new software to systems. It should also specify any required notification messages (e.g., connect messages should warn about authorized usage and line monitoring, and not simply say "welcome").
An accountability policy, which defines the responsibilities of users, operating staff, and management. It should specify an audit capability and provide incident handling guidelines (e.g., what to do and whom to contact if a possible intrusion is detected).
An authentication policy, which establishes trust through an effective password policy, and by setting guidelines for remote-location authentication and the use of authentication devices (e.g., one-time passwords and the devices that generate them).

An availability statement, which sets users' expectations for the availability of resources. It should address redundancy and recovery issues, as well as specify operating hours and maintenance downtime periods. It should also include contact information for reporting system and network failures.
An IT system and network maintenance policy, which describes how both internal and external maintenance people are allowed to handle and access technology. One important topic to address here is whether remote maintenance is allowed and how such access is controlled. Whether outsourcing is allowed, and how it is managed, should also be considered.
A violations reporting policy, which indicates which types of violations (e.g., privacy and security, internal and external) must be reported and to whom the reports are made. A non-threatening atmosphere and the possibility of anonymous reporting will result in a greater probability that a violation will be reported if it is detected.
A supporting information policy, which provides users, staff, and management with contact information for each type of policy violation; guidelines on how to handle outside queries about a security incident, or information which may be considered confidential or proprietary; and cross-references to security procedures and related information, such as company policies and governmental laws and regulations.

4. Difference between packet sniffing and packet spoofing. (Vsit pg no 17)

5. Differentiate between active and passive attack.

6. Risk assessment (8m). [N-03,04,05,08, A-04,06,07]
One of the most important reasons for creating a computer security policy is to ensure that efforts spent on security yield cost-effective benefits.

Although this may seem obvious, it is possible to be misled about where the effort is needed. As an example, there is a great deal of publicity about intruders on computer systems; yet most surveys of computer security show that, for most organizations, the actual loss from insiders is greater. Risk analysis involves determining what you need to protect, what you need to protect it from, and how to protect it. It is the process of examining all of your risks, then ranking those risks by level of severity. This process involves making cost-effective decisions on what you want to protect. There are two elements of risk analysis:
i) Identifying the assets.
ii) Identifying the threats.
For each asset, the basic goals of security are availability, confidentiality and integrity. Each threat should be examined with an eye to how the threat could affect these areas.
Identifying the assets: One step of risk analysis is to identify all the things that need to be protected. Some things are obvious, but some are overlooked, such as the people who actually use the system. The essential point is to list all things that could be affected by a security problem.
i) Hardware: CPUs, boards, keyboards, terminals, workstations, PCs, printers, disk drives, communication lines, terminal servers, routers, etc.
ii) Software: source programs, object programs, utilities, diagnostic programs, operating systems, communication programs, etc.
iii) Data.
iv) Documentation.
v) People: users, administrators, hardware maintainers.
Identifying the threats: Once the assets requiring protection are identified, it is necessary to identify threats to those assets. These threats can then be examined to determine what potential loss exists. It helps to consider what you are trying to protect your assets from. The following are the classic threats that should be considered:

i) Unauthorized access to information.
ii) Unintended and/or unauthorized disclosure of information.
iii) Denial of service.
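The risk-analysis steps above — identifying assets and threats, then ranking risks by severity — can be sketched as a simple scoring pass. The asset impacts and threat likelihoods below are made-up illustrations, not real data:

```python
def rank_risks(assets, threats):
    """Rank (asset, threat) pairs by risk = likelihood * impact.
    `assets` maps asset name -> impact of its loss (1-10);
    `threats` maps threat name -> estimated likelihood (0.0-1.0)."""
    risks = [
        (asset, threat, likelihood * impact)
        for asset, impact in assets.items()
        for threat, likelihood in threats.items()
    ]
    # Highest-risk pairs first, so effort goes where it pays off.
    return sorted(risks, key=lambda r: r[2], reverse=True)

assets = {"customer database": 10, "spare printer": 2}
threats = {"unauthorized access": 0.3, "denial of service": 0.1}
ranked = rank_risks(assets, threats)
```

The ranking makes the cost-effectiveness argument concrete: the customer database facing unauthorized access outranks every risk to the spare printer.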

7. Explain the different principles of security (8m).
Confidentiality: Confidentiality is compromised if an unauthorized person is able to access the message. ATTACK: Interception.
Authentication: The authentication process ensures that the origin of the message is correctly identified. ATTACK: Fabrication.
Integrity: If the content of the message is changed after the sender has sent it and before the intended receiver has received it, the integrity of the message is lost. ATTACK: Modification.
Non-repudiation: There are situations where a user sends a message and later denies having sent it. The principle of non-repudiation defeats such possibilities of denying something after having done it.
Availability: The principle of availability states that resources should be available to authorized parties at all times. ATTACK: Interruption.

8. State and explain different security models.
No security: In this simplest case, the approach could be a decision to implement no security at all.
Security through obscurity: In this model, a system is secure simply because nobody knows about its existence and contents. This approach cannot work for long, as there are many ways an attacker can come to know about it.
Host security: In this scheme, security is enforced on each host individually. This is a very safe approach, but the trouble is that it cannot

scale well. The complexity and diversity of modern sites and organizations make the task even harder.
Network security: Host security is tough to achieve as organizations grow and become more diverse. In this technique, the focus is on controlling network access to the various hosts and their services, rather than on individual host security. This is a very efficient and scalable model.
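The integrity principle from question 7 can be illustrated with a message digest: the sender transmits a digest alongside the message, and any modification in transit changes the recomputed digest. The message text here is just an example:

```python
import hashlib

def digest(message: bytes) -> str:
    """Return a SHA-256 digest of the message."""
    return hashlib.sha256(message).hexdigest()

original = b"Transfer Rs. 1000 to account A"
tampered = b"Transfer Rs. 9000 to account A"

sent_digest = digest(original)
# The receiver recomputes the digest; a mismatch reveals modification.
integrity_ok = digest(original) == sent_digest
integrity_broken = digest(tampered) == sent_digest
```

Note that a bare digest only detects accidental or naive modification; a real integrity mechanism would use a keyed MAC or digital signature so the attacker cannot recompute the digest for the tampered message.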

9. Why do we need security? Discuss.
We need security:
To protect data, files and folders.
To protect our resources.
To protect e-commerce transaction information such as user IDs, passwords, PINs, etc.
To protect a site from being taken down by an attack such as DoS.
To protect our IP address.
To protect our e-mail.
To protect against incoming packets so that no viruses or worms come in.
To inspect outgoing packets so that secrets do not leak out.

Chapter 2: Classes of Attack
1) Define:
I) Botnets: A botnet is usually built around an executable file made by someone to infect your computer and gain control over it. A botnet is often assembled from mIRC (a popular Internet Relay Chat client) and other custom scripts. Botnets are controlled via protocols such as IRC (Internet Relay Chat) and HTTP. A botnet is made for several purposes, the major one being packeting, which is when your connection is used to send ping packets to an IP address at certain intervals, causing the receiving IP to stop responding.

Other purposes of a botnet are to use your host as an IRC BNC (bouncer), which means the attackers can connect to an IRC server masked with your host name as their own so that they don't get banned from the server. Owners of botnets can flood IRC channels. Although the botnet itself cannot strictly be called a virus, the owner of the botnet can use it (the bot) to delete files on your computer or to upload a real virus. It is also possible that some IRC networks are run by botnet owners.
II) Attack: There are two types of attacks, viz. active attacks and passive attacks.
Passive attack: In a passive attack, no attempt is made to modify or change the original message; the attacker merely observes the information. Hence, these attacks are harder to detect.
Active attack: In an active attack, the original message is modified in some way, or a false message is fabricated. In a fabrication (masquerade) attack, the authentication mechanism is fooled by using a false identity. Another example of this type of attack is the man-in-the-middle attack, in which the attacker sits in the middle of the communication link and intercepts everything that passes along it. The two parties think they are talking to each other, when actually each is talking to the attacker. Other forms of active attack include interruption and modification; the denial-of-service attack can also be included among active attacks. Although active attacks are easier to detect, they are not as easy to prevent as passive attacks.
2. Explain bugs and backdoors.
Bugs: Software loopholes are known as bugs.

As is seen and experienced, no computer software is ever free from bugs. A bug may mean some problem in the software that is undesired by its author, or it may mean some kind of limitation in the software which does not allow it to do the appropriate work. These are loopholes and vulnerabilities in the program which make it less secure. Hackers who know about these loopholes can misuse them for their own benefit, whereas others may disclose them to make everyone aware of them. One solution is to keep the software updated with bug fixes, which are normally provided by its developer. There are people who post known vulnerabilities in software to make everyone aware of them.
Backdoors: Access available to unauthorized users is known as a backdoor.
Another security vulnerability is due to backdoors. These are programs which, when stored on the target system, may allow easy access to hackers or give them sufficient information about the target to carry out attacks. There are several backdoor programs used by hackers. These are like automated tools which carry out the destructive job for the hackers. Trojan horse programs may also come under this category. In order to remove backdoors, cleaner tools are also available. Backdoors were originally nothing more than methods used by software developers to ensure that they could gain access to an application even if something were to happen in the future to prevent normal access methods. The problem with these backdoors is that, since they are hard-coded, they cannot easily be removed. Should an attacker learn of the backdoor, all the systems running that software would be vulnerable to attack. Common backdoor programs include NetBus and Back Orifice. Either of these, if running on your system, will allow an attacker to perform almost any action.
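Backdoors such as NetBus listen on well-known TCP ports (12345 is the commonly cited NetBus default; 20034 is commonly cited for NetBus 2 — both port/name pairs here are illustrative, not an exhaustive signature list). A minimal local check for such listening ports might look like:

```python
import socket

# Illustrative map of suspect ports to commonly cited backdoor names.
SUSPECT_PORTS = {12345: "NetBus (default)", 20034: "NetBus 2 (default)"}

def port_open(host: str, port: int, timeout: float = 0.5) -> bool:
    """Return True if a TCP connection to host:port succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

def scan_for_backdoors(host: str = "127.0.0.1"):
    """Report suspect ports that accept connections on `host`."""
    return {port: name for port, name in SUSPECT_PORTS.items()
            if port_open(host, port)}
```

A real backdoor scanner would also inspect running processes and startup entries, since backdoors can listen on arbitrary ports.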

3. Write a note on information leakage. ( Pg25 vsit )
4. Denial of Service (DoS)

DoS attacks, also known as availability attacks, are much more significant in networks than in other contexts. There are many accidental and malicious threats to availability or continued service. Common forms include:
i) Transmission failure
ii) Connection flooding
iii) Echo-chargen
iv) Ping of death
v) Smurf
vi) SYN flood
vii) Teardrop
viii) Traffic redirection
ix) DNS (Domain Name Server) attacks
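Of the forms above, the SYN flood works by sending many connection requests without ever completing the TCP handshake, exhausting the victim's half-open connection table. A toy rate-based detector over a log of (source_ip, flag) events can sketch the idea (the threshold and event format are arbitrary choices for illustration):

```python
from collections import Counter

def detect_syn_flood(events, threshold=100):
    """events: iterable of (source_ip, flag) tuples where flag is
    'SYN' or 'ACK'. Flag any source whose count of un-acknowledged
    SYNs exceeds `threshold` -- the signature of a SYN flood."""
    pending = Counter()
    for ip, flag in events:
        if flag == "SYN":
            pending[ip] += 1
        elif flag == "ACK" and pending[ip] > 0:
            pending[ip] -= 1
    return [ip for ip, n in pending.items() if n > threshold]
```

Real defenses (SYN cookies, half-open connection timeouts) work in the kernel rather than on a packet log, but the imbalance they look for is the same.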

5. Distributed Denial of Service (DDoS) Attack.
Denial of service attacks are powerful by themselves, but an attacker can construct a two-stage attack that multiplies the effect many times. This multiplicative effect gives DDoS its power. To prepare a DDoS attack, the attacker does two things. In the first stage, the attacker uses any convenient attack to plant a Trojan horse on a target machine. The Trojan horse does not necessarily harm the system, so it may not be noticed. It may be named after a popular editor or utility, bound to a standard operating system service, or entered into the list of processes activated at startup. No matter how it is situated in the system, it will not attract any attention. The attacker repeats this process with many targets. Each of these targets becomes what is known as a zombie. The target systems carry on with their normal work, unaware of the resident zombie. Then, on command, the zombies attack the victim.

6. Social Engineering. Social engineering is the art of manipulating people into performing actions or making confidential information public, rather than by breaking in or using technical cracking techniques.

While similar to a confidence trick or simple fraud, the term typically applies to trickery or deception for the purpose of information gathering, fraud, or computer system access; in most cases the attacker never comes face-to-face with the victim. A study by Google researchers analyzing fake anti-virus distribution found that up to 90% of all domains involved in distributing fake antivirus software used social engineering techniques.
1) Phishing: Phishing is a technique of fraudulently obtaining private information. Typically, the phisher sends an e-mail that appears to come from a legitimate business (a bank or credit card company), requesting "verification" of information and warning of some dire consequence if it is not provided. The e-mail usually contains a link to a fraudulent web page that seems legitimate, with company logos and content, and has a form requesting everything from a home address to an ATM card's PIN.
2) Baiting: Baiting is like a real-world Trojan horse that uses physical media and relies on the curiosity or greed of the victim. In this attack, the attacker leaves a malware-infected floppy disk, CD-ROM, or USB flash drive in a location where it is sure to be found (bathroom, elevator, sidewalk, parking lot), gives it a legitimate-looking and curiosity-piquing label, and simply waits for the victim to use the device. As a consequence of merely inserting the disk into a computer to see its contents, the user unknowingly installs malware, likely giving the attacker unfettered access to the victim's PC and, perhaps, the targeted company's internal computer network.
3) Quid pro quo: An attacker calls random numbers at a company, claiming to be calling back from technical support. Eventually they will hit someone with a legitimate problem, grateful that someone is calling back to help. The attacker will "help" solve the problem and in the process have the user type commands that give the attacker access or launch malware.
4) Pretexting:

Pretexting is the act of creating and using an invented scenario to engage a targeted victim in a manner that increases the chance the victim will divulge information or perform actions that would be unlikely in ordinary circumstances. It is more than a simple lie, as it most often involves some prior research or setup and the use of prior information for impersonation (e.g., date of birth, Social Security number, last bill amount) to establish legitimacy in the mind of the target.

7. Digging for Worms
Email isn't the only way that viruses and worms spread, but it is one of the most common. If your user population runs susceptible software (i.e., Windows), you really need to filter incoming email.

Various approaches:
i) Screen each piece of incoming mail on each desktop. The drawback is that desktops are often behind in their updates, and getting new pattern files to them quickly can be difficult.
ii) Install a centralized filter for malware. Use MX records to ensure that all inbound email goes to a central place. Make sure that you also include a wildcard MX record for both your inside and your outside DNS.
It's a good idea to use a different brand of virus scanner for your gateway than for your desktops; all virus scanners are subject to false negatives. Many good ones are out there, both commercial and open source. If you can, obtain your central scanner from a vendor who delivers new patterns rapidly during times of plague. In some cases, you may want to add your own patterns. There are some legal worms (spam, actually), legal because the user consented to their spread by not deciphering the legalese in the license. Antivirus companies have been hesitant to block them, given that they are, technically, legal, but you are under no obligation to allow them inside your organization.






Outgoing email should be scanned, too. There is no convenient analogue to MX records; if you can't rely on your users to configure their mailers correctly, you can encourage them by blocking outbound connections to TCP port 25. That will also help guard against worms that do their own SMTP. Some antivirus software annoys as much as it protects.
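A centralized mail filter like the one described above can be sketched as a simple attachment-extension check. A real gateway scanner matches virus patterns rather than just extensions, and the extension list here is illustrative, not exhaustive:

```python
# Extensions commonly used by mass-mailing worms (illustrative list).
BLOCKED_EXTENSIONS = {".exe", ".scr", ".pif", ".vbs", ".bat"}

def should_quarantine(attachment_names):
    """Return the attachment names that trip the filter."""
    flagged = []
    for name in attachment_names:
        lowered = name.lower()
        if any(lowered.endswith(ext) for ext in BLOCKED_EXTENSIONS):
            flagged.append(name)
    return flagged
```

Running this at the central MX host means every inbound message is checked once, regardless of how up to date each desktop is.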

8. Protocol Failure. How does it affect the security of the systems?
1. The protocols used in networks also have certain limitations or problems contained in them, which prevent the applications from doing the appropriate things.
2. Elsewhere we consider areas where everything works properly but trustworthy authentication is not possible. In protocol failure, we consider areas where the protocols themselves are buggy or inadequate, thus denying the application the opportunity to do the right thing.
3. In the TCP protocol, because of insufficient randomness in the initial sequence numbers required to set up a connection, it is possible for an attacker to engage in source-address spoofing.
4. To be fair, TCP's sequence numbers were not intended to defend against malicious attacks; the weakness matters to the extent that address-based authentication is relied on.
5. Other protocols that rely on sequence numbers are vulnerable to the same kind of attack.
6. The list is long; it includes DNS and any of the RPC-based protocols.
7. In the cryptographic world, finding holes in protocols is a popular game. Sometimes the creator made mistakes, plain and simple. More often, the holes arise because of differing assumptions. Proving the correctness of cryptographic protocols is a difficult business and is the subject of much active research.
8. For now, the holes remain, both in academia and in the real world.
9. SSH is a fine protocol for secure remote access. SSH has a feature whereby a user can specify trusted public keys by storing them in a file called authorized_keys. Then, if the client knows the corresponding private key, the user can log in without having to type a password.
10. In UNIX, this file typically resides in the .ssh directory in the user's home directory. Now, consider the case in which someone uses SSH to log into a

host, and an attacker can spoof the replies to inject a bogus authorized_keys file.
11. The authorized_keys file introduces another vulnerability. If a user gets a new account in a new environment, he typically copies all of the important files there from an existing account, including the .ssh directory, so that all the SSH keys are available from the new account.
12. However, the user may not realize that copying the authorized_keys file means that the new account can be accessed by any key trusted to access the previous account.
How does it affect security?
i) Since protocols work from behind the applications, their failures may increase vulnerability.
ii) An example of such a failure is the TCP protocol failure. TCP provides the circuits or paths for IP datagrams, which may be sent across the network. An attacker watching for packets can get information about the source IP address.
iii) Similarly, IP is a stateless and unreliable protocol; no guarantee of delivery of packets can be given. It is possible for attackers to send packets using any known or valid source address. This is called source-address spoofing. Although the OS controls this, it still cannot be relied on.
The protocols used in networks have certain limitations or problems contained in them, which prevent the applications from doing the appropriate things. Since they work from behind the applications, this may increase the vulnerability. In protocol failures, we consider areas where the protocols themselves are inadequate, thus denying the application the opportunity to do the right thing.

9. Various methods to crack/steal passwords. (eq sol pg 33)
I. Physical security breach.
II. Unintentionally shared.
III. Cracked.
IV. Sniffed.
V. Guessed.
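The "cracked" and "guessed" methods above can be sketched as a dictionary attack against a stolen password hash: the attacker hashes each candidate word and compares it with the stolen value. The word list, password, and choice of SHA-256 are purely illustrative:

```python
import hashlib

def dictionary_attack(target_hash: str, wordlist):
    """Hash each candidate word and compare against the stolen hash.
    Returns the matching password, or None if no candidate matches."""
    for word in wordlist:
        if hashlib.sha256(word.encode()).hexdigest() == target_hash:
            return word
    return None

# Suppose the attacker sniffed or stole this hash of a weak password:
stolen = hashlib.sha256(b"letmein").hexdigest()
found = dictionary_attack(stolen, ["password", "123456", "letmein"])
```

This is why password policies insist on words not found in dictionaries, and why real systems salt and strengthen their hashes to slow such guessing down.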

10. Various password recovery methods. ( vsit pg no.23 )
I. Instant password extraction.
II. Fake password creation.
III. Reset the password.
IV. Brute force attack.
V. Dictionary attack.
VI. Smart force attack.
VII. Plain text attack.

Chapter 3: Computer Security
1. Write a note on viruses. Explain the types of virus.
A virus is a program or piece of code that is loaded onto your computer without your knowledge and runs against your wishes. Viruses can also replicate themselves. All computer viruses are man-made. A simple virus that makes a copy of itself over and over again is relatively easy to produce. Even such a simple virus is dangerous because it will quickly use all available memory and bring the system to a halt. An even more dangerous type of virus is one capable of transmitting itself across networks and bypassing security systems.
Signs of a computer infection:
a. Your computer functions slower than normal
b. Your computer responds slowly and freezes often
c. Your computer restarts itself often
d. You see uncommon error messages, distorted menus, and dialog boxes
e. Applications on your computer fail to work correctly
f. You fail to print correctly
Types of viruses:
i) Boot sector viruses: A boot sector virus infects diskettes and hard drives. All disks and hard drives contain smaller sections called sectors. The first sector is called the boot sector; it carries the Master Boot Record (MBR), whose function is to read and load the operating system. So, if a virus infects the boot sector or MBR of a disk, such as a floppy disk, your hard drive can become infected if

you re-boot your computer while the infected disk is in the drive. Once your hard drive is infected, all diskettes that you use in your computer will be infected. Boot sector viruses often spread to other computers through shared infected disks and pirated software. The best way to disinfect your computer of a boot sector virus is to use antivirus software.
ii) Program / file-infecting / parasitic viruses: A program virus becomes active when the program file (usually with extension .BIN, .COM, .EXE, .OVL, or .DRV) carrying the virus is opened. Once active, the virus makes copies of itself and infects other programs on the computer.
iii) Multipartite viruses: A multipartite virus is a hybrid of boot sector and program viruses. It infects program files, and when an infected program is run it infects the boot record. So the next time you start up your computer, it will infect your local drive and other programs.
iv) Macro viruses: A macro virus is programmed as a macro embedded in a document. Many applications, such as Microsoft Word and Excel, support macro languages. Once a macro virus gets onto your computer, every document you produce can become infected. This type of virus is relatively new and may slip by your antivirus software if you don't have the most recent version installed.
v) Stealth viruses: A stealth virus disguises itself by using certain tactics to prevent being detected by antivirus software, such as concealing the change in its file size, hiding itself in memory, and so on. This type of virus is nothing new; in fact, the first PC virus, dubbed Brain, was a stealth virus. A good antivirus should be able to detect a stealth virus lurking on your hard drive by checking the areas the virus infected and evidence in memory.
vi) Polymorphic viruses: A polymorphic virus acts like a chameleon, changing its virus signature (also known as its binary pattern) every time it multiplies and infects a new file. By changing binary patterns, a polymorphic virus becomes hard for an antivirus program to detect.
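Antivirus signature scanning, which the polymorphic virus defeats by changing its binary pattern, can be sketched as a byte-pattern search. The signature bytes and names below are made up for illustration:

```python
# Hypothetical signature database: byte pattern -> virus name.
SIGNATURES = {
    b"\xde\xad\xbe\xef": "Example.Virus.A",
    b"\x90\x90\xcc\xcc": "Example.Virus.B",
}

def scan(data: bytes):
    """Return the names of signatures found in `data`. A polymorphic
    virus evades this by re-encoding itself so that no fixed byte
    pattern appears in every copy."""
    return [name for sig, name in SIGNATURES.items() if sig in data]
```

This is why scanners need heuristic and emulation-based detection in addition to plain pattern matching.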

2. Describe the life cycle of Virus. Computer viruses have a life cycle that starts when they're created and ends when they're completely eradicated. The following outline describes each stage.


Creation: Until a few years ago, creating a virus required knowledge of a computer programming language. Today anyone with even a little programming knowledge can create a virus. Usually, though, viruses are created by misguided individuals who wish to cause widespread, random damage to computers.
Replication: Viruses replicate by nature. A well-designed virus will replicate for a long time before it activates, which allows it plenty of time to spread.
Activation: Viruses that have damage routines will activate when certain conditions are met, for example, on a certain date or when a particular action is taken by the user. Viruses without damage routines don't activate in this sense, instead causing damage by stealing storage space.
Discovery: This phase doesn't always come after activation, but it usually does. When a virus is detected and isolated, it is sent to the International Computer Security Association in Washington, D.C., to be documented and distributed to antivirus developers. Discovery normally takes place at least a year before the virus might have become a threat to the computing community.
Assimilation: At this point, antivirus developers modify their software so that it can detect the new virus. This can take anywhere from one day to six months, depending on the developer and the virus type.
Eradication: If enough users install up-to-date virus protection software, any virus can be wiped out. So far no virus has disappeared completely, but some have long ceased to be a major threat.






What are Trojan Horses? How do they penetrate into the system? How can they be detected and removed from the system?
A Trojan horse, or Trojan, is software that appears to perform a desirable function for the user before it is run or installed, but (perhaps in addition to the expected function) steals information or harms the system. The term is derived from the Trojan Horse story in Greek mythology. A Trojan is a destructive program that masquerades as a benign (helpful) application. Unlike viruses, Trojan horses do not replicate themselves, but they can be just as destructive. One of the most insidious types of Trojan horse is a program that claims to rid a computer of viruses but instead introduces viruses onto the computer. A Trojan may allow a hacker remote access to a target computer system. Once a Trojan has been installed on a target computer system, a hacker may have remote access to the computer and may perform various operations, limited by the user privileges on the target computer system and the design of the Trojan. Operations that could be performed by a hacker on a target computer system include:

Use of the machine as part of a botnet (e.g., to perform automated spamming or to distribute denial-of-service attacks)

Planting a backdoor
Data theft (e.g., retrieving passwords or credit card information)
Installation of software, including third-party malware
Downloading or uploading of files on the user's computer
Modification or deletion of files
Keystroke logging
Watching the user's screen
Crashing the computer
Anonymizing internet viewing
Trojan horses used in this way require interaction with a hacker to fulfill their purpose, though the hacker need not be the individual responsible for distributing the Trojan horse. It is possible for individual hackers to scan computers on a network using a port scanner in the hope of finding one with a malicious Trojan horse installed, which the hacker can then use to control the target computer. A recent innovation in Trojan horse code takes advantage of a security flaw in older versions of Internet Explorer and Google Chrome to use the host computer as an anonymizer proxy to effectively hide internet usage. The hacker is able to view internet sites while the tracking cookies, internet history, and any IP logging are maintained on the host computer. The host computer may or may not show the internet history of the sites viewed using the computer as a proxy. The first generation of anonymizer Trojan horses tended to leave their tracks in the

page view histories of the host computer. Newer generations of the Trojan horse tend to "cover" their tracks more efficiently. Several versions of Slavebot have been widely circulated in the US and Europe and are the most widely distributed examples of this type of Trojan horse. According to a survey conducted by BitDefender from January to June 2009, "Trojan-type malware is on the rise, accounting for 83-percent of the global malware detected in the world". Detection: Trojan horses can be detected with most of latest Anti-virus. Anti-Spyware or All-in-one (AIO) security softwares. The best way to keep system safe from Trojan Horse is to update Anti-virus softwares regularly and stop downloading pirated softwares and visiting suspicious links. Once the Antivirus detects Trojan Horse, it automatically deletes it. Remove Trojan Horse: Keep antivirus updated. Do not download pirated softwares. Scan external device before using them. Follow security measures.

Explain functioning of any 2 types of viruses.

Structure of a Virus: Here is a simple structure of a virus. In the infected binary, at a known byte location in the file, the virus inserts a signature byte used to determine whether a potential victim program has already been infected.

    void V() {
        infectExecutable();
        if (triggered()) {
            doDamage();
        }
        /* jump to main of infected program */
    }

    void infectExecutable() {
        file = choose an uninfected executable file;
        prepend V to file;
    }

    void doDamage() {
        /* ... */
    }

    int triggered() {
        return (some test ? 1 : 0);
    }

The above virus makes the infected file longer than it was, making it easy to spot. There are many techniques to leave the file length, and even the checksum, unchanged and yet infect the file.

Chapter 4: Firewalls and Proxy Servers

1. Compare traditional and Distributed Firewall.

Traditional Firewall:
- Provides a single entry point to the network.
- More prone to attacks.
- Cannot prevent inside attacks.
- Less secure implementation.
- Servers have to be inside the perimeter.
- Less flexibility in operations.
- Provides the same level of security throughout.

Distributed Firewall:
- Provides multiple check points.
- Less prone to attacks.
- Possible to prevent inside attacks.
- More secure implementation.
- Servers can be outside the perimeter.
- More flexibility in operations.
- Different security levels possible.

2. Discuss on Deny all / Accept all stance. As far as allowing or disallowing services is concerned, there are mainly two approaches or methods. The first is the Allow All approach and the second is the Deny All approach. We can describe the two schools of thought as "that which is not expressly forbidden is permitted" and "that which is not expressly permitted is forbidden". The first is more open, while the latter is more conservative. In the default-allow approach, everything is open by default; later on, rules can be added for whatever you wish to block from the user. Users, always interested in new features, prefer this. In the second approach, everything is blocked by default, and rules are added as and when required to open up the services/information that is needed or is considered trustworthy. Security experts, relying on several decades of experience, recommend this. On application-level gateways or proxy servers, especially Linux-based ones, there exist configuration files called hosts.allow and hosts.deny in which specific configuration can be made. The addresses added to the hosts.allow file will necessarily be allowed, and similarly the addresses in the hosts.deny file will be denied. 3. Short note on Distributed Firewall. Distributed firewalls are host-resident security solutions which protect the enterprise network's critical end points against intrusion.

As the name suggests, the firewall implementation is distributed over multiple points rather than providing a single entry point into the network as in the case of traditional firewalls. With distributed firewalls, one can provide a separate level of security to the Web servers, application servers or individual nodes in the setup. They are meant to provide higher security to corporate networks. They can also prevent malicious inside attacks within the network, as they treat all traffic as unfriendly whether it originates from the internet or from the local network. This is an important advantage, since most attacks are initiated from inside the network. These firewalls also guard individual machines the same way a perimeter firewall guards the entire network. They are like personal firewalls, but with additional features including centralized management, logging and fine access-control granularity. These are the prime features considered for the implementation of firewalls in a large enterprise. They protect remote employees, the precious servers of the enterprise, the internal network as well as individual terminals. Presently, security-conscious organizations of various types are deploying distributed firewalls, which offer unlimited scalability while keeping the same performance. In some cases, the perimeter firewall need not be installed at all when distributed firewalls are deployed. 4. Difference between application level proxies and circuit level proxies. 5. What do you mean by proxy server / application gateway? What does a proxy server do? In computer networks, a proxy server is a server (a computer system or an application) that acts as an intermediary for requests from clients seeking resources from other servers. A client connects to the proxy server, requesting some service, such as a file, connection, web page, or other resource, available from a different server.
The proxy server evaluates the request according to its filtering rules. For example, it may filter traffic by IP address or protocol. If the request is validated by the filter, the proxy provides the resource by connecting to the relevant server and requesting the service on behalf of the client. A proxy server may optionally alter the client's request or the server's response, and sometimes it may serve the request without contacting the specified server. In this case, it 'caches' responses from the remote server and serves subsequent requests for the same content directly from the cache.
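The filtering-plus-caching behavior described above can be sketched as follows. This is a minimal illustration, not a real proxy: fetch_from_origin is a hypothetical stand-in for the actual network request, and a set of blocked client addresses plays the role of the filtering rules.

```python
# Minimal sketch of proxy-server behavior: filter by client IP,
# then serve from a cache or contact the origin server.
def fetch_from_origin(url):
    # Hypothetical stand-in for the real network request.
    return f"<content of {url}>"

def make_proxy(blocked_ips):
    cache = {}

    def handle(client_ip, url):
        if client_ip in blocked_ips:    # filtering rule: reject blocked clients
            return "403 Forbidden"
        if url not in cache:            # cache miss: request on client's behalf
            cache[url] = fetch_from_origin(url)
        return cache[url]               # cache hit: origin not contacted again

    return handle
```

A second request for the same URL is served from the cache, so the origin server is not contacted again.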

Today, most proxies are web proxies, allowing access to content on the World Wide Web. 6. Concept of packet filtering. This is the most basic level of firewall. As the name suggests, this firewall checks each and every IP packet individually, whether coming into or going out of the private network. According to the selected policies it determines whether to accept a packet or reject it. This is the first line of defense against intruders, and it is not totally foolproof; it has to be combined with other techniques as well, to strengthen security. Packet filtering can also be incorporated in routers. Many routers have this capability, in which rule-sets can be hardcoded into them. Thus, apart from normal routing decisions, a router can also be made capable of performing packet filtering. Another implementation of packet filters is kernel-based, in which the kernel is configured to carry out packet filtering. In the case of the Linux OS, command-line tools such as ipchains (now replaced by iptables) can be used to define, modify or apply specific rule-sets for packet filters. Packet filtering is a simple and straightforward mechanism. It works at the internet layer in the TCP/IP model. Usually, a packet is checked for the following information for filtering:
1. Source IP address.
2. Destination IP address.
3. Source TCP/UDP port.
4. Destination TCP/UDP port.
Packet filters do not see inside a packet; they block or accept packets solely on the basis of the IP addresses and ports. Thus, any detail in the packet's data field is beyond the capability of a packet filter. Hence, using these, a security decision can only suggest blocking certain addresses or websites which are not trustworthy.
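The four header fields listed above can be sketched as a simple rule matcher. This is an illustrative toy, not iptables: the Packet/Rule structures and the first-match-wins policy are assumptions made for the example, and the default action implements a deny-all stance.

```python
from dataclasses import dataclass

@dataclass
class Packet:
    src_ip: str
    dst_ip: str
    src_port: int
    dst_port: int

@dataclass
class Rule:
    src_ip: str = "*"     # "*" matches any address
    dst_ip: str = "*"
    src_port: int = 0     # 0 matches any port
    dst_port: int = 0
    action: str = "DENY"

def matches(rule, pkt):
    # A packet filter inspects only these four header fields.
    return (rule.src_ip in ("*", pkt.src_ip)
            and rule.dst_ip in ("*", pkt.dst_ip)
            and rule.src_port in (0, pkt.src_port)
            and rule.dst_port in (0, pkt.dst_port))

def filter_packet(rules, pkt, default="DENY"):
    # First matching rule wins; the default implements a deny-all stance.
    for rule in rules:
        if matches(rule, pkt):
            return rule.action
    return default
```

With a single rule accepting destination port 80, web traffic is accepted and everything else (e.g. SSH on port 22) falls through to the deny-all default.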

Advantages of packet filters:
1. Simple and straightforward mechanism.
2. Operation is totally transparent to the users.
3. Faster in operation.

Disadvantages of packet filters:

1. Complex rule-sets: rule-sets to be defined for a packet filter may be very complex to specify as well as to test.
2. In order to allow certain access, some exceptions to the rules need to be added. This may add further to the complexity.
3. Some packet filters do not filter on the source TCP/UDP ports at all, which may increase the flaws in the filtering system.
4. They do not possess any auditing (reviewing) capabilities, and auditing is considered to be of major importance in security.
5. All applications on the internet may not be fully supported by packet-filtering firewalls.
6. These types of firewalls do not attempt to hide the private network topology from the outside network, and hence the topology gets exposed.
7. Using packet filters may be complex, as a graphical interface is not available in most cases.

7. Different types of firewall configuration.

8. Compare stateful multi-layer inspection and static single-layer inspection.
i) There are 2 classes of firewall architecture: single layer and multiple layer.
ii) In a single-layer architecture, one host is allocated all firewall functions. This method is usually chosen when either cost is a key factor or there are only 2 networks to connect.
iii) The advantage of this architecture is that any changes to the firewall need only be made at a single host.
iv) The biggest disadvantage of the single-layer approach is that it provides a single entry point.
v) In a multiple-layer architecture the firewall functions are distributed among 2 or more hosts, normally connected in series.
vi) This method is more difficult to design and manage, and it is also more costly, but it can provide significantly greater security by diversifying the firewall defense.
vii) A common design for this type of architecture uses 2 firewall hosts with a demilitarized zone (DMZ) between them, separating the internet and the internal network.
viii) Using this setup, traffic between the internal network and the internet must pass through 2 firewalls and the DMZ.

9. What is firewall? Show its architecture. State the types of firewall.
i) A firewall is a device that filters all traffic between a protected or "inside" network and a less trustworthy or "outside" network.



ii) Usually a firewall runs on a dedicated device; because it is a single point through which traffic is channeled, performance is important, which means non-firewall functions should not be done on the same machine.
iii) Because a firewall is executable code, an attacker could compromise the code and execute from the firewall's device. Thus, the fewer pieces of code on the device, the fewer tools the attacker would have after compromising the firewall. Firewall code usually runs on a proprietary (owned) or carefully minimized OS.
iv) The purpose of a firewall is to keep "bad things" outside a protected environment. To accomplish that, firewalls implement a security policy that is specifically designed to address what bad things might happen. For example, the policy might be to prevent any access from outside (while still allowing traffic to pass from inside to outside). Alternatively, the policy might permit accesses only from certain places, from certain users, or for certain activities.
v) Part of the challenge of protecting a network with a firewall is determining which security policy meets the needs of the installation.

Design of Firewall:
i) A firewall is a special form of reference monitor. By carefully positioning a firewall within a network, we can ensure that all network accesses that we want to control must pass through it. This restriction meets the "always invoked" condition.
ii) A firewall is typically well isolated, making it immune to modification. Usually a firewall is implemented on a separate computer, with direct connections only to the outside and inside networks. This isolation is expected to meet the "tamperproof" requirement.
iii) Firewall designers strongly recommend keeping the functionality of the firewall simple.

Types of Firewall:
i) Packet filtering gateways or screening routers.
ii) Stateful inspection firewalls.
iii) Application proxies.
iv) Guards.
v) Personal firewalls.

Chapter 5 : Cryptography.

1. Write a note on Digital Signature.
2. Explain the sender-side and the receiver-side working of a digital signature.
3. Basic steps used in the Data Encryption Algorithm.
4. Compare similarities between symmetric and asymmetric cryptosystems with suitable examples.
5. Explain the Caesar cipher and the mono-alphabetic substitution cipher techniques used to convert a plain text message to cipher text.
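As a pointer for the Caesar cipher question, the cipher shifts each letter a fixed number of positions down the alphabet, wrapping around at the end. A minimal sketch, assuming the classical shift of 3:

```python
def caesar_encrypt(plaintext, shift=3):
    # Shift each letter forward by `shift` positions, wrapping
    # around the alphabet; non-letters are left unchanged.
    result = []
    for ch in plaintext:
        if ch.isalpha():
            base = ord('A') if ch.isupper() else ord('a')
            result.append(chr((ord(ch) - base + shift) % 26 + base))
        else:
            result.append(ch)
    return "".join(result)
```

Decryption simply shifts back by the same amount (a shift of -3, or equivalently +23). A mono-alphabetic substitution cipher generalizes this by mapping each letter through an arbitrary fixed permutation of the alphabet rather than a uniform shift.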