
J.D. BIRLA INSTITUTE (DEPARTMENT OF MANAGEMENT)

SUBJECT: Marketing (Paper 2)
SEMESTER: 6
SECTION: B
GROUP: 7
ROLL NO: 141, 143, 145, 146, 148, 149, 153, 154, 155 and 157
TOPIC: Security and Control for BIS

INTRODUCTION

SECURITY AND PROTECTION

Can you imagine what would happen if you tried to link to the Internet without a firewall or antivirus software? Your computer would be disabled in a few seconds, and it might take you many days to recover. If you used the computer to run your business, you might not be able to sell to your customers or place orders with your suppliers while it was down. And you might find that your system had been penetrated by outsiders, who perhaps stole or destroyed valuable data, including confidential payment data from your customers. If too much data were destroyed or divulged, your business might never be able to operate. In short, if you operate a business today, you need to make security and control a top priority.

Security refers to the policies, procedures, and technical measures used to prevent unauthorized access, alteration, theft, or physical damage to information systems. Controls consist of all the methods, policies, and organizational procedures that ensure the safety of the organization's assets, the accuracy and reliability of its accounting records, and operational adherence to management standards.

Security and protection are often used together, but they do two different types of work:

Protection: guarding users' (internal or external) data against interference from internal (authorized) users of the system.
Security: guarding users' (internal or external) data, or the system itself, against interference and access from external (unauthorized) users of the system.

Security and control are important but often neglected areas of information system investment. Firms relying on computer systems for their core business functions can lose sales and productivity. Information assets, such as confidential employee records, trade secrets, or business plans, lose much of their value if they are revealed to outsiders or if they expose the firm to legal liability. New laws, such as HIPAA, the Sarbanes-Oxley Act, and the Gramm-Leach-Bliley Act, require companies to practice stringent electronic records management and to adhere to strict standards for security, privacy, and control. Legal actions requiring electronic evidence and computer forensics also require firms to pay more attention to security and electronic records management.

Firms need to establish an appropriate organizational and managerial framework for security and control to use technologies effectively to protect their information resources. A risk assessment evaluates information assets, identifies control points and control weaknesses, and determines the most cost-effective set of controls. Firms must also develop a coherent corporate security policy and plans for continuing business operations in the event of disaster or disruption. The security policy includes policies for acceptable use and authorization. A disaster recovery plan provides procedures and facilities for restoring computing and communication services after they have been disrupted, whereas a business continuity plan focuses on how the company can restore business operations. Comprehensive and systematic MIS auditing helps organizations determine the effectiveness of the security and controls of their information systems.

GOALS AND PRINCIPLES OF PROTECTION


Goals of Protection

- To prevent mischievous, intentional violation of an access restriction by a user.
- To ensure that each program component active in a system uses system resources only in ways consistent with stated policy.

Principle of Protection

A key, time-tested guiding principle for protection is the principle of least privilege. It dictates that programs, users, and even systems be given just enough privileges to perform their tasks.

PROTECTION MECHANISM
In computer science, protection mechanisms are built into computer architecture to support the enforcement of security policies. A simple definition of a security policy is "to set who may use what information in a computer system". The access matrix model, first introduced in 1971, is a generalized description of operating system protection mechanisms. The separation of protection and security is a special case of the separation of mechanism and policy.

Access Control List

An access control list (ACL), with respect to a computer file system, is a list of permissions attached to an object. An ACL specifies which users or system processes are granted access to objects, as well as what operations are allowed on given objects. Each entry in a typical ACL specifies a subject and an operation. For instance, if a file has an ACL that contains (Alice, delete), this would give Alice permission to delete the file.

Capability List

Capability-based security is a concept in the design of secure computing systems, one of the existing security models. A capability (known in some systems as a key) is a communicable, unforgeable token of authority. It refers to a value that references an object along with an associated set of access rights. A user program on a capability-based operating system must use a capability to access an object. Capability-based security refers to the principle of designing user programs such that they directly share capabilities with each other according to the principle of least privilege, and to the operating system infrastructure necessary to make such transactions efficient and secure. Capability-based security is to be contrasted with an approach that uses hierarchical protection domains. Although most operating systems implement a facility which resembles capabilities, they typically do not provide enough support to allow the exchange of capabilities among possibly mutually untrusting entities to be the primary means of granting and distributing access rights throughout the system. A capability-based system, in contrast, is designed with that goal in mind.
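The ACL idea is easiest to see in code. The sketch below is a minimal, illustrative model (the file names, subjects, and operations are invented, not taken from any real operating system): each object carries a list of (subject, operation) entries, and an access check simply asks whether the requested pair appears in the object's list.

```python
# Minimal sketch of an ACL check; all names and objects are invented
# for illustration, not taken from a real operating system.
acl = {
    # object -> set of (subject, operation) entries
    "payroll.txt": {("Alice", "read"), ("Alice", "delete"), ("Bob", "read")},
}

def is_allowed(subject: str, operation: str, obj: str) -> bool:
    """Grant access only if the ACL explicitly contains the entry."""
    return (subject, operation) in acl.get(obj, set())

print(is_allowed("Alice", "delete", "payroll.txt"))  # True
print(is_allowed("Bob", "delete", "payroll.txt"))    # False: never granted
```

A capability list is the same access matrix sliced the other way: instead of each object carrying a list of subjects, each subject carries unforgeable (object, rights) tokens that it presents when it wants to access an object.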
Data Encryption

The Data Encryption Standard (DES) was a previously predominant algorithm for the encryption of electronic data. It was highly influential in the advancement of modern cryptography in the academic world. Developed in the early 1970s at IBM and based on an earlier design by Horst Feistel, the algorithm was submitted to the National Bureau of Standards (NBS) following the agency's invitation to propose a candidate for the protection of sensitive, unclassified electronic government data. In 1976, after consultation with the National Security Agency (NSA), the NBS eventually selected a slightly modified version, which was published as an official Federal Information Processing Standard (FIPS) for the United States in 1977. The publication of an NSA-approved encryption standard simultaneously resulted in its quick international adoption and widespread academic scrutiny. Controversies arose out of classified design elements, a relatively short key length for a symmetric-key block cipher, and the involvement of the NSA, nourishing suspicions about a backdoor. While these suspicions eventually turned out to be unfounded, the intense academic scrutiny the algorithm received over time led to the modern understanding of block ciphers and their cryptanalysis. DES is now considered insecure for many applications, chiefly because its 56-bit key size is too small; in January 1999, distributed.net and the Electronic Frontier Foundation collaborated to publicly break a DES key in 22 hours and 15 minutes. There are also some analytical results which demonstrate theoretical weaknesses in the cipher, although they are infeasible to mount in practice. The algorithm is believed to be practically secure in the form of Triple DES, although there are theoretical attacks. In recent years, the cipher has been superseded by the Advanced Encryption Standard (AES). Furthermore, DES has been withdrawn as a standard by the National Institute of Standards and Technology (formerly the National Bureau of Standards).

Some documentation makes a distinction between DES as a standard and DES as an algorithm, referring to the algorithm as the DEA (Data Encryption Algorithm).
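Since DES itself is obsolete, what "data encryption" looks like in practice today is closer to the sketch below, which uses the widely available third-party Python package cryptography (its Fernet recipe is built on AES, the successor to DES). This is an illustration of symmetric encryption only, not a complete key-management scheme.

```python
# Symmetric encryption sketch using the third-party "cryptography"
# package (pip install cryptography). Fernet is an AES-based recipe;
# DES itself should no longer be used.
from cryptography.fernet import Fernet

key = Fernet.generate_key()          # the shared secret; guard it carefully
cipher = Fernet(key)

token = cipher.encrypt(b"confidential payment data")   # opaque ciphertext
print(cipher.decrypt(token))                           # original bytes back
```

Anyone holding the key can decrypt; anyone without it sees only the opaque token. That is exactly the property the 56-bit DES key eventually became too small to guarantee.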

TYPES OF SECURITY THREATS

Breach of Confidentiality: This type of violation involves unauthorized reading of data or theft of information. Hackers are individuals who gain illegal entry into a computer system, often without malicious intent but simply to see if they can do it. Although insiders constitute the greatest threat to information security, the hacker problem is serious. Other terms sometimes used in this context are crackers and attackers. Actions taken by hackers, crackers, and attackers may be limited to simply browsing through information in a system or may extend to stealing, altering, or destroying information. Systems accessible via remote access are particularly vulnerable to hacker activity.

Breach of Integrity: This type of violation involves unauthorized modification of data. Malicious code can attack both personal computers and more sophisticated systems. It includes viruses, worms, Trojan horses, logic bombs, and other software. Malicious code programs may play harmless pranks, such as displaying unwanted phrases or graphics, or create serious problems by destroying or altering data or crashing systems. The increasing use of corporate networks, e-mail, and the Internet provides fertile ground for the development of new strains of viruses and other malicious code. It is critical that antiviral or antimalware software be kept up-to-date.

Theft of Service: This violation refers to unauthorized use of services. Desktop and laptop computers and the data they contain are vulnerable to theft and/or loss from inside and outside the organization. The increasing use of laptops, tablets, smartphones, and other handheld devices, along with portable media such as external hard drives and USB thumb drives, makes potential inappropriate access to sensitive data such as protected health information (PHI) a greater threat, especially if these devices lack encryption. Measures must be implemented to ensure that patient and corporate data are protected in the event devices are lost, stolen, or misplaced by users. Measures such as encryption and limiting USB usage are strongly recommended practices to enhance information security.

Denial of Service: This violation involves preventing legitimate use of the system by authorized users. End users, data entry clerks, system operators, and programmers may make unintentional errors that contribute to security problems by creating vulnerabilities, crashing systems, or compromising data integrity. Legitimate users may sometimes attempt to access information they do not need to do their jobs simply to satisfy their curiosity. Extremely sensitive information, such as human immunodeficiency virus test results, may be vulnerable to this threat if not adequately protected in system or security design.

VIRUS
A virus is a program or piece of code that is loaded onto your computer without your knowledge and runs against your wishes. Viruses can also replicate themselves. All computer viruses are man-made. A simple virus that can make a copy of itself over and over again is relatively easy to produce. Even such a simple virus is dangerous because it will quickly use all available memory and bring the system to a halt. An even more dangerous type of virus is one capable of transmitting itself across networks and bypassing security systems. Viruses are sometimes confused with worms and Trojan horses, which are technically different. A worm can exploit security vulnerabilities to spread itself automatically to other computers through networks, while a Trojan horse is a program that appears harmless but hides malicious functions. Worms and Trojan horses, like viruses, may harm a computer system's data or performance. Some viruses and other malware have symptoms noticeable to the computer user, but many are surreptitious or simply do nothing to call attention to themselves. Some viruses do nothing beyond reproducing themselves; many others contain instructions to initiate some sort of "event" that affects the infected computer. Each virus has a unique event associated with it. These events and their effects can range from harmless to devastating. For example:

- An annoying message appearing on the computer screen.
- Reduced memory or disk space.
- Modification of data.
- Files overwritten or damaged.
- Hard drive erased.
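Antivirus software detects many such infections by scanning files for known byte patterns ("signatures"). The toy sketch below shows the idea only: the signature table is invented (one entry reuses a fragment of the harmless EICAR test string), and real engines add heuristics, emulation, and frequent signature updates.

```python
# Toy signature-based scanner: report any known byte pattern found in
# a file. Signatures here are invented for illustration only.
KNOWN_SIGNATURES = {
    "demo-virus-a": bytes.fromhex("deadbeef"),
    "eicar-test":   b"X5O!P%@AP",  # fragment of the harmless EICAR test file
}

def scan_file(path: str) -> list[str]:
    """Return the names of all signatures found in the file's bytes."""
    with open(path, "rb") as f:
        data = f.read()
    return [name for name, sig in KNOWN_SIGNATURES.items() if sig in data]

# Example: scan_file("download.exe") might return ["eicar-test"]
```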

WORMS
A computer worm is a self-replicating malware computer program, which uses a computer network to send copies of itself to other nodes (computers on the network), and it may do so without any user intervention. This is due to security shortcomings on the target computer. Unlike a computer virus, it does not need to attach itself to an existing program file, and it relies less on human behavior in order to spread from computer to computer. Worms almost always cause at least some harm to the network, even if only by consuming bandwidth, whereas viruses almost always corrupt or modify files on a targeted computer.

Worms are independent computer programs that copy themselves from one computer to others over a network. Worms can destroy data and programs as well as disrupt or even halt the operation of computer networks. Worms and viruses are often spread over the Internet from files of downloaded software, from files attached to e-mail transmissions, or from compromised e-mail messages or instant messaging. Viruses have also invaded computerized information systems from "infected" disks or infected machines. Worms are currently the most problematic. Viruses and worms are now spreading to wireless computing devices as well. For example, the Cabir worm, which first appeared in 2004, targets mobile phones running the popular Symbian mobile operating system and spreads through Bluetooth wireless networks. Cabir continually seeks other Bluetooth-enabled devices and eventually runs down a device's battery. Mobile device viruses pose serious threats to enterprise computing because so many wireless devices are now linked to corporate information systems.

TRAP DOOR
A trap door is an entry point into an information processing system that circumvents the normal safety measures. It is generally a hidden program or an electronic component that makes the protection system ineffective when certain undocumented commands are issued to it. Moreover, the trap door is often activated by an otherwise normal event or action. A trap door can also be a security hole in a system that was deliberately put in place by the creators or by the people in charge of maintenance. The purpose of these trap doors is not always harmful: certain operating systems, for example, have user accounts with high privileges intended to facilitate the work of maintenance personnel. But in that case, they must be documented. The best guarantee against trap doors is to use software whose source code is public and is analyzed by as many people as possible.

LOGIC BOMB

In a computer program, a logic bomb, also called slag code, is programming code, inserted surreptitiously or intentionally, that is designed to execute (or "explode") under particular circumstances, such as the lapse of a certain amount of time or the failure of a program user to respond to a program command. It is, in effect, a delayed-action computer virus or Trojan horse. A logic bomb, when "exploded," may be designed to display or print a spurious message, delete or corrupt data, or have other undesirable effects. Some logic bombs can be detected and eliminated before they execute through a periodic scan of all computer files, including compressed files, with an up-to-date antivirus program. For best results, the auto-protect and e-mail screening functions of the antivirus program should be activated by the computer user whenever the machine is online. In a network, each computer should be individually protected, in addition to whatever protection is provided by the network administrator. Unfortunately, even this precaution does not guarantee 100-percent system immunity.

FIREWALL
A firewall is a combination of hardware and software that controls the flow of incoming and outgoing network traffic. It is generally placed between the organization's private internal networks and distrusted external networks, such as the Internet, although firewalls can also be used to protect one part of a company's network from the rest of the network. A firewall protects against a number of different online threats:

- Hackers breaking into your computer.
- Some viruses, called worms, that spread from computer to computer over the Internet.
- Some firewalls block outgoing traffic that might originate from a virus infection.

Because the Internet is a public network, any connected computer can find and connect to any other connected computer. A firewall is a barrier between the public Internet and your private computer system. Think of it as a really paranoid bouncer who stops anyone coming into your computer if they're not on the guest list. The firewall acts like a gatekeeper who examines each user's credentials before access is granted to a network. The firewall identifies names, IP addresses, applications, and other characteristics of incoming traffic. It checks this information against the access rules that have been programmed into the system by the network administrator. The firewall prevents unauthorized communication into and out of the network. In large organizations, the firewall often resides on a specially designated computer separate from the rest of the network, so no incoming request directly accesses private network resources.

Types of firewall

Desktop firewall

A desktop firewall is installed on each computer that is connected to the internet and monitors (and blocks, where necessary) internet traffic. They are also sometimes known as software firewalls. Windows Firewall is a basic firewall. You can replace it with a more sophisticated commercial desktop firewall or supplement it with a hardware firewall if you wish. The benefits of a desktop firewall are:

- Windows Firewall is free and included with Windows Vista and Windows XP (with the latest updates).
- Free software firewalls are also available for older systems in private use.
- Commercial desktop firewalls often integrate well with other security products like virus scanners.
- Easy to set up: no wiring or extra hardware.
- If you use a laptop, a desktop firewall will protect you wherever you connect to the internet.

Hardware firewall

Hardware firewalls are often built into broadband internet routers. If several computers share an internet connection, a hardware firewall will protect all of them. Most router manufacturers offer devices with firewalls. Although they are getting easier to use, configuring a hardware firewall is often trickier than configuring a software firewall. Most internet routers and firewalls have a password that lets one control them from their computer. It's a good idea to change that password so that it is not the default one.

Commercial firewall

Commercial firewalls operate in the same way as Windows Firewall but generally give you extra protection, more control over how the firewall works, and more information about how to configure it. Most security software companies sell a firewall as a standalone software package or as part of a security suite that includes other protection such as a virus scanner.

There are a number of firewall screening technologies, including static packet filtering, stateful inspection, Network Address Translation, and application proxy filtering. They are frequently used in combination to provide firewall protection. Packet filtering examines selected fields in the headers of data packets flowing back and forth between the trusted network and the Internet, examining individual packets in isolation. This filtering technology can miss many types of attacks. Stateful inspection provides additional security by determining whether packets are part of an ongoing dialogue between a sender and a receiver. It sets up state tables to track information over multiple packets. Packets are accepted or rejected based on whether they are part of an approved conversation or whether they are attempting to establish a legitimate connection. Network Address Translation (NAT) can provide another layer of protection when static packet filtering and stateful inspection are employed. NAT conceals the IP addresses of the organization's internal host computer(s) to prevent sniffer programs outside the firewall from ascertaining them and using that information to penetrate internal systems. Application proxy filtering examines the application content of packets. A proxy server stops data packets originating outside the organization, inspects them, and passes a proxy to the other side of the firewall. If a user outside the company wants to communicate with a user inside the organization, the outside user first "talks" to the proxy application, and the proxy application communicates with the firm's internal computer. Likewise, a computer user inside the organization goes through the proxy to talk with computers on the outside. To create a good firewall, an administrator must maintain detailed internal rules identifying the people, applications, or addresses that are allowed or rejected. Firewalls can deter, but not completely prevent, network penetration by outsiders and should be viewed as one element in an overall security plan.
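The simplest of these technologies, static packet filtering, reduces to matching a few header fields against an ordered rule list. The sketch below is a minimal illustration under invented rules and addresses, not a real firewall: the first matching rule decides, and a final catch-all rule denies everything else.

```python
# Static packet filtering sketch: the first matching rule wins.
# Rule format, prefixes, and ports are invented for illustration.
RULES = [
    # (required source-IP prefix, required destination port, action)
    ("10.0.",    None, "allow"),   # trust traffic from the internal network
    (None,       80,   "allow"),   # allow inbound web traffic to port 80
    (None,       None, "deny"),    # default rule: deny everything else
]

def filter_packet(src_ip: str, dst_port: int) -> str:
    """Return 'allow' or 'deny' for a packet's header fields."""
    for prefix, port, action in RULES:
        if prefix is not None and not src_ip.startswith(prefix):
            continue
        if port is not None and dst_port != port:
            continue
        return action
    return "deny"

print(filter_packet("10.0.0.5", 22))     # allow: internal source
print(filter_packet("203.0.113.9", 80))  # allow: web port
print(filter_packet("203.0.113.9", 23))  # deny: falls to the default rule
```

Because each packet here is judged in isolation, a stateful inspection firewall would additionally keep a table of open connections and accept only packets that belong to an approved conversation.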

Features of Firewall

Intrusion detection and protection: This is the most important feature of a firewall. The firewall checks all inward traffic to the internal host, compares it with the predefined rules set by the system administrator of the internal network, drops unauthorized traffic, and logs the unauthorized connection attempts.

Blocking unauthorized outward traffic: Blocks the users of the internal network from making unauthorized connection attempts to the external network.

Protection from viruses and worms: Many firewall vendors add this feature to their firewall products, checking for viruses in the incoming traffic.

Protection from denial of service attacks: A denial of service attack is one in which the attacker deprives an organization of the use of a network resource (such as the e-mail system or web site) by overwhelming the receiving system (for example, with more e-mail connection requests in a short time than it can handle).

Protection from spam: Spam consists of unsolicited commercial e-mail messages. They are not created with the same malicious intent as threats like viruses, but they now have a negative economic impact of the same order of magnitude. Spam clogs networks, hogs disk space, and wastes countless hours of user time in reading and dealing with the messages.

SYSTEM VULNERABILITY AND ABUSE


Before computer automation, data about individuals or organizations were maintained and secured as paper records dispersed in separate business or organizational units. Information systems concentrate data in computer files that can potentially be accessed by large numbers of people and by groups outside of the organization. When large amounts of data are stored in electronic form, they are vulnerable to many more kinds of threats than when they exist in manual form. Through communications networks, information systems in different locations can be interconnected. The potential for unauthorized access, abuse, or fraud is not limited to a single location but can occur at any access point in the network.

Why Systems Are Vulnerable

Figure 1 illustrates the most common threats against contemporary information systems. They can stem from technical, organizational, and environmental factors compounded by poor management decisions. In the multitier client/server computing environment illustrated here, vulnerabilities exist at each layer and in the communications between the layers. Users at the client layer can cause harm by introducing errors or by accessing systems without authorization. It is possible to access data flowing over networks, steal valuable data during transmission, or alter messages without authorization. Radiation can disrupt a network at various points as well. Intruders can launch denial of service attacks or malicious software to disrupt the operation of Web sites. Those capable of penetrating corporate systems can destroy or alter corporate data stored in databases or files.

FIGURE 1: Contemporary security challenges and vulnerabilities. The architecture of a Web-based application typically includes a Web client, a server, and corporate information systems linked to databases. Each of these components presents security challenges and vulnerabilities.

Floods, fires, power failures, and other electrical problems can cause disruptions at any point in the network. Systems malfunction if computer hardware breaks down, is not configured properly, or is damaged by improper use or criminal acts. Errors in programming, improper installation, or unauthorized changes cause computer software to fail. Computer systems can also be disrupted by power failures, floods, fires, or other natural disasters. Domestic or offshore outsourcing to another company adds to system vulnerability because valuable information will reside on networks and computers outside the organization's control. Without strong safeguards, valuable data could be lost, destroyed, or could fall into the wrong hands, revealing important trade secrets or information that violates personal privacy. Some worry that outsourcing application development to offshore companies might provide opportunities for programmers to insert hidden code that would later enable someone to gain control over an application or its data (Schmerken and Fitzgerald, 2004).

Threats to information systems include:
1. Hardware failure
2. Software failure
3. Attacks by malicious software
4. Sabotage
5. Theft of data, services, or equipment
6. Accidents / natural disasters
7. User error
8. Bugs in the program
9. Telecommunication failure

INTERNET VULNERABILITIES

Large public networks such as the Internet are more vulnerable than internal networks because they are virtually open to anyone. The Internet is so huge that when abuses do occur, they can have an enormously widespread impact. When the Internet becomes part of the corporate network, the organization's information systems are even more vulnerable to actions from outsiders. Computers that are constantly connected to the Internet by cable modems or Digital Subscriber Line (DSL) are more open to penetration by outsiders because they use fixed Internet addresses by which they can be easily identified. (With dial-up service, a temporary Internet address is assigned for each session.) A fixed Internet address creates a fixed target for hackers.

Telephone service based on Internet technology can be more vulnerable than the switched voice network if it does not run over a secure private network. Most Voice over IP (VoIP) traffic over the public Internet is not encrypted, so anyone linked to a network can listen in on conversations. Hackers can intercept conversations to obtain credit card and other confidential personal information, or shut down voice service by flooding servers supporting VoIP with bogus traffic.

Vulnerability has also increased from widespread use of e-mail and instant messaging (IM). E-mail can contain attachments that serve as springboards for malicious software or unauthorized access to internal corporate systems. Employees may use e-mail messages to transmit valuable trade secrets, financial data, or confidential customer information to unauthorized recipients. Popular instant messaging applications for consumers do not use a secure layer for text messages, so they can be intercepted and read by outsiders during transmission over the public Internet. IM activity over the Internet can in some cases be used as a back door to an otherwise secure network. (IM systems designed for corporations, such as IBM's Sametime, include security features.)

WIRELESS SECURITY CHALLENGES

Wireless networks using radio-based technology are even more vulnerable to penetration because radio frequency bands are easy to scan. Although the range of Wireless Fidelity (Wi-Fi) networks is only several hundred feet, it can be extended up to one-fourth of a mile using external antennae. Local area networks (LANs) that use the 802.11b (Wi-Fi) standard can be easily penetrated by outsiders armed with laptops, wireless cards, external antennae, and freeware hacking software. Hackers use these tools to detect unprotected networks, monitor network traffic, and, in some cases, gain access to the Internet or to corporate networks. Wi-Fi transmission technology uses spread spectrum transmission, in which a signal is spread over a wide range of frequencies, and the particular version of spread spectrum transmission used in the 802.11 standard was designed to make it easier for stations to find and hear one another. The service set identifiers (SSIDs) identifying the access points in a Wi-Fi network are broadcast multiple times and can be picked up fairly easily by intruders' sniffer programs (see Figure 2). Wireless networks in many locations do not have basic protections against war driving, in which eavesdroppers drive by buildings or park outside and try to intercept wireless network traffic.

FIGURE 2: Wi-Fi security challenges. Many Wi-Fi networks can be penetrated easily by intruders using sniffer programs to obtain an address to access the resources of a network without authorization.

The 802.11 standard specifies the SSID as a form of password for a user's radio network interface card (NIC) to join a particular wireless network. The user's radio NIC must have the same SSID as the access point to enable association and communication. Most access points broadcast the SSID multiple times per second. A hacker can employ an 802.11 analysis tool to identify the SSID. (Windows XP has capabilities for detecting the SSID used in a network and automatically configuring the radio NIC within the user's device.) An intruder that has associated with an access point by using the correct SSID can then obtain a legitimate IP address to access other resources on the network, because many wireless LANs automatically assign IP addresses to users as they become active. This enables an intruder who has illicitly associated with a wireless LAN to use the Windows operating system to determine which other users are connected to the network, and even to click on other users' devices, locate their documents folders, and open or copy their files. This is a serious problem many end users overlook when connecting to access points in public locations. Intruders can also use the information they have gleaned about Internet Protocol (IP) addresses and SSIDs to set up rogue access points on a different radio channel, in physical locations close to users, to force a user's radio NIC to associate with the rogue access point. Once this association occurs, hackers using the rogue access point can capture the names and passwords of unsuspecting users.

The initial security standard developed for Wi-Fi, called Wired Equivalent Privacy (WEP), is not very effective. WEP is built into all standard 802.11 products, but its use is optional. Users must turn it on, and many neglect to do so, leaving many access points unprotected. The basic WEP specification calls for an access point and all of its users to share the same 40-bit encrypted password, which can be easily decrypted by hackers from a small amount of traffic. Manufacturers of wireless networking products are now beefing up their security by offering stronger encryption and authentication systems.

Software Vulnerability

Software errors also pose a constant threat to information systems, causing untold losses in productivity. The U.S. Department of Commerce National Institute of Standards and Technology (NIST) reports that software flaws (including vulnerabilities to hackers and malware) cost the U.S. economy $59.6 billion each year (Hulme, 2004). A major problem with software is the presence of hidden bugs, or program code defects. Studies have shown that it is virtually impossible to eliminate all bugs from large programs. The main source of bugs is the complexity of decision-making code. Important programs within most corporations may contain tens of thousands or even millions of lines of code, each with many alternative decision paths. Such complexity is difficult to document and design; designers may document some reactions incorrectly or may fail to consider some possibilities. Even after rigorous testing, developers do not know for sure that a piece of software is dependable until the product proves itself after much operational use.

Commercial software often contains flaws that create not only performance issues but also security vulnerabilities that open networks to intruders. These vulnerabilities and bugs can enable malware to slip past antivirus defenses. A great deal of malware has been trying to exploit vulnerabilities in the Microsoft Windows operating system and other Microsoft products, but malware targeting the Linux operating system is on the rise as well. To correct software flaws once they are identified, the software vendor creates lines of code called patches to repair the flaw without disturbing proper operation of the software. An example is Microsoft's XP Service Pack 2 (SP2), introduced in 2004, which features added firewall protection against viruses and intruders, capabilities for automatic security updates, and an easy-to-use interface for managing the security applications on the user's computer. It is up to users of the software to track these vulnerabilities, test, and apply all patches. This process is called patch management.

DISASTER RECOVERY PLANNING AND BUSINESS CONTINUITY PLANNING

Disaster Definition

A disaster is any event that can cause a significant disruption in operational and/or computer processing capabilities for a period of time, affecting the operations of the business. The purpose of defining a crisis or a discontinuity is to establish a documented description of what constitutes one. The intent is to minimize the decision-making process when an event occurs. Disaster recovery is the process, policies, and procedures related to preparing for recovery or continuation of technology infrastructure critical to an organization after a natural or human-induced disaster. Disaster recovery planning devises plans for the restoration of computing and communications services after they have been disrupted by an event such as an earthquake, flood, or terrorist attack. Disaster recovery plans focus primarily on the technical issues involved in keeping systems up and running, such as which files to back up and the maintenance of backup computer systems or disaster recovery services. For example, MasterCard maintains a duplicate computer center in Kansas City, Missouri, to serve as an emergency backup to its primary computer center in St. Louis. Rather than build their own backup facilities, many firms contract with disaster recovery firms, such as Comdisco Disaster Recovery Services in Rosemont, Illinois, and SunGard Recovery Services, headquartered in Wayne, Pennsylvania. These disaster recovery firms provide hot sites housing spare computers at locations around the country where subscribing firms can run their critical applications in an emergency.

Business continuity planning focuses on how the company can restore business operations after a disaster strikes. The business continuity plan identifies critical business processes and determines action plans for handling mission-critical functions if systems go down. Business managers and information technology specialists need to work together on both types of plans to determine which systems and business processes are most critical to the company. They must conduct a business impact analysis to identify the firm's most critical systems and the impact a systems outage would have on the business. Management must determine the maximum amount of time the business can survive with its systems down and which parts of the business must be restored first.

Although these are different activities, they are clearly related, and disaster recovery planning builds upon business continuity planning when properly done. While business continuity involves planning for keeping all aspects of a business functioning in the midst of disruptive events, disaster recovery focuses on the IT or technology systems that support business functions.

Introduction

This Disaster Recovery Plan is designed to ensure the continuation of vital business processes in the event that a disaster occurs. The plan will provide an effective solution that can be used to recover all vital business processes within the required time frame using vital records that are stored off-site. The issue to be addressed within the disaster recovery strategy is to ensure that every reasonable measure has been taken to identify and mitigate potential risks that exist within the processing environment. The most successful disaster recovery strategy is one that will never be implemented; therefore, risk avoidance is a critical element in the disaster recovery process.

Scope and Objectives

The Disaster Recovery Plan provides a state of readiness allowing prompt personnel response after a disaster has occurred. This, in turn, provides for a more effective and efficient recovery effort. The Disaster Recovery Plan should be developed to accomplish the following objectives:

1. Limit the magnitude of any loss by minimizing the duration of a critical application service interruption.
2. Assess damage, repair the damage, and activate the repaired computer center.
3. Recover data and information imperative to the operation of critical applications.
4. Manage the recovery operation in an organized and effective manner.
5. Prepare technology personnel to respond effectively in disaster recovery situations.

Every business has the responsibility to respond to any short- or long-term disruption of services. By developing, documenting, implementing, and testing a Disaster Recovery Plan, businesses will be able to restore the availability of critical applications in a timely and organized manner following a disaster occurrence.

Importance of disaster recovery planning

As IT systems have become increasingly critical to the smooth operation of a company, and arguably the economy as a whole, the importance of ensuring the continued operation of those systems, or their rapid recovery, has increased. It is estimated that most large companies spend between 2% and 4% of their IT budget on disaster recovery planning, with the aim of avoiding larger losses in the event that the business cannot continue to function due to loss of IT infrastructure and data. Of companies that had a major loss of business data, 43% never reopen, 51% close within two years, and only 6% survive in the long run.

As a result, preparation for recovery of systems needs to be taken very seriously. This involves a significant investment of time and money with the aim of ensuring minimal losses in the event of a disruptive event.

Security Outsourcing

Many companies, especially small businesses, lack the resources or expertise to provide a secure high-availability computing environment on their own. They can outsource many security functions to managed security service providers (MSSPs) that monitor network activity and perform vulnerability testing and intrusion detection. Guardent, Counterpane, VeriSign, and Symantec are leading providers of MSSP services.

CONCERN FOR SYSTEM OWNERS AND DEVELOPERS


Many firms are reluctant to spend heavily on security because it is not directly related to sales revenue. However, protecting information systems is so critical to the operation of a business that it deserves a second look. Companies have very valuable information assets to protect. Businesses must protect not only their own information assets but also those of customers, employees, and business partners. System owners and developers therefore always have the concern to secure their information systems. The concerns for the owners and developers of a system are:

1. Disaster recovery plan: As companies increasingly rely on digital networks for revenue and operations, they need to take additional steps to ensure that their systems and applications are always available. The system owners and developers must plan for running the business even in the event of any kind of system disaster.

2. Security policy: Once they identify the risks to the system, they will need to develop a security policy for protecting the company's assets. A security policy consists of policies determining acceptable use of the firm's information resources, which members of the company have access to its information assets, and the procedures and technical measures used to prevent unauthorized access, alteration, theft, or physical damage.

3. Software quality assurance: In order to ensure the quality of software, the system owners and developers must develop:

- Rules and policy for validation, verification, and certification of software. Verification and validation is the process of checking that a software system meets specifications and that it fulfills its intended purpose. It is normally the responsibility of software testers as part of the software development lifecycle.

- A system development methodology: A software development methodology or system development methodology in software engineering is a framework that is used to structure, plan, and control the process of developing an information system. Every software development methodology framework acts as a basis for applying specific approaches to develop and maintain software. Several software development approaches have been used since the origin of information technology. These are:

- Waterfall: a linear framework
- Prototyping: an iterative framework
- Incremental: a combined linear-iterative framework
- Spiral: a combined linear-iterative framework
- Rapid application development (RAD): an iterative framework
- Extreme Programming

4. Data quality: Data quality describes the quality of the content of information systems. It is often pragmatically defined as "the fitness for use of the information provided." The system owners and developers must look after data quality and must form policies regarding the source, authorization, and automatic checking of data by the software.

CONTROL
Controls can be regarded as the methods, policies, and procedures that:

- ensure protection of the organization's assets;
- ensure the accuracy and reliability of its records; and
- ensure operational adherence to management standards.

The Control Challenge

How will the organization control the flow of information to the internal and external world?

Protect confidentiality: One of the main functions of control is to protect confidentiality, which is essential nowadays. Control enables users to protect their passwords, use encryption, and keep information confidential and available to the authorized user only.

Protect against unwarranted changes in information by unauthorized users: An important function of control is to protect against unwarranted changes in information by unauthorized users. Control helps restrict unauthorized users from gaining access to information and making unwarranted changes. Thus control allows authorized users in and restricts unauthorized users from accessing information.

Protect the privacy of customers: Another important function of control is to protect the privacy of customers. Control keeps users' information and data confidential and preserves privacy by making the information available only to the user and no one else.

Criteria for determining the control structure:

- Importance of data
- Efficiency, complexity, and expense of each control technique
- Level of risk if a specific activity or process is not properly controlled


TYPES OF CONTROL

There are two main types of controls:

General Control: Overall controls that establish a framework for controlling the design, security, and use of computer programs.
Application Control: Specific controls unique to each computerized application.

The six types of general control and the four types of application control are described in detail below.

General Control

An IT general control should demonstrate that the organization has a procedure or policy in place for technology that affects the management of fundamental organizational processes, such as risk management, change management, disaster recovery, and security. There are six types of general control:

Implementation Control: It is a review of development and implementation methodologies and adherence to the standards. This is mainly for software quality assurance.

Software Control: Controls the various categories and versions of software and prevents unauthorized access to software programs.

Hardware Control: Ensures physical security of the hardware, preventive maintenance audits, etc.

Computer Operation Control: User controls, system administrator procedures, network access rules, etc.

Data Security & Control: Backup control, system audits for password security, capability lists, and data access permissions.

Administrative Control: Formalized standard rules, procedures, and disciplines to ensure that the organization's controls are properly executed.

Examples of general control

Backup/operations: Backup media (tapes, disks, etc.) are rotated to an offsite storage facility. The backup includes operating system and application databases. Logs showing the rotation and presence of backup media should be maintained.

Backup/operations: At minimum, incremental backups are performed nightly and full backups are done on at least a monthly basis. These backups are rotated offsite to a well-protected storage site. The term "incremental backup" applies to redo logs and any other method that permits files to be reconstructed from an original base file plus the records that bring it up to its current state.
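The incremental idea is simple enough to sketch: copy only what changed since the last run. The script below is a minimal illustration (the directory names and timestamp marker file are invented); a production scheme would also verify copies, retain generations, and rotate media offsite as described above.

```python
# Incremental backup sketch: copy only files modified since the last
# run. Paths and the timestamp marker file are invented for illustration.
import os
import shutil

SOURCE, DEST = "data", "backup"
STAMP = os.path.join(DEST, ".last_run")   # records when we last backed up

def incremental_backup() -> None:
    os.makedirs(DEST, exist_ok=True)
    last_run = os.path.getmtime(STAMP) if os.path.exists(STAMP) else 0.0
    for root, _dirs, files in os.walk(SOURCE):
        for name in files:
            src = os.path.join(root, name)
            if os.path.getmtime(src) > last_run:      # changed since last run
                dst = os.path.join(DEST, os.path.relpath(src, SOURCE))
                os.makedirs(os.path.dirname(dst), exist_ok=True)
                shutil.copy2(src, dst)                # copy with metadata
    with open(STAMP, "w"):                            # touch the marker file
        pass

incremental_backup()
```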

Application Control

An important audit entails reviewing the general workflow controls for computer applications. Below are general guidelines to be used to review these controls for any computer application that may be in use. There are four types of application control:

Input Control: The procedures to check data for accuracy and completeness, including:

- Validation
- Authorization
- Control totals

Data input controls ensure the accuracy, completeness, and timeliness of data during its conversion from its original source into computer data, or entry into a computer application. Data can be entered into a computer application either from manual online input or by scheduled automated processing. The input control reviewer should determine the adequacy of both manual and automated controls over data input to ensure that data is input accurately, with optimum use of computerized validation and editing, and that error-handling procedures facilitate the timely and accurate resubmission of all corrected data.

1) Documented procedures should exist for any data manually entered into the application. The procedures should include how to identify, correct, and reprocess rejected data.
2) Input edits should be used by the application. These could include checking for invalid field lengths, invalid characters, missing or erroneous data, incorrect dates, or the use of check digits (a check-digit sketch follows this list).
3) Input data should also be controlled by the use of record counts, control totals, and event logs.
4) Another way to help ensure appropriate data entry is through authorization/approval. The authorization levels of the assigned approvers should also be reviewed to determine if they are reasonable.
5) Passwords should be used to control access. Passwords should be changed periodically, deleted when users leave the organization, and modified to reflect user job changes.
6) Duties should be separated to ensure that no one individual performs multiple critical tasks. (Example: name changes and distribution of payroll checks.)
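As an illustration of the "check digits" edit in item 2, the sketch below validates a numeric identifier with the Luhn algorithm, the classic check-digit scheme used on payment card numbers. The function name and sample numbers are ours, not from the original guidelines.

```python
# Input-edit sketch: reject identifiers whose Luhn check digit is wrong.
def luhn_valid(number: str) -> bool:
    """Return True if the digit string passes the Luhn check."""
    if not number.isdigit():
        return False                      # invalid characters: reject outright
    digits = [int(d) for d in number]
    # Double every second digit from the right; subtract 9 when above 9.
    for i in range(len(digits) - 2, -1, -2):
        digits[i] *= 2
        if digits[i] > 9:
            digits[i] -= 9
    return sum(digits) % 10 == 0

print(luhn_valid("79927398713"))  # True: the standard Luhn test number
print(luhn_valid("79927398710"))  # False: one digit altered, so the edit rejects it
```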

Processing Control: The routines for establishing that data are complete and accurate during updating. Data processing controls are used to ensure the accuracy, completeness, and timeliness of data during either batch or online processing. These controls ensure that data is accurately processed through the application and that no data is added, lost, or altered during processing.

1) Documentation should exist explaining the workflow through the application. Examples would be narratives on the application processes, flowcharts, and an explanation of system or error messages.
2) If batch processes are run on a regular schedule, there should be documented procedures explaining how this is performed, including the schedule of steps to be followed in case of normal completion or of failure and restart.
3) A processing log should be available. It should be reviewed for unusual or unauthorized activity.
4) Processing logs show errors or problems encountered during processing. These logs should be the source for error reports to be used for trend analysis and follow-up analysis.
5) There should be controls in place to document that the correct files are used for processing.
6) Processing edits should also be used. These can limit large-scale damage which could otherwise result in a major database recovery effort.
7) Audit logs should be generated during processing. These logs contain information about each transaction. Data that should be included are: who initiated each of the transactions, the date and time of the transactions, and the location of the transaction origination (an IP address, for example). Logs are used for activity reporting and anomaly detection (a logging sketch follows this list).
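A minimal version of the audit log in item 7 might look like the sketch below: one append-only record per transaction capturing who, what, when, and from where. The field names and log path are invented for illustration.

```python
# Audit-log sketch: append one JSON record per transaction, capturing
# who, what, when, and where from. Names and the log path are invented.
import json
from datetime import datetime, timezone

def log_transaction(user: str, action: str, source_ip: str,
                    logfile: str = "audit.log") -> None:
    record = {
        "user": user,                                        # who initiated it
        "action": action,                                    # what was done
        "source_ip": source_ip,                              # where it came from
        "timestamp": datetime.now(timezone.utc).isoformat(), # when it happened
    }
    with open(logfile, "a") as f:         # append-only: one JSON line per event
        f.write(json.dumps(record) + "\n")

log_transaction("clerk01", "update_payroll_record", "192.0.2.10")
```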

Capability Control: Measures to restrict the capability of a user to use the system.

Output Control: Measures that ensure that the results of computer processing are accurate, complete, and properly distributed. Data output controls ensure the integrity of output and the correct and timely distribution of any output produced. Output can be on paper, in an e-mail attachment, in a file used as input to another application, or on an online screen. Output controls result in the verification of accurate control totals and timely result distribution.

1) Output should be balanced/reconciled to input (a balancing sketch follows this list). There should be adequate separation of duties for the balancing/reconciliation process.
2) There should be documented procedures to explain the methods for the proper balancing/reconciliation and error correction of output.

3) Output should be reviewed for general acceptability and completeness, including any control totals.
4) There should be error reports. These should contain:
- A description of problems/errors and the date identified
- Corrective action taken
5) Record retention and backup schedules for output files should be established. Consideration should be given to rotating output files offsite.
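The balancing step in item 1 reduces to comparing control totals computed independently on the input and output sides. The sketch below is illustrative only; the record layout and amounts are invented.

```python
# Output-balancing sketch: recompute control totals on both sides and
# refuse to distribute output if they disagree. Record layout is invented.
input_records  = [{"id": 1, "amount": 100.00}, {"id": 2, "amount": 250.50}]
output_records = [{"id": 1, "amount": 100.00}, {"id": 2, "amount": 250.50}]

input_total  = sum(r["amount"] for r in input_records)    # input control total
output_total = sum(r["amount"] for r in output_records)   # output control total

if len(input_records) != len(output_records) or input_total != output_total:
    print("Reconciliation FAILED: hold output and investigate the difference")
else:
    print(f"Reconciled: {len(output_records)} records, control total {output_total}")
```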
