
What is the Internet architecture?

It is by definition a meta-network: a constantly changing collection of thousands of individual
networks intercommunicating with a common protocol.

The Internet's architecture is described in its name, a short form of the compound word
"internetworking". This architecture is based on the specification of the standard TCP/IP protocol,
designed to connect any two networks which may be very different in internal hardware,
software, and technical design. Once two networks are interconnected, communication with
TCP/IP is enabled end-to-end, so that any node on the Internet has the near magical ability to
communicate with any other no matter where they are. This openness of design has enabled the
Internet architecture to grow to a global scale.

In practice, the Internet technical architecture looks a bit like a multi-dimensional river system,
with small tributaries feeding medium-sized streams feeding large rivers. For example, an
individual's access to the Internet is often from home over a modem to a local Internet service
provider who connects to a regional network connected to a national network. At the office, a
desktop computer might be connected to a local area network with a company connection to a
corporate Intranet connected to several national Internet service providers. In general, small local
Internet service providers connect to medium-sized regional networks which connect to large
national networks, which then connect to very large bandwidth networks on the
Internet backbone. Most Internet service providers have several redundant network cross-
connections to other providers in order to ensure continuous availability.

The companies running the Internet backbone operate very high bandwidth networks relied on
by governments, corporations, large organizations, and other Internet service providers. Their
technical infrastructure often includes global connections through underwater cables
and satellite links to enable communication between countries and continents. As always, a
larger scale introduces new phenomena: the number of packets flowing through the switches on
the backbone is so large that it exhibits the kind of complex non-linear patterns usually found in
natural, analog systems like the flow of water or development of the rings of Saturn (RFC 3439,
S2.2).

Each communication packet goes up the hierarchy of Internet networks as far as necessary to get
to its destination network where local routing takes over to deliver it to the addressee. In the
same way, each level in the hierarchy pays the next level for the bandwidth they use, and then
the large backbone companies settle up with each other. Bandwidth is priced by large Internet
service providers by several methods, such as at a fixed rate for constant availability of a certain
number of megabits per second, or by a variety of use methods that amount to a cost per
gigabyte. Due to economies of scale and efficiencies in management, bandwidth cost drops
dramatically at the higher levels of the architecture.
Defining Network Infrastructure
A network can be defined as the grouping of hardware devices and software components which
are necessary to connect devices within the organization, and to connect the organization to other
organizations and the Internet.
 Typical hardware components utilized in a networking environment are network interface cards,
computers, routers, hubs, switches, printers, and cabling and phone lines.
 Typical software components utilized in a networking environment are the network services and
protocols needed to enable devices to communicate.
Only after the hardware is installed and configured can operating systems and software be
installed into the network infrastructure. The operating systems which you install on your
computers are considered the main software components within the network infrastructure,
because the operating system contains the network communication protocols that enable
network communication to occur. The operating system also typically includes applications and
services that implement security for network communication.
Another concept, namely network infrastructure, is also commonly used to refer to the grouping
of physical hardware and logical components which are needed to provide a number of features
for the network, including these common features:
 Connectivity
 Routing and switching capabilities
 Network security
 Access control

The network infrastructure has to exist before the servers needed to support your users'
applications can be deployed into your networking environment:
 File and print servers
 Web and messaging servers
 Database servers
 Application servers

When you plan your network infrastructure, a number of key elements need to be clarified or
determined:
 Determine which physical hardware components are needed for the network infrastructure which
you want to implement.
 Determine the software components needed for the network infrastructure.
 Determine the following important factors for your hardware and software components:
 Specific location of these components
 How the components are to be installed.
 How the components are to be configured.
When you implement a network infrastructure, you need to perform a number of activities that
can be broadly grouped as follows:
 Determine the hardware and software components needed.
 Purchase, assemble and install the hardware components.
 Install and configure the operating systems, applications and all other software.

The physical infrastructure of the network refers to the physical design of the network together
with the hardware components. The physical design of the network is also called the network’s
topology. When you plan the physical infrastructure of the network, your hardware component
selection is usually constrained by the logical infrastructure of the network.

The logical infrastructure of the network is made up of all the software components required to
enable connectivity between devices, and to provide network security. The network’s logical
infrastructure consists of the following:
 Software products
 Networking protocols/services.
It is therefore the network’s logical infrastructure that makes it possible for computers to
communicate using the routes defined in the physical network topology.
The logical components of the network topology define a number of important elements:
 Speed of the network.
 Type of switching that occurs.
 Media which will be utilized.
 Type of connections which can be formed.

Understanding the OSI Reference Model and TCP/IP Protocol Suite


The International Organization for Standardization (ISO) developed the Open Systems
Interconnection (OSI) reference model for computing. The OSI model defines how hardware and
software function to enable communication between computers. The OSI model is a conceptual
framework which can be referenced to better comprehend how devices operate on the network. It
is the most widely used guide for a networking infrastructure. When manufacturers design new
products, they reference the OSI model’s concepts on the manner in which hardware and
software components should function.
The OSI model defines standards for:
 How devices communicate with each other.
 The means used to inform devices when to send data and when not to transmit data.
 The methods which ensure that devices have a correct data flow rate.
 The means used to ensure that data is passed to, and received by the intended recipient.
 How physical transmission media is arranged and connected.

The OSI model is made up of seven layers which are presented as a stack. Data which is passed
over the network moves through each layer. Each layer of the OSI model has its own unique
functions and protocols. Different protocols operate at the different layers of the OSI model. The
layer of the OSI reference model at which the protocol operates defines its function. Different
protocols can operate together at different layers within a protocol stack. When protocols operate
together, they are referred to as a protocol suite or protocol stack. When protocols support
multiple path LAN-to-LAN communications, they are called routable protocols. The binding
order determines the order in which the operating system runs the protocols.
The seven layers of the OSI reference model, and each layer's associated function, are listed here:
 Physical Layer – layer 1: The Physical layer transmits raw bit streams over a physical medium,
and deals with establishing a physical connection between computers to enable communication.
The physical layer is hardware specific; it deals with the actual physical connection between the
computer and the network medium. The medium used is typically a copper cable that utilizes
electric currents for signaling. Other media that are becoming popular are fiber-optic and
wireless media. The specifications of the Physical layer include physical layout of the network,
voltage changes and the timing of voltage changes, data rates, maximum transmission distances,
and physical connectors to transmission mediums. The issues normally clarified at the Physical
Layer include:
 Whether data is transmitted synchronously or asynchronously.
 Whether an analog or digital signaling method is used.
 Whether baseband or broadband signaling is used.
 Data-Link Layer – layer 2: The Data-link layer of the OSI model enables the movement of data
over a link from one device to another, by defining the interface between the network medium
and the software on the computer. The Data-link layer maintains the data link between two
computers to enable communications. The functions of the Data-link layer include packet
addressing, media access control, formatting of the frame used to encapsulate data, error
notification on the Physical layer, and management of error messaging specific to the delivery of
packets. The Data-link layer is divided into the following two sublayers:
 The Logical Link Control (LLC) sublayer provides and maintains the logical links used for
communication between the devices.
 The Media Access Control (MAC) sublayer controls the transmission of packets from one
network interface card (NIC) to another over a shared media channel. A NIC has a unique MAC
address, or physical address. The MAC sublayer handles media access control which essentially
prevents data collisions. The common media access control methods are:

 Token Passing; utilized in Token Ring and FDDI networks.
 Carrier Sense Multiple Access/Collision Detection (CSMA/CD); utilized in Ethernet networks.
 Carrier Sense Multiple Access/Collision Avoidance (CSMA/CA); utilized in AppleTalk networks.
 Network Layer – layer 3: The Network layer provides end-to-end communications between
computers that exist on different networks. One of the main functions performed at the Network
layer is routing. Routing enables packets to be moved between computers which are more than
one link from one another. Other functions include traffic direction to the end destination,
addressing, packet switching and packet sequence control, end-to-end error detection, congestion
control, and Network layer flow control and error control.
 Transport Layer – layer 4: The Transport layer deals with transporting data in a sequential
manner, and with no data loss. The Transport layer divides large messages into smaller data
packets so that they can be transmitted to the destination computer. It also reassembles received
packets into messages to be passed up to the Session layer. Functions of the Transport layer include
guaranteed data delivery, name resolution, flow control, and error detection and recovery. The
common Transport protocols utilized at this layer are Transmission Control Protocol (TCP) and
User Datagram Protocol (UDP).
 Session Layer – layer 5: The Session layer enables communication sessions to be established
between processes or applications running on two different computers. A process is a specific
task that is associated with a particular application. Applications can simultaneously run
numerous processes. The Session layer establishes, maintains and terminates communication
sessions between applications. The Session layer utilizes the virtual circuits created by the
Transport layer to establish communication sessions.
 Presentation Layer – layer 6: The Presentation layer is responsible for translating data between
the formats which the network requires and the formats which the computer is anticipating. The
presentation layer translates the formats of each computer to a common transfer format which
can be interpreted by each computer. Functions include protocol conversion, data translation,
data encryption and decryption, data compression, character set conversion, and interpretation of
graphics commands.
 Application Layer – layer 7: The Application layer provides the interface between the network
protocol and the software running on the computer. It provides the interface for e-mail, Telnet
and File Transfer Protocol (FTP) applications, and file transfers. This is the location where
applications interrelate with the network.
Transmission Control Protocol/Internet Protocol (TCP/IP) is a network communication protocol
suite that can be utilized as the communications protocol on private networks. TCP/IP is also the
default protocol utilized on the Internet. The majority of network infrastructures are based on
TCP/IP.
As an engineer designing the network infrastructure, you have to provide a TCP/IP design which
can provide the following:
 Connect devices in the private internal network to the Internet.
 Enable users to access TCP/IP based resources.
 Protect confidential company data.
 Provide application responses in accordance with the requirements of the organization.
The TCP/IP protocol suite is a four-layer model which corresponds to the seven layers of the OSI
reference model:
 Network Interface layer: The Network Interface layer maps to the Physical layer (Layer 1) and
the Data-link layer (Layer 2) of the OSI reference model. The Network Interface layer's function
is to move bits (0s and 1s) over the network medium.
 Internet layer: The Internet layer is associated with the OSI model’s Network layer. The Internet
layer handles the packaging, addressing, and routing of data. The main protocols of the TCP/IP
suite that operate at the Internet layer are:
 Internet Protocol (IP): IP is a connectionless, routable protocol which performs addressing and
routing functions. IP places data into packets, and removes data from packets.
 Internet Control Message Protocol (ICMP): The protocol is responsible for dealing with errors
associated with undeliverable IP packets, and for indicating network congestion and timeout
conditions.
 Internet Group Management Protocol (IGMP): The IGMP protocol controls host membership in
groups of devices, called IP multicast groups. The devices in the IP multicast groups receive
traffic which is addressed to a shared multicast IP address. Unicast messages are sent to a host,
while a multicast is sent to each member of an IP multicast group.
 Address Resolution Protocol (ARP): The ARP protocol maintains the associations which map IP
addresses to MAC addresses. Because mappings are stored in the ARP Cache, when the same IP
address needs to be mapped again to its associated MAC address, the discovery process is not
performed again. Reverse Address Resolution Protocol (RARP) resolves MAC addresses to IP addresses.
 Transport layer/ Host-to-Host Transport: This layer is associated with the Transport layer of the
OSI model. The main TCP/IP protocols operating at the Host to Host or Transport layer are:
 Transmission Control Protocol (TCP): TCP offers greater reliability when transporting data
than UDP, the other protocol which operates at this layer. With TCP, the application which sends
the data receives acknowledgement or verification that the data was actually received. TCP is
regarded as a connection-oriented protocol: a connection is established before data is
transmitted, using the three-way TCP handshake. The three-way handshake establishes a reliable
connection over which to exchange data (see the sketch after this list).
 User Datagram Protocol (UDP): UDP does not provide reliable data transport. No
acknowledgements are transmitted. While UDP is faster than TCP, it is less reliable.
 Application layer: The Application layer is associated with the Session layer, Presentation layer,
and Application layer of the OSI model. Application layer protocols of the TCP/IP protocol suite
function at these layers. Application layer protocols enable applications to communicate with
each other, and also provide access to the services of the lower layers.

Understanding Networking Services


Running on the physical hardware in the network infrastructure are networking services.
Networking services basically extend the physical network by providing a number of key
capabilities, including the following:
 Multiprotocol support: networks can run multiple protocols, including:
 Transmission Control Protocol/Internet Protocol (TCP/IP)
 Internetwork Packet Exchange/Sequenced Packet Exchange (IPX/SPX)
 AppleTalk
 Systems Network Architecture (SNA)
 Multiprotocol routing among different network segments: The Routing and Remote
Access Service (RRAS) feature of Windows 2000 and Windows Server 2003 can be used to
identify networks with different topologies and secure segments of the network. The Routing and
Remote Access Service can be configured for:
 LAN-to-LAN routing
 LAN-to-WAN routing
 Virtual private network (VPN) routing
 Network Address Translation (NAT) routing
 Routing features, including IP multicasting, Packet filtering, Demand-dial routing, and DHCP
relay
 Support for strong network security: Internet Protocol Security (IPSec) and Virtual Private
Networks (VPNs) can be used to provide a number of features. VPNs provide secure, advanced
connections through a non-secure network by providing data privacy: private data remains
secure in a public environment, and VPN clients are assured private access over a publicly
shared infrastructure. Using analog, ISDN, DSL, cable, dial-up and mobile IP technologies,
VPNs are implemented over extensively shared infrastructures. IPSec protects, secures and
authenticates data between IPSec peer devices by providing per-packet data authentication. Data
flows between IPSec peers are confidential and protected. IPSec supports the following:
 Unicast IP datagrams
 High-Level Data-Link Control (HDLC)
 ATM
 Point-to-Point Protocol (PPP)
 Frame Relay serial encapsulation
 Generic Routing Encapsulation (GRE)
 IP-in-IP (IPinIP)
 Encapsulation Layer 3 tunneling protocols.
 Enable connectivity between the private internal network and Internet applications: Networking
services such as the RRAS service and Network Address Translation (NAT) service enable users
on the private internal network to connect to the Internet, while at the same time securing
resources located on the private network.
 NAT translates IP addresses and associated TCP/UDP port numbers on the private network to
public IP addresses which can be routed on the Internet. Through NAT, host computers are able
to share a single publicly registered IP address to access the Internet. NAT also offers a number
of security features which can be used to secure the resources on your private network.
 RRAS IP packet filters can be used to restrict incoming or outgoing IP address ranges based on
information in the IP header. You can configure and combine multiple filters to control network
traffic. You can also map external public IP addresses and ports to private IP addresses and ports
so that internal private resources can be accessed by Internet users. You use a special port to map
specific Internet users to resources within the private network.
 The Internet Connection Sharing (ICS) service is basically a simplified implementation of a
Network Address Translation (NAT) server. You can use ICS to connect an entire network to
the Internet, because the ICS service provides a translated connection through which all
computers can access resources on the Internet. Implementing ICS is, however, only
recommended for exceptionally small networks.
 Microsoft Proxy Server can also be used to provide connectivity between the private internal
network, and Internet applications.
 Enable users to remotely access the private network. The service that enables this capability is
the Routing and Remote Access Service (RRAS). The different types of remote access are:
 Dial-in remote access: Dial-in remote access uses modems and servers running the Routing and
Remote Access Service (RRAS). To enable communication, dial-in access utilizes the
Point-to-Point Protocol (PPP).
 VPN remote access: A VPN provides secure and advanced connections through a non-secure
network. With VPN access, encryption is used to create the VPN tunnel between the remote
client and the corporate network. To secure VPN access, Windows Server 2003 provides strong
levels of encryption.
 Wireless remote access: Wireless networks are defined by the IEEE 802.11 specification. With
wireless networks, wireless users connect to the network through connecting to a wireless access
point (WAP). To secure wireless networks and wireless connections, administrators can require
all wireless communications to be authenticated and encrypted. When planning wireless remote
access, security for wireless networks should be a high priority.
 Name resolution capabilities: The Domain Name System (DNS) service or Windows Internet
Name Service (WINS) service can be used to resolve host names to IP addresses. Name
resolution has to occur whenever a host name, rather than an IP address, is used to connect to a
computer, so that the IP address (and ultimately the hardware address) can be obtained for
TCP/IP based communication to occur. A short resolution sketch follows this section.
 The DNS service resolves host names and fully qualified domain names (FQDNs) to IP
addresses in TCP/IP based networks. The DNS server manages a database of host name to IP
address mappings. This is the primary method used for name resolution in Windows Server
2003.
 WINS is an enhanced NetBIOS name server (NBNS) which was designed by Microsoft to
resolve NetBIOS computer names to IP addresses, and at the same time eliminate the usage of
broadcasts for name resolution. WINS can resolve NetBIOS names for local hosts and remote
hosts.
 Automatic configuration of IP addressing and other IP parameters: The Dynamic Host
Configuration Protocol (DHCP) service simplifies the administration of IP addressing in TCP/IP
based networks. One of the primary tasks of the protocol is to automatically assign IP addresses
to DHCP clients. A server running the DHCP service is called a DHCP server. The DHCP
protocol automates the configuration of TCP/IP clients because IP addressing occurs
automatically. IP addresses that are assigned via a DHCP server are regarded as dynamically
assigned IP addresses. The DHCP server assigns IP addresses from one or more predetermined
IP address ranges.
The functions of the DHCP server running the DHCP service are listed here:
 Dynamically assign IP addresses to DHCP clients.
 Assign the following TCP/IP configuration information to DHCP clients:
 Subnet mask information
 Default gateway IP addresses
 Domain Name System (DNS) IP addresses.
 Windows Internet Naming Service (WINS) IP addresses.
There are a number of tools and features included with Windows 2000 and Windows Server
2003 that can be used to manage and monitor the networking services which you deploy within
your networking infrastructure.
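To make the name resolution capability described above concrete, here is a minimal Python
sketch using the standard socket module. It asks whatever resolver the local system is configured
with (typically DNS, possibly a hosts file) to translate a name; the host name is just an
illustrative example.

    import socket

    # Resolve a host name to an IPv4 address using the system resolver.
    name = "www.example.com"  # illustrative host name
    print(socket.gethostbyname(name))

    # getaddrinfo() returns richer results, including IPv6 addresses.
    for family, _, _, _, sockaddr in socket.getaddrinfo(name, 80):
        print(family.name, sockaddr)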

Network Infrastructure Planning Overview


Planning a network infrastructure is a complex task that needs to be performed so that the
network infrastructure needed by the organization can be designed and created. Proper planning
is crucial to ensure a highly available, high-performance network that reduces costs and
enhances business procedures for the organization.
To properly plan your network infrastructure, you have to be knowledgeable about a number of
factors, including the following:
 Requirements of the organization.
 Requirements of users.
 Existing networking technologies.
 Necessary hardware and software components.
 Networking services which should be installed on the user’s computers so that they can perform
their necessary tasks.
A typical network infrastructure planning strategy should include the following:
 Determine the requirements of the organization and its users, and then document these
requirements.
 Define a performance baseline for all existing hardware devices.
 Define a baseline for network utilization as well.
 Identify the capacity for the physical network installation. This should encompass the following:
 Server hardware, client hardware.
 Allocation of network bandwidth for the necessary networking services and applications.
 Allocation of Internet bandwidth
 Determine which network protocol will be used.
 Determine which IP addressing method you will use.
 Determine which technologies, such as operating systems and routing protocols, are needed to
cater for the organization's needs as well as for possible future expansion.
 Determine the security mechanisms which will be implemented to secure the network and
network communication.
After planning, the next step is to implement the technologies which you have
identified. Implementation of the network infrastructure involves the following tasks:
 Installing the operating systems.
 Installing the necessary protocols and software components.
 Deploying DNS or WINS name resolution.
 Designing the DNS namespace.
 Assigning IP addresses and subnet masks to computers.
 Deploying the necessary applications.
 Implementing the required security mechanisms.
 Defining and implementing IPSec policies.
 Determining the network infrastructure maintenance strategy which you will employ once the
network infrastructure is implemented. Network infrastructure maintenance consists of the
following activities:
 Upgrading operating systems.
 Upgrading applications.
 Monitoring network performance, processes and usage.
 Troubleshooting networking issues.
Windows Server 2003 includes a number of features, and user and computer management tools
that can be utilized to plan the network configuration:
 The Resultant Set of Policy (RSoP) MMC snap-in can be used to determine the effects of
applying changes to Group Policy Objects (GPOs) in Windows 2000 and Windows Server
2003 Active Directory environments before applying the changes.
 The Group Policy Management Console (GPMC) can be used if you want to view configuration
information on the existing GPO settings in Windows 2000 and Windows Server 2003 Active
Directory environments.

Determining Network Layer and Transport Layer Protocols


Windows Server 2003 supports the following network layer and transport layer protocol
combinations:
 Transmission Control Protocol/Internet Protocol (TCP/IP): TCP/IP is a grouping of protocols
which provides a collection of networking services. TCP/IP is the main protocol which Windows
Server 2003 utilizes for its network services. The main protocols in the TCP/IP suite
are Transmission Control Protocol (TCP) that operates at the Transport layer, and Internet
Protocol (IP) that operates at the Network layer. When communication takes place through
TCP/IP, IP is used at the Network layer, and either TCP or UDP is used at the Transport layer.
With TCP/IP, the TCP component of the protocol suite utilizes port numbers to forward
messages to the correct application process. Port numbers are assigned by the Internet Assigned
Numbers Authority (IANA), and they identify the process to which a particular packet
belongs. Port numbers are found in the packet header.
The main advantages of using TCP/IP are summarized below:
 Can be used to establish connections between different types of computers and servers.
 Includes support for a number of routing protocols.
 Enables internetworking between organizations.
 Includes support for name and address resolution services, including Domain Name Service
(DNS), Dynamic Host Configuration Protocol (DHCP), and Windows Internet Name Service
(WINS).
 Includes support for a number of different Internet standard protocols for Web browsing, file and
print services, and transporting mail.
The disadvantages of TCP/IP are summarized below:
 IPX is faster than TCP/IP.
 TCP/IP is intricate to set up and manage.
 The overhead of TCP/IP is higher than that of IPX.
 Internetwork Packet Exchange (IPX): The Microsoft implementation of Novell’s IPX/SPX
protocol stack is NWLink IPX/SPX. NWLink IPX/SPX is used in Novell NetWare, and is
basically IPX for Windows. Windows Server 2003 includes NWLink IPX/SPX support to enable
Windows Server 2003 to communicate with legacy Novell NetWare servers and clients.
NWLink IPX/SPX can become problematic in large networks because it does not have a
central IPX addressing scheme to prevent different networks from utilizing the same address
numbers.
The main advantages of NWLink IPX/SPX are summarized below:
 NWLink IPX/SPX is simple to implement and manage.
 Connecting to NetWare servers and clients is a simple process.
 NWLink IPX/SPX is routable.
The disadvantages of NWLink IPX/SPX are summarized below:
 Windows Server 2003 only includes limited support for NWLink IPX/SPX.
 Exchanging data between different organizations via NWLink IPX/SPX is an intricate process.
 NWLink IPX/SPX does not support standard network management protocols.
 NetBIOS Extended User Interface (NetBEUI): NetBIOS naming is supported in Windows Server
2003. Windows Server 2003 does not, however, support the NetBEUI protocol. NetBEUI is a single
protocol that was initially used in the Windows NT 3.1 and Windows for Workgroups operating
systems. The protocol provides basic file sharing services for Windows computers, and is
designed for small networks. NetBEUI does not perform well on large networks. The protocol
also cannot support internetwork traffic because it cannot route traffic between networks:
NetBEUI cannot address traffic to a computer on a different network.

TCP/IP Design Requirements


Before deciding to use a TCP/IP based network design, you first have to determine whether you
actually need to utilize TCP/IP. Whether a TCP/IP based network design is required or not is
dictated by the networking services and applications required within your network infrastructure:
 The Active Directory directory service uses the Lightweight Directory Access Protocol (LDAP)
and Domain Name System (DNS). These protocols are dependent on TCP/IP.
 Domain Name System (DNS) is the primary name resolution method used in Windows Server
2003, and is dependent on TCP/IP being installed.
 Web servers use the File Transfer Protocol (FTP) and HTTP protocols, which are each
reliant on TCP/IP.
 As mentioned earlier, the default protocol on the Internet is TCP/IP. In fact, all Internet protocols
are based on TCP/IP. If you are planning to enable Internet connectivity, TCP/IP is a
requirement.
 Both Line Printer Daemon (LPD) and Line Printer Remote (LPR) printing need TCP/IP to be
installed.
 To enable interoperability between Unix and other operating systems, TCP/IP is used as the
common transport protocol.
In order to implement a TCP/IP network infrastructure, you have to gather a number of design
requirements, including the following:
 The existing TCP/IP network’s characteristics, if applicable, should include:
 The number of network segments which currently exist.
 The IP address range assigned to the organization.
 The routing protocols being utilized.
 The attributes of the data which is to be transmitted over the network segments:
 The quantity of data transmitted over each network segment.
 The confidentiality requirements of the data.
 The amount of time which users need to access the network.
 The desired response times for any applications that access resources in the network.
 Possible future network expansion expectations.
There are a number of additional factors which need to be determined before you can create a
routing solution for your network:
 The IP addressing scheme which will be utilized.
 The IP subnet masks which will be utilized.
 The Variable Length Subnet Masks (VLSMs) which will be utilized.
 Whether Classless Inter-Domain Routing (CIDR) will be utilized.
 The standards for creating TCP/IP filters.
 The authentication methods for protecting access to the private network.
 The encryption algorithms for ensuring data confidentiality.

Determining the IP Addressing Scheme


The IP addressing scheme which you use can be based on:
 Public IP addresses: Here, the IP addressing scheme consists of only public IP addresses.
 Private IP addresses: Here, the IP addressing scheme consists of private IP addresses and a small
number of public IP addresses needed to enable Internet connectivity.
If you are only using a public IP addressing scheme in your network design, then you need to
perform the following activities:
 Purchase a range of public IP addresses from an ISP that is approved by the Internet Corporation
for Assigned Names and Numbers (ICANN).
 The IP address range should have sufficient IP addresses for all interfaces in your network
infrastructure design. Devices that connect to the private network need an IP address, and so
do VPN connections.
 You need to be certain that network address translation (NAT) is not required.
 You need to implement firewalls and router packet filters to secure the resources within your
private network from Internet users.
If you are implementing a private IP addressing scheme, then the network design would consist
of the following:
 Private IP addresses would be assigned to all devices in the private internal network.
 Public IP addresses would be assigned to all devices connecting to the public network.
The selection of the IP address range needed for the organization should be based on the
following factors:
 Maximum number of IP devices on each subnet
 Maximum number of network subnets needed in the network design.
If you are using a private IP addressing scheme in your network design, consider the following
important points:
 For those IP devices that connect the company network to public networks such as the Internet,
you need to obtain a range of public IP addresses from the ISP for these devices.
 You should only assign public IP addresses to those devices that communicate directly with the
Internet. This is mainly due to you paying for each IP address obtained. Devices which directly
connect to the Internet are your network address translation (NAT) servers, Web servers, VPN
remote access servers, routers, firewall devices, and Internet application servers.
 The private IP address range which you choose should have sufficient addresses to support the
number of network subnets in your design, and the number of devices or hosts on each particular
network subnet (see the subnetting sketch after this list).
 You must cater for a network address translation (NAT) implementation. NAT translates IP
addresses and associated TCP/UDP port numbers on the private network to public IP addresses
which can be routed on the Internet. Networks that do not require an implementation of a
firewall solution or a proxy server solution can use NAT to provide basic Internet connectivity.
Through NAT, host computers are able to share a single publicly registered IP address to access
the Internet.
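The two sizing factors above (the number of subnets and the number of hosts on each subnet) can
be checked mechanically. The following Python sketch uses the standard ipaddress module and an
assumed 10.0.0.0/16 private range, chosen purely for illustration, to show how many /24 subnets
the range yields and how many usable hosts fit in each.

    import ipaddress

    # Assumed RFC 1918 private range, chosen purely for illustration.
    block = ipaddress.ip_network("10.0.0.0/16")

    # Carve the block into /24 subnets and inspect the capacity.
    subnets = list(block.subnets(new_prefix=24))
    hosts_per_subnet = subnets[0].num_addresses - 2  # minus network/broadcast

    print(len(subnets))        # 256 subnets of size /24
    print(hosts_per_subnet)    # 254 usable hosts in each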
IP version 6 (IPv6) was designed to deal with the shortage of IP addresses under IP
version 4 (IPv4). IP version 6 also includes some modifications to TCP/IP.
The primary differences between IPv6 and IPv4 are listed here:
 Source and destination addresses: IPv4: 32 bits in length; IPv6: 128 bits in length (see the
sketch after this list).
 IPSec support: IPv4: Optional; IPv6: Required.
 Configuration of IP addresses: IPv4: Manually or via DHCP; IPv6: Via Address
Autoconfiguration – DHCP is no longer required, nor is manual configuration.
 Packet flow identification for QoS handling in the header: IPv4: No identification of packet
flow; IPv6: Packet flow identification for QoS handling exists via the Flow Label field.
 Broadcast addresses: IPv4: Broadcast addresses are used to transmit traffic to all nodes on a
specific subnet; IPv6: Broadcast addresses are replaced by a link-local scope all-nodes multicast
address.
 Fragmentation: IPv4: Performed by the sending host and at the routers; IPv6: Performed by the
sending host.
 Reassembly: IPv4: Has to be able to reassemble a 576-byte packet; IPv6: Has to be able to
reassemble a 1,500-byte packet.
 ARP Request frames: IPv4: Used by ARP to resolve an IPv4 address to a link-layer address;
IPv6: Replaced with Neighbor Solicitation messages.
 ICMP Router Discovery: IPv4: Used to determine the IPv4 address of the optimal default
gateway; IPv6: Replaced with ICMPv6 Router Solicitation and Router Advertisement messages.
 Internet Group Management Protocol (IGMP): IPv4: Used to manage local subnet group
membership; IPv6: Replaced with Multicast Listener Discovery (MLD) messages.
 Header checksum: IPv4: Included; IPv6: Excluded
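As a quick check of the address-size difference noted in the first item, this Python fragment
inspects two arbitrary example addresses with the standard ipaddress module:

    import ipaddress

    v4 = ipaddress.ip_address("192.0.2.1")    # example IPv4 address
    v6 = ipaddress.ip_address("2001:db8::1")  # example IPv6 address

    print(v4.version, v4.max_prefixlen)  # 4 32  -> 32-bit address
    print(v6.version, v6.max_prefixlen)  # 6 128 -> 128-bit address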
The advantages of IPv6 are listed below:
 Large address space: Because of the larger number of available addresses, it is no longer
necessary to utilize a Network Address Translator (NAT) to map a public IP address to
multiple private IP addresses.
 A new header format which offers less overhead: The new header format of IPv6 is designed to
minimize header overhead. All optional fields which are needed for routing are moved to
extension headers. These extension headers are located after the IPv6 header. The IPv6 header
format is also streamlined so that it is more efficiently processed at intermediate routers. The
number of bits in an IPv6 address is four times that of an IPv4 address.
 An efficient hierarchical addressing and routing infrastructure: The IPv6 global addresses are
designed to create an efficient routing infrastructure.
 Built in support for security – IPSec: A requirement of IPv6 is support for IPSec. IPSec contains
the following components that provide security:
 Authentication header (AH): The AH provides data authentication, data integrity and replay
protection for the IPv6 packet. The only fields in the IPv6 packet that are excluded are those
fields that change when the packet moves over the network.
 Encapsulating Security Payload (ESP) header: The ESP header provides data authentication,
data confidentiality, data integrity, and replay protection for the ESP-encapsulated payload.
 Internet Key Exchange (IKE) protocol: The IKE protocol is used to negotiate IPSec security
settings.
 Support for stateless and stateful address configuration: IPv6 can support a stateful address
configuration and a stateless address configuration. With IPv4, hosts configured to use DHCP
that cannot reach a DHCP server must wait about a minute before they can self-configure their
own IPv4 addresses. Stateless address configuration, however, enables a host on a link to
automatically configure its own IPv6 address for the link. These addresses are called link-local
addresses. A link-local address is configured automatically, even when no router exists. This
allows communication between neighboring nodes on the same link to occur immediately.
 Support for Quality of service (QoS) header fields: There are new fields in the IPv6 header that
specify the way traffic is identified and handled.
 Traffic Class field: This field defines traffic that must be prioritized.
 Flow Label field: This field enables the router to identify packets, and also handle packets that
are part of the same flow in a special way.
 Unlimited extension headers: You can add extension headers after the IPv6 header if you want to
extend IPv6 for any new features.
 The Neighbor Discovery (ND) protocol for managing nodes on the same link: Neighbor
Discovery is a series of Internet Control Message Protocol for IPv6 (ICMPv6) messages that are
used in IPv6 environments to identify the relationships between neighboring nodes. ND enables
hosts to discover routers on the same segment, as well as addresses and address prefixes. Address
Resolution Protocol (ARP), ICMPv4 Router Discovery and ICMPv4 Redirect messages are
replaced with the more efficient multicast and unicast Neighbor Discovery messages.
If a single IP address is to provide multiple services to the network, then each particular service
must have a unique TCP port or UDP port at that IP address. There are a number of well-known
ports which are used by the different services running on your computers (a short lookup sketch
follows this list).
The main port numbers used by protocols/services running on your computers are listed here:
 Port 20; for File Transfer Protocol (FTP) data
 Port 21; for File Transfer Protocol (FTP) control
 Port 23; for Telnet.
 Port 25; for Simple Mail Transfer Protocol (SMTP)
 Port 37; for Time Protocol.
 Port 49; for Terminal Access Controller Access Control System (TACACS) and TACACS+
 Port 53; for DNS.
 Port 67; for BOOTP server.
 Port 68; for BOOTP client.
 Port 69; for TFTP.
 Port 70; for Gopher.
 Port 79; for Finger.
 Port 80; for Hypertext Transfer Protocol (HTTP)
 Port 88; for Kerberos
 Port 109; for Post Office Protocol version 2 (POP2)
 Port 110; for Post Office Protocol version 3 (POP3)
 Port 115; for Simple File Transfer Protocol (SFTP)
 Port 119; for Network News Transfer Protocol (NNTP)
 Port 123; for Network Time Protocol (NTP)
 Port 137; for NetBIOS Name Service
 Port 138; for NetBIOS Datagram Service
 Port 139; for NetBIOS Session Service
 Port 143; for Internet Message Access Protocol (IMAP)
 Port 153; for Simple Gateway Monitoring Protocol (SGMP)
 Port 161; for SNMP
 Port 162; for SNMP traps
 Port 179; for BGP
 Port 389; for Lightweight Directory Access Protocol (LDAP) and Connectionless Lightweight
X.500 Directory Access Protocol (CLDAP)
 Port 443; for Secure HTTP (HTTPS)
 Port 500; for Internet Key Exchange (IKE)
 Port 546; for DHCPv6 client
 Port 547; for DHCPv6 server
 Port 631; for Internet Printing Protocol (IPP)
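Many of these well-known mappings can be queried programmatically. As a small illustration,
the following Python sketch looks a few of them up in the local services database via the
standard socket module (coverage varies by operating system):

    import socket

    # Look up well-known port numbers from the local services database.
    for service, proto in [("http", "tcp"), ("smtp", "tcp"), ("domain", "udp")]:
        port = socket.getservbyname(service, proto)
        print(service, proto, port)  # e.g. http tcp 80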
Determining Locations of Network Components
When planning locations for your hardware and software components, the factors that you need
to consider are primarily determined by how your users need to access your devices to carry out
their daily tasks.
When determining locations for cables, a few important factors to consider are listed here:
 To maintain the network infrastructure, you need to be knowledgeable about where cables are
located.
 You also need to know how cables are arranged when you have to both maintain and
troubleshoot network infrastructure issues.
 When determining locations for cables and the routing strategy of your cables, you need to
know the locations of any obstacles which could affect the performance of your cables. These
obstacles should be bypassed.
 When routing cables, there are a number of components which cables have to either pass around
or through, that have to be determined:
 Air conditioning ducts.
 Firewalls
 Plenums
 You would need to determine the manner in which the cables should be installed.
 In cases where the cables have to run down into the center of the room from the ceiling, it is
important to determine the precise location of the utility pole that will hold the cables.
 You need to determine the location of each cable terminus.
 You should cater for additional cable runs for any future network expansion plans.
When determining locations for connectivity devices, a few important factors to consider are
listed here:
 You need to determine the locations of hubs and patch panels.
 The network's size and the installation site determine the following:
 Locations of hubs and patch panels.
 Number of hubs and patch panels needed.
 You should always include ceiling heights in your planning – remember that cable runs are
typically longer than they seem because they run around obstacles.
 The size of the network and the protocols which you plan to utilize determines how connectivity
is established. For instance, hubs and switches can be used to connect building floors. Routers
can be used to create an internetwork.
When determining locations for servers, a few important factors to consider are listed here:
 Servers need to be physically secured and protected from power spikes and interruptions.
 With internetworks, the locations of the users that need to access servers are a determining
factor for server placement.
 If you are planning to use departmental servers for your network, place these servers in locked
closets.
 A better option than a departmental server strategy is to place all servers in a central data
center. It is easier to physically secure servers when they reside in a single data center.
 For servers that need to be accessed by all users within the organization, you need to place these
servers where they can directly be connected to the backbone network.
When determining locations for workstations, a few important factors to consider are listed here:
 Before placing any workstation, you need to determine which computer type is needed.
 You also need to determine how workstations should be placed relative to the actual desk.
When determining locations for printers and other shared components, a few important factors
to consider are listed here:
 Printers should be placed where users can easily access them.
 Be careful when placing printers that release gases when they operate, as these can cause
discomfort to users.
 When determining the location of printers, include factors such as maintenance access to the
printer, and the locations of the printer’s supplies (toner, paper).

A Definition of Web Application Architecture

Web application architecture defines the interactions between applications, middleware systems
and databases to ensure multiple applications can work together. When a user types in a URL
and taps "Go," the browser will find the Internet-facing computer the website lives on and
request that particular page.

The server then responds by sending files over to the browser. After that action, the browser
executes those files to show the requested page to the user. Now, the user gets to interact with the
website. Of course, all of these actions are executed within a matter of seconds. Otherwise, users
wouldn’t bother with websites.

What’s important here is the code, which has been parsed by the browser. This very code may or
may not have specific instructions telling the browser how to react to a wide swath of inputs. As
a result, web application architecture includes all sub-components and external applications
interchanges for an entire software application.

Of course, it is designed to function efficiently while meeting its specific needs and goals. Web
application architecture is critical, since the majority of global network traffic, and every single
app and device, uses web-based communication. It deals with scale, efficiency, robustness, and
security.

How Web Application Architecture Works

With web applications, you have the server vs. the client side. In essence, there are two programs
running concurrently:

 The code which lives in the browser and responds to user input
 The code which lives on the server and responds to HTTP requests

When writing an app, it is up to the web developer to decide what the code on the server should
do in relation to what the code on the browser should do. With server-side code, commonly used
languages and frameworks include:

 Ruby (with Rails)
 PHP
 C#
 Java
 Python
 JavaScript

In fact, any code that can respond to HTTP requests has the capability to run on a server. Here
are a few other attributes of server-side code:

 Is never seen by the user (except in the rare case of a malfunction)
 Stores data such as user profiles, tweets, pages, etc.
 Creates the page the user requested

With client-side code, languages used include:

 CSS
 JavaScript
 HTML
These are then parsed by the user's browser. Moreover, client-side code can be seen and edited
by the user. Plus, it has to communicate only through HTTP requests and cannot read files off a
server directly. Furthermore, it reacts to user input. A minimal sketch of the server side of this
split follows.
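Here is a minimal, illustrative sketch of that split, using only Python's standard library; the port
and HTML payload are arbitrary. The handler is server-side code responding to HTTP requests,
while everything inside the returned HTML, including the script, would be parsed and run
client-side by the browser.

    from http.server import BaseHTTPRequestHandler, HTTPServer

    PAGE = b"""<html><body>
    <h1>Hello from the server</h1>
    <script>console.log('this line runs in the browser');</script>
    </body></html>"""

    class Handler(BaseHTTPRequestHandler):
        def do_GET(self):
            # Server-side code: build and send the page the user requested.
            self.send_response(200)
            self.send_header("Content-Type", "text/html")
            self.end_headers()
            self.wfile.write(PAGE)

    # Serve on an arbitrary local port until interrupted.
    HTTPServer(("127.0.0.1", 8000), Handler).serve_forever()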

Web Application Architecture is Important for Supporting Future Growth

The reason why it is imperative to have good web application architecture is that it is the
blueprint for supporting future growth, which may come from increased demand, future
interoperability and enhanced reliability requirements. Through object-oriented programming,
the organizational design of web application architecture defines precisely how an application
will function. Some features include:

 Delivering persistent data through HTTP, which can be understood by client-side code, and
vice-versa
 Making sure requests contain valid data
 Offering authentication for users
 Limiting what users can see based on permissions
 Creating, updating and deleting records

Trends in Web Application Architecture

As technology continues to evolve, so does web application architecture. One such trend is the
use and creation of service-oriented architecture. This is where most of the code for the entire
application exists as services. In addition, each has its own HTTP API. As a result, one facet of
the code can make a request to another part of the code, which may be running on a different
server.

Another trend is the single-page application. This is where the web UI is presented through a rich
JavaScript application. It then stays in the user's browser over a variety of interactions. In terms
of requests, it uses AJAX or WebSockets to perform asynchronous or synchronous requests
to the web server without having to reload the page.

The user then gets a more natural experience with limited page load interruptions. At their core,
many web applications are built around objects. The objects are stored in tables in a SQL
database. Each row in a table holds a particular record. So, with relational databases, it is all
about relations: you can retrieve a record just by naming the row and column of a target data
point. A minimal sketch of this record-per-row idea follows.
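The following Python sketch illustrates that record-per-row idea with the standard sqlite3
module; the table and its columns are invented for illustration.

    import sqlite3

    # In-memory database; each row of the table is one record.
    db = sqlite3.connect(":memory:")
    db.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT)")
    db.execute("INSERT INTO users (name) VALUES (?)", ("Alice",))
    db.execute("INSERT INTO users (name) VALUES (?)", ("Bob",))

    # Retrieve a record by addressing its row (id) and column (name).
    row = db.execute("SELECT name FROM users WHERE id = ?", (2,)).fetchone()
    print(row[0])  # Bob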

With the two above trends, web apps are now much better suited for viewing on multiple
platforms and multiple devices. Even when most of the code for the apps remain the same, they
can still be viewed clearly and easily on a smaller screen.
Best Practices for Good Web Application Architecture

You may have a working app, but it also needs to have good web architecture. Here are several
attributes necessary for good web application architecture:

 Solves problems consistently and uniformly
 Is as simple as possible
 Supports the latest standards, including A/B testing and analytics
 Offers fast response times
 Utilizes security standards to reduce the chance of malicious penetrations
 Does not crash
 Heals itself
 Does not have a single point of failure
 Scales out easily
 Allows for easy creation of known data
 Logs errors in a user-friendly way
 Supports automated deployments

The reason the above factors are necessary is that, with the right attributes, you can build a
better app. By supporting horizontal and vertical growth, software deployment becomes much
more efficient, user-friendly and reliable.
ISP

Stands for "Internet Service Provider." An ISP provides access to the Internet. Whether you're at
home or work, each time you connect to the Internet, your connection is routed through an ISP.

Early ISPs provided Internet access through dial-up modems. This type of connection took place
over regular phone lines and was limited to 56 Kbps. In the late 1990s, ISPs began offering faster
broadband Internet access via DSL and cable modems. Some ISPs now offer high-speed fiber
connections, which provide Internet access through fiber optic cables. Companies like Comcast
and Time Warner provide cable connections while companies like AT&T and Verizon provide
DSL Internet access.

To connect to an ISP, you need a modem and an active account. When you connect a modem to
the telephone or cable outlet in your house, it communicates with your ISP. The ISP verifies your
account and assigns your modem an IP address. Once you have an IP address, you are connected
to the Internet. You can use a router (which may be a separate device or built into the modem) to
connect multiple devices to the Internet. Since each device is routed through the same modem,
they will all share the same public IP address assigned by the ISP.

ISPs act as hubs on the Internet since they are often connected directly to the Internet backbone.
Because of the large amount of traffic ISPs handle, they require high bandwidth connections to
the Internet. In order to offer faster speeds to customers, ISPs must add more bandwidth to their
backbone connection in order to prevent bottlenecks. This can be done by upgrading existing
lines or adding new ones.

URL is the abbreviation of Uniform Resource Locator, and is defined as the global address of
documents and other resources on the World Wide Web. To visit this website, for example,
you'll go to the URL www.webopedia.com.
We all use URLs to visit webpages and other resources on the web. The URL is an address that
sends users to a specific resource online, such as a webpage, video or other document or
resource. When you search Google, for example, the search results will display the URL of the
resources that match your search query. The title in search results is simply a hyperlink to the
URL of the resource.
A URL is one type of Uniform Resource Identifier (URI); the generic term for all types of names
and addresses that refer to objects on the World Wide Web.
What Are the Parts of a URL?
The first part of the URL is called the protocol identifier, and it indicates what protocol to use;
the second part is called the resource name, and it specifies the IP address or the domain
name where the resource is located. The protocol identifier and the resource name are separated
by a colon and two forward slashes, as the sketch below illustrates.
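A quick way to inspect those parts is Python's standard urllib.parse module; the URL below is
just an illustrative example:

    from urllib.parse import urlparse

    parts = urlparse("http://www.webopedia.com/definitions/url/?ref=home")

    print(parts.scheme)  # 'http' -> the protocol identifier
    print(parts.netloc)  # 'www.webopedia.com' -> the host part of the resource name
    print(parts.path)    # '/definitions/url/'
    print(parts.query)   # 'ref=home'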

HTTP means HyperText Transfer Protocol. HTTP is the underlying protocol used by the World
Wide Web and this protocol defines how messages are formatted and transmitted, and what
actions Web servers and browsers should take in response to various commands.
For example, when you enter a URL in your browser, this actually sends an HTTP command to
the Web server directing it to fetch and transmit the requested Web page (see the sketch below).
The other main standard that controls how the World Wide Web works is HTML, which covers
how Web pages are formatted and displayed.
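As a minimal sketch of that request/response exchange, the following Python fragment sends an
HTTP GET with the standard library and prints the status code the server returns; example.com
is a placeholder host.

    from urllib.request import urlopen

    # Send an HTTP GET request and read the response.
    with urlopen("http://example.com/") as response:
        print(response.status)       # e.g. 200 on success
        print(response.read()[:60])  # first bytes of the returned page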
HTTP is a Stateless Protocol
HTTP is called a stateless protocol because each command is executed independently, without
any knowledge of the commands that came before it. This is the main reason that it is difficult to
implement Web sites that react intelligently to user input. This shortcoming of HTTP is being
addressed in a number of new technologies, including ActiveX, Java, JavaScript and cookies.
HTTP Status Codes are Error Messages
Errors on the Internet can be quite frustrating — especially if you do not know the difference
between a 404 error and a 502 error. These error messages, also called HTTP status codes, are
response codes given by Web servers and help identify the cause of the problem.
For example, "404 File Not Found" is a common HTTP status code. It means the Web server
cannot find the file you requested. This means the webpage or other document you tried to load
in your Web browser has either been moved or deleted, or you entered the wrong URL or
document name.
Knowing the meaning of the HTTP status code can help you figure out what went wrong. On a
404 error, for example, you could look at the URL to see if a word looks misspelled, then correct
it and try it again. If that doesn't work, backtrack by deleting information between each
forward slash until you come to a page on that site that isn't a 404. From there you may be able to
find the page you're looking for.
An HTTP cookie (also called web cookie, Internet cookie, browser cookie, or simply cookie)
is a small piece of data sent from a website and stored on the user's computer by the user's web
browser while the user is browsing. Cookies were designed to be a reliable mechanism for
websites to remember stateful information (such as items added in the shopping cart in an online
store) or to record the user's browsing activity (including clicking particular buttons, logging in,
or recording which pages were visited in the past). They can also be used to remember arbitrary
pieces of information that the user previously entered into form fields such as names, addresses,
passwords, and credit card numbers.
Other kinds of cookies perform essential functions in the modern web. Perhaps most
importantly, authentication cookies are the most common method used by web servers to know
whether the user is logged in or not, and which account they are logged in with. Without such a
mechanism, the site would not know whether to send a page containing sensitive information, or
require the user to authenticate themselves by logging in. The security of an authentication
cookie generally depends on the security of the issuing website and the user's web browser, and
on whether the cookie data is encrypted. Security vulnerabilities may allow a cookie's data to be
read by a hacker, used to gain access to user data, or used to gain access (with the user's
credentials) to the website to which the cookie belongs (see cross-site scripting and cross-site
request forgery for examples).[1]
Tracking cookies, and especially third-party tracking cookies, are commonly used as ways to
compile long-term records of individuals' browsing histories – a potential privacy concern that
prompted European[2] and U.S. lawmakers to take action in 2011.[3][4] European law requires that
all websites targeting European Union member states gain "informed consent" from users before
storing non-essential cookies on their device.
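As a small illustration of the mechanism, Python's standard http.cookies module can build the Set-Cookie header a server sends and parse the Cookie header a browser returns; the cookie name and value are invented for the example.

from http.cookies import SimpleCookie

# Server side: build a Set-Cookie header that remembers a session.
cookie = SimpleCookie()
cookie["session_id"] = "abc123"
cookie["session_id"]["httponly"] = True   # hide the cookie from JavaScript
cookie["session_id"]["secure"] = True     # send it only over HTTPS
print(cookie.output())   # Set-Cookie: session_id=abc123; HttpOnly; Secure

# Client side: parse the Cookie header sent back on later requests.
incoming = SimpleCookie("session_id=abc123")
print(incoming["session_id"].value)       # "abc123"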

How to delete cookies-

This document explains how to clear the cache and cookies in Internet Explorer 8.

1. Select Tools > Internet Options.


2. Click on the General tab and then the Delete... button.
3. Make sure to uncheck Preserve Favorites website data and check both Temporary
Internet Files and Cookies, then click Delete.
FURTHER TROUBLESHOOTING

The above procedure for clearing cache and cookies should work for the majority of websites,
but certain websites and applications such as WiscMail require a more thorough procedure. If you
are still having issues, try the steps below.

1. Close out of Internet Options. Click on Tools and select Developer Tools.
2. In the Developer Tools window, click on Cache and select Clear Browser Cache...
3. Click Yes to confirm the clearing of the browser cache.
What is e-commerce security?
E-commerce security is the protection of e-commerce assets from unauthorized access, use, or
modification.

What is an e-commerce threat?

In simple words, an e-commerce threat is any use of the internet for unfair means with the
intention of stealing, fraud, or breaching security.
There are various types of e-commerce threats. Some are accidental, some are purposeful, and
some of them are due to human error. The most common security threats are phishing attacks,
money theft, data misuse, hacking, credit card fraud, and unprotected services.
Inaccurate management-One of the main reasons for e-commerce threats is poor management.
When security is not up to the mark, it poses a very dangerous threat to networks and systems.
Security threats also occur when no proper budget is allocated for the purchase of anti-virus
software licenses.
Price Manipulation-Modern e-commerce systems often face price manipulation problems. These
systems are fully automated, right from the first visit to the final payment gateway. Stealing is
the most common intention behind price manipulation: it allows an intruder to insert a lower
price into the URL and get away with paying less than the actual price, which is why the price
must always be verified on the server side (see the sketch below).
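A minimal server-side sketch of the usual defence, in Python: the authoritative price lives in a server-side catalogue, and any price arriving from the client (for example, in a URL parameter) is treated as untrusted input. The catalogue, item ids, and prices here are hypothetical.

CATALOGUE = {"sku-001": 49.99, "sku-002": 120.00}   # server-side source of truth

def charge_for(sku, client_supplied_price):
    # Look the price up on the server instead of trusting the URL parameter.
    real_price = CATALOGUE.get(sku)
    if real_price is None:
        raise ValueError("unknown item")
    if client_supplied_price != real_price:
        # A mismatch may indicate attempted price manipulation.
        raise ValueError("price mismatch; possible tampering")
    return real_price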
Snowshoe Spam-Spam is something very common; almost all of us deal with spam messages in
our mailboxes. The spam problem has never actually been solved, and it is now becoming harder
to contain. The reason lies in the nature of the spam itself. Ordinary spam is sent from a single
source, but a new development has taken place in the cyber world, called snowshoe spam. Unlike
regular spam, it is not sent from one computer but is spread across many senders and addresses.
In such a case it becomes difficult for anti-spam software to block the spam messages.
Malicious code threats-These code threats typically involve viruses, worms, and Trojan horses.
 Viruses are normally external threats and can corrupt the files on the website if they find
their way into the internal network. They can be very dangerous, as they can destroy computer
systems completely and damage the normal working of the computer. A virus always
needs a host, as it cannot spread by itself.
 Worms are different from, and more serious than, viruses. A worm places itself directly
through the internet and can infect millions of computers in a matter of just a few hours.
 A Trojan horse is programming code which can perform destructive functions. Trojans
normally attack your computer when you download something, so always check the source
of the downloaded file.
Hacktivism-The term hacktivism is short for hacking activism. At first it may seem like a cyber
threat you hardly need to be aware of; after all, it is a problem seemingly unrelated to you. That
is not the case, however. Firstly, hacktivists do not target only those directly associated with
politics; an attack can also have a socially motivated purpose. It typically means using social
media platforms to bring social issues to light, but it can also include flooding an email address
with so much traffic that it temporarily shuts down.
Wi-Fi Eavesdropping-This is also one of the easiest ways in e-commerce to steal personal data. It
is a form of "virtual listening in" on information shared over an unencrypted Wi-Fi network. It
can happen on public as well as personal computers.
Other threats-Other threats include data packet sniffing, IP spoofing, and port scanning.
Programs that perform data packet sniffing are commonly called sniffers. An intruder can use a
sniffer to attack a data packet flow and scan individual data packets. With IP spoofing it is very
difficult to track the attacker: the purpose is to change the source address of packets so that
they appear to have originated from another computer.

Ways to combat e-commerce threats


Developing a thorough implementation plan is the first step to minimizing cyber threats.
Encryption-It is the process of converting normal text into encoded text which cannot be read
by anyone except the sender and the receiver of the message.
Having digital certificates
A digital certificate is issued by a reliable third-party company. A digital certificate contains
the following: the name of the company (only in an EV SSL Certificate), and, most importantly,
the digital certificate serial number, the expiry date, and the date of issue. An EV SSL
Certificate is recommended because it provides a high level of authentication for your website.
The very function of this kind of certificate is to protect an e-commerce website from unwanted
attacks such as a Man-in-the-Middle attack. There are also different types of SSL certificates
available (such as Wildcard SSL, SAN, SGC, Exchange Server certificates, etc.) which you can
choose according to the needs of your website.

Threats To Server Security

Server security is as important as network security because servers can hold most or all of the
organization's vital information. If a server is compromised, all of its contents may become
available for the cracker to steal or manipulate at will. There are many ways that a server can be
cracked. The following sections detail some of the main issues.

Unused Services and Open Ports

By default, most operating systems install several pieces of commonly used software. Red Hat
Linux, for example, can install up to 1200 application and library packages in a single
installation. While most server administrators will not opt to install every single package in the
distribution, they will install a base installation of packages, including several server
applications.

A common occurrence among system administrators is to install an operating system without
knowing what is actually being installed. This can be troublesome, as most operating systems
will not only install the applications, but also set up a base configuration and turn services on.
This can cause unwanted services, such as telnet, DHCP, or DNS, to be running on a server or
workstation without the administrator realizing it, leading to unwanted traffic to the server or
even a path into the system for crackers. See Chapter 5 for information on closing ports and
disabling unused services; a quick way to spot-check for listening services is sketched below.
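As a rough spot-check for listening services, the short Python sketch below probes a few TCP ports on the local machine; the host and port list are illustrative, and a real audit would use the operating system's own tools or a full port scanner.

import socket

def port_open(host, port, timeout=1.0):
    # Return True if a TCP service is listening on host:port.
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Probe a few well-known service ports (23 = telnet, 53 = DNS, 80 = HTTP).
for port in (23, 53, 80):
    print(port, "open" if port_open("127.0.0.1", port) else "closed")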

Unpatched Services
Most server applications that are included in a default Red Hat Linux installation are solid,
thoroughly tested pieces of software. Many of the server applications have been in use in
production environments for many years, and their code has been thoroughly refined and many
of the bugs have been found and fixed.

However, there is no such thing as perfect software, and there is always room for further
refinement. Moreover, newer software is often not as rigorously tested as one might expect, due
to its recent arrival to production environments or because it may not be as popular as other
server software. Developers and system administrators often find exploitable bugs in server
applications and publish the information on bug tracking and security-related websites such
as the Bugtraq mailing list or the Computer Emergency Response Team website. CERT and
Bugtraq normally alert interested parties of the vulnerabilities. However, even then, it is up to
system administrators to patch and fix these bugs whenever they are made public, as crackers
also have access to these vulnerability tracking services and will use such information to crack
unpatched systems wherever they can. Good system administration requires vigilance, constant
tracking of bugs, and proper system maintenance to ensure a secure computing environment.

Inattentive Administration

Similar to server applications that languish unpatched by developers are administrators who
fail to patch their systems or do not know how to do so. According to the SysAdmin, Audit,
Network, and Security (SANS) Institute, the primary cause of computer security vulnerability is
to "assign untrained people to maintain security and provide neither the training nor the time to
make it possible to do the job."[1] This applies as much to inexperienced administrators as it
does to overconfident or unmotivated administrators.

Some administrators fail to patch their servers and workstations, while others fail to watch log
messages from their system kernel or from network traffic. Another common error is to leave the
default passwords or keys in services that have such authentication methods built into them. For
example, some databases leave default administration passwords under the assumption that the
system administrator will change this immediately upon configuration. Even an inexperienced
cracker can use the widely-known default password to gain administrative privileges to the
database. These are just a few examples of inattentive administration that can eventually lead to
a compromised system.

Inherently Insecure Services

Even the most vigilant organization that does their job well and keeps up with their daily
responsibilities can fall victim to vulnerabilities if the services they choose for their network are
inherently insecure. There are certain services that were developed under the assumption that
they will be used over trusted networks; however, this assumption falls short as soon as the
service becomes available over the Internet.
The Internet was initially developed to provide many paths for military communications.
Security was not an issue then, since all military communication was encoded. We all know that
security is a major issue today, especially since e-commerce, electronic banking, and other major
financial transactions traverse the Internet. In this section, we will examine secrecy, integrity,
and necessity threats. We will also discuss some solutions to remedy these problems.

Secrecy Threats: To begin, let us explain the difference between secrecy and privacy.
Secrecy is a technical issue that requires sophisticated physical and logical mechanisms
and focuses on the prevention of unauthorized disclosure of information. Privacy, on the other
hand, is a legal issue and relates to the protection of individual rights of nondisclosure. Some
common problems experienced in the communication channel are as follows:

1. unauthorized individuals steal personal information (e.g., credit card number, name,
address, etc.) by recording information packets;
2. sniffer programs read, decrypt, and record e-mail transmissions;
3. backdoors (electronic holes in software) allow unauthorized users to observe traffic,
and to delete or steal data;
4. online eavesdroppers steal proprietary corporate information.

The Privacy Council created a Web site to address both business and legal issues and to assist
businesses in developing security policies. In addition, some sites (such as Anonymizer) provide
an anonymous browsing service.

Integrity Threats (or active wiretapping): This threat occurs when a message stream of
information (e.g., banking transaction) is altered by an unauthorized person. Examples of these
threats include the following:

 cybervandalism: electronically defacing an existing Web site's page by inserting
different content material (which may include offensive pornographic material)
 masquerading/spoofing: pretending to be someone else or presenting a fake Web site to
spoof visitors (e.g., a hacker substitutes their Web site address in place of the real one by
taking advantage of backdoors). This type of action can have dire consequences for
buyers and sellers using e-commerce sites to transact business. In many cases, hackers
use spam e-mails to get visitors to link to a fake Web site where they enter personal
information - capturing information this way is known as a phishing expedition and it
occurs frequently with online banking and payment systems.

Necessity Threats: Other names for this threat include delay, denial, or denial-of-service (DoS)
threat. One goal is to disrupt or stop computer processing which ultimately causes frustrated
visitors to leave the site. Another goal is to delete information from a transmission, a file, or the
system. The Internet Worm in 1988 was the first recorded DoS attack; it crippled thousands of
computers connected to the Internet.
Threats to Wireless Networks: Wireless access points (WAPs) are great in that they allow
mobile devices to connect with networks provided they are within a specified range. The
drawback is that individuals can also access the network's resources (e.g., databases, printers,
messages, and the Internet). To prevent this from happening, companies can turn on WEP
(Wired Equivalent Privacy), which encrypts transmissions from wireless devices to the
WAPs. Companies that fail to change the default login and password for WAPs are creating an
opportunity for hackers. Wardrivers are attackers who drive around searching for accessible
networks. They then place a chalk mark on the building (warchalking) or draw maps to record
free access points, which they share with other hackers.

Encryption Solutions: One technique used to mask data so that it cannot be read by
unauthorized persons is encryption. In this process, a mathematically based program and a
secret key are used to produce an unintelligible string of characters that can only be deciphered
by the sender and receiver of the message. Some important terms used in this technique are as
follows:

 cryptography: the science of studying encryption
 plain text: normal text
 cipher text: an unintelligible string of characters
 encryption program: a program that transforms plain text into cipher text
 encryption algorithm: the logic and mathematics of an encryption program that
transform plain text into cipher text
 decryption: the process of decoding the cipher text
 decryption program: a program that reverses the encryption procedure
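The toy sketch below maps these terms onto Python code using a simple XOR scheme; it is for illustration of the vocabulary only and is not a secure cipher.

def xor_cipher(text: bytes, key: bytes) -> bytes:
    # The encryption algorithm: combine each byte with the secret key.
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(text))

plain_text = b"attack at dawn"
key = b"secret"
cipher_text = xor_cipher(plain_text, key)   # unintelligible bytes
recovered = xor_cipher(cipher_text, key)    # decryption reverses the process
assert recovered == plain_text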

The question, then, is how effective are encryption techniques? The answer lies in the size of the
key used in the procedure. A 40-bit key provides minimal security. The larger the key, the
stronger the encryption, making it computationally infeasible for a hacker to decipher the
message. In general, encryption can be divided into three functions based on the type of key and
encryption program:

1. Hash Coding: A unique hash value is created from a message using a hash
algorithm, which is a one-way function. This serves as a fingerprint for the message and
therefore makes it easy to determine whether anyone has tampered with the message during
transmission. The chances of a collision (duplicated hash values) are extremely small.

2. Asymmetric Encryption (or public-key encryption): In this case, a message is encoded
using two mathematically related numeric keys. The public key, which is used to encrypt
messages, is given to anyone who wants to transmit secured information to the holder of
both keys. This individual has the private key, which is never shared because it is used to
decrypt all messages received. PGP Corporation owns Pretty Good Privacy (PGP), a
popular technology used to implement public-key encryption. It is free for personal use
but businesses have to obtain a license.

Advantages: public key encryption scales well; the public key can be posted anywhere
and does not require any special handling; digital signatures can be used to authenticate
documents.

Disadvantages: the encryption process is slower than with the private-key approach; it
needs to be combined with a private-key system to get the best result.

3. Symmetric Encryption (or private-key encryption): This technique uses only one
numeric key to encode and decode messages, and it is very fast and efficient. Since both the
sender and the receiver have access to the key, it must be well protected. The
disadvantages of this method are as follows: all messages must be encrypted, and the
technique does not scale well in large environments. The most widely used private-key
encryption system is the Data Encryption Standard (DES). A short code sketch of all three
functions follows this list.
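As a minimal sketch of all three functions, the Python below uses the standard hashlib module together with the third-party cryptography package. SHA-256, RSA-OAEP, and Fernet (an AES-based scheme standing in for the now-dated DES) are choices of the example, not requirements of the text above.

import hashlib
from cryptography.fernet import Fernet
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import rsa, padding

message = b"Pay Alice $100"

# 1. Hash coding: a one-way fingerprint of the message.
print(hashlib.sha256(message).hexdigest())

# 2. Asymmetric (public-key) encryption: anyone may encrypt with the
#    public key; only the private-key holder can decrypt.
private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
oaep = padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                    algorithm=hashes.SHA256(), label=None)
ciphertext = private_key.public_key().encrypt(message, oaep)
assert private_key.decrypt(ciphertext, oaep) == message

# 3. Symmetric (private-key) encryption: one shared secret key both
#    encrypts and decrypts, so the key itself must be protected.
shared_key = Fernet.generate_key()
f = Fernet(shared_key)
assert f.decrypt(f.encrypt(message)) == message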

Private keys and public keys are parts of the encryption that encodes information. The two keys
are used in two encryption systems called symmetric and asymmetric. Symmetric encryption
(private-key encryption or secret-key encryption) utilizes the same key for encryption and
decryption. Asymmetric encryption utilizes a pair of keys, a public and a private key, for better
security, where a message sender encrypts the message with the public key and the receiver
decrypts it with his/her private key.
A public and private key pair helps to encrypt information and ensures data is protected during
transmission.

Public Key:
The public key uses asymmetric algorithms that convert messages into an unreadable format. A
person who has the public key can encrypt a message intended for a specific receiver. Only the
receiver with the private key can decode a message encrypted with the public key. The key is
made available via a publicly accessible directory.

Private Key:
The private key is a secret key that is used to decrypt the message, and it is known only to the
parties exchanging messages. In the traditional method, a secret key is shared between
communicators to enable encryption and decryption of the message, but if the key is lost, the
system becomes void. To avoid this weakness, PKI (public key infrastructure) came into force,
where a public key is used along with the private key. PKI enables internet users to exchange
information in a secure way with the use of a public and private key.
Key Size and Algorithms:
RSA, DSA, and ECC (Elliptic Curve Cryptography) algorithms are used to create the public and
private keys in public key cryptography (asymmetric encryption). For security reasons, the
CA/Browser Forum and NIST advise using an RSA key of at least 2048 bits. The key size (bit
length) of a public and private key pair determines how easily the key can be broken with a brute
force attack. As computing power increases, stronger keys are required to secure transmitted
data.
Definition of Digital Signature-

A digital signature is a technique that verifies the authenticity of a digital document: a
particular code is attached to the message that acts as a signature. A hash of the message is
computed, and that hash is then encrypted with the sender's private key to form the signature.
The signature ensures the source and integrity of the message.

Techniques used for performing digital signature

The Digital Signature Standard (DSS) was developed for performing digital signatures.
The National Institute of Standards and Technology (NIST) proposed the DSS in 1991 and
issued it as Federal Information Processing Standard (FIPS) PUB 186 in 1994.

The SHA-1 algorithm is used in DSS for computing the message digest of the original message,
and the message digest is used to produce the digital signature. For this, DSS uses the Digital
Signature Algorithm (DSA), which is based on asymmetric key cryptography.
Furthermore, the RSA algorithm can also be used for performing digital signatures, though its
primary use is to encrypt messages; DSA, by contrast, cannot be used for encryption.
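A minimal sign-and-verify sketch using DSA via the third-party cryptography package; SHA-256 is used here, whereas the original DSS specified SHA-1.

from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import dsa

message = b"signed agreement"
private_key = dsa.generate_private_key(key_size=2048)

# Sign: the library hashes the message and signs the digest with the
# sender's private key.
signature = private_key.sign(message, hashes.SHA256())

# Verify: anyone holding the public key can check source and integrity.
try:
    private_key.public_key().verify(signature, message, hashes.SHA256())
    print("signature valid")
except InvalidSignature:
    print("signature invalid")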

Definition of Digital Certificate-

A Digital Certificate is simply a computer file which helps establish your identity. It
officially attests to the relation between the holder of the certificate (the user) and a particular
public key. Thus, a digital certificate should include the user's name and the user's public key.
This proves that a certain public key is owned by a particular user.

A digital certificate consists of the following information: Subject name (User’s name is
referred to as Subject name because a digital certificate can be issued to an individual, a group or
an organization), Serial number, Validity date range and issuer name, etc.

A Certification Authority (CA) is a trusted agency that can issue digital certificates to
individuals and organizations which want to use those certificates in asymmetric key
cryptographic applications. Generally, a CA is a well-known organization, such as a financial
institution, a post office, or a software company. The most popular CAs are Verisign and
Entrust.

A CA accomplishes various tasks: for example, it issues new certificates, maintains old ones,
and revokes certificates that have become invalid for some reason. The CA can delegate some of
its tasks to a third party called a Registration Authority (RA).
Digital Certificate Creation Steps:

1. Key generation – It starts with the creation of the subject's public and private keys using
some software. This software works as part of a web browser or web server. The subject
must not share the private key. The subject then sends the public key along with other
information, like evidence about himself/herself, to the RA (Registration Authority).
If the user has no knowledge of the technicalities involved in creating the key, or if there
are particular demands that the key must be centrally created, then the keys can also be
created by the RA on the subject's (user's) behalf.
2. Registration: Supposing the user has created the key pair, the user now sends the public
key, the related registration information (e.g. the subject name, as it is needed to appear in
the digital certificate), and all the evidence about himself or herself to the RA.
For this, the software offers a wizard in which the user enters the data and submits it
once all the data is validated. The data then moves over the network/internet to the RA.
The format for certificate requests has been standardized and is called a certificate
signing request (CSR). This is one of the Public Key Cryptography Standards (PKCS).

3. Verification: When the registration process is completed, the RA verifies the user's
credentials, i.e. whether the provided information is correct and acceptable.
A second check ensures the user requesting the certificate does indeed possess the private
key corresponding to the public key sent as part of the certificate request to the RA. This
inspection is called checking the Proof Of Possession (POP) of the private key.
4. Certificate creation: Assuming that all the steps so far have been successfully executed,
the RA passes all the details of the user to the CA. The CA does its own verification (if
required) and creates a digital certificate for the user.
There are programs for creating certificates in the X.509 standard format. The CA delivers
the certificate to the user and also keeps a copy for its own records; the CA's copy is
maintained in a certificate directory. A code sketch of these steps follows.
E-commerce sites use electronic payment, where electronic payment refers to paperless
monetary transactions. Electronic payment has revolutionized business processing by
reducing paperwork, transaction costs, and labor costs. Being user friendly and less time-
consuming than manual processing, it helps business organizations expand their market
reach. Listed below are some of the modes of electronic payment −

 Credit Card
 Debit Card
 Smart Card
 E-Money
 Electronic Fund Transfer (EFT)
Credit Card
Payment using a credit card is one of the most common modes of electronic payment. A credit
card is a small plastic card with a unique number attached to an account. It also has a magnetic
strip embedded in it, which is used to read the credit card via card readers. When a customer
purchases a product via credit card, the credit card issuer bank pays on behalf of the customer,
and the customer has a certain time period within which he/she can pay the credit card bill,
usually the credit card's monthly payment cycle. Following are the actors in the credit card
system.

 The card holder − the customer
 The merchant − the seller of the product, who can accept credit card payments
 The card issuer bank − the card holder's bank
 The acquirer bank − the merchant's bank
 The card brand − for example, Visa or MasterCard
Credit Card Payment Process

Step 1 − The bank issues and activates a credit card to the customer on his/her request.

Step 2 − The customer presents the credit card information to the merchant site or to the
merchant from whom he/she wants to purchase a product/service.

Step 3 − The merchant validates the customer's identity by asking for approval from the card
brand company.

Step 4 − The card brand company authenticates the credit card and pays the transaction by
credit. The merchant keeps the sales slip.

Step 5 − The merchant submits the sales slip to the acquirer bank and gets the service charges
paid to him/her.

Step 6 − The acquirer bank requests the card brand company to clear the credit amount and gets
the payment.

Step 7 − The card brand company asks the issuer bank to clear the amount, and the amount gets
transferred to the card brand company.

Debit Card
A debit card, like a credit card, is a small plastic card with a unique number mapped to a bank
account number. It is required to have a bank account before getting a debit card from the bank.
The major difference between a debit card and a credit card is that in the case of payment
through a debit card, the amount gets deducted from the card's bank account immediately, and
there should be sufficient balance in the bank account for the transaction to be completed;
whereas in the case of a credit card transaction, there is no such compulsion.

Debit cards free the customer from carrying cash and cheques, and merchants accept debit
cards readily. Having a restriction on the amount that can be withdrawn in a day using a debit
card helps the customer keep a check on his/her spending.

Smart Card
A smart card is again similar to a credit card or a debit card in appearance, but it has a small
microprocessor chip embedded in it. It has the capacity to store a customer's work-related
and/or personal information. Smart cards are also used to store money, and the amount gets
deducted after every transaction.

Smart cards can only be accessed using a PIN that is assigned to every customer. Smart cards
are secure, as they store information in encrypted format, and they are less expensive and
provide faster processing. Mondex and Visa Cash cards are examples of smart cards.
E-Money
E-Money transactions refer to situations where payment is done over the network and the
amount gets transferred from one financial body to another without any involvement of a
middleman. E-money transactions are faster and more convenient, and save a lot of time.

Online payments done via credit cards, debit cards, or smart cards are examples of e-money
transactions. Another popular example is e-cash. In the case of e-cash, both the customer and the
merchant have to sign up with the bank or company issuing the e-cash.

Electronic Fund Transfer

It is a very popular electronic payment method for transferring money from one bank account to
another. The accounts can be in the same bank or in different banks. Fund transfer can be done
using an ATM (Automated Teller Machine) or a computer.

Nowadays, internet-based EFT is becoming popular. In this case, a customer uses the website
provided by the bank, logs in to the bank's website, and registers another bank account. He/she
then places a request to transfer a certain amount to that account. The customer's bank transfers
the amount to the other account if it is in the same bank; otherwise the transfer request is
forwarded to an ACH (Automated Clearing House) to transfer the amount to the other account,
and the amount is deducted from the customer's account. Once the amount is transferred, the
customer is notified of the fund transfer by the bank.

Payment gateway

A payment gateway is a merchant service provided by an e-commerce application service
provider that authorizes credit card or direct payments processing for e-businesses, online
retailers, bricks and clicks, or traditional brick and mortar businesses.[1] The payment gateway
may be provided by a bank to its customers, but can be provided by a specialised financial
service provider as a separate service, such as a payment service provider.
A payment gateway facilitates a payment transaction by the transfer of information between a
payment portal (such as a website, mobile phone or interactive voice response service) and the
front end processor or acquiring bank.

White label payment gateway


Some payment gateways offer white label services, which allow payment service providers, e-
commerce platforms, ISOs, resellers, or acquiring banks to fully brand the payment gateway’s
technology as their own.[4] This means PSPs or other third parties can own the end-to-end user
experience without bringing payments operations—and additional risk management and
compliance responsibility—in house.[5]

ACID (computer science)


In computer science, ACID (Atomicity, Consistency, Isolation, Durability) is a set of properties
of database transactions intended to guarantee validity even in the event of errors, power failures,
etc. In the context of databases, a sequence of database operations that satisfies the ACID
properties (and these can be perceived as a single logical operation on the data) is called a
transaction. For example, a transfer of funds from one bank account to another, even involving
multiple changes such as debiting one account and crediting another, is a single transaction.
In 1983,[1] Andreas Reuter and Theo Härder coined the acronym ACID as shorthand for
Atomicity, Consistency, Isolation, and Durability, building on earlier work[2] by Jim Gray who
enumerated Atomicity, Consistency, and Durability but left out Isolation when characterizing the
transaction concept. These four properties describe the major guarantees of the transaction
paradigm, which has influenced many aspects of development in database systems.
According to Gray and Reuter, IMS supported ACID transactions as early as 1973 (although the
term ACID came later).[3]

Characteristics
The characteristics of these four properties as defined by Reuter and Härder are as follows:
Atomicity

Transactions are often composed of multiple statements. Atomicity guarantees that each
transaction is treated as a single "unit", which either succeeds completely, or fails completely: if
any of the statements constituting a transaction fails to complete, the entire transaction fails and
the database is left unchanged. An atomic system must guarantee atomicity in each and every
situation, including power failures, errors and crashes.
Consistency
Consistency ensures that a transaction can only bring the database from one valid state to
another, maintaining database invariants: any data written to the database must be valid
according to all defined rules, including constraints, cascades, triggers, and any combination
thereof. This prevents database corruption by an illegal transaction, but does not guarantee that a
transaction is correct.
Isolation

Transactions are often executed concurrently (e.g., reading and writing to multiple tables at the
same time). Isolation ensures that concurrent execution of transactions leaves the database in the
same state that would have been obtained if the transactions were executed sequentially.
Isolation is the main goal of concurrency control; depending on the method used, the effects of
an incomplete transaction might not even be visible to other transactions.
Durability

Durability guarantees that once a transaction has been committed, it will remain committed even
in the case of a system failure (e.g., power outage or crash). This usually means that completed
transactions (or their effects) are recorded in non-volatile memory.

Examples
The following examples further illustrate the ACID properties. In these examples, the database
table has two columns, A and B. An integrity constraint requires that the value in A and the value
in B must sum to 100. The following SQL code creates a table as described above:

CREATE TABLE acidtest (A INTEGER, B INTEGER, CHECK (A + B = 100));

Atomicity failure
In database systems, atomicity (or atomicness; from Greek a-tomos, undividable) is one of the
ACID transaction properties. A series of database operations in an atomic transaction will either
all occur, or none will occur. The series of operations cannot be separated with only some of
them being executed, which makes the series of operations "indivisible". A guarantee of
atomicity prevents updates to the database occurring only partially, which can cause greater
problems than rejecting the whole series outright. In other words, atomicity means indivisibility
and irreducibility.[4] Alternatively, we may say that a Logical transaction may be made of, or
composed of, one or more (several), Physical transactions. Unless and until all component
Physical transactions are executed, the Logical transaction will not have occurred – to the effects
of the database. Say our Logical transaction consists of transferring funds from account A to
account B. This Logical transaction may be composed of several Physical transactions consisting
of first removing the amount from account A as a first Physical transaction and then, as a second
transaction, depositing said amount in account B. We would not want to see the amount removed
from account A before we are sure it has been transferred into account B. Then, unless and until
both transactions have happened and the amount has been transferred to account B, the transfer
has not, to the effects of the database, occurred.
Consistency failure
Consistency is a very general term, which demands that the data must meet all validation rules.
In the previous example, the validation is a requirement that A + B = 100. All validation rules
must be checked to ensure consistency. Assume that a transaction attempts to subtract 10 from A
without altering B. Because consistency is checked after each transaction, it is known that A + B
= 100 before the transaction begins. If the transaction removes 10 from A successfully, atomicity
will be achieved. However, a validation check will show that A + B = 90, which is inconsistent
with the rules of the database. The entire transaction must be cancelled and the affected rows
rolled back to their pre-transaction state. If there had been other constraints, triggers, or cascades,
every single change operation would have been checked in the same way as above before the
transaction was committed. Similar issues may arise with other constraints. We may have
required the data types of both A,B to be integers. If we were then to enter, say, the value 13.5
for A, the transaction will be cancelled, or the system may give rise to an alert in the form of a
trigger (if/when the trigger has been written to this effect). Another example would be with
integrity constraints, which would not allow us to delete a row in one table whose Primary key is
referred to by at least one foreign key in other tables.
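Both failures can be reproduced with the acidtest table above. The Python/sqlite3 sketch below shows the CHECK constraint rejecting a partial update, after which the transaction is rolled back atomically and the row is unchanged.

import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE acidtest (A INTEGER, B INTEGER, CHECK (A + B = 100))")
conn.execute("INSERT INTO acidtest VALUES (50, 50)")
conn.commit()

try:
    with conn:   # opens a transaction; rolls back automatically on error
        # Subtract 10 from A without touching B: violates A + B = 100.
        conn.execute("UPDATE acidtest SET A = A - 10")
except sqlite3.IntegrityError as err:
    print("rejected:", err)   # the CHECK constraint fires

# The partial update was rolled back, so the row still sums to 100.
print(conn.execute("SELECT A, B FROM acidtest").fetchone())   # (50, 50)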
Isolation failure
To demonstrate isolation, we assume two transactions execute at the same time, each attempting
to modify the same data. One of the two must wait until the other completes in order to maintain
isolation.
Consider two transactions. T1 transfers 10 from A to B. T2 transfers 10 from B to A. Combined,
there are four actions:

 T1 subtracts 10 from A.
 T1 adds 10 to B.
 T2 subtracts 10 from B.
 T2 adds 10 to A.
If these operations are performed in order, isolation is maintained, although T2 must wait.
Consider what happens if T1 fails halfway through. The database eliminates T1's effects, and
T2 sees only valid data.
By interleaving the transactions, the actual order of actions might be:

 T1 subtracts 10 from A.
 T2 subtracts 10 from B.
 T2 adds 10 to A.
 T1 adds 10 to B.
Again, consider what happens if T1 fails while modifying B (step 4). By the time T1 fails, T2 has
already modified A; it cannot be restored to the value it had before T1 without leaving an invalid
database. This is known as a write-write failure,[citation needed] because two transactions attempted
to write to the same data field. In a typical system, the problem would be resolved by reverting to
the last known good state, canceling the failed transaction T1, and restarting the interrupted
transaction T2 from the good state.
Durability failure
Consider a transaction that transfers 10 from A to B. First it removes 10 from A, then it adds 10
to B. At this point, the user is told the transaction was a success; however, the changes are still
queued in the disk buffer waiting to be committed to disk. Power fails and the changes are lost.
The user assumes (understandably) that the changes persist.

Implementation
Processing a transaction often requires a sequence of operations that is subject to failure for a
number of reasons. For instance, the system may have no room left on its disk drives, or it may
have used up its allocated CPU time. There are two popular families of techniques: write-ahead
logging and shadow paging. In both cases, locks must be acquired on all information to be
updated, and depending on the level of isolation, possibly on all data that may be read as well. In
write-ahead logging, atomicity is guaranteed by copying the original (unchanged) data to a log
before changing the database. That allows the database to return to a consistent state
in the event of a crash. In shadowing, updates are applied to a partial copy of the database, and
the new copy is activated when the transaction commits.
Locking vs multiversioning
Many databases rely upon locking to provide ACID capabilities. Locking means that the
transaction marks the data that it accesses so that the DBMS knows not to allow other
transactions to modify it until the first transaction succeeds or fails. The lock must always be
acquired before processing data, including data that is read but not modified. Non-trivial
transactions typically require a large number of locks, resulting in substantial overhead as well as
blocking other transactions. For example, if user A is running a transaction that has to read a row
of data that user B wants to modify, user B must wait until user A's transaction completes.
Two-phase locking is often applied to guarantee full isolation.
An alternative to locking is multiversion concurrency control, in which the database provides
each reading transaction the prior, unmodified version of data that is being modified by another
active transaction. This allows readers to operate without acquiring locks, i.e., writing
transactions do not block reading transactions, and readers do not block writers. Going back to
the example, when user A's transaction requests data that user B is modifying, the database
provides A with the version of that data that existed when user B started his transaction. User A
gets a consistent view of the database even if other users are changing data. One implementation,
namely snapshot isolation, relaxes the isolation property.
Distributed transactions

Guaranteeing ACID properties in a distributed transaction across a distributed database, where
no single node is responsible for all data affecting a transaction, presents additional
complications. Network connections might fail, or one node might successfully complete its part
of the transaction and then be required to roll back its changes because of a failure on another
node. The two-phase commit protocol (not to be confused with two-phase locking) provides
atomicity for distributed transactions to ensure that each participant in the transaction agrees on
whether the transaction should be committed or not.[5] Briefly, in the first phase, one node (the
coordinator) interrogates the other nodes (the participants) and only when all reply that they are
prepared does the coordinator, in the second phase, formalize the transaction.
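A minimal coordinator-side sketch of the protocol in Python; Participant and its methods are hypothetical stand-ins for real resource managers, and a real implementation would add logging, timeouts, and crash recovery.

class Participant:
    # A toy resource manager that votes on and then applies a transaction.
    def __init__(self, name):
        self.name = name

    def prepare(self, txn):
        # Phase 1: persist enough state to commit later, then vote.
        return True   # vote "yes"; a real participant may vote "no"

    def commit(self, txn):
        print(self.name, "committed", txn)     # phase 2: make it permanent

    def rollback(self, txn):
        print(self.name, "rolled back", txn)   # phase 2: undo prepared work

def two_phase_commit(txn, participants):
    # Phase 1: the coordinator asks every participant to prepare.
    votes = [p.prepare(txn) for p in participants]
    if all(votes):
        # Phase 2: all voted yes, so the coordinator formalizes the commit.
        for p in participants:
            p.commit(txn)
        return True
    # Any "no" vote aborts the transaction everywhere.
    for p in participants:
        p.rollback(txn)
    return False

two_phase_commit("transfer-42", [Participant("node-A"), Participant("node-B")])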

Secure electronic transaction (SET) was an early protocol for electronic credit card payments.
As the name implied, SET was used to facilitate the secure transmission of consumer credit card
information via electronic avenues, such as the Internet. SET blocked out the details of credit
card information, thus preventing merchants, hackers and electronic thieves from accessing this
information.


The underlying protocols and standards for secure electronic transactions were backed and
supported by Microsoft, IBM, MasterCard, Visa, Netscape, and others. Digital certificates were
assigned to provide the electronic access to funds, whether it was a credit line or bank account.
When a purchase was made electronically, encrypted digital certificates were what let the
customer, merchant, and financial institution complete a verified transaction.

Digital certificates were generated for participants in the transaction, along with matching digital
keys that allowed them to confirm the certificates of the other party. The algorithms used would
ensure that only a party with the corresponding digital key would be able to confirm the
transaction. This way a consumer’s credit card or bank account could be used without revealing
details like account numbers. Thus, SET was a form of security against account theft, hacking,
and other criminal actions.
