The Internet's architecture is described in its name, a short form of the compound word "internetworking". This architecture is based on the specification of the standard TCP/IP protocol, designed to connect any two networks which may be very different in internal hardware, software, and technical design. Once two networks are interconnected, communication with TCP/IP is enabled end-to-end, so that any node on the Internet has the near-magical ability to communicate with any other, no matter where they are. This openness of design has enabled the Internet architecture to grow to a global scale.
In practice, the Internet technical architecture looks a bit like a multi-dimensional river system,
with small tributaries feeding medium-sized streams feeding large rivers. For example, an
individual's access to the Internet is often from home over a modem to a local Internet service
provider who connects to a regional network connected to a national network. At the office, a
desktop computer might be connected to a local area network with a company connection to a
corporate Intranet connected to several national Internet service providers. In general, small local
Internet service providers connect to medium-sized regional networks which connect to large
national networks, which then connect to very large bandwidth networks on the
Internet backbone. Most Internet service providers have several redundant network cross-
connections to other providers in order to ensure continuous availability.
The companies running the Internet backbone operate very high bandwidth networks relied on
by governments, corporations, large organizations, and other Internet service providers. Their
technical infrastructure often includes global connections through underwater cables
and satellite links to enable communication between countries and continents. As always, a
larger scale introduces new phenomena: the number of packets flowing through the switches on
the backbone is so large that it exhibits the kind of complex non-linear patterns usually found in
natural, analog systems like the flow of water or development of the rings of Saturn (RFC 3439,
S2.2).
Each communication packet goes up the hierarchy of Internet networks as far as necessary to get
to its destination network where local routing takes over to deliver it to the addressee. In the
same way, each level in the hierarchy pays the next level for the bandwidth they use, and then
the large backbone companies settle up with each other. Large Internet service providers price bandwidth in several ways, such as a fixed rate for constant availability of a certain number of megabits per second, or a variety of usage-based methods that amount to a cost per gigabyte. Due to economies of scale and efficiencies in management, bandwidth cost drops dramatically at the higher levels of the architecture.
Defining Network Infrastructure
A network can be defined as the grouping of hardware devices and software components which
are necessary to connect devices within the organization, and to connect the organization to other
organizations and the Internet.
Typical hardware components utilized in a networking environment are network interface cards,
computers, routers, hubs, switches, printers, and cabling and phone lines.
Typical software components utilized in a networking environment are the network services and
protocols needed to enable devices to communicate.
Only after the hardware is installed and configured can operating systems and software be installed into the network infrastructure. The operating systems you install on your computers are considered the main software components within the network infrastructure, because the operating system contains the network communication protocols that enable network communication to occur. The operating system also typically includes applications and services that implement security for network communication.
The term network infrastructure is also commonly used to refer to the grouping of physical hardware and logical components which are needed to provide a number of features for the network, including:
Connectivity
Routing and switching capabilities
Network security
Access control
The network infrastructure has to exist before the servers that support your users' applications can be deployed into your networking environment:
File and print servers
Web and messaging servers
Database servers
Application servers
When you plan your network infrastructure, a number of key elements need to be clarified or
determined:
Determine which physical hardware components are needed for the network infrastructure you want to implement.
Determine the software components needed for the network infrastructure.
Determine the following important factors for your hardware and software components:
The specific location of these components.
How the components are to be installed.
How the components are to be configured.
When you implement a network infrastructure, you need to perform a number of activities that
can be broadly grouped as follows:
Determine the hardware and software components needed.
Purchase, assemble and install the hardware components.
Install and configure the operating systems, applications and all other software.
The physical infrastructure of the network refers to the physical design of the network together
with the hardware components. The physical design of the network is also called the network’s
topology. When you plan the physical infrastructure of the network, you are usually limited in your hardware component selection by the logical infrastructure of the network.
The logical infrastructure of the network is made up of all the software components required to
enable connectivity between devices, and to provide network security. The network’s logical
infrastructure consists of the following:
Software products
Networking protocols/services.
It is therefore the network’s logical infrastructure that makes it possible for computers to
communicate using the routes defined in the physical network topology.
The logical components of the network topology define a number of important elements:
Speed of the network.
Type of switching that occurs.
Media which will be utilized.
Type of connections which can be formed.
The OSI model is made up of seven layers which are presented as a stack. Data which is passed
over the network moves through each layer. Each layer of the OSI model has its own unique
functions and protocols. Different protocols operate at the different layers of the OSI model. The
layer of the OSI reference model at which the protocol operates defines its function. Different
protocols can operate together at different layers within a protocol stack. When protocols operate
together, they are referred to as a protocol suite or protocol stack. When protocols support
multiple path LAN-to-LAN communications, they are called routable protocols. The binding
order determines the order in which the operating system runs the protocols.
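To make the idea of a protocol stack concrete, here is a toy sketch in Python of how each layer wraps the data from the layer above with its own header on the way down and strips it on the way up; the layer names follow the OSI model, but the header strings are invented for illustration and do not reflect real protocol formats.

# Toy sketch of protocol layering: each layer prepends its own header on
# the way down the stack and strips it on the way back up. Headers here
# are invented placeholders, not real TCP/IP or Ethernet formats.
def encapsulate(payload: bytes) -> bytes:
    for header in (b"TRANSPORT|", b"NETWORK|", b"DATALINK|"):
        payload = header + payload
    return payload

def decapsulate(frame: bytes) -> bytes:
    for header in (b"DATALINK|", b"NETWORK|", b"TRANSPORT|"):  # reverse order
        assert frame.startswith(header)
        frame = frame[len(header):]
    return frame

frame = encapsulate(b"hello")
print(frame)               # b'DATALINK|NETWORK|TRANSPORT|hello'
print(decapsulate(frame))  # b'hello'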
The seven layers of the OSI reference model, and each layer's associated function, are listed here:
Physical Layer – layer 1: The Physical layer transmits raw bit streams over a physical medium,
and deals with establishing a physical connection between computers to enable communication.
The physical layer is hardware specific; it deals with the actual physical connection between the
computer and the network medium. The medium used is typically a copper cable that utilizes
electric currents for signaling. Other media that are becoming popular are fiber-optic and
wireless media. The specifications of the Physical layer include physical layout of the network,
voltage changes and the timing of voltage changes, data rates, maximum transmission distances,
and physical connectors to transmission mediums. The issues normally clarified at the Physical
Layer include:
Whether data is transmitted synchronously or asynchronously.
Whether the analog or digital signaling method is used.
Whether baseband or broadband signaling is used.
Data-Link Layer – layer 2: The Data-link layer of the OSI model enables the movement of data
over a link from one device to another, by defining the interface between the network medium
and the software on the computer. The Data-link layer maintains the data link between two
computers to enable communications. The functions of the Data-link layer include packet
addressing, media access control, formatting of the frame used to encapsulate data, error
notification on the Physical layer, and management of error messaging specific to the delivery of
packets. The Data-link layer is divided into the following two sublayers:
The Logical Link Control (LLC) sublayer provides and maintains the logical links used for
communication between the devices.
The Media Access Control (MAC) sublayer controls the transmission of packets from one
network interface card (NIC) to another over a shared media channel. A NIC has a unique MAC
address, or physical address. The MAC sublayer handles media access control, which essentially prevents data collisions. The common media access control methods are Carrier Sense Multiple Access with Collision Detection (CSMA/CD), used by traditional Ethernet; Carrier Sense Multiple Access with Collision Avoidance (CSMA/CA), used by wireless LANs; and token passing, used by Token Ring and FDDI networks.
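As a rough illustration of MAC addressing, the short Python sketch below builds a simplified Ethernet-style frame from destination and source MAC addresses plus a payload; the layout is simplified for illustration and omits the type field and frame check sequence of a real frame.

# Simplified sketch of data-link framing: the MAC sublayer addresses each
# frame with destination and source NIC hardware (MAC) addresses. This is
# illustrative only, not a byte-accurate Ethernet frame.
def build_frame(dst_mac: str, src_mac: str, payload: bytes) -> bytes:
    def mac_to_bytes(mac: str) -> bytes:
        return bytes(int(octet, 16) for octet in mac.split(":"))
    return mac_to_bytes(dst_mac) + mac_to_bytes(src_mac) + payload

frame = build_frame("aa:bb:cc:dd:ee:ff", "11:22:33:44:55:66", b"data")
print(frame.hex())  # 6-byte destination, 6-byte source, then the payload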
Web application architecture defines the interactions between applications, middleware systems
and databases to ensure multiple applications can work together. When a user types in a URL and taps "Go," the browser finds the Internet-facing computer the website lives on and requests that particular page.
The server then responds by sending files over to the browser. After that action, the browser
executes those files to show the requested page to the user. Now, the user gets to interact with the
website. Of course, all of these actions are executed within a matter of seconds. Otherwise, users
wouldn’t bother with websites.
What's important here is the code, which has been parsed by the browser. This code may or may not have specific instructions telling the browser how to react to a wide swath of inputs. As a result, web application architecture includes all sub-components and external application interchanges for an entire software application.
Of course, it is designed to function efficiently while meeting its specific needs and goals. Web application architecture is critical, since the majority of global network traffic uses web-based communication, as does nearly every app and device. It deals with scale, efficiency, robustness, and security.
With web applications, you have the server vs. the client side. In essence, there are two programs
running concurrently:
The code which lives in the browser and responds to user input
The code which lives on the server and responds to HTTP requests
When writing an app, it is up to the web developer to decide what the code on the server should
do in relation to what the code on the browser should do. With server-side code, languages
include:
Ruby (with Rails)
PHP
C#
Java
Python
JavaScript
In fact, any code that can respond to HTTP requests has the capability to run on a server. Client-side code, by contrast, is written in languages such as:
CSS
JavaScript
HTML
These are then parsed by the user's browser. Moreover, client-side code can be seen and edited by the user. Plus, it has to communicate only through HTTP requests and cannot read files off of a server directly. Furthermore, it reacts to user input.
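To make the split concrete, here is a minimal sketch of server-side code using Python's standard library; the port and response body are invented for illustration. The HTML it returns is exactly the kind of client-side code the browser then parses.

# Minimal server-side sketch: a program that lives on the server and
# responds to HTTP requests, using only Python's standard library.
# The port and response body are invented for illustration.
from http.server import BaseHTTPRequestHandler, HTTPServer

class Handler(BaseHTTPRequestHandler):
    def do_GET(self):
        # The HTML below is client-side code: the browser parses and
        # renders it after this server-side handler sends it.
        body = b"<html><body><h1>Hello from the server</h1></body></html>"
        self.send_response(200)
        self.send_header("Content-Type", "text/html")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

if __name__ == "__main__":
    HTTPServer(("localhost", 8000), Handler).serve_forever()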
The reason why it is imperative to have good web application architecture is because it is the
blueprint for supporting future growth which may come from increased demand, future
interoperability and enhanced reliability requirements. Through object-oriented programming,
the organizational design of web application architecture defines precisely how an application
will function. Some features include:
Delivering persistent data through HTTP, which can be understood by client-side code and vice versa
Making sure requests contain valid data
Offering authentication for users
Limiting what users can see based on permissions
Creating, updating, and deleting records
As technology continues to evolve, so does web application architecture. One such trend is the
use of and creation of service-oriented architecture. This is where most of the code for the entire
application exists as services. In addition, each has its own HTTP API. As a result, one facet of
the code can make a request to another part of the code–which may be running on a different
server.
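As a rough sketch of this service-oriented style, the Python snippet below has one piece of code call another service through its HTTP API using only the standard library; the URL and JSON fields are hypothetical.

# Rough sketch of service-oriented architecture: one part of an
# application calls another part through its HTTP API, possibly running
# on a different server. The URL and response fields are hypothetical.
import json
import urllib.request

def get_user(user_id: int) -> dict:
    # The "user service" exposes its functionality as an HTTP API.
    url = f"http://users.internal.example:8080/api/users/{user_id}"
    with urllib.request.urlopen(url) as response:
        return json.load(response)

# Code in, say, an orders service can now use the user service without
# sharing a process or a machine with it:
# user = get_user(42)
# print(user["name"])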
Another trend is a single-page application. This is where web UI is presented through a rich
JavaScript application. It then stays in the user’s browser over a variety of interactions. In terms
of requests, it uses AJAX or WebSockets to perform asynchronous or synchronous requests to the web server without having to reload the page.
The user then gets a more natural experience with limited page load interruptions. At their core,
many web applications are built around objects. The objects are stored in tables via an SQL
database. Each row in a table has a particular record. So, with relational databases, it is all about
relations. You can call on records just by listing the row and column for a target data point.
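A small sketch with Python's built-in sqlite3 module shows the idea: objects stored as rows in a table, with a target data point retrieved by naming the table, column, and row key; the schema and data are invented for illustration.

# Sketch of objects stored as rows in a relational table, using Python's
# built-in sqlite3 module. The schema and data are invented.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT, email TEXT)")
conn.execute("INSERT INTO users VALUES (1, 'Ada', 'ada@example.com')")

# Each row holds one record; fetch a target data point by naming the
# column (email) and the row (id = 1).
row = conn.execute("SELECT email FROM users WHERE id = ?", (1,)).fetchone()
print(row[0])  # ada@example.com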
With the two above trends, web apps are now much better suited for viewing on multiple
platforms and multiple devices. Even when most of the code for the apps remain the same, they
can still be viewed clearly and easily on a smaller screen.
Best Practices for Good Web Application Architecture
You may have a working app, but it also needs good web application architecture. The right attributes let you build a better app: by supporting horizontal and vertical growth, software deployment becomes much more efficient, user-friendly, and reliable.
ISP
Stands for "Internet Service Provider." An ISP provides access to the Internet. Whether you're at
home or work, each time you connect to the Internet, your connection is routed through an ISP.
Early ISPs provided Internet access through dial-up modems. This type of connection took place
over regular phone lines and was limited to 56 Kbps. In the late 1990s, ISPs began offering faster
broadband Internet access via DSL and cable modems. Some ISPs now offer high-speed fiber
connections, which provide Internet access through fiber optic cables. Companies like Comcast
and Time Warner provide cable connections while companies like AT&T and Verizon provide
DSL Internet access.
To connect to an ISP, you need a modem and an active account. When you connect a modem to
the telephone or cable outlet in your house, it communicates with your ISP. The ISP verifies your
account and assigns your modem an IP address. Once you have an IP address, you are connected
to the Internet. You can use a router (which may be a separate device or built into the modem) to connect multiple devices to the Internet. Since each device is routed through the same modem, they will all share the same public IP address assigned by the ISP.
ISPs act as hubs on the Internet since they are often connected directly to the Internet backbone.
Because of the large amount of traffic ISPs handle, they require high bandwidth connections to
the Internet. In order to offer faster speeds to customers, ISPs must add more bandwidth to their
backbone connection in order to prevent bottlenecks. This can be done by upgrading existing
lines or adding new ones.
HTTP stands for HyperText Transfer Protocol. HTTP is the underlying protocol used by the World Wide Web; it defines how messages are formatted and transmitted, and what actions Web servers and browsers should take in response to various commands.
For example, when you enter a URL in your browser, this actually sends an HTTP command to
the Web server directing it to fetch and transmit the requested Web page. The other main
standard that controls how the World Wide Web works is HTML, which covers how Web pages
are formatted and displayed.
HTTP is a Stateless Protocol
HTTP is called a stateless protocol because each command is executed independently, without
any knowledge of the commands that came before it. This is the main reason that it is difficult to
implement Web sites that react intelligently to user input. This shortcoming of HTTP is being
addressed in a number of new technologies, including ActiveX, Java, JavaScript and cookies.
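For instance, a raw HTTP exchange can be sketched with Python's standard library; each request below is a self-contained command, and the protocol itself carries no memory from one to the next (the host is an example only).

# Sketch of HTTP's statelessness using Python's standard library: two GET
# requests to the same server are independent commands; any continuity
# between them has to come from cookies or other added mechanisms.
import http.client

for _ in range(2):
    conn = http.client.HTTPSConnection("example.com")
    conn.request("GET", "/")                 # one self-contained command
    response = conn.getresponse()
    print(response.status, response.reason)  # e.g. 200 OK
    conn.close()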
HTTP Status Codes are Error Messages
Errors on the Internet can be quite frustrating — especially if you do not know the difference
between a 404 error and a 502 error. These error messages, also called HTTP status codes, are response codes given by Web servers that help identify the cause of the problem.
For example, "404 File Not Found" is a common HTTP status code. It means the Web server
cannot find the file you requested. This means the webpage or other document you tried to load
in your Web browser has either been moved or deleted, or you entered the wrong URL or
document name.
Knowing the meaning of the HTTP status code can help you figure out what went wrong. On a 404 error, for example, you could look at the URL to see if a word looks misspelled, then correct it and try it again. If that doesn't work, backtrack by deleting information between each slash until you come to a page on that site that isn't a 404. From there you may be able to find the page you're looking for.
Additional information on HTTP error codes can be found in Webopedia's common HTTP status
codes article.
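As a quick illustration, the sketch below requests a URL with Python's standard library and reports the status code the server returns; the URLs are examples only.

# Sketch: fetch a URL and report the HTTP status code. A 404 means the
# server could not find the requested file. URLs are examples only.
import urllib.error
import urllib.request

def check(url: str) -> int:
    try:
        with urllib.request.urlopen(url) as response:
            return response.status       # e.g. 200
    except urllib.error.HTTPError as err:
        return err.code                  # e.g. 404

print(check("https://example.com/"))
print(check("https://example.com/no-such-page"))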
An HTTP cookie (also called web cookie, Internet cookie, browser cookie, or simply cookie)
is a small piece of data sent from a website and stored on the user's computer by the user's web
browser while the user is browsing. Cookies were designed to be a reliable mechanism for
websites to remember stateful information (such as items added in the shopping cart in an online
store) or to record the user's browsing activity (including clicking particular buttons, logging in,
or recording which pages were visited in the past). They can also be used to remember arbitrary
pieces of information that the user previously entered into form fields such as names, addresses,
passwords, and credit card numbers.
Other kinds of cookies perform essential functions in the modern web. Perhaps most
importantly, authentication cookies are the most common method used by web servers to know
whether the user is logged in or not, and which account they are logged in with. Without such a
mechanism, the site would not know whether to send a page containing sensitive information, or
require the user to authenticate themselves by logging in. The security of an authentication
cookie generally depends on the security of the issuing website and the user's web browser, and
on whether the cookie data is encrypted. Security vulnerabilities may allow a cookie's data to be
read by a hacker, used to gain access to user data, or used to gain access (with the user's
credentials) to the website to which the cookie belongs (see cross-site scripting and cross-site
request forgery for examples).[1]
Tracking cookies, and especially third-party tracking cookies, are commonly used as ways to compile long-term records of individuals' browsing histories – a potential privacy concern that prompted European[2] and U.S. lawmakers to take action in 2011.[3][4] European law requires that all websites targeting European Union member states gain "informed consent" from users before storing non-essential cookies on their device.
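To make the mechanism concrete, here is a sketch of the server side of the exchange using Python's standard-library cookie parser; the cookie name and value are invented. The server sets a cookie with a Set-Cookie response header, and the browser returns it in a Cookie header on later requests.

# Sketch of the cookie mechanism with Python's standard library.
# The cookie name and value are invented for illustration.
from http.cookies import SimpleCookie

# Server side: build the Set-Cookie header for the response.
cookie = SimpleCookie()
cookie["session_id"] = "abc123"
cookie["session_id"]["httponly"] = True   # hide from client-side scripts
print(cookie.output())  # Set-Cookie: session_id=abc123; HttpOnly

# Server side, next request: parse the Cookie header the browser sent back.
incoming = SimpleCookie("session_id=abc123")
print(incoming["session_id"].value)  # abc123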
This document explains how to clear the cache and cookies in Internet Explorer 8.
The above procedure for clearing cache and cookies should work for the majority of websites, but certain websites and applications such as WiscMail require a more thorough procedure. If you are still having issues, try the steps below.
1. Close out of Internet Options. Click on Tools and select Developer Tools.
2. In the Developer Tools window, click on Cache and select Clear Browser Cache...
3. Click Yes to confirm the clearing of the browser cache.
What is e-commerce security?
E-commerce security is the protection of e-commerce assets from unauthorized access, use, or modification.
Server security is as important as network security because servers can hold most or all of the
organization's vital information. If a server is compromised, all of its contents may become
available for the cracker to steal or manipulate at will. There are many ways that a server can be
cracked. The following sections detail some of the main issues.
By default, most operating systems install several pieces of commonly used software. Red Hat
Linux, for example, can install up to 1200 application and library packages in a single
installation. While most server administrators will not opt to install every single package in the
distribution, they will install a base installation of packages, including several server
applications.
Unpatched Services
Most server applications that are included in a default Red Hat Linux installation are solid,
thoroughly tested pieces of software. Many of the server applications have been in use in
production environments for many years, and their code has been thoroughly refined and many
of the bugs have been found and fixed.
However, there is no such thing as perfect software, and there is always room for further
refinement. Moreover, newer software is often not as rigorously tested as one might expect, due
to its recent arrival to production environments or because it may not be as popular as other
server software. Developers and system administrators often find exploitable bugs in server
applications and publish the information on bug tracking and security-related websites such
as the Bugtraq mailing list or the Computer Emergency Response Team website. CERT and
Bugtraq normally alert interested parties of the vulnerabilities. However, even then, it is up to
system administrators to patch and fix these bugs whenever they are made public, as crackers
also have access to these vulnerability tracking services and will use such information to crack
unpatched systems wherever they can. Good system administration requires vigilance, constant
tracking of bugs, and proper system maintenance to ensure a secure computing environment.
Inattentive Administration
Similar to server applications whose bugs languish unpatched by developers is the problem of administrators who fail to patch their systems or do not know how to do so. According to the System Administration Network and Security Institute (SANS), the primary cause of computer security vulnerability is to "assign untrained people to maintain security and provide neither the training nor the time to make it possible to do the job."[1] This applies as much to inexperienced administrators as it does to overconfident or unmotivated administrators.
Some administrators fail to patch their servers and workstations, while others fail to watch log
messages from their system kernel or from network traffic. Another common error is to leave the
default passwords or keys in services that have such authentication methods built into them. For
example, some databases leave default administration passwords under the assumption that the
system administrator will change this immediately upon configuration. Even an inexperienced
cracker can use the widely-known default password to gain administrative privileges to the
database. These are just a few examples of inattentive administration that can eventually lead to a compromised system.
Even the most vigilant organization that does their job well and keeps up with their daily
responsibilities can fall victim to vulnerabilities if the services they choose for their network are
inherently insecure. There are certain services that were developed under the assumption that
they will be used over trusted networks; however, this assumption falls short as soon as the
service becomes available over the Internet.
The Internet was initially developed to provide many paths for military communications. Security was not an issue, since all military communication was encoded. We all know that security is a major issue today, especially since e-commerce, electronic banking, and other major financial transactions traverse the Internet. In this section, we will examine secrecy, integrity, and necessity threats. We will also discuss some solutions to remedy these problems.
Secrecy Threats: To begin, let us explain the difference between secrecy and privacy threats. Secrecy is a technical issue that requires sophisticated physical and logical mechanisms and focuses on the prevention of unauthorized disclosure of information. Privacy, on the other hand, is a legal issue and relates to the protection of individual rights of nondisclosure. Some common problems experienced in the communication channel are as follows:
The Privacy Council created a Web site to address both business and legal issues and to assist businesses in developing security policies. In addition, some sites (such as Anonymizer) provide anonymous browsing services.
Integrity Threats (or active wiretapping): This threat occurs when a message stream of
information (e.g., banking transaction) is altered by an unauthorized person. Examples of these
threats include the following:
Necessity Threats: Other names for this threat include delay, denial, or denial-of-service (DoS) threats. One goal is to disrupt or stop computer processing, which ultimately causes frustrated visitors to leave the site. Another goal is to delete information from a transmission, a file, or the system. The Internet Worm of 1988 was the first recorded DoS attack, and it crippled thousands of computers connected to the Internet.
Threats to Wireless Networks: Wireless access points (WAPs) are great in that they allow mobile devices to connect with networks provided they are within a specified range. The drawback is that unauthorized individuals can also access the network's resources (e.g., databases, printers, messages, and the Internet). To prevent this from happening, companies can turn on WEP (Wired Equivalent Privacy), which encrypts transmissions from wireless devices to the WAPs. Companies that fail to change the default login and password for WAPs are creating an opportunity for hackers. Wardrivers are attackers who drive around searching for accessible networks. They then place a chalk mark on the building (warchalking) or draw maps to record free access points, which they share with other hackers.
Encryption Solutions: One technique used to mask data so that it cannot be read by
unauthorized persons is encryption. In this process, a mathematically based program and a
secret key are used to produce an unintelligible string of characters that can only be deciphered
by the sender and receiver of the message. Some important terms used in this technique are as
follows:
The question, then, is how effective are encryption techniques? The answer lies in the size of the key used in the procedure. A 40-bit key provides minimal security; the larger the key, the stronger the encryption, and the harder it becomes for a hacker to decipher the message. In general, encryption can be divided into three functions based on the type of key and encryption program:
1. Hash Coding: A unique hash value is created from a message using a hash algorithm, which is a one-way function. This serves as a fingerprint for the message and therefore makes it easy to determine if anyone has tampered with the message during transmission. The chances of a collision (duplicated hash values) are rare.
2. Asymmetric Encryption (or public-key encryption): This technique uses a mathematically related pair of keys, a public key and a private key; a message encrypted with one key of the pair can only be decrypted with the other.
Advantages: public key encryption scales well; the public key can be posted anywhere
and does not require any special handling; digital signatures can be used to authenticate
documents.
Disadvantages: the encryption process is slower than with the private-key; they need to
be combined with the private-key system to get a better result.
3. Symmetric Encryption (or private-key encryption): This technique uses only one numeric key to encode/decode messages, and it is very fast and efficient. Since both the
sender and receiver will have access to the key, it must be well protected. The
disadvantages with this method are as follows: all messages must be encrypted, and the
technique does not scale well in large environments. The most widely used private-key
encryption system is the Data Encryption Standard (DES).
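The three functions can be sketched in a few lines of Python; hashing uses the standard library's hashlib, while the asymmetric and symmetric examples use the third-party cryptography package (installed with pip install cryptography). The message, key sizes, and padding choices are illustrative, not a production recipe.

# Sketch of the three functions: hash coding, asymmetric (public-key)
# encryption, and symmetric (private-key) encryption.
import hashlib
from cryptography.fernet import Fernet
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import rsa, padding

message = b"transfer $100 to account B"

# 1. Hash coding: a one-way fingerprint; any tampering changes the digest.
print(hashlib.sha256(message).hexdigest())

# 2. Asymmetric encryption: encrypt with the public key; only the matching
#    private key can decrypt.
private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
oaep = padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                    algorithm=hashes.SHA256(), label=None)
ciphertext = private_key.public_key().encrypt(message, oaep)
assert private_key.decrypt(ciphertext, oaep) == message

# 3. Symmetric encryption: one shared secret key encrypts and decrypts.
secret_key = Fernet.generate_key()
f = Fernet(secret_key)
assert f.decrypt(f.encrypt(message)) == message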
Private keys and public keys are parts of encryption schemes that encode information. The keys are used in two encryption systems, called symmetric and asymmetric. Symmetric encryption (private-key or secret-key encryption) uses the same key for encryption and decryption. Asymmetric encryption uses a pair of keys, a public key and a private key, for better security: a message sender encrypts the message with the public key, and the receiver decrypts it with his/her private key.
A public and private key pair helps encrypt information and ensures that data is protected during transmission.
Public Key:
A public key is used with asymmetric algorithms that convert messages into an unreadable format. A person who has the public key can encrypt a message intended for a specific receiver, but only the receiver with the matching private key can decode it. The key is available via a publicly accessible directory.
Private Key:
The private key is a secret key that is used to decrypt the message, and only the parties exchanging messages know it. In the traditional method, a secret key is shared between communicators to enable encryption and decryption of the message, but if the key is lost or stolen, the system becomes void. To avoid this weakness, PKI (public key infrastructure) came into force, where a public key is used along with a private key. PKI enables internet users to exchange information in a secure way with the use of a public and private key pair.
Key Size and Algorithms:
RSA, DSA, and ECC (Elliptic Curve Cryptography) algorithms are used to create the public and private keys in public key cryptography (asymmetric encryption). For security reasons, the CA/Browser Forum and NIST advise using a 2048-bit RSA key. The key size (bit length) of a public and private key pair determines how easily the key can be broken with a brute-force attack. As computing power increases, stronger keys are required to secure transmitted data.
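As a sketch of these recommendations, the snippet below generates a 2048-bit RSA key pair with the third-party cryptography package and exports only the public half, the part that may be published in a directory; the details are illustrative.

# Sketch: generate a 2048-bit RSA key pair and export the public half.
# The private half stays secret with its owner.
from cryptography.hazmat.primitives import serialization
from cryptography.hazmat.primitives.asymmetric import rsa

private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
print(private_key.key_size)  # 2048

public_pem = private_key.public_key().public_bytes(
    encoding=serialization.Encoding.PEM,
    format=serialization.PublicFormat.SubjectPublicKeyInfo,
)
print(public_pem.decode())  # shareable; never publish the private key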
Definition of Digital Signature
A digital signature is a technique that verifies the authenticity of a digital document: a particular code attached to the message acts as a signature. A hash of the message is computed, and that hash is then encrypted with the sender's private key to form the signature. The signature ensures the source and integrity of the message.
The Digital Signature Standard (DSS) was developed for performing digital signatures. The National Institute of Standards and Technology (NIST) issued the DSS standard as Federal Information Processing Standard (FIPS) PUB 186 in 1991.
The SHA-1 algorithm is used in DSS for computing the message digest of the original message, and that digest is used to produce the digital signature. For this, DSS uses the Digital Signature Algorithm (DSA). DSA is based on asymmetric key cryptography. Furthermore, the RSA algorithm can also be used for performing digital signatures, but its primary use is to encrypt messages; DSA, by contrast, cannot be used for encryption.
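A signing round-trip can be sketched with the third-party cryptography package; the sketch uses DSA with SHA-256 (rather than the older SHA-1 named in DSS), and the message and key size are illustrative.

# Sketch of signing and verifying with DSA: the sender signs a hash of
# the message with the private key; anyone with the public key can
# verify the source and integrity of the message.
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import dsa

private_key = dsa.generate_private_key(key_size=2048)
message = b"signed agreement"

signature = private_key.sign(message, hashes.SHA256())  # hash, then sign

try:
    private_key.public_key().verify(signature, message, hashes.SHA256())
    print("signature valid")
except InvalidSignature:
    print("message or signature was tampered with")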
A Digital Certificate is simply a computer file which helps establish your identity. It officially certifies the relationship between the holder of the certificate (the user) and a particular public key. Thus, a digital certificate should include the user name and the user's public key. This proves that a certain public key is owned by a particular user.
A digital certificate consists of the following information: Subject name (User’s name is
referred to as Subject name because a digital certificate can be issued to an individual, a group or
an organization), Serial number, Validity date range and issuer name, etc.
A Certification Authority (CA) is a trusted agency that can issue digital certificates to individuals and organizations that want to use those certificates in asymmetric key cryptographic applications. Generally, a CA is a well-known organization, such as a financial institution, post office, or software company. The most popular CAs are Verisign and Entrust.
A CA accomplishes various tasks: for example, it issues new certificates, maintains old ones, and revokes certificates that have become invalid for some reason. The CA can delegate some of its tasks to a third party called a Registration Authority (RA).
Digital Certificate Creation Steps:
1. Key generation: It starts with the creation of the subject's public and private keys using some software. This software typically works as part of the web browser or web server. The subject must not share the private key. The subject then sends the public key, along with other information like evidence about himself/herself, to the RA (Registration Authority). However, if the user has no knowledge of the technicalities involved in creating the keys, or if there is a particular requirement that keys be created centrally, the keys can also be created by the RA on the subject's (user's) behalf.
2. Registration: Assuming the user has created the key pair, the user now sends the public key, the related registration information (e.g. the subject name, as it needs to appear in the digital certificate), and all the evidence about himself/herself to the RA.
For this, the software offers a wizard in which the user enters the data and submits it once all the data is validated. The data then moves over the network/Internet to the RA. The format for certificate requests has been standardized and is called a certificate signing request (CSR). This is one of the Public Key Cryptography Standards (PKCS).
3. Verification: When the registration process is completed, the RA has to check the user's credentials, i.e., whether the provided information is correct and acceptable.
The second check is to ensure that the user requesting the certificate does indeed possess the private key corresponding to the public key sent as part of the certificate request to the RA. This inspection is called checking the Proof Of Possession (POP) of the private key.
4. Certificate creation: Assuming all steps so far have been successfully executed, the RA passes all the details of the user to the CA. The CA does its own verification (if required) and creates a digital certificate for the user.
There are programs for creating certificates in the X.509 standard format. The CA delivers the certificate to the user and also keeps a copy of the certificate for its own records. The CA's copy of the certificate is maintained in a certificate directory.
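Steps 1 and 2 can be sketched with the third-party cryptography package: generate a key pair, then build a certificate signing request (CSR) that bundles the subject name with the public key; the subject details below are invented for illustration.

# Sketch of key generation plus a certificate signing request (CSR).
# Signing the CSR with the private key also serves as proof of
# possession. Subject details are invented.
from cryptography import x509
from cryptography.hazmat.primitives import hashes, serialization
from cryptography.hazmat.primitives.asymmetric import rsa
from cryptography.x509.oid import NameOID

# Key generation: the private key never leaves the subject.
private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)

# Registration: the CSR carries the subject name and the public key.
csr = (
    x509.CertificateSigningRequestBuilder()
    .subject_name(x509.Name([
        x509.NameAttribute(NameOID.COMMON_NAME, u"alice.example.com"),
        x509.NameAttribute(NameOID.ORGANIZATION_NAME, u"Example Org"),
    ]))
    .sign(private_key, hashes.SHA256())
)
print(csr.public_bytes(serialization.Encoding.PEM).decode())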
E-commerce sites use electronic payment, where electronic payment refers to paperless monetary transactions. Electronic payment has revolutionized business processing by reducing paperwork, transaction costs, and labor costs. Being user-friendly and less time-consuming than manual processing, it helps business organizations expand their market reach. Listed below are some of the modes of electronic payments −
Credit Card
Debit Card
Smart Card
E-Money
Electronic Fund Transfer (EFT)
Credit Card
Payment by credit card is one of the most common modes of electronic payment. A credit card is a small plastic card with a unique number attached to an account. It also has a magnetic strip embedded in it, which is used to read the credit card via card readers. When a customer purchases a product via credit card, the credit card issuer bank pays on behalf of the customer, and the customer has a certain time period after which he/she can pay the credit card bill, usually on a monthly payment cycle. The actors in the credit card system are the card holder (customer), the merchant, the card issuer bank, the acquirer bank, and the card brand company; a transaction proceeds as follows.
Step 1: Bank issues and activates a credit card to the customer on his/her request.
Step 2: The customer presents the credit card information to the merchant site or to the merchant from whom he/she wants to purchase a product/service.
Step 3: Merchant validates the customer's identity by asking for approval from the card brand company.
Step 4: Card brand company authenticates the credit card and pays the transaction by credit. Merchant keeps the sales slip.
Step 5: Merchant submits the sales slip to the acquirer bank and gets the service charges paid to him/her.
Step 6: Acquirer bank requests the card brand company to clear the credit amount and gets the payment.
Step 7: Now the card brand company asks the issuer bank to clear the amount, and the amount gets transferred to the card brand company.
Debit Card
A debit card, like a credit card, is a small plastic card with a unique number mapped to a bank account number. It is required to have a bank account before getting a debit card from the bank.
The major difference between a debit card and a credit card is that in case of payment through
debit card, the amount gets deducted from the card's bank account immediately and there should
be sufficient balance in the bank account for the transaction to get completed; whereas in case
of a credit card transaction, there is no such compulsion.
Debit cards free the customer from having to carry cash and cheques. Even merchants accept a debit card readily. Having a restriction on the amount that can be withdrawn in a day using a debit card helps the customer keep a check on his/her spending.
Smart Card
A smart card is similar to a credit card or a debit card in appearance, but it has a small microprocessor chip embedded in it. It has the capacity to store a customer's work-related
and/or personal information. Smart cards are also used to store money and the amount gets
deducted after every transaction.
Smart cards can only be accessed using a PIN that is assigned to every customer. Smart cards are secure, as they store information in encrypted format, and they are less expensive and provide faster processing. Mondex and Visa Cash cards are examples of smart cards.
E-Money
E-Money transactions refer to situations where payment is done over the network and the amount gets transferred from one financial body to another without the involvement of a middleman. E-money transactions are faster and more convenient, and they save a lot of time.
Online payments done via credit cards, debit cards, or smart cards are examples of e-money transactions. Another popular example is e-cash. In the case of e-cash, both customer and merchant have to sign up with the bank or company issuing e-cash.
Electronic Fund Transfer (EFT)
Nowadays, internet-based EFT is getting popular. In this case, a customer uses the website provided by the bank, logs in to the bank's website, and registers another bank account. He/she then places a request to transfer a certain amount to that account. The customer's bank transfers the amount to the other account if it is in the same bank; otherwise, the transfer request is forwarded to an ACH (Automated Clearing House) to transfer the amount to the other account, and the amount is deducted from the customer's account. Once the amount is transferred to the other account, the customer is notified of the fund transfer by the bank.
ACID Characteristics
The characteristics of the four ACID properties (atomicity, consistency, isolation, and durability), as defined by Reuter and Härder, are as follows:
Atomicity
Transactions are often composed of multiple statements. Atomicity guarantees that each
transaction is treated as a single "unit", which either succeeds completely, or fails completely: if
any of the statements constituting a transaction fails to complete, the entire transaction fails and
the database is left unchanged. An atomic system must guarantee atomicity in each and every
situation, including power failures, errors and crashes.
Consistency
Consistency ensures that a transaction can only bring the database from one valid state to
another, maintaining database invariants: any data written to the database must be valid
according to all defined rules, including constraints, cascades, triggers, and any combination
thereof. This prevents database corruption by an illegal transaction, but does not guarantee that a
transaction is correct.
Isolation
Transactions are often executed concurrently (e.g., reading and writing to multiple tables at the
same time). Isolation ensures that concurrent execution of transactions leaves the database in the
same state that would have been obtained if the transactions were executed sequentially.
Isolation is the main goal of concurrency control; depending on the method used, the effects of
an incomplete transaction might not even be visible to other transactions.
Durability
Durability guarantees that once a transaction has been committed, it will remain committed even
in the case of a system failure (e.g., power outage or crash). This usually means that completed
transactions (or their effects) are recorded in non-volatile memory.
Examples
The following examples further illustrate the ACID properties. In these examples, the database table has two columns, A and B. An integrity constraint requires that the value in A and the value in B must sum to 100. The following SQL code creates a table as described above:
CREATE TABLE acidtest (A INTEGER, B INTEGER, CHECK (A + B = 100));
Atomicity failure
In database systems, atomicity (or atomicness; from Greek a-tomos, undividable) is one of the
ACID transaction properties. A series of database operations in an atomic transaction will either
all occur, or none will occur. The series of operations cannot be separated with only some of
them being executed, which makes the series of operations "indivisible". A guarantee of
atomicity prevents updates to the database occurring only partially, which can cause greater
problems than rejecting the whole series outright. In other words, atomicity means indivisibility
and irreducibility.[4] Alternatively, we may say that a logical transaction may be composed of one or more physical transactions. Unless and until all component physical transactions are executed, the logical transaction will not have occurred as far as the effects on the database are concerned. Say our logical transaction consists of transferring funds from account A to account B. This logical transaction may be composed of several physical transactions: first removing the amount from account A as one physical transaction and then, as a second transaction, depositing that amount in account B. We would not want the amount removed from account A unless we are sure it will be transferred into account B. Unless and until both transactions have happened and the amount has been transferred to account B, the transfer has not, as far as the database is concerned, occurred.
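The funds transfer can be sketched as one atomic transaction with Python's built-in sqlite3 module: either both updates commit together, or a rollback leaves the database unchanged; the schema and amounts are invented.

# Sketch of the funds-transfer example as one atomic transaction.
# Either both UPDATEs commit, or neither takes effect.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE accounts (name TEXT PRIMARY KEY, balance INTEGER)")
conn.executemany("INSERT INTO accounts VALUES (?, ?)", [("A", 100), ("B", 0)])
conn.commit()

try:
    conn.execute("UPDATE accounts SET balance = balance - 10 WHERE name = 'A'")
    conn.execute("UPDATE accounts SET balance = balance + 10 WHERE name = 'B'")
    conn.commit()        # both updates become visible together
except sqlite3.Error:
    conn.rollback()      # on any failure, neither update takes effect

print(conn.execute("SELECT name, balance FROM accounts").fetchall())
# [('A', 90), ('B', 10)]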
Consistency failure
Consistency is a very general term, which demands that the data must meet all validation rules.
In the previous example, the validation is a requirement that A + B = 100. All validation rules
must be checked to ensure consistency. Assume that a transaction attempts to subtract 10 from A
without altering B. Because consistency is checked after each transaction, it is known that A + B
= 100 before the transaction begins. If the transaction removes 10 from A successfully, atomicity
will be achieved. However, a validation check will show that A + B = 90, which is inconsistent
with the rules of the database. The entire transaction must be cancelled and the affected rows
rolled back to their pre-transaction state. If there had been other constraints, triggers, or cascades,
every single change operation would have been checked in the same way as above before the
transaction was committed. Similar issues may arise with other constraints. We may have required the data types of both A and B to be integers. If we were then to enter, say, the value 13.5 for A, the transaction would be cancelled, or the system might raise an alert in the form of a trigger (if/when a trigger has been written to that effect). Another example would be integrity constraints, which would not allow us to delete a row in one table whose primary key is referred to by at least one foreign key in another table.
Isolation failure
To demonstrate isolation, we assume two transactions execute at the same time, each attempting
to modify the same data. One of the two must wait until the other completes in order to maintain
isolation.
Consider two transactions. T1 transfers 10 from A to B. T2 transfers 10 from B to A. Combined,
there are four actions:
T1 subtracts 10 from A.
T1 adds 10 to B.
T2 subtracts 10 from B.
T2 adds 10 to A.
If these operations are performed in order, isolation is maintained, although T2 must wait.
Consider what happens if T1 fails halfway through. The database eliminates T1's effects, and
T2 sees only valid data.
By interleaving the transactions, the actual order of actions might be:
T1 subtracts 10 from A.
T2 subtracts 10 from B.
T2 adds 10 to A.
T1 adds 10 to B.
Again, consider what happens if T1 fails while modifying B (step 4). By the time T1 fails, T2 has
already modified A; it cannot be restored to the value it had before T1 without leaving an invalid
database. This is known as a write-write failure, because two transactions attempted to write to the same data field. In a typical system, the problem would be resolved by reverting to
the last known good state, canceling the failed transaction T1, and restarting the interrupted
transaction T2 from the good state.
Durability failure
Consider a transaction that transfers 10 from A to B. First it removes 10 from A, then it adds 10
to B. At this point, the user is told the transaction was a success; however, the changes are still queued in the disk buffer, waiting to be committed to disk. The power fails and the changes are lost. The user assumes (understandably) that the changes have persisted.
Implementation
Processing a transaction often requires a sequence of operations that is subject to failure for a
number of reasons. For instance, the system may have no room left on its disk drives, or it may
have used up its allocated CPU time. There are two popular families of techniques: write-ahead
logging and shadow paging. In both cases, locks must be acquired on all information to be
updated, and depending on the level of isolation, possibly on all data that may be read as well. In
write-ahead logging, atomicity is guaranteed by copying the original (unchanged) data to a log before changing the database. That allows the database to return to a consistent state
in the event of a crash. In shadowing, updates are applied to a partial copy of the database, and
the new copy is activated when the transaction commits.
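A toy version of write-ahead logging can be sketched in a few lines of Python: record the original value before changing it, and replay the log to restore a consistent state after a simulated crash; real databases write the log to durable storage, not an in-memory dict.

# Toy sketch of write-ahead logging: save the original (unchanged) data
# to a log before modifying the database, so a crash can be undone.
db = {"A": 50, "B": 50}
log = {}

def update(key, value):
    log.setdefault(key, db[key])  # copy the original to the log first
    db[key] = value               # only then change the database

def recover():
    db.update(log)                # roll back every partial change
    log.clear()

update("A", 40)
# ... simulated crash before the matching update to "B" commits ...
recover()
print(db)  # {'A': 50, 'B': 50} -- back to the consistent state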
Locking vs multiversioning
Many databases rely upon locking to provide ACID capabilities. Locking means that the
transaction marks the data that it accesses so that the DBMS knows not to allow other
transactions to modify it until the first transaction succeeds or fails. The lock must always be
acquired before processing data, including data that is read but not modified. Non-trivial
transactions typically require a large number of locks, resulting in substantial overhead as well as
blocking other transactions. For example, if user A is running a transaction that has to read a row
of data that user B wants to modify, user B must wait until user A's transaction completes. Two-phase locking is often applied to guarantee full isolation.
An alternative to locking is multiversion concurrency control, in which the database provides
each reading transaction the prior, unmodified version of data that is being modified by another
active transaction. This allows readers to operate without acquiring locks, i.e., writing
transactions do not block reading transactions, and readers do not block writers. Going back to
the example, when user A's transaction requests data that user B is modifying, the database
provides A with the version of that data that existed when user B started his transaction. User A
gets a consistent view of the database even if other users are changing data. One implementation,
namely snapshot isolation, relaxes the isolation property.
Secure electronic transaction (SET) was an early protocol for electronic credit card payments.
As the name implied, SET was used to facilitate the secure transmission of consumer credit card
information via electronic avenues, such as the Internet. SET blocked out the details of credit
card information, thus preventing merchants, hackers and electronic thieves from accessing this
information.
The underlying protocols and standards for secure electronic transactions were backed and
supported by Microsoft, IBM, MasterCard, Visa, Netscape, and others. Digital certificates were
assigned to provide the electronic access to funds, whether it was a credit line or bank account.
When a purchase was made electronically, encrypted digital certificates were what let the
customer, merchant, and financial institution complete a verified transaction.
Digital certificates were generated for participants in the transaction, along with matching digital
keys that allowed them to confirm the certificates of the other party. The algorithms used would
ensure that only a party with the corresponding digital key would be able to confirm the
transaction. This way a consumer’s credit card or bank account could be used without revealing
details like account numbers. Thus, SET was a form of security against account theft, hacking,
and other criminal actions.