


Training Based On IT SUPPORT





University Roll no. 0729110007

This is to certify that the project entitled

was carried out by



in partial fulfillment of the requirements for the award of the

degree of


(SESSION 2010-2011)

awarded by UPTU

University Roll no. 0729110007

during the summer vacation training of the year 2010



First and foremost, we would like to
thank Mr. MAHESH PANDEY for his
valuable guidance and advice. He
inspired us greatly to work on this
project. His willingness to motivate us
contributed tremendously to our
project. We would also like to thank
him for showing us some examples
related to the topic of our project.
Besides, I would like to thank HCL for
providing us with a good environment
and facilities to complete this project.
Also, I would like to take this
opportunity to thank Mr. AMIT CHOPRA
for offering this subject. He gave us an
opportunity to participate and learn
“Using Networking and

I, ANKUR SHARMA, a student of DIT,

Bachelor of Technology (B.Tech), 7th
Semester, hereby declare that the
project report titled “IT SUPPORT
ENGINEER” is an original work carried
out by me at HCL, availing the
guidance of my project guide Mr.
MAHESH PANDEY (IT-D), to the best of
my knowledge and belief.

Roll No. 0729110007
Branch: CSE
Software has become the key element in the evolution of
computer-based systems and products. Over the past four
decades, software has evolved from a specialized problem-
solving and information-analysis tool into an industry in
itself. The software development process draws on the skills
and experience of the people involved.

The summer vacation of the B.Tech course is a training period that

forms part of the curriculum; it provides valuable practical experience
to students, adds to their individual treasure of experience, and offers
exposure to real-time management in an organization. It is
the period during which the student is introduced and familiarized
with the industrial environment. With the advancement in computer
technology and increasing automation, the software industry has
become all the more important. The introduction of computers
and electronics in the field of processing has made it essential
that inputs be much more accurate and controllers much faster
in response. The state of the art used in all process-controlled
industries spares us the tension of managing fast processes
manually, thus ensuring improved operating efficiency.
Industrial training is a major complement to theoretical studies, as it

covers all that remains uncovered in the classroom; without it,
the studies remain ineffective and incomplete. The objective of
training is to raise the performance of the student in one or more
of its aspects. This may be achieved by providing new knowledge
and information relevant to the project, by teaching new trends, and
by imbuing an individual with new attitudes, motives, co-
ordination, co-operation and other personality characteristics.
Often these techniques are applied to segments of the work
force regardless of the existing performance level.

Since gaining theoretical knowledge alone is not sufficient for

success in life, especially in an ever-growing industry like this one,
practical training plays an important role in building the future of
an individual. During my training period, I had the opportunity to
gain practical experience at HCL, Noida. I used this opportunity in
a very satisfactory manner and believe that it will be very beneficial
for me.

a. Company’s Introduction
b. History Of Computers And Computing In India
i. Early Computation
ii. Navigation And Astronomy
iii. Weather Prediction
iv. Symbolic Computations
c. HCL Technologies
d. HCL Infosystems Ltd
e. External Links
f. IT Mission
i. The ITD Caters To All Business Functions
g. Training Program
h. IT Support Engineer
i. Flow Chart
i. As An Academic Discipline
a. Algorithms
i. Algorithms Includes
ii. Flow Chart
b. Organization
c. Artificial Intelligence
i. Artificial Intelligence Includes:

d. Automation

i. Automation Parts

e. Operating Systems
i. Operating Systems Includes
f. Computer Networking
i. Computer Networking Parts
g. Software Engineering
h. Programming Fundamentals
i. Programming Fundamentals Parts
ii. Outsourcing Technical Support
iii. Multi-Tiered Technical Support
Level 1 (L1)
Level 2 (L2)
Level 3 (L3)
Level 4 (L4)
i. Organizations
i. Desk Side Team
ii. Network Team
iii. Server Team
iv. Other Teams
a. Architecture
b. Terminal Server
i. Terminal Service Gateway
c. Remote Desktop Connection
i. Remote Applications
d. Information Sharing & Security
i. Basic Principles
Risk Management
a. Defense In Depth
b. Remedy User

c. BMC Remedy User

d. BMC Remedy Mid-Tier

a. Domain Name Space
Parts Of A Domain Name
Top-Level Domains
Second-Level And Lower Level Domains
Internationalized Domain Name
b. Domain Name Space

c. Domain Name Controller

Windows NT
d. Windows 2000
e. Abuse & Regulations

f. Fictitious Domain Name

g. Internet Protocol
i. Virtual IP Address
ii. IP Versions

iii. IP Version 4 Addresses

iv. IPv4 Subnetting

v. History Of IPv4 Subnetting

vi. Classful IPv4 Subnetting

vii. IPv4 Subnetting Today – CIDR

viii. IPv4 Private Addresses

ix. IP Version 6 Addresses

x. IPv6 Private Addresses

xi. IP Subnetworks

xii. Method Of Assignment

1. Static & Dynamic IP Addresses

2. Uses Of Dynamic Addressing

3. Sticky Dynamic IP Address

4. Address Auto Configuration

5. Uses Of Static Addressing

6. Modifications To IP Addressing

7. IP Address Translation

8. Tools

9. .PST Files
h. Overview
i. Support
j. Size And Formats

k. Entourage And Outlook For Mac

l. Determine Your Account Types

a. Standards Evolution

b. Cabling

c. Technical Aspects
d. LAN Messenger
e. User Messenger

f. Protocol
g. Containing Username & Password
h. Unlock & Reset The User Password
i. Administrators And Administrative Rights
a. Windows
b. Unix, Linux, BSD, Solaris, And Mac OS X


HCL is a leading global technology and IT enterprise. Based
in Noida, India, the HCL Enterprise is an electronics,
computing and information technology company comprising
two publicly listed companies in India: HCL Technologies
and HCL Infosystems.
HCL was founded in 1976 by Shiv Nadar, Arjun
Malhotra, Subhash Arora, Ajai Chowdhry, DS Puri,
& Yogesh Vaidya. HCL focused on addressing the IT
hardware market in India for the first two decades of its
existence, with some sporadic activity in the global market.
On termination of the joint venture with HP in 1996, HCL
became an enterprise comprising HCL Technologies
(to address the global IT services market) and HCL
Infosystems (to address the Indian and APAC IT hardware
market). HCL has since then operated as a holding company.
The history of computing is longer than the history of
computing hardware and modern computing
technology, and includes the history of methods intended
for pen and paper or for chalk and slate, with or without
the aid of tables. The timeline of computing presents a
summary list of major developments in computing by year.

The earliest known tool for use in computation was
the abacus, thought to have been
invented in Babylon circa 2400 BC. Its original style
of usage was by lines drawn in sand with pebbles.
Abaci of a more modern design are still used as
calculation tools today. The abacus was the most
advanced system of calculation known for millennia,
preceding Greek methods by some 2,000 years.
None of the early computational devices were
really computers in the modern sense, and it took
considerable advancement in mathematics and
theory before the first modern computers could be
designed.

Starting with known special cases, the calculation of
logarithms and trigonometric functions can be
performed by looking up numbers in a mathematical
table and interpolating between known cases. For
small enough differences, this linear operation was
accurate enough for use
in navigation and astronomy in the Age of
Exploration. The uses of interpolation have thrived in
the past 500 years: by the twentieth century, Leslie
Comrie and W.J. Eckert had systematized the use of
interpolation in tables of numbers for punched card
calculation.
In our time, even a student can simulate the motion
of the planets, an N-body differential equation, using
the concepts of numerical approximation, a feat
which even Isaac Newton could admire, given his
struggles with the motion of the Moon.
The numerical solution of differential equations,
notably the Navier-Stokes equations, was an
important stimulus to computing, beginning with Lewis Fry
Richardson's numerical approach to solving
differential equations. To this day, some of the most
powerful computer systems on Earth are used
for weather forecasts.

By the late 1960s, computer systems could perform
symbolic algebraic manipulations well enough to
pass college-level calculus courses.
HCL Technologies is a global IT services company
headquartered in Noida, a suburb of Delhi, India, and led by
Vineet Nayar. HCL Technologies, along with its
subsidiaries, had consolidated revenues of US$5 billion
as of 2010, and employed more than 60,000 workers.
HCL offers IT solutions, remote infrastructure
management, and engineering and R&D services. The
company provides services across many industries.

HCL Infosystems Ltd

HCL Infosystems Ltd: HCL Infosystems, India's premier
information enabling and integration company, offers its
customers technology solutions across multiple
platforms. It has partnerships with leading global
players such as Intel, Toshiba, Ericsson, Microsoft, Nokia
and Sun Microsystems, among others.

HCL Infosystems' Frontline division has more than 2000

channel partners covering the entire length & breadth of
the country. The products are manufactured at two ISO
9001-certified, state-of-the-art manufacturing facilities at
Pondicherry. With a mission to provide world-class
information technology solutions and services to enable
its customers to serve their customers better, HCL
Infosystems is forever setting new standards of IT in the
country. A listed subsidiary of HCL, it is an India-based
hardware and systems integrator. It has a presence in 170
locations and 300 service centres throughout India. Its
manufacturing facilities are based
in Chennai, Pondicherry and Uttarakhand. It is
headquartered at Noida.
HCL Peripherals (a unit of HCL Infosystems Ltd.), founded
in the year 1983, is a manufacturer of computer
peripherals in India, covering display products, thin-client
solutions, information and interactive kiosks, and a range
of networking products & solutions. HCL Peripherals has
two manufacturing facilities, one in Pondicherry
(Electronics) and the other in Chennai (Mechanical). The
company has been awarded ISO 9001:2000 and ISO 14001
certifications.
As one of the biggest companies in India, HCL has
external links working as nodes of the company. Some
are listed below:

 HCL Enterprise
 HCL Technologies
 HCL Info systems
 HCL Peripherals
 HCL Infinet
 HCL Axon
 HCL Security
"Stimulate the value chain for business excellence by
providing innovative information systems using
appropriate IT."
IT mission emphasizes the need for innovative and
appropriate IT systems to drive business excellence.
Appropriate IT systems refer to making optimum and
not excessive use of technology. And most
importantly, IT has to permeate the entire value chain
including our vendors and dealers.

The ITD caters to all business

functions including:

• Marketing and sales

• Material process
• Production
• Finance
• Personnel
• Spares
• Quality
• Service
• I intend to give a brief overview of the training
undertaken at HCL. The objective behind this
training was to give us an insight into the day-to-day
business operations at HCL and how the IT
division enhances productivity and streamlines
these operations.

• The IT division comprises different departments,

and we were given orientations in each
department. These orientations were conducted
with the main objective of making us aware of
the business processes under each department
and the magnitude of impact IT initiatives have on
HCL’s business.

• During the classroom training, we got the

opportunity to practically go through the industrial
software package and learn the use of the software
and its techniques.

We also did R & D with the special software package

provided by them in the classroom environment.

• IT SUPPORT ENGINEER covers a range of services

providing assistance with technology
products, i.e. electronic or mechanical goods.
• In general, technical support services attempt to help
the user solve specific problems with a product,
rather than providing training, customization, or
other support services.
• It is an information and assistance resource that
troubleshoots problems with computers or similar devices.
• Computer engineers are involved in many aspects of
computing, from the design of individual
microprocessors, personal computers,
and supercomputers, to circuit design. This field of
engineering focuses not only on how computer
systems themselves work, but also on how they
integrate into the larger picture.
A flow chart that shows a simple workflow of IT support

• The first accredited computer engineering degree
program in the United States was established at Case
Western Reserve University in 1971. As of
October 2004, there were 170 ABET-accredited
computer engineering programs in the US.
• Due to increasing job requirements for engineers,
who can design and manage all forms of computer
systems used in industry, some tertiary institutions
around the world offer a bachelor's degree generally
called computer engineering.
• Both computer engineering and electronic
engineering programs include analog and digital
circuit design in their curricula. As with
most engineering disciplines, having a sound
knowledge of mathematics and sciences is necessary
for computer engineers.
• In many institutions, computer engineering students
are allowed to choose areas of in-depth study in their
junior and senior years, because the full breadth of
knowledge used in the design and application of
computers is beyond the scope of an undergraduate
degree.

The joint IEEE/ACM curriculum guidelines define the core knowledge areas as:

• Algorithms
• Computer architecture and organization
• Artificial intelligence
• Automation
• Operating systems
• Microprocessor interfacing and programming
• Computer networking
• Software engineering
• Programming fundamentals


• In mathematics, computer science, and related

subjects, an 'algorithm' is an effective
method for solving a problem expressed as a
finite sequence of instructions.
• Algorithms are used for calculation, data
processing, and in many other fields. (In more
advanced or abstract settings, the instructions
do not necessarily constitute a finite sequence,
or even a sequence at all; see, e.g.,
"nondeterministic algorithm".)
• Each algorithm is a list of well-defined
instructions for completing a task. Starting
from an initial state, the instructions describe
a computation that proceeds through a well-
defined series of successive states, eventually
terminating in a final ending state.
• A partial formalization of the concept began
with attempts to solve
the Entscheidungsproblem (the "decision
problem") posed by David Hilbert in 1928.
• Subsequent formalizations were framed as
attempts to define "effective calculability" or
"effective method". Those formalizations
included the Gödel–Herbrand–Kleene recursive
functions of 1930, 1934 and 1935, Alonzo
Church’s lambda calculus of 1936, Emil Post's
"Formulation 1" of 1936, and Alan
Turing's Turing machines of 1936-7 and 1939.
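The ideas above can be made concrete with Euclid's algorithm for the greatest common divisor: a finite sequence of well-defined instructions that starts from an initial state and always terminates in a final state. A minimal sketch in Python, for illustration:

```python
def gcd(a: int, b: int) -> int:
    """Euclid's algorithm: a classic example of a well-defined,
    terminating algorithm."""
    # Each loop iteration is one well-defined state transition;
    # b strictly decreases toward zero, so the loop must terminate.
    while b != 0:
        a, b = b, a % b
    return a  # final ending state: b == 0, a holds the GCD

print(gcd(48, 36))  # prints 12
```

Note that the same abstract algorithm could equally be expressed in a flow chart or in any other programming language; the algorithm is the method, not the notation.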

o Etymology
o Formalization
o Termination
o Expressing algorithms
o Computer algorithms
o Implementation
o Algorithmic analysis
o Formal versus empirical
o Classification:
o By implementation
o By design paradigm
o By field of study

A flow chart that shows a simple series of

instructions for bulb replacement, as a
diagrammatic representation of an algorithm

• In computer science and computer

engineering, computer architecture or digital
computer organization is the conceptual design and
fundamental operational structure of
a computer system.
• It is a blueprint and functional description of
requirements and design implementations for the
various parts of a computer, focusing largely on the
way the central processing unit (CPU)
performs internally and accesses addresses in memory.
• Computer architecture comprises at least three main
subcategories:
• Instruction set architecture, or ISA, is the abstract

image of a computing system that is seen by
a machine language (or assembly language)
programmer, including the instruction set, word
size, memory address modes, processor registers,
and address and data formats.
• Micro-architecture, also known as Computer
organization is a lower level, more concrete and
detailed, description of the system that involves how
the constituent parts of the system are
interconnected and how they interoperate in order to
implement the ISA.
• The size of a computer's cache for instance, is an
organizational issue that generally has nothing to do
with the ISA.
System design, which includes all of the other
hardware components within a computing
system, such as:

1. System interconnects such as computer

buses and switches
2. Memory controllers and hierarchies
3. CPU off-load mechanisms such as direct
memory access (DMA)
4. Issues like multiprocessing
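The separation between an ISA (the contract visible to the machine-language programmer) and its implementation can be sketched in code. The three-instruction machine below is purely hypothetical, invented here for illustration; the interpreter is one possible "implementation" of the ISA, just as a hardware CPU would be another:

```python
# A hypothetical three-instruction ISA: LOAD, ADD, HALT.
# The ISA defines *what* each instruction does to the programmer-visible
# state (one accumulator register); this interpreter is one implementation.

def run(program):
    """Execute a list of (opcode, operand) pairs on a one-register machine."""
    acc = 0  # the single programmer-visible register
    for opcode, operand in program:
        if opcode == "LOAD":    # put a constant into the accumulator
            acc = operand
        elif opcode == "ADD":   # add a constant to the accumulator
            acc += operand
        elif opcode == "HALT":  # stop execution
            break
        else:
            raise ValueError(f"unknown opcode: {opcode}")
    return acc

print(run([("LOAD", 5), ("ADD", 7), ("HALT", 0)]))  # prints 12
```

Anything the programmer cannot observe through this interface, such as how the interpreter stores `acc`, is a microarchitecture or implementation concern, just as a cache size is for a real CPU.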

Once both ISA and microarchitecture have been

specified, the actual device needs to be designed
into hardware. This design process is
called implementation. Implementation is usually
not considered architectural definition, but rather
hardware design engineering.
Implementation can be further broken down into
three (not fully distinct) pieces:

 Logic Implementation — design of blocks defined

in the microarchitecture at (primarily) the
register-transfer and gate levels.
 Circuit Implementation — transistor-level design
of basic elements (gates, multiplexers, latches,
etc.) as well as of some larger blocks (ALUs,
caches, etc.) that may be implemented at this
level, or even (partly) at the physical level, for
performance reasons.
 Physical Implementation — physical circuits are
drawn out, the different circuit components are
placed in a chip floorplan or on a board and the
wires connecting them are routed.


• Artificial intelligence (AI) is the intelligence of

machines and the branch of computer science that
aims to create it.
• Textbooks define the field as "the study and design
of intelligent agents", where an intelligent agent is a
system that perceives its environment and takes
actions that maximize its chances of success. John
McCarthy, who coined the term in 1956, defines it as
"the science and engineering of making intelligent machines".
• The field was founded on the claim that a central
property of humans, intelligence—
the sapience of Homo sapiens—can be so precisely
described that it can be simulated by a machine.
• This raises philosophical issues about the nature of
the mind and limits of scientific hubris, issues which
have been addressed by myth,
fiction and philosophy since antiquity.
• Artificial intelligence has been the subject of
optimism, but has also suffered setbacks and, today,
has become an essential part of the technology
industry, providing the heavy lifting for many of the
most difficult problems in computer science.
• AI research is highly technical and specialized,
deeply divided into subfields that often fail to
communicate with each other. Subfields have grown
up around particular institutions, the work of
individual researchers, the solution of specific
problems, longstanding differences of opinion about
how AI should be done and the application of widely
differing tools.
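The "intelligent agent" definition above (perceive the environment, act to maximize success) can be sketched on a deliberately tiny scale. The thermostat-style agent below is a hypothetical example, with the target temperature and actions invented for illustration:

```python
# A minimal reflex agent: perceives the room temperature and
# picks the action whose predicted outcome is closest to a goal.

TARGET = 21.0  # desired temperature in Celsius (assumed goal)

def agent(perceived_temp: float) -> str:
    """Choose the action that maximizes 'success' (closeness to TARGET)."""
    # Predicted temperature after each available action:
    actions = {"heat": perceived_temp + 1.0,
               "cool": perceived_temp - 1.0,
               "idle": perceived_temp}
    # Pick the action whose predicted outcome is nearest the target.
    return min(actions, key=lambda a: abs(actions[a] - TARGET))

print(agent(18.0))  # prints "heat"
print(agent(25.0))  # prints "cool"
```

Real AI systems differ enormously in how they perceive and predict, but this perceive-evaluate-act loop is the shape the textbook definition describes.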


1) Problems

o 1.1 Deduction,
reasoning, problem solving
o 1.2 Knowledge
o 1.3 Planning
o 1.4 Learning
o 1.5 Natural language
o 1.6 Motion and manipulation
o 1.7 Perception
o 1.8 Social intelligence
o 1.9 Creativity
o 1.10 General intelligence

2) Approaches
o 2.1 Cybernetics and
brain simulation
o 2.2 Symbolic
o 2.3 Sub-symbolic
o 2.4 Statistical
o 2.5 Integrating the approaches

3) Tools

o 3.1 Search and optimization

o 3.2 Logic
o 3.3 Probabilistic
methods for uncertain reasoning
o 3.4 Classifiers and
statistical learning methods
o 3.5 Neural networks
o 3.6 Control theory
o 3.7 Languages

• Automation is the use of control

systems and information technologies to reduce
the need for human intervention. In the scope
of industrialization, automation is a step
beyond mechanization.
• Whereas mechanization provided human
operators with machinery to assist them with
the muscular requirements of
work, automation greatly reduces the need for
human sensory and mental requirements
as well.
• Automation plays an increasingly important
role in the world economy and in daily
life.
Automation includes:

• Impact
• Concerns about unemployment
• Dependence on social factors
• Reliability and precision
• Health and environment
• Convertibility and turn-around time
• Automation tools

• An operating system (OS) is a set of system

software programs in a computer that regulate the
ways application software programs use
the computer hardware and the ways
that users control the computer.
• For hardware functions such as input/output
and memory space allocation, operating system
programs act as an intermediary between application
programs and the computer hardware, although
application programs are usually executed directly
by the hardware.
• Operating systems are also a field of study
within applied computer science.
• Operating systems are found on almost any device
that contains a computer with multiple programs—
from cellular phones and video game
consoles to supercomputers and web servers.
• Operating systems are two-sided platforms, bringing
consumers (the first side) and program
developers (the second side) together in a single market.
• Some popular modern operating systems for
personal computers include Microsoft Windows, Mac
OS X, and Linux.
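The intermediary role described above is visible whenever a program asks the OS for a resource instead of touching hardware directly. A small sketch using Python's standard library, whose functions wrap the underlying system calls:

```python
import os
import tempfile

# Application programs never write to the disk controller directly;
# they request services from the OS via system calls, wrapped here
# by the os and tempfile modules.

fd, path = tempfile.mkstemp()            # OS allocates a file and a descriptor
os.write(fd, b"hello from user space")   # write() system call
os.close(fd)                             # release the descriptor

with open(path, "rb") as f:              # another OS-mediated open()
    data = f.read()

os.remove(path)                          # ask the OS to delete the file
print(data)           # prints b'hello from user space'
print(os.getpid())    # the OS assigns every running process an ID
```

Every step here (allocating the file, scheduling the process, mapping its memory) is performed by the OS on the program's behalf, which is exactly the intermediary role the text describes.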
Basic representation of the working of an operating system


o PLAN 9
1. The user interface
2. Graphical user interfaces
1. Program execution
2. Interrupts
3. Protected mode, supervisor mode, and virtual
4. Memory management
5. Virtual memory
6. Multitasking
7. Device drivers
8. Networking
9. Security
10. Real-time operating systems
11. Diversity of operating systems and portability
About operating system

• Computer networking is
the engineering discipline concerned
with the communication between computer
systems or devices.
• A computer network is any set of computers or
devices connected to each other with the ability to
exchange data.
• The three types of networks are: the Internet,
the intranet, and the extranet. Examples of different
network methods are:

• Local area network (LAN), which is usually a small

network constrained to a small geographic area. An
example of a LAN would be a computer network
within a building.
• Metropolitan area network (MAN), which is used for
a medium-sized area, for example a city or a state.
• Wide area network (WAN), which is usually a larger
network that covers a large geographic area.
• Wireless LANs and WANs (WLAN & WWAN) are the
wireless equivalents of the LAN and WAN.
• All networks are interconnected to allow
communication with a variety of different kinds of
media, including twisted-pair copper wire
cable, coaxial cable, optical fiber, power lines and
various wireless technologies.
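The core idea that networked computers "exchange data" can be demonstrated in a few lines with the standard socket API. The sketch below sends one message over a TCP connection on the local loopback interface, so it runs on a single machine with no real network required:

```python
import socket
import threading

# A minimal TCP exchange over the loopback interface: one thread
# acts as the server, the main thread as the client.

server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.bind(("127.0.0.1", 0))   # port 0: let the OS pick a free port
server.listen(1)
port = server.getsockname()[1]

def serve():
    conn, _addr = server.accept()
    conn.sendall(b"hello over TCP")   # server sends data to the client
    conn.close()

t = threading.Thread(target=serve)
t.start()

client = socket.create_connection(("127.0.0.1", port))
received = client.recv(1024)          # client reads the server's message
client.close()
t.join()
server.close()
print(received)  # prints b'hello over TCP'
```

The same client/server pattern scales from this loopback toy up to LANs and WANs; only the addresses and the media carrying the bytes change.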

• Networking methods
• Local area network (LAN)
• Wide area network (WAN)
• Wireless networks (WLAN, WWAN)
• Network topology

• Software engineering (SE) is

a profession dedicated to designing, implementing,
and modifying software so that it is of higher quality,
more affordable, maintainable, and faster to build.
• The term software engineering first appeared in the
1968 NATO Software Engineering Conference, and
was meant to provoke thought regarding the
perceived "software crisis" at the time.
• Since the field is still relatively young compared to its
sister fields of engineering, there is still much debate
around what software engineering actually is, and if
it conforms to the classical definition of engineering.
• Others, such as Steve McConnell, argue that
engineering's blend of art and science to achieve
practical ends provides a useful model for software
development.

• Computer programming (often shortened

to programming or coding) is the process of
designing, writing, testing,
debugging / troubleshooting, and maintaining
the source code of computer programs.
• This source code is written in a programming
language. The code may be a modification of an
existing source or something completely new.
• The purpose of programming is to create a program
that exhibits a certain desired behavior.
• The process of writing source code often requires
expertise in many different subjects, including
knowledge of the application domain, specialized
algorithms and formal logic.
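The designing-writing-testing-debugging cycle described above can be shown on a tiny scale: a function is written, then exercised by test cases that would expose a bug before the code ships. An illustrative sketch:

```python
def average(values):
    """Return the arithmetic mean of a non-empty list of numbers."""
    if not values:
        # Guard added during debugging: an empty list would otherwise
        # crash with an unhelpful ZeroDivisionError.
        raise ValueError("average() of empty list")
    return sum(values) / len(values)

# Testing: small cases that document the desired behavior.
assert average([2, 4, 6]) == 4
assert average([5]) == 5

# Debugging aid: the failure mode is explicit rather than a crash.
try:
    average([])
except ValueError as e:
    print(e)  # prints "average() of empty list"
```

Even in a program this small, the cycle is visible: the desired behavior is stated, the code is written, tests check it, and a discovered edge case is handled deliberately.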

Programming fundamentals includes:

• Quality requirements
• Algorithmic complexity
• Methodologies
• Measuring language usage
• Debugging
• Programming languages
• Programmers

Outsourcing technical support

• With the increasing use of technology in modern

times, there is a growing requirement to provide
technical support.
• Many organizations locate their technical support
departments or call centers in countries with lower costs.
• There has also been a growth in companies
specializing in providing technical support to other
organizations.
• These are often referred to as MSPs (Managed
Service Providers).
• For businesses needing to provide technical
support, outsourcing provides them with the ability
to maintain a high availability of service.
• This comes as a result of peaks in call volumes
during the day, periods of high activity due to the
introduction of new products and maintenance
service packs, and the necessity to provide
consumers with a high level of service at a low cost
to the business.
• For businesses needing technical support assets,
outsourcing enables their core employees to focus
more on their work in order to maintain productivity.
• It also enables them to utilize specialized personnel
whose technical knowledge base and experience
may exceed the scope of the business, thus
providing a higher level of technical support to their
employees.

Multi-tiered technical support

• Technical support is often subdivided into tiers, or

levels, in order to better serve a business or
customer base.
• The number of levels a business uses to organize
their technical support group depends on the
business’ needs and its ability to sufficiently serve
its customers or employees.
• The reason for providing a multi-tiered support
system instead of one general support group is to
provide the best possible service in the most efficient
possible manner.
• Success of the organizational structure depends
on the technicians’ understanding of their level of
responsibility and commitments, their customer
response time commitments, and when to
appropriately escalate an issue and to which level.
• A common support structure revolves around a
three-tiered technical support system.

Level 1(L1)

• This is the initial support level responsible for basic

customer issues. It is synonymous with first-line
support, level 1 support, front-end support, support
line 1, and various other headings denoting basic
level technical support functions.
• The first job of a Tier I specialist is to gather the
customer’s information and to determine the
customer’s issue by analyzing the symptoms and
figuring out the underlying problem.
• When analyzing the symptoms, it is important for
the technician to identify what the customer is trying
to accomplish so that time is not wasted on
“attempting to solve a symptom instead of a
problem.”
• Once the underlying problem has been identified,
the specialist can begin sorting through
the possible solutions available.
• Technical support specialists in this group typically
handle straightforward and simple problems while
“possibly using some kind of knowledge
management tool.”

Level 2(L2)
• This is a more in-depth technical support level than
Tier I, containing experienced and more
knowledgeable personnel on a particular product or
service.
• It is synonymous with level 2 support, support line 2,
administrative level support, and various other
headings denoting advanced
technical troubleshooting and analysis methods.
• Technicians in this realm of knowledge are
responsible for assisting Tier I personnel in solving
basic technical problems and for investigating
elevated issues by confirming the validity of the
problem and searching for known solutions to these
more complex issues.
• However, prior to the troubleshooting process, it is
important that the technician review the work order
to see what has already been accomplished by the
Tier I technician and how long the technician has
been working with the particular customer.

Level 3(L3)

• This is the highest level of support in a three-tiered

technical support model responsible for handling the
most difficult or advanced problems.
• It is synonymous with level 3 support, back-end
support, support line 3, high-end support, and
various other headings denoting expert level
troubleshooting and analysis methods.
• These individuals are experts in their fields and are
responsible for not only assisting both Tier I and Tier
II personnel, but with the research and development
of solutions to new or unknown issues.
• Note that Tier III technicians have the same
responsibility as Tier II technicians in reviewing the
work order and assessing the time already spent with
the customer so that the work is prioritized and time
management is sufficiently utilized.
• If it is at all possible, the technician will work to solve
the problem with the customer as it may become
apparent that the Tier I and/or Tier II technicians
simply failed to discover the proper solution.
Level 4(L4)
• While not universally used, a fourth level often
represents an escalation point beyond the
organization.
• This is generally a hardware or software vendor.
• Within a corporate incident management system it is
important to continue to track incidents even when
they are being actioned by a vendor, and the Service
Level Agreement (SLA) may have specific
provisions for this.
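The tiered model above can be sketched as a simple routing rule: an issue is handled by the lowest tier capable of resolving it, and escalates upward otherwise. The tier capabilities and the numeric "difficulty" measure below are hypothetical, invented only to illustrate the structure:

```python
# Hypothetical sketch of multi-tier escalation: each tier can resolve
# issues up to a certain difficulty; anything harder is escalated.

TIERS = [
    ("L1", 1),  # basic customer issues, FAQ-style resolutions
    ("L2", 2),  # in-depth troubleshooting on a product or service
    ("L3", 3),  # expert analysis, research and development of fixes
    ("L4", 4),  # escalation beyond the organization (vendor)
]

def route(difficulty: int) -> str:
    """Return the lowest tier capable of resolving the issue."""
    for name, capability in TIERS:
        if difficulty <= capability:
            return name
    return "unresolved"  # beyond even vendor support

print(route(1))  # prints "L1"
print(route(3))  # prints "L3"
```

In a real help desk the "difficulty" test is a technician's judgment plus the review of the work order described above, but the escalation structure is the same.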
• A typical help desk has several functions. It provides
the users a single point of contact, to receive help on
various computer issues.
• The help desk typically manages its requests via help
desk software, such as an issue tracking system, that
allows them to track user requests with a unique
number. This can also be called a "Local Bug
Tracker" or LBT. There are many software
applications to support the help desk function.
• Some are targeted at enterprise-level help desks
(rather large) and some at departmental needs.
• In the mid-1990s, Middleton at Robert Gordon
University found through his research that many
organizations had begun to recognize that the real
value of their help desk(s) derives not solely from
their reactive response to users' issues but from the
help desk's unique position where it communicates
daily with numerous customers or employees.
• This gives the help desk the ability to monitor the
user environment for issues from technical problems
to user preferences and satisfaction.
• Such information gathered at the help desk can be
valuable for use in planning and preparation for other
units in IT.


• Large help desks have different levels to

handle different types of questions. The
first-level help desk is prepared to answer the most
commonly asked questions, or provide
resolutions that often belong in
an FAQ or knowledge base.
• Typically, an issue tracking system has been
implemented that allows a logging process to
take place at the onset of a call. If
the issue isn't resolved at the first-level, the
issue is escalated to a second, higher, level
that has the necessary resources to handle
more difficult calls.
• Organizations may have a third, higher level,
line of support which often deals with software
specific needs, such as updates and bug-fixes
that affect the client directly.
• Larger help desks have a person or team
responsible for managing the issues and are
commonly called queue managers or queue
supervisors. The queue manager is responsible
for the issue queues, which can be set up in
various ways depending on the help desk size
or structure.
• Typically, larger help desks have several teams
that are experienced in working on different
issues. The queue manager will assign an issue
to one of the specialized teams based on the
type of issue.
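The routing and escalation flow described above can be sketched in a few lines. This is a minimal illustration only; the class, team, and category names are invented, not taken from any particular help desk product.

```python
# Illustrative sketch of help-desk ticket routing and escalation.
# Team and category names are hypothetical examples.
from dataclasses import dataclass, field
from itertools import count

_ticket_ids = count(1)  # unique number per request, as an issue tracker would assign

@dataclass
class Ticket:
    summary: str
    category: str          # e.g. "desktop", "network", "server"
    level: int = 1         # every ticket starts at first-level support
    ticket_id: int = field(default_factory=lambda: next(_ticket_ids))

class QueueManager:
    """Assigns tickets to specialized teams and escalates unresolved ones."""
    TEAMS = {"desktop": "Desk side team",
             "network": "Network team",
             "server": "Server team"}

    def assign(self, ticket: Ticket) -> str:
        # Route by issue type; unknown categories stay at the first level.
        return self.TEAMS.get(ticket.category, "First-level help desk")

    def escalate(self, ticket: Ticket) -> Ticket:
        ticket.level += 1  # second (or third) level handles the harder calls
        return ticket

qm = QueueManager()
t = Ticket("Cannot map network share", "network")
print(qm.assign(t))           # Network team
print(qm.escalate(t).level)   # 2
```

The point of the sketch is only that the queue manager's decision is a lookup on the issue type, while escalation is a state change on the ticket itself.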
Desk side team
• The desk side team (sometimes known as "desktop
support") is responsible for the desktops, laptops,
and peripherals, such as PDAs.
• The help desk will assign the desktop team the
second level desk side issues that the first level was
not able to solve.
• They set up and configure computers for new users
and are typically responsible for any physical work
relating to the computers such as repairing software
or hardware issues and moving workstations to
another location.

Network team
• The network team is responsible for the network
software, hardware and infrastructure such
as servers, switches, backup systems and firewalls.
They are responsible for the network services such
as email, file, and security.
• The help desk will assign the network team issues
that are in their field of responsibility.
Server team
• The server team is responsible for most, if not all, of
the servers within the organization.
• This includes, but is not limited to, Active Directory,
Network Shares, Network Resources, Email accounts,
and all aspects of server software.

Other teams
• Some companies have a telecom team that is
responsible for the phone infrastructure such
as PBX, voicemail, VOIP, telephone
sets, modems and fax machines. They are
responsible for configuring and moving telephone
numbers, voicemail setup and configuration and are
assigned these types of issues from the help desk.
• Companies with custom application software may
also have an applications team, who are responsible
for development of any in-house software.
• The Applications team may be assigned problems
such as software bugs from the help desk.
• Requests for new features or capabilities to in-house
software that come through the help desk are also
assigned to applications groups.
• Not all of the help desk staff and supporting IT staff
are in the same location. With remote access
applications, technicians are able to solve many help
desk issues from another location or their home.
• There is a need for on-site support to physically work
on some help desk issues; however, help desks are
able to be more flexible with their remote support.
They can also audit workstations.

• Remote Desktop Services, formerly known
as Terminal Services, is one of the components
of Microsoft Windows (both server and client
versions) that allows a user to access applications
and data on a remote computer over a network,
using the Remote Desktop Protocol (RDP).
• Terminal Services is Microsoft's implementation
of thin-client terminal server computing, where
Windows applications, or even the entire desktop of
the computer running terminal services, are made
accessible to a remote client machine.
• The client can either be a fully-fledged computer,
running any operating system as long as the terminal
services protocol is supported, or
a barebones machine powerful enough to support
the protocol (such as Windows FLP).
• With terminal services, only the user interface of an
application is presented at the client.
• Any input to it is redirected over the network to the
server, where all application execution takes place.
• This is in contrast to app streaming systems,
like Microsoft Application Virtualization, in which the
applications, while still stored on a centralized
server, are streamed to the client on-demand and
then executed on the client machine.
• Microsoft changed the name from Terminal Services
to Remote Desktop Services with the release of
Windows Server 2008 R2 in October 2009.
• The new and enhanced architecture takes advantage
of virtualization and makes remote access a much
more flexible solution with new deployment scenarios.


• The server component of Remote Desktop Services
is Terminal Server (termdd.sys), which listens on TCP
port 3389. When an RDP client connects to this port,
it is tagged with a unique Session ID and associated
with a freshly spawned console session (Session 0:
keyboard, mouse and character-mode UI only).
• The login subsystem (winlogon.exe) and
the GDI graphics subsystem are then initiated, which
handle the job of authenticating the user and
presenting the GUI.
• These executables are loaded in a new session,
rather than the console session.
• When creating the new session, the graphics and
keyboard/mouse device drivers are replaced with
RDP-specific drivers: RdpDD.sys and RdpWD.sys.
RdpDD.sys is the display device driver; it captures
the UI rendering calls into a format that is
transmittable over RDP. RdpWD.sys acts as keyboard
and mouse driver; it receives keyboard and mouse
input over the TCP connection and presents them as
keyboard or mouse inputs.
• It also allows creation of virtual channels, which allow
other devices, such as disc, audio, printers, and COM
ports to be redirected, i.e., the channels act as
replacement for these devices.
• The channels connect to the client over the TCP
connection; as the channels are accessed for data,
the client is informed of the request, which is then
transferred over the TCP connection to the
Terminal Server.

• Terminal Server is the server component of
Terminal Services. It handles the job of
authenticating clients, as well as making the
applications available remotely.
• It is also entrusted with the job of restricting the
clients according to the level of access they have.
The Terminal Server respects the configured
software restriction policies, so as to restrict the
availability of certain software to only a certain group
of users.
• The remote session information is stored in a
specialized directory, called the Session
Directory, which is stored at the server. Session
directories are used to store state information about
a session, and can be used to resume interrupted sessions.
• The terminal server also has to manage these
directories. Terminal Servers can be used in
a cluster as well.
• In Windows Server 2008, it has been significantly
overhauled. While logging in, if the user logged on to
the local system using a Windows Server
Domain account, the credentials from the same sign-
on can be used to authenticate the remote session.
• However, this requires Windows Server 2008 to be
the terminal server OS, while the client OS is limited
to Windows Server 2008, Windows
Vista and Windows 7.
• In addition, the terminal server can provide access to
only a single program, rather than the entire
desktop, by means of a feature named RemoteApp.
• Terminal Services Web Access (TS Web Access)
makes a RemoteApp session invocable from the web.
• Terminal Server is managed by the Terminal Server
Manager MMC snap-in.
• It can also be configured by using Group
Policy or WMI. It is, however, not available in client
versions of Windows OS, where the server is pre-
configured to allow only one session and enforce the
rights of the user account on the remote session,
without any customization.
Terminal service gateway

• The Terminal Services Gateway service

component, also known as TS Gateway,
can tunnel the Remote Desktop
Protocol session using a HTTPS channel.
• This increases the security of Remote Desktop
Services by encapsulating the session
with Transport Layer Security (TLS).
• This also allows the option to use Internet
Explorer as the RDP client.
• This feature was introduced in the Windows
Server 2008 and Windows Home
Server products.
• It is important to note that, at the time of writing
(Jan 2010), there is no support for the Mac OS client
to connect through a Terminal Services Gateway.

Remote desktop connection

• Remote Desktop Connection (RDC, also
called Remote Desktop, formerly known
as Microsoft Terminal Services Client, or mstsc)
is the client application for Remote Desktop Services.
It allows a user to remotely log in to a networked
computer running the terminal services server.
• RDC presents the desktop interface of the remote
system as if it were accessed locally. With version
6.0, if the Desktop Experience component is installed
on the remote server, the chrome of the remote
applications will resemble that of local applications,
rather than the remote desktop's.
• In this scenario, the remote applications will use
the Aero theme if a Windows Vista machine running
Aero is connected to the server.
• Later versions of the protocol also support rendering
the UI in full 24 bit color, as well as resource
redirection for printers, COM ports, disk drives, mice
and keyboards.
• With resource redirection, remote applications are
able to use the resources of the local computer.
• Audio is also redirected, so that any sounds
generated by a remote application are played back
at the client system.
• In addition to the regular username/password
authorization for the remote session, RDC also
supports using smart cards for authorization.
• With RDC 6.0, the resolution of a
remote session can be set independently of the
settings at the remote computer.
• In addition, a remote session can also span multiple
monitors at the client system, independent of the
multi-monitor settings at the server.

Remote Desktop Connection

Remote applications

• RemoteApp (or TS RemoteApp) is a special mode
of Remote Desktop Services, available only in
Remote Desktop Connection 6.1 and above
(with Windows Server 2008 being the RemoteApp
server), where a remote session connects to a
specific application only, rather than the entire
Windows desktop.
• The RDP 6.1 client ships with Windows XP SP3 (and
as update KB952155 for Windows XP SP2 users),
Windows Vista SP1, and Windows Server 2008.
• The UI for the RemoteApp is rendered in a window
over the local desktop, and is managed like any
other window for local applications.
• The end result of this is that remote applications
behave largely like local applications.
• The task of establishing the remote session, as well
as redirecting local resources to the remote
application, is transparent to the end user.
• Multiple applications can be started in a single
RemoteApp session, each with their own windows.
• A RemoteApp can be packaged either as a .rdp file or
distributed via an .msi Windows Installer package.
• When packaged as an .rdp file (which contains the
address of the RemoteApp server, authentication
schemes to be used, and other settings), a
RemoteApp can be launched by double-clicking the
file.

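An .rdp file is plain text in a `key:type:value` format. A minimal RemoteApp fragment might look like the following; the server address and application alias here are placeholder values, and real deployments carry many more settings.

```
full address:s:remoteapp.example.com
remoteapplicationmode:i:1
remoteapplicationprogram:s:||Notepad
remoteapplicationname:s:Notepad
authentication level:i:2
```

Double-clicking such a file launches the RDC client, which connects to the named server and presents only the specified application.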
Information sharing & security

• Information sharing & security is the practice of
making data used for scholarly research available to
other investigators, and of protecting information and
information systems from unauthorized access.
• Many funding agencies, institutions, and publication
venues have policies regarding data sharing because
transparency and openness are considered by many
to be part of the scientific method.
• A great deal of scientific research is not subject to
data sharing requirements, and many of these
policies have liberal exceptions.
• In the absence of any binding requirement, data
sharing is at the discretion of the scientists
themselves. In addition, in certain situations
agencies and institutions prohibit or severely limit
data sharing to protect proprietary interests, national
security, and patient/victim confidentiality.
• Information sharing & security (especially of
photographs and graphic descriptions of animal
research) may also be restricted to protect
institutions and scientists from misuse of data for
political purposes by animal rights extremists.
• The terms information sharing & security, computer
security and information assurance are frequently,
and incorrectly, used interchangeably. These fields
are often interrelated and share the common goals of
the confidentiality, integrity and availability of
information; however, there are some subtle
differences between them.
• These differences lie primarily in the approach to the
subject, the methodologies used, and the areas of
concentration. Information security is concerned with
the confidentiality, integrity and availability of data
regardless of the form the data may take: electronic,
print, or other forms.
• Computer security can focus on ensuring the
availability and correct operation of a computer
system without concern for the information stored or
processed by the computer.
• Governments, military, corporations, financial
institutions, hospitals, and private businesses amass
a great deal of confidential information about their
employees, customers, products, research, and
financial status. Most of this information is now
collected, processed and stored on
electronic computers and transmitted
across networks to other computers.

• For over twenty years, information security has held
confidentiality, integrity and availability (known as
the CIA triad) to be the core principles of
information security.
• There is continuous debate about extending this
classic trio. Other principles, such as accountability,
have sometimes been proposed for addition; it has
been pointed out that issues such as non-repudiation
do not fit well within the three core concepts, and as
regulation of computer systems has increased
(particularly amongst the Western nations), legality
is becoming a key consideration for practical
security installations.
• In 2002, Donn Parker proposed an alternative model
for the classic CIA triad that he called the six atomic
elements of information.

Confidentiality is the term used to prevent the disclosure
of information to unauthorized individuals or systems.
For example, a credit card transaction on the Internet
requires the credit card number to be transmitted from
the buyer to the merchant and from the merchant to
a transaction processing network.
The system attempts to enforce confidentiality by
encrypting the card number during transmission, by
limiting the places where it might appear (in databases,
log files, backups, printed receipts, and so on), and by
restricting access to the places where it is stored.
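As a toy illustration of the confidentiality goal, the sketch below makes the card number unreadable without a shared key. This XOR construction is NOT production cryptography; real systems use TLS or a vetted library, and all values here are invented examples.

```python
# Toy confidentiality sketch: the card number is unreadable in transit
# without the key. NOT real cryptography -- illustration only.
import hashlib
from itertools import count

def keystream(key: bytes, length: int) -> bytes:
    # Derive a deterministic pseudo-random byte stream from the key.
    out = b""
    for block in count():
        out += hashlib.sha256(key + block.to_bytes(8, "big")).digest()
        if len(out) >= length:
            return out[:length]

def xor_cipher(data: bytes, key: bytes) -> bytes:
    # XOR is its own inverse, so the same call encrypts and decrypts.
    ks = keystream(key, len(data))
    return bytes(a ^ b for a, b in zip(data, ks))

card = b"4111 1111 1111 1111"        # a well-known test card number
key = b"shared secret key"           # hypothetical shared key
ciphertext = xor_cipher(card, key)   # unreadable without the key
assert xor_cipher(ciphertext, key) == card  # decryption restores it
```

The design point mirrors the bullet above: the plaintext appears only at the endpoints that hold the key, while every intermediate system sees only ciphertext.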
• In information security, integrity means that data
cannot be modified without authorization. This is not
the same thing as referential integrity in databases.
• Integrity is violated when an employee accidentally
or with malicious intent deletes important data files,
when a computer virus infects a computer, when an
employee is able to modify his own salary in a payroll
database, when an unauthorized user vandalizes a
web site, when someone is able to cast a very large
number of votes in an online poll, and so on.
• There are many ways in which integrity could be
violated without malicious intent. In the simplest
case, a user on a system could mis-type someone's
address.
• On a larger scale, if an automated process is not
written and tested correctly, bulk updates to a
database could alter data in an incorrect way,
leaving the integrity of the data compromised.
• Information security professionals are tasked with
finding ways to implement controls that prevent
errors of integrity.
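A cryptographic hash is one common integrity control: any change to a record, accidental or malicious, changes the digest. A minimal sketch, with an invented payroll record as the example data:

```python
# Integrity-check sketch: a SHA-256 digest detects any modification
# to a record. The record fields are hypothetical.
import hashlib

record = b"employee=jsmith;salary=42000"
digest = hashlib.sha256(record).hexdigest()   # stored alongside the record

# Later, re-hash and compare to detect tampering.
tampered = b"employee=jsmith;salary=99000"    # e.g. unauthorized salary edit
print(hashlib.sha256(record).hexdigest() == digest)    # True: unmodified
print(hashlib.sha256(tampered).hexdigest() == digest)  # False: modified
```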

• For any information system to serve its purpose, the
information must be available when it is needed.
• This means that the computing systems used to
store and process the information, the security
controls used to protect it, and the communication
channels used to access it must be functioning
correctly.
• In computing, e-Business and information security it
is necessary to ensure that the data, transactions,
communications or documents (electronic or
physical) are genuine.
• It is also important for authenticity to validate that
both parties involved are who they claim they are.
• In law, non-repudiation implies one's intention to
fulfill their obligations to a contract. It also implies
that one party of a transaction cannot deny having
received a transaction nor can the other party deny
having sent a transaction.
• Electronic commerce uses technology such as digital
signatures and encryption to establish authenticity
and non-repudiation.
Risk management
• Risk management is the process of identifying
vulnerabilities and threats to the information
resources used by an organization in achieving
business objectives, and deciding what
countermeasures, if any, to take in reducing risk to
an acceptable level, based on the value of the
information resource to the organization."
• There are two things in this definition that may need
some clarification.
• First, the process of risk management is an ongoing
iterative process. It must be repeated indefinitely.
The business environment is constantly changing
and new threats and vulnerability emerge every day.
• Second, the choice of countermeasures (controls)
used to manage risks must strike a balance between
productivity, cost, effectiveness of the
countermeasure, and the value of the informational
asset being protected.
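One informal way to strike that balance is the annualized loss expectancy (ALE) model. This is a common textbook approach rather than something prescribed by the text above, and all figures below are invented for illustration:

```python
# Hedged sketch of a common informal risk model:
#   ALE = single loss expectancy x annual rate of occurrence.
# A countermeasure is worth taking only if the risk it removes
# exceeds its cost. All numbers are hypothetical.
def ale(single_loss: float, annual_rate: float) -> float:
    return single_loss * annual_rate

ale_before = ale(50_000, 0.30)   # expected yearly loss without the control
ale_after = ale(50_000, 0.05)    # expected yearly loss with the control
control_cost = 8_000             # yearly cost of the countermeasure

net_benefit = ale_before - ale_after - control_cost
print(round(net_benefit, 2))     # 4500.0 -> the control is worthwhile
```

Because the environment and threats change continuously, the bullet's point about iteration applies here too: these figures have to be re-estimated periodically, not computed once.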

• The ISO/IEC 27002:2005 Code of practice for
information security management recommends the
following be examined during a risk assessment:

o security policy,
o organization of information security,
o asset management,
o human resources security,
o physical and environmental security,
o communications and operations management,
o access control,
o information systems acquisition, development and
maintenance,
o information security incident management,
o business continuity management, and
o compliance.

• Information security uses cryptography to transform
usable information into a form that renders it
unusable by anyone other than an authorized user;
this process is called encryption.
• Information that has been encrypted (rendered
unusable) can be transformed back into its original
usable form by an authorized user, who possesses
the cryptographic key, through the process of
decryption.
• Cryptography is used in information security to
protect information from unauthorized or accidental
disclosure while the information is in transit (either
electronically or physically) and while information is
in storage.
• Cryptography provides information security with
other useful applications as well including improved
authentication methods, message digests, digital
signatures, non-repudiation, and encrypted network
communications. Older, less secure applications such
as Telnet and FTP are slowly being replaced with
more secure applications such as SSH that use
encrypted network communications.
• Wireless communications can be encrypted using
protocols such as WPA/WPA2 or the older (and less
secure) WEP. Wired communications (such as ITU-
T G.hn) are secured using AES for encryption
and X.1035 for authentication and key exchange.
Software applications such as GnuPG or PGP can be
used to encrypt data files and email.
• Cryptography can introduce security problems when
it is not implemented correctly.
• Cryptographic solutions need to be implemented
using industry accepted solutions that have
undergone rigorous peer review by independent
experts in cryptography.
• The length and strength of the encryption key is also
an important consideration. A key that is weak or too
short will produce weak encryption.
• The keys used for encryption and decryption must be
protected with the same degree of rigor as any other
confidential information. They must be protected
from unauthorized disclosure and destruction and
they must be available when needed.

• Information sharing & security must protect
information throughout the life span of the
information, from its initial creation through to its
final disposal.
• The information must be protected while in motion
and while at rest. During its life time, information
may pass through many different information
processing systems and through many different parts
of information processing systems.
• There are many different ways the information and
information systems can be threatened. To fully
protect the information during its lifetime, each
component of the information processing system
must have its own protection mechanisms.
• The building up, layering on and overlapping of
security measures is called defense in depth.
• The strength of any system is no greater than its
weakest link.
• Using a defense in depth strategy, should one
defensive measure fail, there are other defensive
measures in place that continue to provide
protection.
• Recall the earlier discussion about administrative
controls, logical controls, and physical controls.
• The three types of controls can be used to form the
basis upon which to build a defense-in-depth
strategy.
• With this approach, defense-in-depth can be
conceptualized as three distinct layers or planes laid
one on top of the other.

Remedy user

• Remedy User is a client software application
developed by BMC Software that allows a user to
interact with Action Request System based
applications.
• With this software a user can submit, modify, and
search records within Action Request System.
• It is a Windows-based application and must be
installed locally on the user's desktop.
• Remedy User allows users to record macros,
including user-defined variables, to automate
repetitive tasks. The client can connect to any
number of servers, and can push and pull
information from one server to the next.
• Action Request System workflow objects known as
Active Links (which are built by the administrator)
are cached by Remedy User and run locally on the
client.

Reduce complexity and make customer support, change,
asset, and request management a seamless, integrated
experience.
This comprehensive suite includes:

• A full set of IT service management applications that
share a native, purpose-built architecture and best-
practice process flows
• The industry’s leading service desk solution
• A closed-loop change and release process tied to
incidents and problems
• Self-service request catalog for IT, security, and
business needs
• Tracking of incident response times and service desk
performance against SLAs
• Asset and software license lifecycle and compliance
• Real-time performance and ROI metrics reporting

With BMC, you will:

• Prioritize support activities and focus on critical
business services
• Increase staff productivity and consistency by
automating processes, policies, and tasks
• Reduce MTTR and eliminate recurring incidents
through embedded problem and knowledge
management processes
• Reduce IT support costs through self-service call
deflection
BMC Remedy Mid-Tier

• BMC Remedy Mid-Tier is a server component in
the Action Request System architecture produced
by BMC Software.
• It is designed to serve ARS applications and related
items across the Internet and make them accessible
for web based clients.
• Mid-Tier is itself not a client, as a server component
it connects to an ARS server that contains the
applications and related workflow.
• It translates client requests, interprets responses
from the server, handles web service requests, and
runs server-side processes.
• It cannot work without an AR System server.
• Typical clients of Mid-Tier are web browsers and web
service based applications.
• An administrator does not have to design web pages
specifically for the web client, as forms served via the
Mid-Tier appear exactly as they were originally
designed in the AR System Administrator.

Login-remedy user
Remedy ticket
Closing of remedy ticket
• A domain name is an identification label that
defines a realm of administrative autonomy,
authority, or control on the Internet, based on
the Domain Name System(DNS).
• Domain names are used in various networking
contexts and application-specific naming and
addressing purposes.
• They are organized in subordinate levels
(subdomains) of the DNS root domain, which is
nameless.
• The first-level set of domain names are the top-level
domains (TLDs), including the generic top-level
domains (gTLDs), such as the prominent
domains com, net and org, and the country code
top-level domains (ccTLDs).
• Below these top-level domains in the DNS hierarchy
are the second-level and third-level domain names
that are typically open for reservation by end-users
that wish to connect local area networks to the
Internet, run web sites, or create other publicly
accessible Internet resources.
• The registration of these domain names is usually
administered by domain name registrars who sell
their services to the public.
• Domain names are also used as simple identification
labels to indicate ownership or control of a resource.
Such examples are the realm identifiers used in
the Session Initiation Protocol (SIP),
the DomainKeys used to verify DNS domains in e-
mail systems, and in many other Uniform Resource
Identifiers (URIs).

Parts of a domain name

A domain name consists of one or more parts, technically
called labels, that are conventionally concatenated, and
delimited by dots, such as example.com.

• The right-most label conveys the top-level domain;
for example, the domain name
www.example.com belongs to the top-level
domain com.
• The hierarchy of domains descends from the right to
the left label in the name; each label to the left
specifies a subdivision, or subdomain of the domain
to the right. For example: the label example specifies
a subdomain of the com domain, and www is a
subdomain of example.com. This tree of labels may
consist of 127 levels. Each label may contain up to
63 ASCII characters. The full domain name may not
exceed a total length of 253 characters. In practice,
some domain registries may have shorter limits.
• A hostname is a domain name that has at least one
associated IP address. For example, the domain
names www.example.com and example.com are also
hostnames, whereas the com domain is not.
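The label rules above can be checked mechanically. The sketch below is a simplified validator under stated assumptions: it splits a name into dot-delimited labels and enforces only the 63-character label limit and 253-character total limit, ignoring the allowed-character rules.

```python
# Simplified validator for the label rules described above:
# labels are dot-delimited, each 1-63 characters, and the full
# name may not exceed 253 characters. Character-set rules are
# deliberately omitted from this sketch.
def split_labels(name: str) -> list[str]:
    if len(name) > 253:
        raise ValueError("domain name exceeds 253 characters")
    labels = name.split(".")
    for label in labels:
        if not 1 <= len(label) <= 63:
            raise ValueError(f"bad label: {label!r}")
    return labels

print(split_labels("www.example.com"))   # ['www', 'example', 'com']
# The hierarchy descends right to left: 'com' is the TLD,
# 'example' a subdomain of com, 'www' a subdomain of example.com.
print(split_labels("www.example.com")[-1])  # com
```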
Top-level domains
• The top-level domains (TLDs) are the highest level of
domain names of the Internet. They form the DNS
root zone of the hierarchical Domain Name System.
• Every domain name ends in a top-level or first-
level domain label.
• When the Domain Name System was created in the
1980s, the domain name space was divided into two
main groups of domains.
• The country code top-level domains (ccTLD) were
primarily based on the two-character territory codes
of ISO-3166 country abbreviations. In addition, a
group of seven generic top-level domains (gTLD) was
implemented which represented a set
of categories of names and multi-organizations.
• These were the
domains GOV, EDU, COM, MIL, ORG,NET, and INT.

Second-level and lower level

• Below the top-level domains in the domain name
hierarchy are the second-level domain (SLD) names.
• These are the names directly to the left of .com, .net,
and the other top-level domains.
• Next are third-level domains, which are written
immediately to the left of a second-level domain.
There can be fourth- and fifth-level domains, and so
on, with virtually no limitation. An example of an
operational domain name with four levels of domain
labels is www.sos.state.oh.us.
• The www preceding the domains is the host name of
the World-Wide Web server. Each label is separated
by a full stop (dot). 'sos' is said to be a sub-domain of
'state.oh.us', and 'state' a sub-domain of 'oh.us', etc.
In general, subdomains are domains subordinate to
their parent domain. An example of very deep
subdomain ordering is an IPv6 reverse-resolution
DNS zone, e.g. the reverse DNS resolution domain
name for the IP address of a loopback interface, or
the localhost name.
• Second-level (or lower-level, depending on the
established parent hierarchy) domain names are
often created based on the name of a company
(e.g., bbc.co.uk), product or service
(e.g., gmail.com). Below these levels, the next
domain name component has been used to
designate a particular host server. The
hierarchical DNS labels or components of domain
names are separated in a fully qualified name by
the full stop(dot, .).

Internationalized domain name

• The character set allowed in the Domain Name
System initially prevented the representation of
names and words of many languages in their native
scripts or alphabets.
• ICANN approved the Punycode-based
Internationalized Domain Name (IDNA) system,
which maps Unicode strings into the valid DNS
character set. For example, københavn.eu is mapped
to xn--kbenhavn-54a.eu. Some registries have
adopted IDNA.
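Python's built-in "idna" codec implements this Punycode-based mapping (the older IDNA 2003 rules; registries using IDNA 2008 rely on a separate third-party `idna` package), and reproduces the example above:

```python
# The built-in "idna" codec maps Unicode labels to the DNS-safe
# Punycode form described above (IDNA 2003 rules).
encoded = "københavn.eu".encode("idna")
print(encoded)                   # b'xn--kbenhavn-54a.eu'
print(encoded.decode("idna"))    # københavn.eu
```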

Domain name space

• The domain name space consists of a tree of domain
names. Each node or leaf in the tree has zero or
more resource records, which hold information
associated with the domain name.
• The tree sub-divides into zones beginning at the root
zone.
• A DNS zone consists of a collection of connected
nodes authoritatively served by an authoritative
nameserver.
• Administrative responsibility over any zone may be
divided, thereby creating additional zones.
• Authority is said to be delegated for a portion of the
old space, usually in form of sub-domains, to another
nameserver and administrative entity.
• The old zone ceases to be authoritative for the new
zone.
Domain controller
• On Windows Server Systems, a domain
controller (DC) is a server that responds to security
authentication requests (logging in, checking
permissions, etc.) within the Windows Server domain.
• A domain is a concept introduced in Windows NT
whereby a user may be granted access to a number
of computer resources with the use of a single
username and password combination.
Windows NT
• In older versions of Windows such as Windows NT
server, one domain controller per domain was
configured as the Primary Domain Controller (PDC); all
other domain controllers were Backup Domain
Controllers (BDC).
• A BDC could authenticate the users in a domain, but all
updates to the domain (new users, changed passwords,
group membership, etc.) could only be made via the
PDC, which would then propagate these changes to all
BDCs in the domain.
• If the PDC was unavailable (or unable to communicate
with the user requesting the change), the update
would fail.
• If the PDC was permanently unavailable (e.g. if the
machine failed), an existing BDC could be promoted to
PDC.
• Because of the critical nature of the PDC, best practices
dictated that the PDC should be dedicated solely to
domain services, and not used for file/print/application
services that could slow down or crash the system.

Windows 2000

• Windows 2000 and later versions introduced Active

Directory ("AD"), which largely eliminated the
concept of primary and backup domain controllers in
favor of multi-master replication.
• However, there are still a number of roles that only
one domain controller can perform, called
the Flexible Single Master Operation (FSMO) roles.
Abuse & regulations

• Critics often claim abuse of administrative power
over domain names. Particularly noteworthy was
the VeriSign Site Finder system, which redirected all
unregistered .com and .net domains to a VeriSign
web page.
• For example, at a public meeting with VeriSign to air
technical concerns about SiteFinder, numerous
people, active in the IETF and other technical bodies,
explained how they were surprised by VeriSign's
changing the fundamental behavior of a major
component of Internet infrastructure, not having
obtained the customary consensus.
• SiteFinder, at first, assumed every Internet query
was for a website, and it monetized queries for
incorrect domain names, taking the user to
VeriSign's search site.
• Unfortunately, other applications, such as many
implementations of email, treat a lack of response to
a domain name query as an indication that the
domain does not exist, and that the message can be
treated as undeliverable.
• The original VeriSign implementation broke this
assumption for mail, because it would always resolve
an erroneous domain name to that of SiteFinder.

• Despite widespread criticism, VeriSign only
reluctantly removed it after the Internet Corporation
for Assigned Names and Numbers (ICANN)
threatened to revoke its contract to administer the
root name servers.
• ICANN published the extensive set of letters
exchanged, committee reports, and ICANN decisions.
• There is also significant disquiet regarding the United
States' political influence over ICANN.
• This was a significant issue in the attempt to create
a .xxx top-level domain and sparked greater interest
in alternative DNS roots that would be beyond the
control of any single country.
• Additionally, there are numerous accusations of
domain name "front running", whereby registrars,
when given whois queries, automatically register the
domain name for themselves. Recently, Network
Solutions has been accused of this.

Fictitious Domain Name

• A fictitious domain name is a domain name used in a
work of fiction or popular culture to refer to a domain
that does not actually exist.
• Domain names used in works of fiction have often
been registered in the DNS, either by their creators
or by cybersquatters attempting to profit from them.

Internet protocol
• An Internet Protocol (IP) address is a numerical
label that is assigned to devices participating in
a computer network that uses the Internet
Protocol for communication between its nodes.
• An IP address serves two principal functions: host or
network interface identification and
location addressing. Its role has been characterized
as follows: "A name indicates what we seek. An
address indicates where it is. A route indicates how
to get there."
• The designers of TCP/IP defined an IP address as
a 32-bit number and this system, known as Internet
Protocol Version 4 or IPv4, is still in use today.
• However, due to the enormous growth of
the Internet and the predicted depletion of available
addresses, a new addressing system (IPv6), using
128 bits for the address, was developed in 1995 and
standardized by RFC 2460 in 1998.
• Although IP addresses are stored as binary numbers,
they are usually displayed in human-readable
notations, such as 192.0.2.235 (for IPv4)
and 2001:db8:0:1234:0:567:1:1 (for IPv6).
• The Internet Protocol is used
to route data packets between networks; IP
addresses specify the locations of the source and
destination nodes in the topology of the routing
system. For this purpose, some of the bits in an IP
address are used to designate a subnetwork.
• The number of these bits is indicated in CIDR
notation, appended to the IP address after a slash,
e.g. 192.0.2.235/24.

• As the development of private networks raised the
threat of IPv4 address exhaustion, RFC 1918 set
aside a group of private address spaces that may be
used by anyone on private networks.
• They are often used with network address
translators to connect to the global public Internet.
• The Internet Assigned Numbers Authority (IANA),
which manages the IP address space allocations
globally, cooperates with five Regional Internet
Registries (RIRs) to allocate IP address blocks
to Local Internet Registries (Internet service
providers) and other entities.

Virtual IP address
• A virtual IP address (VIP or VIPA) is an IP
address that is not connected to a specific computer
or network interface card (NIC) on a computer.
• Incoming packets are sent to the VIP address, but
they are redirected to physical network interfaces.
• VIPs are mostly used for connection redundancy; a
VIP address may still be available if a computer or
NIC fails because an alternative computer or NIC
replies to connections.
IP versions
• Two versions of the Internet Protocol (IP) are in use:
IP Version 4 and IP Version 6. (See IP version
history for details.) Each version defines an IP
address differently.
• Because of its prevalence, the generic term IP
address typically still refers to the addresses defined
by IPv4.
IP version 4 addresses
• IPv4 uses 32-bit (4-byte) addresses, which limits
the address space to 4,294,967,296 (2^32) possible
unique addresses.
• IPv4 reserves some addresses for special purposes
such as private networks (~18 million addresses)
or multicast addresses (~270 million addresses).
• IPv4 addresses are usually represented in dot-
decimal notation (four numbers, each ranging from 0
to 255, separated by dots, e.g. 192.0.2.235).
• Each part represents 8 bits of the address and is
therefore called an octet. In less common cases of
technical writing, IPv4 addresses may be presented
in hexadecimal, octal, or binary representations.
• In most representations, each octet is converted
individually.
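The octet arithmetic described above can be sketched in Python. The address 192.0.2.235 (from the RFC 5737 documentation range) is used purely as an illustration:

```python
# Convert a 32-bit IPv4 address between integer and dot-decimal notation.
# 3221226219 is the integer form of 192.0.2.235 (an RFC 5737 documentation
# address, chosen only for illustration).

def int_to_dotted(n: int) -> str:
    """Split a 32-bit integer into four 8-bit octets, most significant first."""
    return ".".join(str((n >> shift) & 0xFF) for shift in (24, 16, 8, 0))

def dotted_to_int(s: str) -> int:
    """Recombine four octets into one 32-bit integer."""
    a, b, c, d = (int(part) for part in s.split("."))
    return (a << 24) | (b << 16) | (c << 8) | d

print(int_to_dotted(3221226219))          # 192.0.2.235
print(hex(dotted_to_int("192.0.2.235")))  # 0xc00002eb -- the same address in hexadecimal
```

The hexadecimal form shows the same four octets (0xC0, 0x00, 0x02, 0xEB) packed into one 32-bit value.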

IPv4 subnetting

History of IPv4 subnetting

• In the early stages of development of the Internet
Protocol, network administrators interpreted an IP
address in two parts: a network number portion and
a host number portion.
• The highest-order octet (most significant eight bits)
in an address was designated as the network
number, and the rest of the bits were called the rest
field or host identifier and were used for host
numbering within a network.

Classful IPv4 subnetting

• The early method soon proved inadequate as
additional networks developed that were
independent from the existing networks already
designated by a network number.
• In 1981, the Internet addressing specification was
revised with the introduction of classful network
architecture.
• Classful network design allowed for a larger number
of individual network assignments.
• The first three bits of the most significant octet of an
IP address were defined as the class of the address.
• Three classes (A, B, and C) were defined for universal
unicast addressing.
• Depending on the class derived, the network
identification was based on octet boundary segments
of the entire address.
• Each class used successively additional octets in the
network identifier, thus reducing the possible number
of hosts in the higher order classes (B and C).
Historical classful network architecture

Class  First octet  Range of     Network  Host   Number of         Number of hosts
       bits         first octet  ID       ID     networks          per network
A      0XXXXXXX     0 - 127      a        b.c.d  2^7 = 128         2^24 - 2 = 16,777,214
B      10XXXXXX     128 - 191    a.b      c.d    2^14 = 16,384     2^16 - 2 = 65,534
C      110XXXXX     192 - 223    a.b.c    d      2^21 = 2,097,152  2^8 - 2 = 254
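The class rules above can be expressed as a small sketch; the bit masks follow the 0/10/110 leading-bit patterns of the historical classes:

```python
def ipv4_class(first_octet: int) -> str:
    """Derive the historical address class from the leading bits of the first octet."""
    if first_octet & 0b10000000 == 0:            # 0xxxxxxx -> 0-127
        return "A"
    if first_octet & 0b11000000 == 0b10000000:   # 10xxxxxx -> 128-191
        return "B"
    if first_octet & 0b11100000 == 0b11000000:   # 110xxxxx -> 192-223
        return "C"
    return "D/E (multicast or reserved)"         # 224 and above

for octet in (10, 172, 192, 224):
    print(octet, "->", ipv4_class(octet))
```

Running this prints A, B, C and D/E for 10, 172, 192 and 224 respectively, matching the ranges in the table.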

IPv4 subnetting today - CIDR

• Although classful network design was a successful
developmental stage, it proved unscalable in the
face of the rapid expansion of the Internet, and in the
mid-1990s it was abandoned with the introduction
of Classless Inter-Domain Routing (CIDR) for the
allocation of IP address blocks and new rules for
routing IPv4 packets.
• CIDR is based on variable-length subnet masking
(VLSM) to allow allocation and routing based on
arbitrary-length prefixes.
• Today, remnants of classful network concepts
function only in a limited scope, as the default
configuration parameters of some network software
and hardware components (e.g. netmask), and in the
technical jargon used in network administrators'
discussions.

IPv4 private addresses

• Early network design, when global end-to-end
connectivity was envisioned for communications with
all Internet hosts, intended that IP addresses be
uniquely assigned to a particular computer or device.
• However, it was found that this was not always
necessary as private networks developed and public
address space needed to be conserved (IPv4 address
exhaustion).
• Computers not connected to the Internet, such as
factory machines that communicate only with each
other via TCP/IP, need not have globally unique IP
addresses.
• Three ranges of IPv4 addresses for private networks,
one range for each class (A, B, C), were reserved
in RFC 1918.
IANA-reserved private IPv4 network

Block                       Start        End              No. of addresses
24-bit (/8, 10.x.x.x,       10.0.0.0     10.255.255.255   16,777,216
single class A)
20-bit (/12, 172.16.x.x to  172.16.0.0   172.31.255.255   1,048,576
172.31.x.x, 16 class Bs)
16-bit (/16, 192.168.x.x,   192.168.0.0  192.168.255.255  65,536
256 class Cs)
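Python's standard ipaddress module already knows the RFC 1918 ranges, so membership can be checked directly; the sample addresses below are illustrative:

```python
import ipaddress

# One address from each RFC 1918 private block, plus a public address.
for addr in ("10.1.2.3", "172.16.0.1", "192.168.0.1", "8.8.8.8"):
    ip = ipaddress.ip_address(addr)
    print(addr, "private:", ip.is_private)
```

The first three report True (they fall in 10.0.0.0/8, 172.16.0.0/12 and 192.168.0.0/16); 8.8.8.8 reports False.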

IP version 6 addresses
• An illustration of an IP address (version 6),
in hexadecimal and binary.
• The rapid exhaustion of IPv4 address space, despite
conservation techniques, prompted the Internet
Engineering Task Force (IETF) to explore new
technologies to expand the Internet's addressing
capability. The permanent solution was deemed to
be a redesign of the Internet Protocol itself. This next
generation of the Internet Protocol, aimed to replace
IPv4 on the Internet, was eventually named Internet
Protocol Version 6 (IPv6) in 1995. The address size
was increased from 32 to 128 bits (16 octets),
which, even with a generous assignment of network
blocks, is deemed sufficient for the foreseeable
future.
• Mathematically, the new address space provides the
potential for a maximum of 2^128, or about 3.403 ×
10^38, unique addresses.
• The new design is not based on the goal to provide a
sufficient quantity of addresses alone, but rather to
allow efficient aggregation of subnet routing prefixes
to occur at routing nodes.
• As a result, routing table sizes are smaller, and the
smallest possible individual allocation is a subnet for
2^64 hosts, which is the square of the size of the entire
IPv4 Internet.
• At these levels, actual address utilization rates will
be small on any IPv6 network segment. The new
design also provides the opportunity to separate the
addressing infrastructure of a network segment—that
is the local administration of the segment's available
space—from the addressing prefix used to route
external traffic for a network.
• IPv6 has facilities that automatically change the
routing prefix of entire networks should the global
connectivity or the routing policy change without
requiring internal redesign or renumbering.
• The large number of IPv6 addresses allows large
blocks to be assigned for specific purposes and,
where appropriate, to be aggregated for efficient
routing. With a large address space, there is not the
need to have complex address conservation methods
as used in classless inter-domain routing (CIDR).

• Example of an IPv6 address: 2001:db8::ff00:42:8329
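As a sketch, the standard ipaddress module can show both the fully written-out 128-bit form and the compressed zero-run form of an IPv6 address; 2001:db8::/32 is the documentation prefix, used here purely for illustration:

```python
import ipaddress

# Parse a fully written-out IPv6 address from the documentation prefix.
addr = ipaddress.IPv6Address("2001:0db8:0000:0000:0000:ff00:0042:8329")

print(addr)           # compressed form: the longest run of zero groups becomes "::"
print(addr.exploded)  # full form, all eight 16-bit groups written out

# Both notations denote the same 128-bit address.
print(ipaddress.IPv6Address("2001:db8::ff00:42:8329") == addr)
```

The compressed and exploded strings are interchangeable; compression simply drops leading zeros and collapses one run of all-zero groups.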

IPv6 private addresses

• Just as IPv4 reserves addresses for private or internal

networks, there are blocks of addresses set aside in
IPv6 for private addresses.
• In IPv6, these are referred to as unique local
addresses (ULA). RFC 4193 sets aside the routing
prefix fc00::/7 for this block which is divided into
two /8 blocks with different implied policies (cf. IPv6)
• The addresses include a 40-bit pseudorandom
number that minimizes the risk of address collisions
if sites merge or packets are misrouted.
• Early designs (RFC 3513) used a different block for
this purpose (fec0::), dubbed site-local addresses.
However, the definition of what
constituted a site remained unclear, and the poorly
defined addressing policy created ambiguities for
routing. The address range specification was
abandoned and must no longer be used in new
systems.
• Addresses starting with fe80: — called link-local
addresses — are assigned only in the local link area.
• The addresses are generated usually automatically
by the operating system's IP layer for each network
interface. This provides instant automatic network
connectivity for any IPv6 host and means that if
several hosts connect to a common hub or switch,
they have an instant communication path via their
link-local IPv6 address.
• This feature is used extensively, and invisibly to most
users, in the lower layers of IPv6 network
administration (cf. Neighbor Discovery Protocol).

IP sub networks
• The technique of subnetting can operate in both IPv4
and IPv6 networks. The IP address is divided into two
parts: the network address and the host identifier.
• The subnet mask (in IPv4 only) or the CIDR prefix
determines how the IP address is divided into
network and host parts.
• The term subnet mask is only used within IPv4. Both
IP versions however use the Classless Inter-Domain
Routing (CIDR) concept and notation.
• In this, the IP address is followed by a slash and the
number (in decimal) of bits used for the network
part, also called the routing prefix.
• For example, an IPv4 address and its subnet mask
may be 192.0.2.1 and 255.255.255.0, respectively.
The CIDR notation for the same IP address and
subnet is 192.0.2.1/24, because the first 24 bits of
the IP address indicate the network and subnet.
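The network/host split and the mask-to-prefix correspondence can be sketched with the standard ipaddress module; 192.0.2.1/24 is an illustrative address from the RFC 5737 documentation range:

```python
import ipaddress

# An address plus its routing prefix, in CIDR notation.
iface = ipaddress.ip_interface("192.0.2.1/24")

print(iface.network)                 # 192.0.2.0/24 -- the network part
print(iface.network.netmask)         # 255.255.255.0 -- the equivalent subnet mask
print(iface.network.num_addresses)   # 256 -- total addresses in a /24
```

The /24 prefix and the 255.255.255.0 mask are two notations for the same split: the first 24 bits are network, the last 8 are host.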
• When a computer is configured to use the same IP
address each time it powers up, this is known as a
static IP address.
• In contrast, in situations when the computer's IP
address is assigned automatically, it is known as a
dynamic IP address.

Method of assignment
• Static IP addresses are manually assigned to a
computer by an administrator. The exact procedure
varies according to platform. This contrasts with
dynamic IP addresses, which are assigned either by
the computer interface or host software itself, as
in Zeroconf, or assigned by a server using Dynamic
Host Configuration Protocol (DHCP).
• Even though IP addresses assigned using DHCP may
stay the same for long periods of time, they can
generally change. In some cases, a network
administrator may implement dynamically assigned
static IP addresses.
• In this case, a DHCP server is used, but it is
specifically configured to always assign the same IP
address to a particular computer.
• This allows static IP addresses to be configured
centrally, without having to configure each computer
on the network manually.
• In the absence or failure of static or stateful (DHCP)
address configurations, an operating system may
assign an IP address to a network interface using
state-less auto-configuration methods, such
as Zeroconf.

Determination of static IP
Selection of static IP
Uses of dynamic addressing

• Dynamic IP addresses are most frequently assigned
on LANs and broadband networks by Dynamic Host
Configuration Protocol (DHCP) servers.
• They are used because they avoid the administrative
burden of assigning specific static addresses to each
device on a network.
• It also allows many devices to share limited address
space on a network if only some of them will be
online at a particular time.
• In most current desktop operating systems, dynamic
IP configuration is enabled by default so that a user
does not need to manually enter any settings to
connect to a network with a DHCP server.
• DHCP is not the only technology used to assign
dynamic IP addresses. Dialup and some broadband
networks use dynamic address features of the Point-
to-Point Protocol.

Sticky dynamic IP address

• A sticky dynamic IP address, or sticky IP, is an informal
term used by cable and DSL Internet access
subscribers to describe a dynamically assigned IP
address that seldom changes.
• The addresses are usually assigned with the DHCP
protocol. Since the modems are usually powered on
for extended periods of time, the address leases are
usually set to long periods and simply renewed upon
expiration.

Address autoconfiguration

• RFC 3330 defines the address block 169.254.0.0/16
for the special use of link-local addressing in IPv4
networks.
• In IPv6, every interface, whether using static or
dynamic address assignments, also receives a link-
local address automatically in the fe80::/10 subnet.
• These addresses are only valid on the link, such as a
local network segment or point-to-point connection,
that a host is connected to.
• These addresses are not routable and like private
addresses cannot be the source or destination of
packets traversing the Internet.
• When the link-local IPv4 address block was reserved,
no standards existed for mechanisms of address
autoconfiguration.
• Filling the void, Microsoft created an implementation
called Automatic Private IP Addressing (APIPA).
• Due to Microsoft's market power, APIPA has been
deployed on millions of machines and has, thus,
become a de facto standard in the industry.
• Many years later, the IETF defined a formal standard
for this functionality, RFC 3927, entitled Dynamic
Configuration of IPv4 Link-Local Addresses.
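Both link-local blocks (169.254.0.0/16 for IPv4 and fe80::/10 for IPv6) are recognized by Python's ipaddress module, as a quick sketch shows; the sample addresses are illustrative:

```python
import ipaddress

# APIPA-style IPv4 link-local address (169.254.0.0/16).
print(ipaddress.ip_address("169.254.12.34").is_link_local)  # True

# IPv6 link-local address (fe80::/10).
print(ipaddress.ip_address("fe80::1").is_link_local)        # True

# An ordinary (documentation-range) address is not link-local.
print(ipaddress.ip_address("192.0.2.1").is_link_local)      # False
```

This is the same classification a host's IP stack applies when deciding that a packet must stay on the local link.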

• Some infrastructure situations have to use static
addressing, such as when finding the Domain Name
System (DNS) host that will translate domain
names to IP addresses.
• Static addresses are also convenient, but not
absolutely necessary, to locate servers inside an
enterprise.
• An address obtained from a DNS server comes with
a time to live, or caching time, after which it should
be looked up to confirm that it has not changed.
• Even static IP addresses do change as a result of
network administration (RFC 2072).

Modifications to IP addressing

IP blocking and firewalls

• Firewalls are common on today's Internet. For
increased network security, they control access
to private networks based on the public IP address
of the client.
• Whether using a blacklist or a whitelist, the IP address
that is blocked is the perceived public IP address of
the client, meaning that if the client is using a proxy
server or NAT, blocking one IP address might block
many individual people.

IP address translation

• Multiple client devices can appear to share IP
addresses: either because they are part of a shared
hosting web server environment or because an
IPv4 network address translator (NAT) or proxy
server acts as an intermediary agent on behalf of its
customers, in which case the real originating IP
addresses might be hidden from the server receiving
a request.
• A common practice is to have a NAT hide a large
number of IP addresses in a private network. Only
the "outside" interface(s) of the NAT need to have
Internet-routable addresses.
• Most commonly, the NAT device maps TCP or UDP
port numbers on the outside to individual private
addresses on the inside.
• Just as a telephone number may have site-specific
extensions, the port numbers are site-specific
extensions to an IP address.
• In small home networks, NAT functions usually take
place in a residential gateway device, typically one
marketed as a "router". In this scenario, the
computers connected to the router would have
'private' IP addresses and the router would have a
'public' address to communicate with the Internet.
• This type of router allows several computers to share
one public IP address.
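The port-mapping idea can be illustrated with a toy sketch (not a real NAT implementation); the public address 203.0.113.5 (an RFC 5737 documentation address) and the starting port 40000 are arbitrary illustrative choices:

```python
# Toy sketch of NAT port mapping: the translator rewrites
# (private IP, private port) pairs to distinct ports on its single
# public address, and keeps a table so replies can be reversed.
PUBLIC_IP = "203.0.113.5"

nat_table = {}     # (private_ip, private_port) -> public-side port
next_port = 40000  # next free public-side port

def outbound(private_ip, private_port):
    """Map an outgoing private endpoint to (public IP, public port)."""
    global next_port
    key = (private_ip, private_port)
    if key not in nat_table:
        nat_table[key] = next_port
        next_port += 1
    return (PUBLIC_IP, nat_table[key])

def inbound(public_port):
    """Reverse-map a public-side port back to the private endpoint."""
    for key, port in nat_table.items():
        if port == public_port:
            return key
    raise KeyError("no mapping for this port")

print(outbound("192.168.1.10", 51000))  # ('203.0.113.5', 40000)
print(inbound(40000))                   # ('192.168.1.10', 51000)
```

The table is exactly the "site-specific extension" analogy from the text: many private endpoints hide behind one public address, distinguished only by port.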


• An IP address can be determined by using the
command-line tool ipconfig on Windows, or ifconfig
on Unix and Linux systems.
• On Linux, the iproute2 "ip" command is often more
appropriate.
• The IP address corresponding to a domain name can
be determined by using the
commands nslookup or dig.
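The same kind of name-to-address lookup can also be done programmatically; this sketch uses Python's socket module with "localhost" so it needs no Internet access, but any registered domain name works the same way:

```python
import socket

# Resolve a hostname to an IPv4 address, as nslookup or dig would from
# the command line. "localhost" always resolves locally (to 127.0.0.1).
print(socket.gethostbyname("localhost"))
```

Substituting a real domain name performs an actual DNS query through the system resolver.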
.pst files

In computing, a Personal Storage Table (.pst) is an
open file format used to store copies of messages,
calendar events, and other items
within Microsoft software such as Microsoft Exchange
Client, Windows Messaging, and Microsoft Outlook.
The open format is controlled by Microsoft, which
provides free specifications and irrevocable free
patent licensing.
The file format is also known as:
• Personal Folder File
• Off-line Storage Table (.ost)
• Off-line Folder File
• Personal Address Book (.pab)

• In Microsoft Exchange Server, the messages, the
calendar, and other data items are delivered to and
stored on the server. Microsoft Outlook stores these
items in personal-storage-table (.pst) or off-line-
storage-table (.ost) files that are located on the local
computer. Most commonly, the .pst files are used to
store archived items and the .ost files to maintain
off-line availability of the items.
• The size of these files no longer counts against the
size of the mailbox used; by moving files from a
server mailbox to .pst files, users can free storage
space on their mailservers. To use the .pst files from
another location the user needs to be able to access
the files directly over a network from his mail client.
While it is possible to open and use a .pst file from
over a network, this is unsupported.
• To reduce the size of .pst files, the user needs to
compact them. Password protection can be used to
protect the content of the .pst files. However,
Microsoft admits that the password adds very little
protection, due to the existence of commonly
available tools which can remove or simply bypass
the password protection.
• The .pst file format is fundamentally insecure for
multiple reasons. First, the password (actually a
weak CRC-32 integer representation of it, computed
without the initial and final XOR) is simply stored in
the .pst file, and Outlook checks to make sure that it
matches the user-specified password and refuses to
operate if there is no match. But the actual data is
still there and is readable by the libpst project code.
• Second, Microsoft (MS) offers three values for the
encryption setting: none, compressible,
and high. With the none setting, the .pst file
contains data in plaintext, and a simple text
editor will show the contents.
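To illustrate why a stored CRC-32-style check value is weak protection, the sketch below uses Python's standard zlib.crc32 (the stock CRC-32; per the text, Outlook's variant omits the initial and final XOR, but the principle is the same): verification compares 32-bit checksums, never the password itself, and CRC-32 is not a cryptographic hash.

```python
import zlib

# What gets stored in the file is only a 32-bit check value of the password.
stored_check = zlib.crc32(b"secret")
print(f"stored check value: {stored_check:#010x}")

# "Verification" is just a checksum comparison -- the data itself is untouched,
# and any input that happens to collide with the checksum would also pass.
print(zlib.crc32(b"secret") == stored_check)  # matches
print(zlib.crc32(b"wrong") == stored_check)   # does not match
```

Because only 32 bits are checked and CRC-32 is trivially invertible, a colliding "password" can be constructed directly, which is why tools that bypass .pst passwords are commonplace.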

• The .pst file format is supported by several Microsoft
client applications, including Microsoft Exchange
Client, Windows Messaging, and Microsoft Outlook.
• The .pst file format is an open format for which
Microsoft provides free specifications and irrevocable
free patent licensing through the Open Specification
Promise.
• The libpst project includes tools to convert .pst files
into open formats such as mbox and LDAP Data
Interchange Format. libpst is licensed under
the GPL and is now included in Fedora 10.
• MVCOM is a commercially licensed COM Component
that provides access to .pst files without
MAPI. PSTViewer is a commercial viewer for
accessing .pst file contents without Outlook or MAPI.
• As with any file, .pst files can become corrupted.
Prior to Outlook 2003, the default .pst file format was
ANSI and had a maximum size of 2 GB.
• If the .pst file were allowed to grow over 2 GB, the
file would become unusable. Microsoft provides
PST2GB, a tool that can be used to truncate a .pst file
that has grown over 2 GB.
• Microsoft also provides scanpst.exe, which can be used
to repair other .pst file-corruption issues. In Outlook
2003 and later, .pst files are created in the Unicode
format and have a default maximum size of 20 GB.
• There are tools to convert .pst files to other formats
or to upload them to online e-mail services such as Gmail.


• Outlook 2002 and earlier use ANSI (extended ASCII
with a codepage) encoding for their .pst and .ost files.
• This format has a maximum size of 2 GB (2^31 bytes)
and does not support Unicode. A file exceeding this
size is likely to give error messages, such as ".pst
has reached maximum size limit," and could become
corrupted.
• Although superseded, this format continues to be
supported by Microsoft Outlook 97 and later (98,
2000, 2002 (XP), 2003, 2007), by Internet Message
Access Protocol Version 4rev1 (IMAP4) accounts and
by HTTP accounts.
• From Outlook 2003 onward, the standard format
for .pst and .ost files is Unicode (UTF-16 little-endian).
The use of 64-bit pointers instead of the 32-bit
pointers of the earlier version made it possible to
overcome the 2 GB limit.
• Now, there is a user-definable maximum-file size up
to 20 GB. This format is supported by Microsoft
Outlook 2003 and later (2007)
• A file that is created in the personal-folders format in
Outlook 2003 or in Microsoft Office Outlook 2007 is
not compatible with earlier versions of Microsoft
Outlook and cannot be opened by using those older
versions.
• If this limit is reached or exceeded, retrieval of
the .pst file can be difficult if not impossible.
• The file is structured as a B-tree with 512-byte nodes
and leaves.
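The ANSI size cap follows directly from 31-bit file offsets, as a quick check confirms:

```python
# The ANSI .pst limit comes from 31-bit addressing: 2^31 bytes is exactly 2 GiB.
ansi_limit_bytes = 2 ** 31

print(ansi_limit_bytes)                   # 2147483648
print(ansi_limit_bytes == 2 * 1024 ** 3)  # True: exactly 2 GiB
```

The Unicode format's 64-bit pointers lift this structural ceiling entirely; the 20 GB figure is a configurable Outlook default, not a limit of the format itself.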

• Microsoft Entourage is Microsoft's email and personal
information program for Mac OS X.
• While superficially similar to Outlook, it is an entirely
different application, and uses a unique database
format which cannot be imported or exported,
though user data can be imported and exported to
and from another unique format called .rge (a bundle
consisting of many individual files plus metadata).
• Entourage 2008, the current version as of May 2010,
has no support for .pst files, though Microsoft's .pst
import tool exists for Entourage 2004; however, that
tool could only import .pst files from Outlook for Mac
2001, and not from any Windows version of Outlook.
• Entourage is being replaced by Outlook for Office
2011 for Intel Macs, which will be able to import
Outlook .pst files from Windows; however, data will be
stored as many individual files, rather than in a
single database such as a .pst file or the Entourage
database.
• Outlook for Mac 2001, which runs under Mac OS 9
or Classic Environment (where supported under OS
X) connects exclusively to Exchange servers, and to
this day is closer to its Windows counterpart than
Entourage is; it uses .pst files, and is compatible with
Windows Outlook 2000 and 2002, as such allowing
importing of Windows 'Outlook 97-2002'
compatible .pst files.
Merging of .pst files
CodeTwo .pst Ghostbuster
.pst upgrade
.pst compression
Import & export of .pst files
Local area network

• A local area network (LAN) is a computer
network covering a small physical area, like a home,
office, or small group of buildings, such as a school
or an airport.
• The defining characteristics of LANs, in contrast
to wide area networks (WANs), include their usually
higher data-transfer rates, smaller geographic area,
and lack of a need for leased telecommunication lines.
• ARCNET, Token Ring and other technologies have
been used in the past, but Ethernet over twisted
pair cabling, and Wi-Fi are the two most common
technologies currently in use.

• The development and proliferation of CP/M-based
personal computers from the late 1970s, and
then DOS-based personal computers from 1981,
meant that a single site began to have dozens or
even hundreds of computers.
• The initial attraction of networking these was
generally to share disk space and laser printers,
which were both very expensive at the time.
• There was much enthusiasm for the concept, and for
several years, from about 1983 onward, computer
industry pundits would regularly declare the coming
year to be "the year of the LAN."
• In practice, the concept was marred by a proliferation
of incompatible physical-layer and network-protocol
implementations, and a plethora of methods of
sharing resources.
• Typically, each vendor would have its own type of
network card, cabling, protocol, and network
operating system.
• A solution appeared with the advent of Novell
NetWare, which provided even-handed support for
dozens of competing card/cable types, and a much
more sophisticated operating system than most of its
competitors.

• Early LAN cabling had always been based on various
grades of coaxial cable, but IBM's Token Ring used
shielded twisted pair cabling of their own design, and
in 1984 StarLAN showed the potential of
simple Cat3 unshielded twisted pair, the same
simple cable used for telephone systems.
• This led to the development of 10Base-T (and its
successors) and structured cabling, which is still the
basis of most LANs today. In addition, fiber-optic
cabling is increasingly used.

Technical aspects
• Switched Ethernet is the most common Data Link
Layer implementation on local area networks. At
the Network Layer, the Internet Protocol (i.e. TCP/IP)
has become the standard. Smaller LANs generally
consist of one or more switches linked to each other
—often at least one is connected to a router, cable
modem, or ADSL modem for Internet access.
• Larger LANs are characterized by their use of
redundant links with switches using the spanning
tree protocol to prevent loops, their ability to
manage differing traffic types via quality of service
(QoS), and to segregate traffic with VLANs.
LAN messenger

• A LAN messenger is an instant messaging program
designed for use within a single local area
network (LAN).
• There are advantages using a LAN messenger over a
normal instant messenger. The LAN messenger runs
inside a company or private LAN, and so an active
Internet connection or a central server is not
required (P2P).
• Only people who are inside the firewall will have
access to the system.
• Communication data does not leave the LAN, and
also the system can not be spammed from the
outside (Darknet).
• Many LAN messengers offer basic functionality for
sending private messages, file transfer, chatrooms,
and graphical smileys.
• Windows Live Messenger is an instant
messaging client created by Microsoft that is
currently designed to work with Windows XP (32-bit
XP only), Windows Vista, Windows 7, Windows
Mobile/Windows CE, Xbox 360, BlackBerry
OS, iOS, Java ME, and S60 on Symbian OS 9.x.

• The client has been part of Microsoft's Windows
Live set of online services since 2005. It connects to
Microsoft's .NET Messenger Service.

• The client was first released as MSN Messenger on
July 22, 1999, and as Windows Live Messenger on
December 13, 2005.
• In June 2009, Microsoft reported the service attracted
over 330 million active users each month.

• Used to reset the user password or unlock the user
account.

• Windows Live Messenger uses the Microsoft Notification
Protocol (MSNP) over TCP (and optionally
over HTTP to deal with proxies) to connect to
the .NET Messenger Service—a service offered
on port 1863 of "messenger.hotmail.com."
• The current version is 18 (MSNP18), used by
Windows Live Messenger and other third-party
clients that have support for the protocol.
• The protocol is not completely secret; Microsoft
disclosed version 2 (MSNP2) to developers in 1999 in
an Internet Draft, but never released versions 8 or
higher to the public.
• The .NET Messenger Service servers currently only
accept protocol versions from 8 and higher, so the
syntax of new commands sent from versions 8 and
higher is only known by using packet
sniffers like Wireshark.
• This has been an easy task because - in comparison
to many other modern instant messaging protocols,
such as XMPP - the Microsoft Notification Protocol
does not provide any encryption and everything can
be captured easily using packet sniffers.
• The lack of proper encryption also makes
wiretapping friend lists and personal conversations a
trivial task, especially on unencrypted public Wi-Fi
networks.

Administrator account

• An administrator is a local account or a local security
group with complete and unrestricted access to
create, delete, and modify files, folders, and settings
on a particular computer.
• This is in contrast to other types of user accounts
that have been granted only specific permissions and
levels of access.
• An administrator account is used to make system
wide changes to the computer, such as:
• Creating or deleting user accounts on the computer
• Creating account passwords for other users on the
computer
• Changing others' account names, pictures,
passwords, and types
• Administrative rights are permissions granted by
administrators to users, allowing them to make such
changes. Without administrative rights, you cannot
perform many such system modifications, including
installing software or changing network settings.
• You need to know the administrative password to
your computer; otherwise, you won't be able to
modify files and settings, install programs, or fix
problems.
Admin rights

• In Windows 7, Vista, XP, 2000, and NT, the account
named "Administrator" has all possible rights, as
does everyone in the Administrator local security
group. Normal users have some minor administrative
rights (e.g., they can modify anything in their home
directories), but rights that affect the computer as a
whole are normally withheld. (Earlier versions of
Windows had no privileged or unprivileged accounts;
any user could modify anything on the computer.)
• Computer administrators cannot change computer
administrator accounts to a less-privileged type
unless there is at least one other user with a
computer administrator account type on that
computer. This ensures that there is always at least
one user with administrative rights.
• Ideally, the computer administrator account should
be used only to:
• Install, upgrade, repair, or back up the operating
system and components
• Install service packs (SPs)
• Configure critical operating system parameters (e.g.,
password policy, access control, audit policy, kernel
mode driver configuration)
• Take ownership of files that have become inaccessible
• Windows 7 and Vista include a feature called User
Account Control (UAC), which prompts you for your
administrator account name and password before
you perform actions requiring administrative rights
when you're logged into a less-privileged account.
User Account Control is enabled by default, and it is
recommended that you leave it enabled.

UNIX, Linux, BSD, Solaris, and Mac

• Unix computers and Unix-based operating systems
typically have one unrestricted account, normally
called "root" or the "super user".
• The root user has full access to all files and
directories on a Unix system, and many low-level
tasks must run as root. In addition to the root user,
some Unix implementations have a group of
administrative users, sometimes called the "wheel"
group.
• Members of such administrative groups do not have
full access to the operating system, but they can
escalate their privileges to root to perform certain
tasks.
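Whether a given account can escalate this way usually comes down to group membership. A minimal sketch for checking it from the command line (the group names below are assumptions for illustration; conventions vary, e.g. "wheel" on BSD and macOS, "sudo" or "admin" on many Linux distributions):

```shell
#!/bin/sh
# List the current user's groups and flag any conventional
# administrative group. Group names here are illustrative;
# consult /etc/sudoers or your OS documentation for the real policy.
groups=$(id -nG)
echo "groups: $groups"
for g in wheel sudo admin; do
  for member in $groups; do
    if [ "$member" = "$g" ]; then
      echo "member of administrative group: $g"
    fi
  done
done
```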
• Because the root user has such unrestricted access
to the computer, administrators typically do not log
into it or operate as root continuously. Instead, they
assume root-level access using the sudo command.
• At a command prompt, permitted users can prefix a
command with sudo; after entering their own
password, that one command runs with root
privileges. Alternatively, if administrators need to
operate for a period of time with root privileges, they
can enter sudo -s and their password at a command
prompt, and then work as root within that terminal
window for as long as they need to.
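In shell terms, the two styles above look like the following sketch (the service name is a placeholder; any privileged command would do):

```shell
#!/bin/sh
# One-off escalation: prefix a single command with sudo.
# You are prompted for *your own* password, not root's.
#   sudo systemctl restart sshd    # placeholder example command

# Extended escalation: start a root shell, work, then exit.
#   sudo -s
#   ...privileged commands here...
#   exit

# A script can also check whether it is already root before
# deciding to escalate; UID 0 is root on Unix systems.
if [ "$(id -u)" -eq 0 ]; then
  echo "already root"
else
  echo "not root: privileged commands need the sudo prefix"
fi
```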
• Normal users on a Unix system do not have access
to sudo and cannot perform system-related tasks.
However, they still have the ability to install some
software and customize their environment. Each user
has a home directory where he or she can save
documents, install programs, and maintain personal
settings.
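For example, a user with no sudo access can still install a tool under their own home directory and put it on their PATH. The sketch below uses the ~/.local layout, which is a common convention rather than a requirement:

```shell
#!/bin/sh
# Install a tiny "program" into the user's home directory --
# no administrative rights required, because everything stays
# under $HOME, which the user owns.
mkdir -p "$HOME/.local/bin"
cat > "$HOME/.local/bin/hello" <<'EOF'
#!/bin/sh
echo "hello from a home-installed tool"
EOF
chmod +x "$HOME/.local/bin/hello"

# Make the directory visible to the shell for this session.
PATH="$HOME/.local/bin:$PATH"
hello
```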

HCL Enterprise is a leading global technology and IT
enterprise. Since its inception in 1976 as one of the first
Indian "IT garage startups," Founder, Chairman and CEO
Shiv Nadar has led HCL Enterprise's impressive growth.
HCL Enterprise operates two major businesses. One is
the India-facing SI business operated by HCL
Infosystems, and the other is the global IT services
business operated by HCL Technologies.

HCL is a leading global technology and IT enterprise with
annual revenues of US$ 4.7 billion. The HCL Enterprise
comprises two companies listed in India: HCL
Technologies ( www.hcltech.com ) and HCL Infosystems.

The three-decade-old enterprise, founded in 1976, is one of
India's original IT garage start-ups. Its range of offerings
spans R&D and Technology Services, Enterprise and
Applications Consulting, Remote Infrastructure
Management, BPO services, IT Hardware, Systems
Integration, and Distribution of Technology and Telecom
products in India. The HCL team comprises 53,000
professionals of diverse nationalities, operating across 18
countries, including 360 points of presence in India. HCL
has global partnerships with several leading Fortune 1000
firms, including several IT and technology majors.
