
HCL COMNET SYSTEMS

&
SERVICES LTD

Lee Johnny.J

CEC

WINDOWS CLUSTERING
Microsoft Windows 2000 Server Operating System

Introduction:
Windows 2000 (also referred to as Win2K) is a preemptive,
interruptible, graphical and business-oriented operating system designed to work
with either uniprocessor or symmetric multi-processor computers. It is part of the
Microsoft Windows NT line of operating systems and was released on February 17,
2000. It was succeeded by Windows XP in October 2001 and by Windows Server
2003 in April 2003. It uses a hybrid kernel.

1.1 Server family features

The Windows 2000 server family consists of Windows 2000 Server, Windows 2000
Advanced Server and Windows 2000 Datacenter Server.

Common features:

a) Improvements to Windows Explorer (inclusion of a media player and MIDI support).


b) NTFS V.3.0: (disk quotas, file-system-level encryption, sparse files and
reparse points).
c) Encrypting File System: (The Encrypting File System (EFS) introduced
strong file system-level encryption to Windows. It allows any folder or drive
on an NTFS volume to be encrypted transparently by the user).
d) Basic and dynamic disk storage: (Windows 2000 introduced the Logical
Disk Manager for dynamic storage. All versions of Windows 2000 support
three types of dynamic disk volumes (along with basic disks): simple
volumes, spanned volumes and striped volumes).
e) Accessibility: Microsoft increased the usability of Windows 2000 over
Windows NT 4.0 for people with visual and auditory impairments and other
disabilities.
f) Languages, locales & Games.
g) System utilities: Microsoft Management Console (MMC), which is used to
create, save, and open administrative tools. Each of these is called a console,
and most allow an administrator to administer other Windows 2000
computers from one centralized computer.
h) Recovery Console: The Recovery Console is run from outside the installed
copy of Windows to perform maintenance tasks that can neither be run from
within it nor feasibly be run from another computer or copy of Windows 2000.

Windows 2000 Editions:

1. Windows 2000 Professional (client).
2. Windows 2000 Server.
3. Windows 2000 Advanced Server.
4. Windows 2000 Datacenter Server.

1. Windows 2000 Professional:

a) It was designed as the desktop operating system for businesses and power
users.
b) It is the client version of Windows 2000. It offers greater security and stability
than many of the previous Windows desktop operating systems.
c) It supports up to two processors, and can address up to 4 GB of RAM. The
system requirements are a Pentium processor of 133 MHz or greater, at least
32 MB of RAM, 700 MB of hard drive space, and a CD-ROM drive
(recommended: Pentium II, 128 MB of RAM, 2 GB of hard drive space, and
CD-ROM drive).

2. Windows 2000 Server:

a) The server SKUs share the same user interface as Windows 2000 Professional, but
contain additional components that let the computer perform server roles and
run infrastructure and application software.
b) A significant new component introduced in the server SKUs is Active
Directory, which is an enterprise-wide directory service based on LDAP.

c) Additionally, Microsoft integrated Kerberos network authentication, replacing
the often-criticized NTLM authentication system used in previous versions.
This also provided a purely transitive-trust relationship between Windows
2000 domains in a forest (a collection of one or more Windows 2000 domains
that share a common schema, configuration, and global catalog, being linked
with two-way transitive trusts).
d) Furthermore, Windows 2000 introduced a DNS server that allows dynamic
registration of IP addresses (dynamic updates).
e) Windows 2000 Server requires 128 MB of RAM and 1 GB hard disk space;
however requirements may be higher depending on installed components.

3. Windows 2000 Advanced Server

a) It is a variant of the Windows 2000 Server operating system designed for
medium-to-large businesses.
b) It offers clustering infrastructure for high availability and scalability of
applications and services, including main memory support of up to 8
gigabytes (GB) on Physical Address Extension (PAE) systems and the ability
to do 8-way SMP.
c) It supports TCP/IP load balancing and enhanced two-node server clusters
based on the Microsoft Cluster Server (MSCS) introduced in Windows NT Server 4.0
Enterprise Edition.
d) A limited number of copies of an IA-64 version, called Windows 2000 Advanced
Server, Limited Edition, were made available via OEMs.
e) System requirements are similar to those of Windows 2000 Server; however
they may need to be higher to scale to larger infrastructure.

4. Windows 2000 Datacenter Server:

a) It is a variant of Windows 2000 Server designed for large businesses that
move large quantities of confidential or sensitive data frequently via a central
server.
b) Like Advanced Server, it supports clustering, failover and load balancing.

c) A limited number of copies of an IA-64 version, called Windows 2000
Datacenter Server, Limited Edition, were made available via OEMs.
d) Its minimum system requirements are comparable to the other editions, but it
was designed to be capable of handling advanced, fault-tolerant and scalable
hardware, for instance computers with up to 32 CPUs and 64 GB of RAM, with
rigorous system testing and qualification, hardware partitioning, coordinated
maintenance and change control.

Microsoft Windows 2003 Server Operating System

Introduction:

Windows Server 2003 (also referred to as Win2K3) is a server operating
system produced by Microsoft. Introduced on April 24, 2003 as the successor to
Windows 2000 Server, it is considered by Microsoft to be the cornerstone of its
Windows Server System line of business server products. An updated version,
Windows Server 2003 R2, was released to manufacturing on 6 December 2005. Its
successor, Windows Server 2008, was released on February 4, 2008.

According to Microsoft, Windows Server 2003 is more scalable and delivers better
performance than its predecessor, Windows 2000.

Overview:

1. Released on April 24, 2003, Windows Server 2003 (which carries the version
number 5.2) is the follow-up to Windows 2000 Server, incorporating
compatibility and other features from Windows XP.
2. Windows Server 2003 includes compatibility modes to allow older applications to
run with greater stability. It was made more compatible with Windows NT 4.0
domain-based networking.

3. Windows Server 2003 brought in enhanced Active Directory compatibility, and
better deployment support, to ease the transition from Windows NT 4.0 to
Windows Server 2003 and Windows XP Professional.
4. Changes to various services include those to the IIS web server, which was
almost completely rewritten to improve performance and security, Distributed
File System, which now supports hosting multiple DFS roots on a single server,
Terminal Server, Active Directory, Print Server, and a number of other areas.

Features:

• Internet Information Services (IIS) v6.0 - A significantly improved version of IIS.
• Increased default security over previous versions, due to the built-in firewall
and having most services disabled by default.
• Significant improvements to Message Queuing.
• Manage Your Server - a role management administrative tool that allows an
administrator to choose what functionality the server should provide.
• Improvements to Active Directory, such as the ability to deactivate classes
from the schema, or to run multiple instances of the directory server (ADAM)
• Improvements to Group Policy handling and administration
• Improved disk management, including the ability to back up from shadows of
files, allowing the backup of open files.
• Improved scripting and command line tools, which are part of Microsoft's
initiative to bring a complete command shell to the next version of Windows.
• Support for a hardware-based "watchdog timer", which can restart the server
if the operating system does not respond within a certain amount of time.

Windows 2003 Server Editions:

1. Standard Edition
2. Enterprise Edition
3. Datacenter Edition
4. Web Edition


1. Standard Edition:

a) Standard Edition is aimed towards small to medium sized businesses.


b) Standard Edition supports file and printer sharing, offers secure Internet
connectivity, and allows centralized desktop application deployment.
c) It supports 4 GB of RAM and 4-way SMP (symmetric multiprocessing).
d) The 64-bit version of Windows Server 2003, Standard Edition is capable
of addressing up to 32 GB of RAM and it also supports Non-Uniform
Memory Access (NUMA), something the 32-bit version does not do.

2. Enterprise Edition:

a) Enterprise Edition is aimed towards medium to large businesses.


b) It supports 32 GB of RAM on 32-bit processors, 64 GB of RAM on 64-bit
processors, and 8-way SMP.
c) It is a full-function server operating system that supports up to eight
processors and provides enterprise-class features such as eight-node
clustering using Microsoft Cluster Server (MSCS) software and support for
up to 32 GB of memory through PAE (added with the /PAE boot string).
d) Enterprise Edition also comes in 64-bit versions for the Itanium and x64
architectures. Both 32-bit and 64-bit versions support Non-Uniform
Memory Access (NUMA).

3. Datacenter Edition:

a) Datacenter Edition is designed for infrastructures demanding high security and
reliability.
b) Windows Server 2003 is available for x86 32-bit, Itanium, and x64
processors.

c) It supports a maximum of 32 processors on 32-bit or 64-bit hardware. The
32-bit architecture limits memory addressability to 128 GB, while the 64-bit
versions support up to 2 TB.
d) Windows Server 2003, Datacenter Edition, also allows limiting processor and
memory usage on a per-application basis.
e) Windows Server 2003, Datacenter Edition has better support for Storage Area
Networks (SAN). It features a service which uses Windows sockets to emulate
TCP/IP communication over native SAN service providers, thereby allowing a
SAN to be accessed over any TCP/IP channel. With this, any application that
can communicate over TCP/IP can use a SAN, without any modification to the
application.
f) Datacenter Edition also supports 8-node clustering. Clustering increases
availability and fault tolerance of server installations, by distributing and
replicating the service among many servers.

4. Web Edition:

a) Web Edition is mainly for building and hosting Web applications, Web
pages, and XML Web services.
b) It is designed to be used primarily as an IIS 6.0 Web server and
provides a platform for rapidly developing and deploying XML Web
services and applications that use ASP.NET technology.
c) Windows Server 2003 Web Edition supports a maximum of 2
processors (SMP) with support for a maximum of 2GB of RAM.
d) Windows Server 2003, Web Edition cannot act as a domain controller.
e) Additionally, it is the only version of Windows Server 2003 that does
not impose a client-number limitation on Windows update services, as it
does not require Client Access Licenses.

Windows Server 2003 R2 Operating System

Windows Server 2003 R2, an update of Windows Server 2003, was
released to manufacturing on 6 December 2005. It is distributed on two CDs, with
one CD being the Windows Server 2003 SP1 CD. The other CD adds many optionally
installable features for Windows Server 2003. The R2 update was released for all x86
and x64 versions, but not for Itanium versions.
New features:

• Branch Office Server Management


o Centralized management tools for files and printers
o Enhanced Distributed File System (DFS) namespace management
interface
o More efficient WAN data replication with Remote Differential
Compression.
• Identity and Access Management
o Extranet Single Sign-On and identity federation
o Centralized administration of extranet application access
o Automated disabling of extranet access based on Active Directory
account information
o User access logging
o Cross-platform web Single Sign-On and password synchronization
using Network Information Service (NIS)
• Storage Management
o File Server Resource Manager (storage utilization reporting)
o Enhanced quota management
o File screening (limits which file types are allowed)
o Storage Manager for Storage Area Networks (SAN) (storage array
configuration)
• Server Virtualization
o A new licensing policy allows up to 4 virtual instances on Enterprise
Edition and Unlimited on Datacenter Edition
• Utilities and SDK for UNIX-Based Applications add-on, giving a
relatively full UNIX development environment.

o Base Utilities
o SVR-5 Utilities
o Base SDK
o GNU SDK
o GNU Utilities
o Perl 5
o Visual Studio Debugger Add-in.

Microsoft Windows 2008 Server Operating System

Windows Server 2008 is the most recent release of Microsoft's Windows
server line of operating systems. Released on February 27, 2008, it is the successor
to Windows Server 2003, released nearly five years earlier. Like Windows Vista,
Windows Server 2008 is built on the Windows NT 6.0 kernel.


Windows 2008 Server Editions:


1. Windows Server 2008 Standard Edition (x86 and x64)
2. Windows Server 2008 Enterprise Edition (x86 and x64)
3. Windows Server 2008 Datacenter Edition (x86 and x64)
4. Windows HPC Server 2008
5. Windows Web Server 2008 (x86 and x64)
6. Windows Storage Server 2008 (x86 and x64)
7. Windows Small Business Server 2008 (Codenamed "Cougar") (x64) for small
businesses
8. Windows Essential Business Server 2008 (Codenamed "Centro") (x64) for
medium-sized businesses [18]
9. Windows Server 2008 for Itanium-based Systems

Features:
1. Server Core

Windows Server 2008 includes an installation option called Server Core.


Server Core is a significantly scaled-back installation where no Windows Explorer
shell is installed. All configuration and maintenance is done entirely through
command line interface windows, or by connecting to the machine remotely using
Microsoft Management Console. However, Notepad and some control panel applets,
such as Regional Settings, are available.

2. Active Directory roles


Active Directory is expanded with identity, certificate, and rights
management services. Active Directory until Windows Server 2003 allowed network
administrators to centrally manage connected computers, to set policies for groups
of users, and to centrally deploy new applications to multiple computers. This role of
Active Directory is being renamed as Active Directory Domain Services (AD DS). A
number of other additional services are being introduced, including Active Directory
Federation Services (ADFS), Active Directory Lightweight Directory Services (AD
LDS), (formerly Active Directory Application Mode, or ADAM), Active Directory

Certificate Services (ADCS), and Active Directory Rights Management Services
(ADRMS).

3. Terminal Services:
Windows Server 2008 features major upgrades to Terminal Services.
Terminal Services now supports Remote Desktop Protocol 6.0. The most notable
improvement is the ability to share a single application over a Remote Desktop
connection, instead of the entire desktop. This feature is called Terminal Services
RemoteApp.

4. Windows Power Shell:


Windows Server 2008 is the first Windows operating system to ship with
Windows PowerShell, Microsoft's new extensible command-line shell and task-based
scripting technology. PowerShell is based on object-oriented programming and
version 2.0 of the Microsoft .NET Framework and includes more than 120 system
administration utilities, consistent syntax and naming conventions, and built-in
capabilities to work with common management data such as the Windows Registry,
certificate store, or Windows Management Instrumentation. PowerShell's scripting
language was specifically designed for IT administration, and can be used in place of
cmd.exe and Windows Script Host.

5. Self-healing NTFS:

In previous Windows versions, if the operating system detected corruption in
the file system of an NTFS volume, it marked the volume "dirty"; to correct errors on
the volume, it had to be taken offline. With self-healing NTFS, an NTFS worker
thread is spawned in the background which performs a localized fix-up of damaged
data structures, with only the corrupted files/folders remaining unavailable without
locking out the entire volume and needing the server to be taken down. The
operating system now features S.M.A.R.T. detection techniques to help determine
when a hard disk may fail. This feature was first presented within Windows Vista.

6. Hyper-V:
Hyper-V is a hypervisor-based virtualization system, forming a core part of
Microsoft's virtualization strategy. It virtualizes servers on an operating system's
kernel layer. It can be thought of as partitioning a single physical server into multiple
small computational partitions. Hyper-V includes the ability to act as a Xen
virtualization hypervisor host allowing Xen-enabled guest operating systems to run
virtualized. A beta version of Hyper-V ships with certain x86-64 editions of Windows
Server 2008. Microsoft released the final version of Hyper-V on June 26, 2008 as a
free download for these editions. Also, a standalone version of Hyper-V is planned.
This version will also only support the x86-64 architecture.

7. Windows System Resource Manager:


Windows System Resource Manager (WSRM) is being integrated into
Windows Server 2008. It provides resource management and can be used to control
how many resources a process or a user can use, based on business priorities.
Process Matching Criteria, which is defined by the name, type or owner of the
process, enforces restrictions on the resource usage by a process that matches the
criteria. CPU time, bandwidth that it can use, number of processors it can be run on,
and memory allocated to a process can be restricted. Restrictions can be set to be
imposed only on certain dates as well.

8. Server Manager:
Server Manager is a new roles-based management tool for Windows Server
2008. It is a combination of Manage Your Server and Security Configuration Wizard
from Windows Server 2003. Server Manager is an improvement of the Configure my
server dialog that launches by default on Windows Server 2003 machines. However,
rather than serve only as a starting point for configuring new roles, Server Manager
gathers together all of the operations users would want to conduct on the server,
such as setting up a remote deployment method or adding more server roles,
and provides a consolidated, portal-like view of the status of each role.
It is not currently possible to use the Server Manager remotely, but a client version
is planned.


Difference between Windows 2000 & 2003 Server:

Domains can be renamed or moved to a different level in an AD tree. Schema
attributes can be deactivated as well as added.
Any Domain Controller can cache the Global Catalogue thus preventing user
logon problems if no Global Catalogue server is available.
AD Replication can be set not to use compression.
Cross-Forest Transitive Trusts can be created.
Many administrative tools allow drag-and-drop and there are more
configuration and management wizards.
Most services are disabled by default in 2003 instead of enabled as in Windows
2000.
Support for IPv6. Ping and Tracert have extra IPv6 options.
Supports XML web services.
A new service called Volume Shadow Copy takes periodic snapshots of a hard
drive making it easier to take backups and recover deleted files. Users can
even be allowed to recover previous versions of files by themselves by using
the Previous Versions client.
A Global Catalogue server can be built from backup media instead of by
replication.
IPSec Nat Traversal - NAT-T - allows IPSec VPN clients and servers to pass
through NAT firewalls. This is likely to lead to the wider adoption of L2TP VPNs.
Distributed File System DFS has had significant improvements made to it. For
example DFS replicas can now be prestaged to avoid excessive initial file
replication. Multiple DFS Roots per server can be created (Enterprise and
Datacenter editions only).
Print queue redundancy can be achieved by storing them on multiple servers.
Active Directory Migration Tool v.2.0 can now migrate users, computers,
groups and passwords from an NT domain and can also perform the cross-
forest migration of objects.
Terminal Server allows clients to map their local drives and printers
A terminal server client can connect to the console session where a greater
range of administrative tasks can be performed.

Terminal Server Session Directory allows users to reconnect to the same
session on Terminal Server clusters (Enterprise and Datacenter editions only).
Remote Installation Services now works for servers.
Active Directory in Application mode (AD/AM). An application can have its own
separate instance of Active Directory which hasn't got any of the limitations
that the Network Operating System imposes on the main AD.
The backup and restore of DHCP settings has been incorporated into the DHCP
manager while in 2000 you had to change registry keys and move files
manually
The FTP server allows different default directories to be assigned to different
users.
There's a Security Configuration and Analysis tool to check a server's security
settings
DNS AD-integrated zones are stored in the Application Partition of a forest so
aren't replicated to domain controllers which aren't DNS servers.
Regedit.exe and Regedt32.exe have been amalgamated into a single utility
which takes the best features of each. Both files still exist but run the same
utility.
The DNS server has added flexibility with the new options of stub zones and
conditional forwarding.
Internet Information Server 6 has the ability to keep worker processes from
different websites and web applications separate so that if one application
crashes then other websites running on the same server remain unaffected.
Group Policy has been improved: Resultant Set of Policy tool, 220 new
templates, better folder redirection, WiFi access policy and a Group Policy
Management Console. The gpupdate utility replaces "secedit /refreshpolicy".
There are some new command-line administration tools which are useful for
automating operations on hundreds of users at once (see the command examples
at the end of this list).
New "Saved Queries" applet in Active Directory Users and Groups
Improvements to RRAS: PPPoE dial-on-demand for Broadband circuits,
Background Intelligent Transfer Service, NAT Traversal using UPnP, improved
management console.
Remote Storage. Infrequently used files are moved to on-line backup when disk
space becomes low.
A new boot.ini option called "secondary plex" allows booting when a software

RAID volume has failed
Task Manager has 2 extra tabs - one showing a graph of network usage per
adaptor and the other showing details of connected users.
Emergency Management Services Console Redirection. Redirect the screen
through a COM port so that a remote administrator can view the boot process.
Robocopy.exe - a Resource Kit tool to maintain identical folder trees in multiple
locations.
Clustering service supports Majority Node Set clusters which don't require
shared disk storage and it also supports multiple redundant paths to external
storage such as SANs. Cluster Service account password can be changed with
cluster on-line. (Enterprise and Datacenter editions only).
Automated System Recovery is a new backup option to facilitate a server being
rebuilt from scratch including recreating the partition structure.
Windows System Resource Manager allows limits to be placed on system
resources such as CPU and RAM usage on a per-process or per-application
basis (Enterprise and Datacenter editions only).
In the Windows 2000 Server OS we can create about 1 million users, whereas in
Windows Server 2003 we can create about 1 billion users.
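
Two of the new command-line capabilities mentioned in this list can be illustrated
from the command prompt. These are hedged sketches, not steps from this proposal;
the OU name Sales and the domain example.com are placeholders:

rem Refresh computer Group Policy immediately (replaces "secedit /refreshpolicy")
gpupdate /target:computer /force

rem Bulk-modify users by piping dsquery output into dsmod: force everyone in a
rem (hypothetical) OU to change their password at next logon
dsquery user "OU=Sales,DC=example,DC=com" -limit 0 | dsmod user -mustchpwd yes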

Difference between Workgroup and Domain

Workgroup


Sys 1

Sys 2

Sys 3

 All computers are peers; no computer has control over another computer.

 Each computer has a set of user accounts. To use any computer in the
workgroup, you must have an account on that computer.

 There are typically no more than ten to twenty computers.

 All computers must be on the same local network or subnet

 In a workgroup there is no centralized security

 No centralized administration

 Resources are shared directly by each computer

Domain

[Diagram: a domain - a server (Sys1) and client computers Sys 2, Sys 3 and Sys 4]

 One or more computers are servers. Network administrators use servers to
control the security and permissions for all computers on the domain. This
makes it easy to make changes because the changes are automatically made
to all computers.

 If you have a user account on the domain, you can log on to any computer on
the domain without needing an account on that computer.

 There can be hundreds or thousands of computers.

 The computers can be on different local networks.

 It is centralized and offers more security than a workgroup

 By using Group Policy you can restrict applications across the entire domain.


Active Directory Server 2003

Before Active Directory: Windows NT Server used a directory service with no
hierarchy, only a flat structure, with a PDC and BDCs (primary and backup domain
controllers).

Active Directory Introduction:

Active Directory is a directory service that provides a number of different
services relating to stored network resources such as user accounts, group accounts,
shared folders, printers and so on.

What we need to install Active Directory:

 An NTFS partition with enough free space (at least 250 MB)

 NIC, OS, DNS, and IP address

 Windows 2000 or Windows 2003 CD media, or at least the i386 folder

 The domain name that you want to use

Purpose of Active Directory (AD)

 Provides user logon Authentication services

 To organize and manage user accounts, computers, groups and network resources

 Enables authorized users to easily locate n/w resources

Active Directory Features or Benefits

 Centralized data store

 Integration with the Domain Name System. Active Directory clients such as
Windows 2000 Professional or Windows XP use DNS to locate domain
controllers

 Replication of Information

 Manageability

 Policy based Administration

 Kerberos Authentication

 Flexible install/uninstall

Active Directory Objects

Object: An object is a distinct, named set of attributes representing a network
resource. Users, computers, printers, servers, domains, trees, databases and
security policies are all organized as objects.

Attributes: Attributes in active directory differ according to objects.

For Example: A user is an object with attributes such as first name, last name and
Job title

A computer is an object with attribute such as name and location.

Object: User - Attributes: username, password, email address, phone number,
group membership

Object: Computer - Attributes: computer name and location


Active Directory Components

Active Directory consists of logical and physical components. These
components are used to develop the structure of our organization.

Active Directory Logical Structure

[Diagram: a Domain contains Organizational Units, which hold User Accounts,
Computer Accounts, Shared Folders and Printers]


Active Directory Logical Structure

[Diagram: a Forest made up of two Domain Trees, each containing several domains]

Logical Structure components:

1. Domain
2. Organizational Unit
3. Tree
4. Forest

Domain: It is the central unit of the logical structure. It is a collection of objects such
as users, computers, printers, shared folders and so on.

There are four domain functional levels:

1. Windows 2000 mixed

2. Windows 2000 native

3. Windows 2003 interim

4. Windows Server 2003

1. Windows 2000 mixed: Allows a Windows 2003 domain controller to communicate
with controllers in the domain running Windows NT4, Windows 2000 or Windows
2003.

When you configure a new Windows Server 2003 domain, the default domain
functional level is Windows 2000 mixed. Under this domain functional level, Windows
NT, 2000, and 2003 domain controllers are supported. However, certain features
such as group nesting, universal groups, and so on are not available.

2. Windows 2000 native: Allows a Windows 2003 domain controller to communicate
with controllers in the domain running Windows 2000 or Windows 2003.

Upgrading the functional level of a domain to Windows 2000 native should only be
done if there are no Windows NT domain controllers remaining on the network. By
upgrading to the Windows 2000 native functional level, additional features become
available, including group nesting, universal groups, SID history, and the ability to
convert between security groups and distribution groups.

3. Windows Server 2003 interim: Allows a Windows 2003 domain controller to
communicate with controllers in the domain running Windows NT4 or Windows 2003.

4. Windows Server 2003: Allows a Windows 2003 domain controller to communicate
with controllers in the domain running only Windows 2003.

What is the difference between Windows 2000 mixed mode and Windows Server 2003
interim mode?

Windows 2000 mixed supports Windows NT4, Windows 2000 and Windows 2003
domain controllers, whereas Windows Server 2003 interim supports only Windows NT4
and Windows 2003 domain controllers (it is used when upgrading an NT4 domain
directly to Windows Server 2003). Interim mode also supports POSIX (the Portable
Operating System Interface UNIX environment).


Organizational Unit:

 Organizes the domain objects into logical administrative groups


 OU contains objects such as users, computers, groups, application file shares
and so on.
 An OU is like a file folder: it holds important information

Delegation of control: Used to delegate Active Directory duties to
other administrators or users.

For Example: Suppose you are a senior administrator for an organization. You have
an OU called Accounts where all user, group and computer accounts are stored. The
creating of users, group, and computer accounts is not a difficult task in terms of
configuration.
Delegation of Control can be used to hand the administrative duties of creating user,
group, and computer accounts to a new administrative trainee.

With Delegation of Control, you can limit the tasks an administrator can perform until
he or she is technically capable of handling more complex tasks.

To do this, right-click the Accounts OU and click Delegate Control, then choose the
tasks to delegate, the type of Active Directory objects you want to control, and finally
the permissions you want to delegate.

Tree: A tree is a collection of domains which share the same namespace.

A domain contains domain controllers.

Example: the tree root India.com with child domains Chennai.india.com and
Mumbai.india.com.

Forest: A forest is one or more domains that share the same schema, site and
replication information, and searchable components (the Global Catalog).

There are three forest functional levels:

1. Windows 2000 default


2. Windows Server 2003 Interim
3. Windows Server 2003

1. Windows 2000 default: Allows the Windows 2003 domain controller to
communicate with domain controllers in the forest running Windows NT4, Windows
2000 and Windows 2003.

2. Windows Server 2003 Interim: Allows the Windows 2003 domain controller to
communicate with domain controllers in the forest running Windows NT4 and
Windows 2003.

3. Windows Server 2003: Allows the Windows 2003 domain controller to
communicate with domain controllers in the forest running Windows 2003.

Physical Components of Active Directory:

1. Site
2. Domain Controller

Site: A site is essentially a TCP/IP subnet. A site allows the administrator to
configure Active Directory access and the replication topology.

There are three possible site configurations.

1. Single site in a domain

2. A single site across multiple domains
3. Multiple sites in a single domain
Domain controller:
 A domain controller contains a copy of the local domain database.

 A domain can have many domain controllers, and each domain controller
maintains a copy of the domain's directory.

Functions of the domain controller:


 Storing a copy of all active directory information of a domain

 Managing information changes and replicating the changes to other domain
controllers in the same domain

 Replicating directory information for all the objects in the domain to each
other automatically

 Replicating important updates such as disabling of a user account
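
For example, the domain controllers holding a domain's directory can be listed from
the command prompt. This is a hedged sketch; nltest ships with the Windows Support
Tools on Windows Server 2003, and domain1.com is a placeholder domain name:

rem List all domain controllers registered for the domain
nltest /dclist:domain1.com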

New Active Directory Features

With the new Active Directory features in Standard Edition, Enterprise Edition, and
Datacenter Edition, more efficient administration of Active Directory is available to
you.

New features can be divided into those available on any domain controller running
Windows Server 2003, and those available only when all domain controllers of a
domain or forest are running Windows Server 2003.

Features Available If Any Domain Controller Is Running Windows Server 2003

The following list summarizes the Active Directory features that are enabled by
default on any domain controller running Windows Server 2003.

Multiple selection of user objects: Modify common attributes of multiple user
objects at one time.

Drag-and-drop functionality: Move Active Directory objects from container to
container by dragging and dropping one or more objects to a desired location in the
domain hierarchy. You can also add objects to group membership lists by dragging
and dropping one or more objects (including other group objects) onto the target
group.

Efficient search capabilities: Search functionality is object-oriented and provides
an efficient browse-less search that minimizes network traffic associated with
browsing objects.

Saved queries: Save commonly used search parameters for reuse in Active
Directory Users and Computers.

Active Directory command-line tools: Run new directory service commands for
administration scenarios.

Selective class creation: Create instances of specified classes in the base schema
of a Windows Server 2003 forest. You can create instances of several common
classes, including: country or region, person, organizationalPerson, groupOfNames,
device, and certificationAuthority.

InetOrgPerson class: The inetOrgPerson class has been added to the base schema
as a security principal and can be used in the same manner as the user class. The
userPassword attribute can also be used to set the account password.

Application directory partitions: Configure the replication scope for
application-specific data among domain controllers running Standard Edition,
Enterprise Edition, and Datacenter Edition. For example, you can control the
replication scope of Domain Name System (DNS) zone data stored in Active Directory
so that only specific domain controllers in the forest participate in DNS zone
replication.

Add additional domain controllers to existing domains using backup media:
Reduce the time it takes to add an additional domain controller in an existing domain
by using backup media.

Universal group membership caching: Prevent the need to locate a global
catalog across a wide area network (WAN) during logons by storing user universal
group memberships on an authenticating domain controller.

Features Available When All Domain Controllers Are Running Windows Server 2003

New domain- or forest-wide Active Directory features can be enabled only when all
domain controllers in a domain or forest are running Windows Server 2003 and the
domain functionality or forest functionality has been set to Windows Server 2003.

The following list summarizes the domain- and forest-wide Active Directory features
that can be enabled when either a domain or forest functional level has been raised
to Windows Server 2003.

Domain controller rename tool: Rename domain controllers without first
demoting them.

Domain rename: Rename any domain running Windows Server 2003 domain
controllers. You can change the NetBIOS name or DNS name of any child, parent,
tree- or forest-root domain.

Forest trusts: Create a forest trust to extend two-way transitivity beyond the scope
of a single forest to a second forest.

Forest restructuring: Move existing domains to other locations in the domain
hierarchy.

Defunct schema objects: Deactivate unnecessary classes or attributes from the
schema.

Dynamic auxiliary classes: Provides support for dynamically linking auxiliary
classes to individual objects, and not just to entire classes of objects. In addition,
auxiliary classes that have been attached to an object instance can subsequently be
removed from the instance.

Global catalog replication tuning: Preserves the synchronization state of the
global catalog when an administrative action results in an extension of the partial
attribute set. This minimizes the work generated as a result of a partial attribute set
extension by only transmitting attributes that were added.

Replication enhancements: Linked value replication allows individual group
members to be replicated across the network instead of treating the entire group
membership as a single unit of replication.

Raising Domain Functional Levels

The following describes the domain-wide features that are enabled at each domain
functional level:

Domain controller rename tool:
Windows 2000 mixed - Disabled; Windows 2000 native - Disabled; Windows Server 2003 - Enabled.

Update logon timestamp:
Windows 2000 mixed - Disabled; Windows 2000 native - Disabled; Windows Server 2003 - Enabled.

Kerberos KDC key version numbers:
Windows 2000 mixed - Disabled; Windows 2000 native - Disabled; Windows Server 2003 - Enabled.

User password on InetOrgPerson object:
Windows 2000 mixed - Disabled; Windows 2000 native - Disabled; Windows Server 2003 - Enabled.

Universal Groups:
Windows 2000 mixed - Enabled for distribution groups, disabled for security groups;
Windows 2000 native - Enabled, allows both security and distribution groups;
Windows Server 2003 - Enabled, allows both security and distribution groups.

Group Nesting:
Windows 2000 mixed - Enabled for distribution groups, disabled for security groups
(except for domain local security groups, which can have global groups as members);
Windows 2000 native - Enabled, allows full group nesting;
Windows Server 2003 - Enabled, allows full group nesting.

Converting Groups:
Windows 2000 mixed - Disabled, no group conversions allowed;
Windows 2000 native - Enabled, allows conversion between security groups and distribution groups;
Windows Server 2003 - Enabled, allows conversion between security groups and distribution groups.

The following table describes the forest-wide features that are enabled for
the corresponding forest functional level:

Forest Feature                       Windows 2000    Windows Server 2003
Global catalog replication tuning    Disabled        Enabled
Defunct schema objects               Disabled        Enabled
Forest trust                         Disabled        Enabled
Linked value replication             Disabled        Enabled
Domain rename                        Disabled        Enabled
Improved replication algorithms      Disabled        Enabled
Dynamic auxiliary classes            Disabled        Enabled
InetOrgPerson objectClass change     Disabled        Enabled

How to install and where is the database file of AD:

1. Click Start, point to Run and type "dcpromo".

2. The wizard windows will appear. Click Next.

3. In the Operating System Compatibility window, read the requirements
for the domain's clients and if you like what you see - press Next.


4. Choose Domain Controller for a new domain and click Next.

5. Choose create a new Domain in a new forest and click Next.

6. Enter the full DNS name of the new domain, for example - kuku.co.il -
this must be the same as the DNS zone you've created in step 3, and the same
as the computer name suffix you've created in step 1. Click Next.


This step might take some time because the computer is searching for the DNS
server and checking to see if any naming conflicts exist.

7. Accept the down-level NetBIOS domain name, in this case it's KUKU.
Click Next

8. Accept the Database and Log file location dialog box (unless you want to
change them, of course). The location of the files is by default %systemroot%\NTDS,
and you should not change it unless you have performance issues in mind. Click Next.

9. Accept the Sysvol folder location dialog box (unless you want to change it
of course). The location of the files is by default %systemroot%\SYSVOL, and
you should not change it unless you have performance issues in mind. This
folder must be on an NTFS v5.0 partition. This folder will hold all the GPO and
scripts you'll create, and will be replicated to all other Domain Controllers. Click
Next.


10. If your DNS server, zone and/or computer name suffix were not
configured correctly you will get the following warning:

This means the Dcpromo wizard could not contact the DNS server, or it did
contact it but could not find a zone with the name of the future domain. You
should check your settings. Go back to steps 1, 2 and 3. Click Ok.
You have an option to let Dcpromo do the configuration for you. If you want,
Dcpromo can install the DNS service, create the appropriate zone, configure it to
accept dynamic updates, and configure the TCP/IP settings for the DNS server IP
address.
To let Dcpromo do the work for you, select "Install and configure the DNS
server...".
Click Next.
Otherwise, you can accept the default choice and then quit Dcpromo and check
steps 1-3.

11. If your DNS settings were right, you'll get a confirmation window.

Just click Next.

12. Accept the Permissions compatible only with Windows 2000 or Windows
Server 2003 settings, unless you have legacy apps running on Pre-W2K servers.


13. Enter the Restore Mode administrator's password. In Windows Server 2003 this
password can be later changed via NTDSUTIL. Click Next.


14. Review your settings and if you like what you see - Click Next.


15. See the wizard going through the various stages of installing AD.
Whatever you do - NEVER click Cancel!!! You'll wreck your computer if you do. If
you see you made a mistake and want to undo it, you'd better let the wizard
finish and then run it again to undo the AD.


16. If all went well you'll see the final confirmation window. Click Finish.


17. You must reboot in order for the AD to function properly.

18. Click Restart now.

1.2 Checking the AD installation

You should now check to see if the AD installation went well.
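
A quick automated check, assuming the Windows Support Tools are installed on the
new domain controller, is the dcdiag utility; failed tests point at what to fix:

rem Run the full set of domain controller diagnostics in verbose mode
dcdiag /v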

1. First, see that the Administrative Tools folder has all the AD management
tools installed.

2. Run Active Directory Users and Computers (or type "dsa.msc" from the
Run command). See that all OUs and Containers are there.

3. Run Active Directory Sites and Services. See that you have a site named
Default-First-Site-Name, and that in it your server is listed.

4. Open the DNS console. See that you have a zone with the same name as
your AD domain (the one you've just created, remember? Duh...). See that
within it you have the 4 SRV record folders. They must exist.


(Screenshot: the four SRV record folders are present = Good)
If they don't (like in the following screenshot), your AD functions will be broken
(a good sign of that is the long time it took you to log on. The "Preparing
Network Connections" windows will sit on the screen for many moments, and
even when you do log on many AD operations will give you errors when trying to
perform them).


(Screenshot: the SRV record folders are missing = Bad)

This might happen if you did not manually configure your DNS server and let the
DCPROMO process do it for you.
Another reason for the lack of SRV records (and of all other records for that
matter) is the fact that you DID configure the DNS server manually, but you
made a mistake, either with the computer suffix name or with the IP address of
the DNS server (see steps 1 through 3).
To try and fix the problems first see if the zone is configured to accept dynamic
updates.

1. Right-click the zone you created, and then click Properties.

2. On the General tab, under Dynamic Update, click to select "Nonsecure
and secure" from the drop-down list, and then click OK to accept the change.

You should now restart the NETLOGON service to force the SRV registration.
You can do it from the Services console in Administrative tools:
Or from the command prompt type "net stop netlogon", and after it finishes type
"net start netlogon".
Let it finish, go back to the DNS console, click your zone and refresh it (F5). If
all is ok you'll now see the 4 SRV record folders.
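
The same SRV registration can also be verified from the command prompt. A sketch
using the example domain kuku.co.il from earlier; substitute your own AD domain name:

rem Query DNS for the domain controller locator SRV record
nslookup -type=SRV _ldap._tcp.dc._msdcs.kuku.co.il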

If the 4 SRV records are still not present double check the spelling of the zone in
the DNS server. It should be exactly the same as the AD Domain name. Also
check the computer's suffix (see step 1). You won't be able to change the
computer's suffix after the AD is installed, but if you have a spelling mistake
you'd be better off by removing the AD now, before you have any users, groups
and other objects in place, and then after repairing the mistake - re-running
DCPROMO.

5. Check the NTDS folder for the presence of the required files.

6. Check the SYSVOL folder for the presence of the required subfolders.

7. Check to see if you have the SYSVOL and NETLOGON shares, and their
location.
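
The shares in step 7 can be confirmed from the command prompt; SYSVOL and
NETLOGON should appear in the output:

rem List all shares on this server
net share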

If all of the above is ok, I think it's safe to say that your AD is properly installed.
Where is the database file stored in AD?
%systemroot%\NTDS\NTDS.dit
where .dit stands for directory information tree; the default size is 40 MB.
What is the sysvol folder in AD?
The Sysvol folder stores the server's copy of the domain's public files. Its contents,
such as Group Policy templates and logon scripts, are replicated to all domain
controllers in the domain.
How to Backup AD?

1.3 Method #1: Using NTBACKUP

1. Open NTBACKUP by either going to Run, then NTBACKUP and pressing Enter
or by going to Start -> Accessories -> System Tools.
2. If you are prompted by the Backup or Restore Wizard, I suggest you un-check
the "Always Start in Wizard Mode" checkbox, and click on the Advanced Mode
link.


3. Inside NTBACKUP's main window, click on the Backup tab.

4. Click to select the System State checkbox. Note you cannot manually select
components of the System State backup. It's all or nothing.

5. Enter a backup path for the BKF file. If you're using a tape device, make sure
NTBACKUP is aware and properly configured to use it.


6. Press Start Backup.

7. The Backup Job Information pops out, allowing you to configure a scheduled
backup job and other settings. For the System State backup, do not change
any of the other settings except the schedule, if so desired. When done, press
Start Backup.


8. After a few moments of configuration tasks, NTBACKUP will begin the backup
job.


9. When the backup is complete, review the output and close NTBACKUP.

Next, you need to properly label and secure the backup file/tape and if
possible, store a copy of it on a remote and secure location.

1.4 Method #2: Using the Command Prompt

You can use the command line version of NTBACKUP in order to perform backups
from the Command Prompt.
For example, to create a backup job named "System State Backup Job" that backs
up the System State data to the file D:\system_state_backup.bkf, type:
ntbackup backup systemstate /J "System State Backup Job" /F "D:\system_state_backup.bkf"
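
If you want the same job to run automatically, one option is the built-in at scheduler.
This is a hedged sketch; the 23:00 weeknight schedule, the job name and the D:\ path
are placeholders, not values from this proposal:

rem Back up the System State every weeknight at 23:00
at 23:00 /every:M,T,W,Th,F ntbackup backup systemstate /J NightlySystemState /F D:\system_state_backup.bkf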

What contents should be in the System State:


 Active directory (The database & log files)
 Sysvol (policies & scripts)
 Boot files
 Registry

 COM class Registration database
 CA (Certificate Authority)

What are the services running on DCs?

1. Netlogon
2. RPC
3. LDAP
4. Kerberos
5. Windows Time service

How to restore the active directory:


There are three types of restore
1. Non-Authoritative restore
2. Authoritative restore
3. Primary
Non-Authoritative restore:
 The default method of restoring an active directory is Non-Authoritative.
 This method will restore an active directory to the server in question and will
then receive all of the recent updates from its replication partners in the
domain.

 For example, a server that has a System State backup from two days ago
goes down. A restore of the two-day old active directory would be performed
and it would then be updated from the other domain controllers when the
next replication takes place. No other steps would be required
Authoritative Restore:
 Authoritative restores do not have to be made of the entire directory, as you
can choose to restore only parts of the directory
 When only parts of the active directory are restored, say an organizational
unit this information is pushed out to the remaining DCs and they are
overwritten.
 However, the rest of the directory's information is then replicated to the
restored DC's directory and it is updated
 An example of when an Authoritative restore would be used is when an
organizational unit is deleted but everything else in the active directory is
working as required
 If the environment only has a single domain controller, then there is never a
reason to perform an authoritative restore as there are no replication partners

Examples for Authoritative restore

E:\ntdsutil>ntdsutil
ntdsutil: authoritative restore
authoritative restore: restore object OU=bosses,DC=ourdom,DC=com

Opening DIT database... Done.

The current time is 06-17-05 12:34.12.


Most recent database update occurred at 06-16-05 00:41.25.
Increasing attribute version numbers by 100000.

Counting records that need updating...


Records found: 0000000012

Example of Non-Authoritative restore


 Restart the domain controller and press F8 during boot
 Choose the option Directory Services Restore Mode
 Log on to the server with the restore mode administrator account, then type
ntbackup.exe in the Run command

 Restore the system state backup then restart the server.

Primary restore: When all of the domain controllers have failed, we use a
primary restore.

Replication:
 What is Replication: Replication refers to propagating the changes made on a
domain controller in a forest to the other domain controllers.

 These changes are provided to domain controllers within and outside the site

 What kind of information should be Replicated?

 The directory information is logically partitioned into four categories that are
referred to as directory partitions.

 A directory partition is also known as naming context.

 The directory information stored in the ntds.dit

Schema partition:
Defines rules for object creation and modification for all objects in the forest.
Replicated to all domain controllers in the forest.

Configuration Partition:
Defines forest information including trees, domains, trust relationships and sites.
Replicated to all domain controllers in the forest.

Domain partition:
Has complete information about all domain objects such as OU’s, Groups and users.
Replicated only to domain controllers in the same domain

Application partition:
It is replicated only to specific domain controller

It provides redundancy, availability or fault tolerance.

For Example:
If you use a DNS that is integrated with the ADS you have two application partitions
for DNS zones.
1. Forest DNS zones
2. Domain DNS zones

Active Directory replicates the data within a site or between sites:
1. Intrasite Replication
2. Intersite Replication

Intrasite Replication:
The Knowledge Consistency Checker (KCC) is a process that runs on
a domain controller and generates the replication topology within a domain using a
ring structure.
The KCC checks the topology about every fifteen minutes.
If a domain controller in the ring fails or is removed, the KCC reconfigures the topology.

[Diagram: intrasite replication - DC1, DC2 and DC3 replicating in a ring]

Intersite Replication:
Intersite replication is configurable and can be scheduled.


[Diagram: intersite replication between SITE 1, SITE 2 and SITE 3]
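
Both intrasite and intersite replication can be inspected from the command prompt.
This is a sketch assuming repadmin from the Windows Support Tools is installed on
the domain controller:

rem Show this DC's replication partners and the result of the last replication attempt
repadmin /showrepl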

What is Multimaster replication?

It is a method of transferring data, or changes to data, across multiple servers; any
domain controller can accept a change and replicate it to the others.

Intersite transfers can be either IP or SMTP

Intersite transport (site link) properties:

Description: the sites in the link and not in the link

Cost: the relative cost used to choose between site links

Replication schedule: you can specify the interval between replications

IP: IP is recommended whenever possible for replication of Active Directory

SMTP: To use SMTP you need to install Certificate Services to encrypt and verify the
directory replication.
Trust

Definition: To allow users in one domain to access resources in another, AD uses
trusts. A trust is automatically created when a domain is created.

Trust relationship: A trust relationship connects two domains: the trusting
domain and the trusted domain.

The characteristics of a trust

Method of creation: you can create trust manually or automatically

Transitivity: Transitive trusts are not bounded by domains in the trust relationship
and non-transitive trusts are bounded by the domain in the trust relationship.

Example:

[Diagram: Domain A (DC1) with child domains B (DC2) and C (DC3)]

Domain A trusts B and C, and domain B trusts domains A and C: this is called
transitive.

Domain A trusts B only, with no relationship to C: this is called non-transitive.

Direction: Trusts can be unidirectional or bidirectional.

In a unidirectional trust, domain X trusts domain Y; when domains X and Y both trust
each other, the trust is called bidirectional.


[Diagram: users in Domain Y (the trusted domain) access files and a printer in
Domain X (the trusting domain)]

Types of trust relationship:

1. Tree-root trust
2. Parent-child trust
3. Shortcut trust
4. External trust
5. Forest trust
6. Realm trust

Trusting domain: The domain containing the resource called the trusting domain.

Trusted domain: The domain containing the user account is called the trusted
domain.

Tree-root trust: Tree-root trust is established automatically on adding a new tree-
root domain in a forest. The trust is bidirectional and transitive.

Parent-child trust: A parent-child trust relationship is automatically established on
adding a child domain to a tree. The trust is bidirectional and transitive.

Shortcut Trust: The administrator manually creates shortcut trusts between two
domains in a forest.

The user logon times can be improved by this trust.

It is transitive and can be unidirectional or bidirectional.

External Trust: The administrator creates external trusts manually between Windows
2003 domains in different forests.

The trust allows you to access the resources from separate forests

It is non-transitive and bidirectional or unidirectional.

Forest Trust: A new kind of trust introduced with Windows Server 2003. It allows one
forest to trust all domains within another forest. It is transitive between the two
forests and can be unidirectional or bidirectional.

Realm Trust: The administrator creates realm trusts manually between a Windows
2003 Server domain and a non-Windows Kerberos realm.

It is transitive or non-transitive and unidirectional or bidirectional.
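
Shortcut and external trusts like the ones above can also be created from the command
prompt with netdom, which ships with the Windows Support Tools. A hedged sketch;
domainX.com and domainY.com are the placeholder trusting and trusted domains:

rem Create a two-way trust between domainX.com (trusting) and domainY.com (trusted)
netdom trust domainX.com /d:domainY.com /add /twoway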

Object Naming:

An Active Directory object is identified by its name; these names follow
LDAP standards.

Object Naming conventions

1. Distinguished Name (DN)


2. Relative Distinguished Name (RDN)
3. User Principal Name (UPN)
4. Globally Unique Identifiers (GUID)

Distinguished Name (DN)

DN shows the complete path to the object or where the object resides within
the AD

Provides Unique Identification to an object

CN=user11,OU=specificOU,OU=MYOU,DC=domain1,DC=com

Where CN= indicates the object's common name

OU= indicates the organizational unit name

DC= indicates the domain component

Relative Distinguished Name (RDN)

The RDN is the part of the DN that uniquely identifies the object within its parent
container, for example CN=user11. It also allows locating objects by querying their
attributes.

User Principal Name (UPN)

Provides a user-friendly name that is easy to remember.

A UPN consists of the user account name and the domain name of the user account.

For Example:

The user user11 in domain1.com might have the UPN user11@domain1.com.

Globally Unique Identifiers (GUID)

A GUID is a 128-bit number, usually written as 32 hexadecimal digits. This number is
unique for each object in the enterprise.

The GUID of an object never changes, even if the DN is changed or the object's
location is changed.

It can therefore be used to reliably identify an object such as a domain controller.

For Example: The GUID of a user object is a value of the form
{0C7F2243-74D1-4B4E-...}.
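
These names can be seen from the command prompt with the Windows Server 2003
ds* tools. A sketch reusing the example account user11 and the DN shown above:

rem Print the distinguished name (DN) of the account
dsquery user -name user11

rem Print the UPN and the pre-Windows 2000 logon name for that DN
dsget user "CN=user11,OU=specificOU,OU=MYOU,DC=domain1,DC=com" -upn -samid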

User Accounts

Windows 2003 Server supports three types of user accounts; they are

1. Local user account


2. Domain user account
3. Built-in user account

Local User Account:


Allows the user to log on to the computer on which the account is created. A user
cannot access resources on a domain using a local user account.

Domain User Account:


Allows the user to log on to the domain and access resources on the
network.

Built-in User Account:


The built-in Account in 2003 is Administrator and Guest.

Administrator:
Manages the computer. The Administrator can create, modify and delete user accounts.

Guest: Allows users who do not have a domain user account to access the domain.
These users can access network resources; the guest account has no password by default.
The guest account in Windows Server 2003 is disabled by default; you can rename and
disable the guest account, but you cannot delete it.

User Profiles
1. Local User Profiles (LUP)
2. Roaming User Profiles (RUP)
3. Mandatory User Profiles (MUP)

Local User Profiles: User profiles stored locally on the system, in the path
C:\Documents and Settings\username. A local user profile includes desktop settings,
network places, My Documents and even application data.
Roaming User Profiles: If users work at more than one computer, you can configure
roaming user profiles. RUPs are stored on a server and are downloaded to the local
computer whenever the user logs on.
Mandatory User Profiles: A read-only roaming profile stored on a server. A single
mandatory user profile can be assigned to multiple users who need the same desktop settings.
Group Types
The group type identifies the purpose of a group.
For Example: A security group assigns permissions, whereas a distribution group is
used to send email.
The Active Directory service in Windows Server 2003 supports two types of groups:
1. Distribution
2. Security
Distribution: Applications use distribution groups for non-security-related functions,
such as creating email distribution lists.
Distribution groups can be used only with email applications such as Exchange to
send email to collections of users.
Distribution groups are not security-enabled, which means they cannot be listed in a
DACL (Discretionary Access Control List).

Security: Security groups are used to assign permissions to shared resources. A security
group can be listed in a DACL, which is used to define permissions on resources and objects.

Security Descriptor: Security descriptors include information about who owns the
object, who can access it and in what way.
Access Control List (ACL): An ACL is part of an object's security descriptor; it stores a
list of user access permissions.

Two types of ACL


DACL (Discretionary Access Control List): The part of an object’s security
descriptor that grants or denies specific users or groups permission. Only the owner
can change permissions granted or denied in DACL.
SACL (System Access Control List): The part of an object’s security descriptor
that specifies which events are to be audited per user or group.
Auditing: The process that tracks the activities of users by recording selected types
of events in the security log.

Access Control Entry (ACE): An entry in an object’s DACL that grants permissions
to a user or group.
An ACE is also an entry in SACL that specifies the events to be audited for a user or
group.

Security Identifier (SID): When an account is created, it is given a unique number
known as a security identifier.

Group Scopes

1. Domain Local scope
2. Global scope
3. Universal scope

Domain Local Groups: Domain local groups assign permissions to resources within a
single domain. Domain local groups can contain user and computer accounts, global groups
and universal groups from any domain, as well as other domain local groups from the same domain.
You can change a domain local group to a universal group provided it does not contain
another domain local group as a member.

Global group: Global groups can be granted access permissions to resources in any
trusted domain in the forest.
A global group can contain only user or computer accounts and other global groups
from its own domain.
You can change a global group to a universal group if the global group is not a
member of any other global group in the domain.

Universal group: A universal group can be assigned access permissions in any domain
in the forest. You can create universal groups only at the Windows 2000 native or
Windows Server 2003 domain functional level.
You can change a universal group to a global group provided it does not contain
another universal group as a member.
Converting a universal group to a domain local group has no restrictions.
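Group scope conversions can also be performed from the command line with dsmod. This is a minimal sketch; the group DN used below is hypothetical.

rem Convert a group to universal scope (allowed from global or domain local,
rem subject to the membership rules above):
dsmod group "CN=MyGroup,OU=MYOU,DC=domain1,DC=com" -scope u

rem Convert the same group to domain local scope (always allowed from universal):
dsmod group "CN=MyGroup,OU=MYOU,DC=domain1,DC=com" -scope l

rem Switch the group between security and distribution type:
dsmod group "CN=MyGroup,OU=MYOU,DC=domain1,DC=com" -secgrp no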

Global Catalog
A Global Catalog server is a domain controller that contains a full copy of all objects in its
own domain and a partial copy of all objects in the other domains of the forest.

The Global Catalog performs three functions:

1. Provides universal group membership information to the domain controller during user logon.
2. Helps in searching directory information across domains.
3. Resolves user principal names for the authenticating domain controller.
How to create additional GC servers
When you create the first domain controller for a new domain, that domain controller
is designated as the GC server. Depending on your network, you might need to add
additional GC servers. The Active Directory Sites and Services console is the tool used
to add an additional GC server (a command-line check is shown after the steps below).
You must be a member of the Domain Admins or Enterprise Admins group to create
additional GC servers.
To create an additional GC server:

1. Click Start, Administrative Tools, and then click Active Directory Sites and
Services.
2. In the console tree, expand Sites, and then expand the site that contains the
domain controller which you want to configure as a GC server.

3. Expand the Servers folder, and locate and then click the domain controller
that you want to designate as a GC server.
4. In the details, pane, right-click NTDS Settings and click Properties on the
shortcut menu.
5. The NTDS Settings Properties dialog box opens.
6. The General tab is where you specify the domain controller as a GC server.
7. Enable the Global Catalog checkbox.
8. Click OK.
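The result can be checked, or the same change made, from the command line. The sketch below uses dsquery (built in) and repadmin (from the Windows Support Tools); server100 is a placeholder DC name.

rem List all global catalog servers in the forest:
dsquery server -forest -isgc

rem Enable the global catalog flag on a specific DC:
repadmin /options server100 +IS_GC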

Global Catalog Architecture


Global Catalog Architecture Components

Clients
Global catalog clients, including search clients and Address Book clients, as
well as domain controllers performing replication and universal group security
identifier (SID) retrieval during logon in a multidomain forest.
Network
The physical IP network.

Interfaces
LDAP over port 389 for read and write operations and LDAP over port 3268
for global catalog search operations. NSPI and replication (REPL) use proprietary RPC
protocols. Retrieval of universal group membership occurs over RPC as part of the
replication RPC interface. Windows NT 4.0 clients and backup domain controllers
(BDCs) communicate with Active Directory through the Security Accounts Manager
(SAM) interface.


Directory System Agent (DSA)


The directory service component that runs as Ntdsa.dll on each domain
controller, providing the interfaces through which services and processes gain access
to the directory database.

Extensible Storage Engine (ESE)


The directory service component that runs as Esent.dll. ESE manages the
tables of records that comprise the directory database.

Ntds.dit database file


The Active Directory data store

Global Catalog Physical Structure

As shown in the preceding diagram, a global catalog server stores a replica of its own
domain (full and writable) and a partial, read-only replica of all other domains in the
forest.

All directory partitions on a global catalog server, whether full or partial, are stored
in the directory database file (Ntds.dit) on that server. That is, there is not a
separate storage area for global catalog attributes; they are treated as additional
information in the directory database of the global catalog server.

Global Catalog Server Physical Components

Active Directory forest:


The set of domains that comprise the Active Directory logical structure and
that are searchable in the global catalog.

Domain controller:
Server that stores one full, writable domain directory partition plus forestwide
configuration and schema directory partitions. Global catalog servers are always
domain controllers.

Global catalog server:


Domain controller that stores one full, writable domain plus forestwide
configuration and schema directory partitions, as well as a partial, read-only replica
of all other domains in the forest.

Ntds.dit database file:

The database file that stores the replicas of the Active Directory objects held by a domain
controller, including global catalog servers.

Universal group membership caching Feature:

The universal group membership caching feature allows domain controllers to cache
the universal group membership information of users for authentication purposes.

This caching is useful in the absence of a global catalog server.

How to enable the Universal Group Membership caching feature

1. Click Start, Administrative Tools, and then click Active Directory Sites and
Services.
2. In the console tree, click the particular site that you want to enable universal
group membership caching for.
3. In the details pane, right-click NTDS Settings and click Properties on the
shortcut menu.
4. The NTDS Settings Properties dialog box opens.

5. Check the Enable Universal Group Membership Caching checkbox.
6. Click OK.

Flexible Single Master Operations (FSMO)

FSMO roles are server roles in a forest; some are forestwide and some are domainwide.

1. Schema Master role
2. Domain Naming Master role
3. Relative Identifier (RID) Master role
4. Primary Domain Controller (PDC) Emulator role
5. Infrastructure Master role

The Schema Master and Domain Naming Master are forestwide roles.

The RID Master, PDC Emulator and Infrastructure Master are domainwide roles.
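Before looking at each role, it is often useful to see which DCs currently hold them. A quick sketch using tools from the Windows Support Tools:

rem List the current holders of all five FSMO roles:
netdom query fsmo

rem Verify that the role holders are known and reachable:
dcdiag /test:knowsofroleholders /v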

Schema Master Role:

 The schema is the set of rules that defines the structure of Active Directory.
 It maintains detailed information about all objects, and it is forestwide.
 The schema master controls all updates and modifications to the schema; once a
schema update is complete, it is replicated from the schema master to all other DCs in
the forest.
 To update the schema of a forest, you must have access to the schema master.
There can be only one schema master in the entire forest.

To identify the schema master role

Follow these steps in order to perform the task

1. Open the Active Directory Schema snap-in.

2. In the console tree, right-click Active Directory Schema, and then click
Operations Master.


3. Under Current Schema Master, view the current schema operations master.

To install the Active Directory Schema snap-in.

 Click Start, click run, type mmc, and then click OK.

 On the Console menu click File and then click Add/Remove Snap-in.
 Click Add.
 Click Active Directory Schema.
 Click Add.
 Click Close to close the Add Standalone Snap-in dialog box.
 Click OK to add the snap-in to the console.

Domain Naming Master Role:

 Controls the addition and removal of domains in the forest. It is a forestwide role.

How to view the Domain Naming Master role

 Go to Start | Administrative Tools | Active Directory Domains And Trusts.

 Right-click Active Directory Domains And Trusts, and select Operations Master from the list.

RID Master Role:

 It allocates pools of relative identifiers (RIDs), which are combined with the domain
SID to form the SIDs of newly created objects such as users and computers.
 If the RID master is down, you can still create security objects as long as RID pools are
available on the DC.
 The RID master is also used when moving an object between domains. It is a
domainwide role.

PDC Emulator Master Role:

 It acts as the PDC for any Windows NT BDCs in the environment.
 It acts as the time server to maintain consistent time across the network.
 It handles password changes, account lockouts and similar operations. It is a
domainwide role.

Infrastructure Master Role:

The infrastructure master keeps cross-domain object references (such as group memberships)
up to date when the referenced objects are renamed or moved.

It is possible for an object in one domain to be referenced by another domain.

For Example: When a user from domain A is placed in a local group in domain B,
the reference information stored in the domain B group is

1. The Global Unique Identifier (GUID) of the object, which never changes
during the objects lifetime, even if it is moved between domains.
2. The security Identifier (SID) of the object, which would change if moved
between domains.
3. The Distinguished Name (DN) of the object, which changes if the object is
moved in any way.

This information is stored in a record known as a phantom record.

The infrastructure master is responsible for ensuring that the SIDs and DNs in the phantom
records of objects referenced from other domains are kept up to date, by comparing
the contents of its database with that of the global catalog.

How to view these domain roles

Type dsa.msc in the Run dialog (this opens Active Directory Users and Computers),
right-click the domain, and then click Operations Masters.


How to Transfer the FSMO Roles in Active Directory

Transferring the Schema Master Role: through GUI

You can use the Schema Master tool to transfer the role. However, the
Schmmgmt.dll dynamic-link library must be registered in order to make the Schema
tool available as an MMC snap-in.

Registering the Schema Tool:

Click Start, and then click Run.

Type regsvr32 schmmgmt.dll, and then click OK. A message should be displayed
stating that the registration was successful.

Transferring the Schema Master Role:

Click Start, click run, type mmc, and then click OK.

On the Console menu click Add/Remove Snap-in.

Click Add.

Click Active Directory Schema.

Click Add.

Click Close to close the Add Standalone Snap-in dialog box.

Click OK to add the snap-in to the console.

Right-click the Active Directory Schema icon, and then click Change Domain
Controller.

Note: If you are not on the domain controller where you want to transfer the role,
you need to take this step. It is not necessary if you are connected to the domain
controller whose role you want to transfer.

Click Specify Domain Controller, type the name of the domain controller that will be
the new role holder, and then click OK.

Right-click Active Directory Schema and then click Operation Masters.

In the Change Schema Master dialog box: click Change.

Click OK.

Click OK.

Click Cancel to close the dialog box

How to transfer roles through command line:

To transfer the FSMO roles from the Ntdsutil command:


Caution: Using the Ntdsutil utility incorrectly may result in partial or complete
loss of Active Directory functionality.

1. On any domain controller, click Start, click Run, type Ntdsutil in the Open
box, and then click OK.

Microsoft Windows [Version 5.2.3790]
(C) Copyright 1985-2003 Microsoft Corp.

C:\WINDOWS>ntdsutil
ntdsutil:

2. Type roles, and then press ENTER.

ntdsutil: roles
fsmo maintenance:

Note: To see a list of available commands at any of the prompts in the Ntdsutil
tool, type ?, and then press ENTER.

3. Type connections, and then press ENTER.

fsmo maintenance: connections


server connections:

4. Type connect to server <server name>, where <server name> is the


name of the server you want to use, and then press ENTER.

server connections: connect to server server100


Binding to server100 ...
Connected to server100 using credentials of locally logged on user
server connections:

5. At the server connections: prompt, type q, and then press ENTER again.

server connections: q
fsmo maintenance:

6. Type transfer <role>, where <role> is the role you want to transfer.

For example, to transfer the RID Master role, you would type transfer rid master.
The options are:

Transfer domain naming master
Transfer infrastructure master
Transfer PDC
Transfer RID master
Transfer schema master

7. You will receive a warning window asking if you want to perform the
transfer. Click on Yes.
8. After you transfer the roles, type q and press ENTER until you quit
Ntdsutil.exe.
9. Restart the server and make sure you update your backup.

Transferring the Domain Naming Master via GUI

1. Open the Active Directory Domains and Trusts snap-in from the
Administrative Tools folder.
2. If you are NOT logged onto the target domain controller, in the snap-in, right-
click the icon next to Active Directory Domains and Trusts and press Connect
to Domain Controller.
3. Select the domain controller that will be the new role holder and press OK.
4. Right-click the Active Directory Domains and Trusts icon again and press
Operation Masters.
5. Press the Change button.
6. Press OK to confirm the change.
7. Press OK all the way out.

Transferring RID, PDC, and Infrastructure Master roles:

1. Open the Active Directory Users and Computers snap-in from the
Administrative Tools folder.
2. If you are NOT logged onto the target domain controller, in the snap-in,
right-click the icon next to Active Directory Users and Computers and press
Connect to Domain Controller.

3. Select the domain controller that will be the new role holder, the target,
and press OK.
4. Right-click the Active Directory Users and Computers icon again and
press Operation Masters.
5. Select the appropriate tab for the role you wish to transfer and press the
Change button.
6. Press OK to confirm the change.
7. Press OK all the way out.

Group Policy Management


What is Group Policy?
Group policies are collections of settings that control how programs and the
operating system work for the users in an organization.
Group policies can be set up for computers, user accounts, domains and
organizational units.
There are around 550 Group Policy settings available.
Each Group Policy item has three different setting options:

1. Enable
2. Disabled
3. Not configured (default)

A GPO contains two configuration sections:

1. Computer Configuration

1. Software Settings
2. Windows Settings
3. Administrative Templates

2. User Configuration

1. Software Settings
2. Windows Settings
3. Administrative Templates

(Security settings are located under Windows Settings in both sections.)

Local Group Policy stored location: %systemroot%\System32\GroupPolicy


What is Group policy Template?

The Group Policy template (GPT) is the portion of a GPO that is stored in the
SYSVOL share of the domain controllers within the domain.
It is responsible for storing the settings that are configured in the GPO.
It is also responsible for storing the administrative templates.
What is a Group Policy Container?
The Group Policy container (GPC) is the portion of a GPO that is stored in Active Directory
on the domain controllers within the domain.
The GPC is responsible for keeping references to client-side extensions.
What are the different levels at which a Group Policy Object can be applied?
GPOs can be applied at three different levels:

1. Site Level
2. Domain Level
3. Organizational Unit Level

Site Level:
Group policies configured at the site level apply to all servers and domains within the site.
Domain Level:
Group policies are configured for the entire domain; the settings are applied to the
whole domain.
Organizational Unit Level:
Group policies are configured for an organizational unit; the settings are applied only
within that organizational unit.
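After linking a GPO at any of these levels, the result on a client can be checked with the built-in command-line tools, for example:

rem Force an immediate Group Policy refresh on the client:
gpupdate /force

rem Display the GPOs and settings applied to the current computer and user:
gpresult /v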

Group Policy Architecture


Group Policy Components


Server (Domain Controller)
In an Active Directory forest, the domain controller is a server that contains a
writable copy of the Active Directory database, participates in Active Directory
replication, and controls access to network resources.

Active Directory
Active Directory, the Windows-based directory service, stores information about
objects in a network and makes this information available to users and network
administrators. Administrators link GPOs to Active Directory containers such as sites,
domains, and OUs that include user and computer objects. In this way, Group Policy
settings can be targeted to users and computers throughout the organization.

Group Policy object (GPO)


A GPO is a collection of Group Policy settings, stored at the domain level as a virtual
object consisting of a Group Policy container (GPC) and a Group Policy template
(GPT). The GPC, which contains information on the properties of a GPO, is stored in

Active Directory on each domain controller in the domain. The GPT contains the data
in a GPO and is stored in the Sysvol in the /Policies sub-directory. GPOs affect users
and computers that are contained in sites, domains, and OUs.

Sysvol
Sysvol is a shared directory that stores the server copy of the domain’s public files,
which are replicated among all domain controllers in the domain. The Sysvol contains
the data in a GPO: the GPT, which includes Administrative Template-based Group
Policy settings, security settings, script files, and information regarding applications
that are available for software installation. It is replicated using the File Replication
Service (FRS).

Local Group Policy object


The local Group Policy object (local GPO) is stored on each individual computer, in
the hidden %systemroot%\System32\GroupPolicy directory. Each computer running
Windows 2000, Windows XP Professional, Windows XP 64-Bit Edition, Windows XP
Media Center Edition, or Windows Server 2003 has exactly one local GPO, regardless
of whether the computers are part of an Active Directory environment.

Local GPOs do not support certain extensions, such as Folder Redirection or Group
Policy Software Installation. Local GPOs do support many security settings, but the
Security Settings extension of Group Policy Object Editor does not support remote
management of local GPOs. Local GPOs are always processed, but are the least
influential GPOs in an Active Directory environment, because Active Directory-based
GPOs have precedence.

Group Policy Object Editor


Group Policy Object Editor is a Microsoft Management Console (MMC) snap-in that is
used to edit GPOs. It was previously known as the Group Policy snap-in, Group Policy
Editor, or Gpedit.
Server-Side Snap-Ins
The MMC snap-in is loaded, by default, in Group Policy Object Editor. Server-side
snap-in extensions provide the user interface to allow you to configure various policy
settings while client-side extensions implement the actual policy settings on target
client computers.

Snap-in extensions include Administrative Templates, Scripts, Security Settings,
Software Installation, Folder Redirection, Remote Installation Services, Internet
Explorer Maintenance, Disk Quotas, Wireless Network Policy, and QoS Packet
Scheduler. Snap-ins may in turn be extended. For example, the Security Settings
snap-in includes several extension snap-ins. Developers can also create their own
MMC extension snap-ins to Group Policy Object Editor to provide additional Group
Policy settings.
Client-Side Extensions
Client-side extensions (CSEs) run within dynamic-link libraries (DLLs) and are
responsible for implementing Group Policy at the client computer. The following CSEs
are loaded, by default, in Windows Server 2003:
Administrative Templates, Wireless Network Policies, Folder Redirection, Disk
Quotas, QoS Packet Scheduler, Scripts, Security, Internet Explorer Maintenance, EFS
Recovery, Software Installation, and IP Security.
Group Policy Management Console (GPMC)
GPMC is a new tool designed to simplify implementation and management of Group
Policy. It consists of a new MMC snap-in and a set of scriptable interfaces for
managing Group Policy. The Group Policy Management Console provides:

• A user interface based on how customers use and manage Group Policy, rather
than on how the technology is built.

• Import/Export, Copy/Paste, and searching of GPOs.

• Simplified management of Group Policy-related security.

• Reporting (printing, saving, read-only access to GPOs) for GPO and Resultant Set
of Policy (RSoP) data.

• Backup/Restore of GPOs.

• Scripting of GPO operations that are exposed within this tool (but NOT scripting of
settings within a GPO).
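As an illustration of the scriptable interfaces, GPMC installs a set of sample scripts (by default under %ProgramFiles%\GPMC\Scripts). The script name, path and arguments below are assumptions based on that sample set; check the Scripts folder on your installation before relying on them.

rem Assumption: GPMC sample scripts are installed in the default location.
cd /d "%ProgramFiles%\GPMC\Scripts"
md C:\GPO-Backups
rem Back up every GPO in the domain to a local folder:
cscript BackupAllGPOs.wsf C:\GPO-Backups /comment:"Weekly backup"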

Resultant Set of Policy (RSoP) snap-in


The Resultant Set of Policy (RSoP) snap-in is an MMC snap-in that simplifies
Group Policy implementation and troubleshooting. RSoP uses Windows Management
Instrumentation (WMI) to determine how Group Policy settings are applied to users
and computers. For RSoP functionality, it is recommended to use the reporting
features in GPMC.

Winlogon
A component of the Windows operating system that provides interactive logon
support, Winlogon is the service in which the Group Policy engine runs.

Group Policy engine


The Group Policy engine is the framework that handles common functionalities
across client-side extensions including scheduling of Group Policy application,
obtaining GPOs from relevant configuration locations, and filtering and ordering of
GPOs.
File System
The NTFS file system on client computers.

Registry
A database repository for information about a computer’s configuration, the
registry contains information that Windows continually references during operation,
such as:

1. Profiles for each user.

2. The programs installed on the computer and the types of documents that each
can create.

3. Property settings for folders and program icons.

4. The hardware on the system.

5. Which ports are being used.

The registry is organized hierarchically as a tree, and it is made up of keys and their
subkeys, hives, and entries. The Group Policy engine has read and write access to
the registry.

Registry settings can be controlled via the Group Policy Administrative Templates
extension.

Event Log
The Event log is a service, located in Event Viewer, which records events in the
system, security, and application logs. The Group Policy engine has write access to
the Event Log on client computers and domain controllers. The Help and Support
Center on each computer has read access to the Event Log.


Help and Support Center


The Help and Support Center is a component on each computer that provides HTML
reports on the Group Policy settings currently in effect on the computer.

Resultant Set of Policy (RSoP) infrastructure

All Group Policy processing information is collected and stored in a Common


Information Model Object Management (CIMOM) database on the local computer.
This information, such as the list, content and logging of processing details for each
GPO, can then be accessed by tools using WMI.

In logging mode (Group Policy Results), RSoP queries the CIMOM database on the
target computer, receives information about the policies and displays it in GPMC. In
planning mode (Group Policy Modeling), RSoP simulates the application of policy
using the Group Policy Directory Access Service (GPDAS) on a domain controller.
GPDAS simulates the application of GPOs and passes them to virtual client-side
extensions on the domain controller. The results of this simulation are stored to a
local CIMOM database on the domain controller before the information is passed back
and displayed in GPMC.

WMI

WMI is a management infrastructure that supports monitoring and controlling of


system resources through a common set of interfaces and provides a logically
organized, consistent model of Windows operation, configuration, and status.

WMI makes data about a target computer available for administrative use. Such data
can include hardware and software inventory, settings, and configuration
information. For example, WMI exposes hardware configuration data such as CPU,
memory, disk space, and manufacturer, as well as software configuration data from
the registry, drivers, file system, Active Directory, the Windows Installer service,
networking configuration, and application data. WMI Filtering in Windows Server
2003 allows you to create queries based on this data. These queries (also called WMI
filters) determine which users and computers receive all of the policy configured in
the GPO where you create the filter.


DHCP

DHCP (Dynamic Host Configuration Protocol) is an open industry-standard protocol used to
assign IP addresses to hosts automatically when they connect to the network.

Benefits of DHCP:-

DHCP automates the host configuration process for key configuration parameters.

There is no need to configure IP addresses manually, which reduces administrative work
when the infrastructure changes.

Clients get accurate IP configuration, and human errors are eliminated.

How a client obtains an IP address:-

[Diagram: DHCP lease process. The DHCP client and DHCP server exchange four messages: DHCPDISCOVER, DHCPOFFER, DHCPREQUEST and DHCPACKNOWLEDGEMENT.]

DHCP DISCOVER:-

The client broadcasts a DHCPDISCOVER message to find a DHCP server (the client does not
yet have its own IP address, nor does it know the server's IP address). The DHCPDISCOVER
message is sent as a LAN broadcast with 0.0.0.0 as the source IP and 255.255.255.255 as the
destination address. It is a request for the location of a DHCP server and for IP addressing
information, and it contains the client's MAC address and computer name so that DHCP
servers know which client sent the request.

DHCP OFFER:-

Once the DHCP server receives the DISCOVER message, it responds with a DHCPOFFER
message containing the following information:
 Source (DHCP SERVER) IP address
 Destination (DHCP CLIENT) IP address
 An offered IP address
 Client hardware (NIC) address
 Subnet mask
 Length of lease.
DHCP REQUEST:-

Once the client receives an offer from at least one DHCP server, it broadcasts a
DHCPREQUEST message to all DHCP servers. The DHCPREQUEST contains the
following information:

 The IP address of the DHCP server chosen by the client.

 The requested IP address for the client

 Subnet mask

DHCP ACKNOWLEDGEMENT:-

The DHCP server with the accepted offer sends a successful acknowledgement to
the client in the form of a DHCPACK message. The DHCPACK contains the following
information:
A valid lease for an IP address, including the renewal times.

DHCP LEASE:-

The IP lease has a finite lifetime. The client must periodically renew the lease after
obtaining it.

If your TCP/IP network configuration does not change often, or if there are enough free IP
addresses in the address pool, you can increase the lease time. The default lease time is 8
days. If the address pool has few free addresses, keep the lease time short; otherwise, if the
pool of IP addresses is used up, machines that are added to or moved within the network
might be unable to obtain an IP address from a DHCP server.

HOW DHCP RENEWS A LEASE:-

If a Windows DHCP client renews a lease while booting, the messages are sent as
broadcast IP packets.

If the renewal is made while the DHCP client is running, the client and server
communicate through unicast messages.
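The lease and renewal behaviour can be observed from a client with ipconfig, for example:

rem Give up the current lease and request a new one:
ipconfig /release
ipconfig /renew

rem Show the DHCP server address and the lease obtained/expires times:
ipconfig /all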

AUTHORIZING A DHCP SERVER:-

In implementations of DHCP prior to Windows 2000, any user could create a DHCP server on
the network, an action that could lead to IP conflicts. For example, if a client receives an
IP address from an incorrectly configured DHCP server, the user may be prevented from
logging on.

HOW TO AUTHORIZE A DHCP SERVER:-

Active directory must be present to authorize DHCP servers and block unauthorized
servers.

Let’s examine two scenarios


The DHCP server is a part of a domain:-

The DHCP server initializes and determines whether it is part of a directory domain. It
contacts the directory service to determine whether it is authorized, and the directory service
confirms that the server is authorized. After receiving this confirmation, the server broadcasts
a DHCPINFORM message to determine whether other directory services are running; after
this is completed, the server begins servicing DHCP clients accordingly.

If the DHCP server is not part of a domain, it sends a DHCPINFORM message once every
5 minutes to check for a directory-based domain; if one is detected, the server shuts down
its DHCP service.

DHCP SCOPE:-

Scopes determine which IP addresses are allocated to clients. You can configure
many scopes on a DHCP server. DHCP servers do not share scope information with
each other.

EXCLUSION RANGES:-

An exclusion range is a limited sequence of IP addresses within a scope range that
are excluded from DHCP service offerings.

DHCP RESERVATION:-

Reservations enable permanent address lease assignments by the DHCP server. You
need to specify the MAC address of the hardware device for which the IP address is
reserved; therefore, when creating a reservation, you must know the MAC address of
each device.
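Scopes, exclusions and reservations can also be created with the netsh dhcp context on a Windows Server 2003 DHCP server. This is a sketch only; the server name, addresses and MAC address below are placeholders.

rem Create a scope and add its distribution range:
netsh dhcp server \\dhcp01 add scope 192.168.1.0 255.255.255.0 "LAN scope"
netsh dhcp server \\dhcp01 scope 192.168.1.0 add iprange 192.168.1.10 192.168.1.200

rem Exclude a block of addresses inside the scope:
netsh dhcp server \\dhcp01 scope 192.168.1.0 add excluderange 192.168.1.10 192.168.1.20

rem Reserve an address for a device by its MAC address:
netsh dhcp server \\dhcp01 scope 192.168.1.0 add reservedip 192.168.1.50 001122334455 "Printer"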

BACKUP AND RESTORE DHCP SERVER:-


Windows Server 2003 DHCP servers support automatic and manual backups.
To provide fault tolerance in the case of a failure, it is important to back up the DHCP
database. When you perform a backup, the entire DHCP database is saved. The
backup includes:

• scopes
• Reservations
• Leases
• Options

AUTOMATIC BACKUP:-

By default the DHCP service automatically backs up to the database and related
registry entries to the local drive. This occurs every 60 min’s. It will store in the %
system root% system 32 \DHCP\BACKUP directory. We can change the backup
location.
Automatic backup use only the automatic restore and it will perform by the DHCP
service when corruption is detected.

MANUAL BACKUP:-
We can also back up the DHCP database manually to an offline storage location such
as a tape drive or disk. A manual backup supports only a manual restore.
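One way to take a manual backup is the netsh dhcp export/import commands available on Windows Server 2003 (the file path below is a placeholder):

rem Export the full DHCP configuration to a file:
netsh dhcp server export C:\Backup\dhcpconfig.dat all

rem Restore it later (on the same or another Windows Server 2003 DHCP server):
netsh dhcp server import C:\Backup\dhcpconfig.dat all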

Backup

It is the process of protecting user data or system state data onto separate storage
devices.

NT supported only one type of storage media, i.e. tapes.

Windows 2000 and 2003 support tapes, floppies, hard disk drives (HDDs), Zip disks and
remote storage devices (RSDs).

Backup utilities:
NTBackup is the default backup utility provided with NT, 2000 and 2003. It comes along
with the OS and provides basic backup functionality.

There are also some third-party utilities:

1 Veritas - Backup Exec
2 Veritas - Foundation Suite (for UNIX flavors)
3 Veritas - Volume Manager
4 Tivoli Storage Manager (IBM)
5 NetBackup

Starting the backup utility:

On a DC or member server:
Start – Run – ntbackup (or) Start > Programs > Accessories > System Tools > Backup

Backing up a folder:
Create a folder in the D drive and a file inside it
Start - Run – ntbackup – click on Advanced Mode
Backup
Next
Select the 2nd option (Back up selected files)
Expand My Computer and, under the D drive, select the folder you created
Next

Select the destination to save the backup

Next – select the type of backup (e.g. Normal)
Check the box Disable volume shadow copy
Next – Finish

Verifying
Delete the backed up folder

Restoring the backed up folder:


Start – run – (ntbackup)
Advanced – restore – next
Select the backed-up file – next – finish

Back up types

 Normal
 Copy
 Incremental
 Differential
 Daily

1. Normal Backup: A full backup; it backs up all selected files & folders and removes
the archive bit (A) after the backup. Archive bit: a bit used by the backup utility to
know whether a file has been backed up; it is used as a backup marker (see the
attrib example after this list).

2. Copy backup: Backs up all selected files & folders but does not remove the
archive bit after backing up. Copy is used between a normal backup and an
incremental backup.

3. Incremental backup: Backs up all selected files & folders that have changed
since the last backup and marks the files as having been backed up (removes the
archive bit after the backup).

4. Differential backup: Backs up all selected files & folders that have changed since
the last normal backup; it does not remove the archive bit after the backup.

5. Daily backup: Backs up all selected files & folders created or changed during the
day; it does not remove the archive bit after the backup.
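The archive bit that these backup types set or clear can be inspected with the attrib command; the file name below is a placeholder.

rem Show the attributes; "A" means the file has changed since it was last backed up:
attrib D:\data\report.doc
rem Clear the archive bit, as a normal or incremental backup would:
attrib -A D:\data\report.doc
rem Any later change to the file sets it again; it can also be set manually:
attrib +A D:\data\report.doc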

Recommended backup strategy:


1. If we select incremental backup, the backup is faster but restoration is slower,
i.e. more tapes have to be restored.
2. If we go with differential backup, the backup is slower but restoration is faster,
i.e. only two tapes (the last normal backup and the last differential) need to be restored.

System state data:

SSD is a data store; if we want to back up the complete Active Directory, we can back up
the system state data with the backup utility.

Taking a backup of system state data:

Start - Run – ntbackup – click on Advanced Mode – Backup – Next

Select the 3rd option (System State data) – Next – choose a destination, for example a folder
(SSD) on the E drive, and give the backup file a name with the .bkf extension – Next –
Advanced – Next
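The same system state backup can be scheduled or scripted with the ntbackup command line. A minimal sketch, assuming the destination folder E:\SSD exists (the job and file names are placeholders):

rem Back up the system state to a .bkf file:
ntbackup backup systemstate /j "SSD backup" /f "E:\SSD\ssd.bkf"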

Restoration
There are two types of restoration

 Non-authoritative restore
 Authoritative restore

Restoration of system state data can be done either authoritatively or non-authoritatively.

Non-authoritative restore: A normal restore, useful when we have only one DC in the
network. It does not increment the USN values of the objects after restoration; it uses the
older USN values only.

USN (Update Sequence Number):

A number assigned to an object that is updated whenever changes are made to the object.

Authoritative restore: Useful when we want to restore a specific object (or objects) by
incrementing the USN value so that the restored data wins during replication.
Useful when we have multiple DCs in the network, i.e. one DC and multiple ADCs.

The OSI Reference Model

The application, presentation, and session layers are all application-oriented in that
they are responsible for presenting the application interface to the user. All three are
independent of the layers below them and are totally oblivious to the means by
which data gets to the application. These three layers are called the upper layers.

The lower four layers deal with the transmission of data, covering the packaging,
routing, verification, and transmission of each data group. The lower layers don't
worry about the type of data they receive or send to the application, but deal simply
with the task of sending it. They don't differentiate between the different applications
in any way.

The following sections explain each layer to help you understand the architecture of
the OSI-RM

The Application Layer

The application layer is the end-user interface to the OSI system. It is where the
applications, such as electronic mail, USENET news readers, or database display

modules, reside. The application layer's task is to display received information and
send the user's new data to the lower layers.

In distributed applications, such as client/server systems, the application layer is


where the client application resides. It communicates through the lower layers to the
server.

The Presentation Layer

The presentation layer's task is to isolate the lower layers from the application's data
format. It converts the data from the application into a common format, often called
the canonical representation. The presentation layer processes machine-dependent
data from the application layer into a machine-independent format for the lower
layers.

The presentation layer is where file formats and even character formats (ASCII and
EBCDIC, for example) are lost. The conversion from the application data format
takes place through a "common network programming language" (as it is called in
the OSI Reference Model documents) that has a structured format.

The presentation layer does the reverse for incoming data. It is converted from the
common format into application-specific formats, based on the type of application
the machine has instructions for. If the data comes in without reformatting
instructions, the information might not be assembled in the correct manner for the
user's application.

The Session Layer

The session layer organizes and synchronizes the exchange of data between
application processes. It works with the application layer to provide simple data sets
called synchronization points that let an application know how the transmission and
reception of data are progressing. In simplified terms, the session layer can be
thought of as a timing and flow control layer.

The session layer is involved in coordinating communications between different


applications, letting each know the status of the other. An error in one application
(whether on the same machine or across the country) is handled by the session layer
to let the receiving application know that the error has occurred. The session layer
can resynchronize applications that are currently connected to each other. This can
be necessary when communications are temporarily interrupted, or when an error
has occurred that results in loss of data.

The Transport Layer

The transport layer, as its name suggests, is designed to provide the "transparent
transfer of data from a source end open system to a destination end open system,"

according to the OSI Reference Model. The transport layer establishes, maintains,
and terminates communications between two machines.

The transport layer is responsible for ensuring that data sent matches the data
received. This verification role is important in ensuring that data is correctly sent,
with a resend if an error was detected. The transport layer manages the sending of
data, determining its order and its priority.

The Network Layer

The network layer provides the physical routing of the data, determining the path
between the machines. The network layer handles all these routing issues, relieving
the higher layers from this issue.

The network layer examines the network topology to determine the best route to
send a message, as well as figuring out relay systems. It is the only network layer
that sends a message from source to target machine, managing other chunks of data
that pass through the system on their way to another machine.

The Data Link Layer

The data link layer, according to the OSI reference paper, "provides for the control of
the physical layer, and detects and possibly corrects errors that can occur." In
practicality, the data link layer is responsible for correcting transmission errors
induced during transmission (as opposed to errors in the application data itself,
which are handled in the transport layer).

The data link layer is usually concerned with signal interference on the physical
transmission media, whether through copper wire, fiber optic cable, or microwave.
Interference is common, resulting from many sources, including cosmic rays and
stray magnetic interference from other sources.

The Physical Layer

The physical layer is the lowest layer of the OSI model and deals with the
"mechanical, electrical, functional, and procedural means" required for transmission
of data, according to the OSI definition. This is really the wiring or other transmission
form.

When the OSI model was being developed, a lot of concern dealt with the lower two
layers, because they are, in most cases, inseparable. The real world treats the data
link layer and the physical layer as one combined layer, but the formal OSI definition

stipulates different purposes for each. (TCP/IP includes the data link and physical
layers as one layer, recognizing that the division is more academic than practical.)

Local Area Networks

TCP/IP works across LANs and WANs, and there are several important aspects of LAN
and WAN topologies you should know about. You can start with LANs and look at
their topologies. Although there are many topologies for LANs, three topologies are
dominant: bus, ring, and hub.

A Quick Overview of TCP/IP Components

To understand the roles of the many components of the TCP/IP protocol family, it is
useful to know what you can do over a TCP/IP network. Then, once the applications
are understood, the protocols that make it possible are a little easier to comprehend.
The following list is not exhaustive but mentions the primary user applications that
TCP/IP provides.

Telnet

The Telnet program provides a remote login capability. This lets a user on one
machine log onto another machine and act as though he or she were directly in front
of the second machine. The connection can be anywhere on the local network or on
another network anywhere in the world, as long as the user has permission to log
onto the remote system.
You can use Telnet when you need to perform actions on a machine across the
country. This isn't often done except in a LAN or WAN context, but a few systems
accessible through the Internet allow Telnet sessions while users play around with a
new application or operating system.

File Transfer Protocol

File Transfer Protocol (FTP) enables a file on one system to be copied to another
system. The user doesn't actually log in as a full user to the machine he or she wants
to access, as with Telnet, but instead uses the FTP program to enable access. Again,
the correct permissions are necessary to provide access to the files.
Once the connection to a remote machine has been established, FTP enables you to
copy one or more files to your machine. (The term transfer implies that the file is
moved from one system to another but the original is not affected. Files are copied.)
FTP is a widely used service on the Internet, as well as on many large LANs and
WANs.
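Both services are reached with the standard command-line clients; the host name below is a placeholder.

rem Open a remote login session:
telnet server1.domain1.com

rem Copy a file from a remote host with the built-in FTP client
rem (then use get <filename> and bye at the ftp> prompt):
ftp server1.domain1.com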

Simple Mail Transfer Protocol

Simple Mail Transfer Protocol (SMTP) is used for transferring electronic mail. SMTP is
completely transparent to the user. Behind the scenes, SMTP connects to remote
machines and transfers mail messages much like FTP transfers files. Users are
almost never aware of SMTP working, and few system administrators have to bother
with it. SMTP is a mostly trouble-free protocol and is in very wide use.

Simple Network Management Protocol

Simple Network Management Protocol (SNMP) provides status messages and


problem reports across a network to an administrator. SNMP uses User Datagram
Protocol (UDP) as a transport mechanism. SNMP employs slightly different terms
from TCP/IP, working with managers and agents instead of clients and servers
(although they mean essentially the same thing). An agent provides information
about a device, whereas a manager communicates across a network with agents.

Internet Protocol

Internet Protocol (IP) is responsible for moving the packets of data assembled by
either TCP or UDP across networks. It uses a set of unique addresses for every
device on the network to determine routing and destinations.

Internet Control Message Protocol

Internet Control Message Protocol (ICMP) is responsible for checking and generating
messages on the status of devices on a network. It can be used to inform other
devices of a failure in one particular machine. ICMP and IP usually work together.

Value  Description
0      Echo Reply
3      Destination Not Reachable
4      Source Quench
5      Redirection Required
8      Echo Request
11     Time to Live Exceeded
12     Parameter Problem
13     Timestamp Request
14     Timestamp Reply
15     Information Request (now obsolete)
16     Information Reply (now obsolete)
17     Address Mask Request
18     Address Mask Reply
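The Echo Request/Echo Reply pair (types 8 and 0) is what the ping utility uses, while tracert relies on Time to Live Exceeded (type 11) messages returned by each hop. The address below is a placeholder.

rem Send four ICMP Echo Requests and display the Echo Replies:
ping 192.168.1.1

rem Trace the route by provoking Time to Live Exceeded messages from each hop:
tracert 192.168.1.1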

IP Addresses

TCP/IP uses a 32-bit address to identify a machine on a network and the network to
which it is attached. IP addresses identify a machine's connection to the network, not
the machine itself—an important distinction. Whenever a machine's location on the
network changes, the IP address must be changed, too. The IP address is the set of
numbers many people see on their workstations or terminals, such as 127.40.8.72,
which uniquely identifies the device.
IP (or Internet) addresses are assigned only by the Network Information Center
(NIC), although if a network is not connected to the Internet, that network can
determine its own numbering. For all Internet accesses, the IP address must be
registered with the NIC.

Local Area Networks

LANs are an obvious target for TCP/IP, because TCP/IP helps solve many
interconnection problems between different hardware and software platforms. To run
TCP/IP over a network, the existing network and transport layer software must be
replaced with TCP/IP, or the two must be merged together in some manner so that
the LAN protocol can carry TCP/IP information within its existing protocol
(encapsulation).

Routing

Routing refers to the transmission of a packet of information from one machine


through another. Each machine that the packet enters analyzes the contents of the
packet header and decides its action based on the information within the header. If
the destination address of the packet matches the machine's address, the packet
should be retained and processed by higher-level protocols. If the destination

address doesn't match the machine's, the packet is forwarded further around the
network. Forwarding can be to the destination machine itself, or to a gateway or
bridge if the packet is to leave the local network.
Routing is a primary contributor to the complexity of packet-switched networks. It is
necessary to account for an optimal path from source to destination machines, as
well as to handle problems such as a heavy load on an intervening machine or the
loss of a connection. The route details are contained in a routing table, and several
sophisticated algorithms work with the routing table to develop an optimal route for
a packet.
Creating a routing table and maintaining it with valid entries are important aspects of
a protocol. Here are a few common methods of building a routing table:

• A fixed table is created with a map of the network, which must be modified
and reread every time there is a physical change anywhere on the network.

• A dynamic table is used that evaluates traffic load and messages from other
nodes to refine an internal table.

• A fixed central routing table is used that is loaded from the central repository
by the network nodes at regular intervals or when needed.

Each method has advantages and disadvantages. The fixed table approach, whether
located on each network node or downloaded at regular intervals from a centrally
maintained fixed table, is inflexible and can't react to changes in the network
topology quickly. The central table is better than the first option, simply because it is
possible for an administrator to maintain the single table much more easily than a
table on each node.
The dynamic table is the best for reacting to changes, although it does require better
control, more complex software, and more network traffic. However, the advantages
usually outweigh the disadvantages, and a dynamic table is the method most
frequently used on the Internet.
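On a Windows host, the local routing table can be inspected and changed with the route command; the addresses below are placeholders.

rem Display the local routing table:
route print

rem Add a static route to a remote network through a local gateway:
route add 10.10.0.0 mask 255.255.0.0 192.168.1.1 metric 2

rem Remove it again:
route delete 10.10.0.0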

Routing Daemons

To handle the routing tables, most UNIX systems use a daemon called routed. A few
systems run a daemon called gated. Both routed and gated can exchange RIP
messages with other machines, updating their route tables as necessary. The gated
program can also handle EGP and HELLO messages, updating tables for the
internetwork. Both routed and gated can be managed by the system administrator to
select favorable routes, or to tag a route as not reliable.
The configuration information for gated and routed is usually stored as files named
gated.cfg, gated.conf, or gated.cf. Some systems specify gated information files for
each protocol, resulting in the files gated.egp, gated.hello, and gated.rip. A sample
configuration file for EGP used by the gated process is shown here:

# @(#)gated.egp 4.1 Lachman System V STREAMS TCP source
# sample EGP config file

traceoptions general kernel icmp egp protocol ;

autonomoussystem 519 ;

rip no;

egp yes {
    group ASin 519 {
        neighbor 128.212.64.1 ;
    };
};

static {
    default gateway 128.212.64.1 pref 100 ;
};

propagate proto egp as 519 {
    proto rip gateway 128.212.64.1 {
        announce 128.212 metric 2 ;
    };
    proto direct {
        announce 128.212 metric 2 ;
    };
};

propagate proto rip {
    proto default {
        announce 0.0.0.0 metric 1 ;
    };
    proto rip {
        noannounce all ;
    };
} ;

Interior Gateway Protocols (IGP)

There are several IGPs in use, none of which have proven themselves dominant.
Usually, the choice of an IGP is made on the basis of network architecture and
suitability to the network's software requirements. RIP and HELLO, mentioned earlier,
are both examples of IGPs. Together with a third protocol called
Open Shortest Path First (OSPF), these IGPs are now examined in more detail.
Both RIP and HELLO calculate distances to a destination, and their messages contain
both a machine identifier and the distance to that machine. In general, messages
tend to be long, because they contain many entries for a routing table. Both
protocols are constantly connecting between neighbors to ensure that the machines
are active and communicating, which can cause network traffic to build.

The Routing Information Protocol (RIP)

The Routing Information Protocol found wide use as part of the University of
California at Berkeley's LAN software installations. Originally developed from two
routing protocols created at Xerox's Palo Alto Research Center, RIP became part of
UCB's BSD UNIX release, from which it became widely accepted. Since then, many
versions of RIP have been produced, to the point where most UNIX vendors have
their own enhanced RIP products. The basics are now defined by an Internet RFC.
RIP uses a broadcast technology (showing its LAN heritage). This means that the
gateways broadcast their routing tables to other gateways on the network at regular
intervals. This is also one of RIP's downfalls, because the increased network traffic
and inefficient messaging can slow networks down compared to other IGPs. RIP
tends to obtain information about all destinations in the autonomous system to which
the gateways belong. Like GGP, RIP is a vector-distance system, sending a network
address and distance to the address in its messages.
A machine in a RIP-based network can be either active or passive. If it is active, it
sends its routing tables to other machines. Most gateways are active devices. A
passive machine does not send its routing tables but can send and receive messages
that affect its routing table. Most user-oriented machines (such as PCs and
workstations) are passive devices. RIP employs the User Datagram Protocol (UDP)
for messaging, employing port number 520 to identify messages as originating with
RIP.

The HELLO Protocol

The HELLO protocol is used often, especially where TCP/IP installations are involved.
It is different from RIP in that HELLO uses time instead of distance as a routing
factor. This requires the network of machines to have reasonably accurate timing,
which is synchronized with each machine. For this reason, the HELLO protocol
depends on clock synchronization messages.

The primary header fields of a HELLO message are as follows:

• A checksum of the entire message

• The current date of the sending machine

• The current time of the sending machine

• A timestamp used to calculate round-trip delays

• An offset that points to the following entries

• The number of hosts that follow as a list

The Open Shortest Path First (OSPF) Protocol

The Open Shortest Path First protocol was developed by the Internet Engineering
Task Force, with the hope that it would become the dominant protocol within the
Internet. In many ways, the name "shortest path" is inaccurate in describing this
protocol's routing process (both RIP and HELLO use a shortest path method—RIP
based on distance and HELLO on time). A better description for the system would be
"optimum path," in which several criteria are evaluated to determine the best route
to a destination. The HELLO protocol is used for passing state information between
gateways and for passing basic messages, whereas the Internet Protocol (IP) is used
for the network layer.
OSPF uses the destination address and type of service (TOS) information in an IP
datagram header to develop a route. From a routing table that contains information
about the topology of the network, an OSPF gateway (more formally called a router
in the RFC, although both terms are interchangeable) determines the shortest path
using cost metrics, which factor in route speed, traffic, reliability, security, and
several other aspects of the connection. Whenever communications must leave an
autonomous network, OSPF calls this external routing. The information required for
an external route can be derived from both OSPF and EGP.
There are two types of external routing with OSPF. A Type 1 route involves the same
calculations for the external route as for the internal. In other words, the OSPF
algorithms are applied to both the external and internal routes. A Type 2 route uses
the OSPF system only to calculate a route to the gateway of the destination system,
ignoring any routes of the remote autonomous system. This has an advantage in
that it can be independent of the protocol used in the destination network, which
eliminates a need to convert metrics.

OSPF enables a large autonomous network to be divided into smaller areas, each
with its own gateway and routing algorithms. Movement between the areas is over a
backbone, or the parts of the network that route messages between areas. Care
must be taken to avoid confusing OSPF's areas and backbone terminology with those
of the Internet, which are similar but do not mean precisely the same thing. OSPF
defines several types of routers or gateways:

• An Internal Router is one for which all connections belong to the same area,
or one in which only backbone connections are made.

• A Border Router is a router that does not satisfy the description of an Internal
Router (it has connections outside an area).

• A Backbone Router has an interface to the backbone.

• A Boundary Router is a gateway that has a connection to another autonomous system.

OSPF is designed to enable gateways to send messages to each other about internetwork
connections. These routing messages are called advertisements, which are sent through
HELLO update messages. Four types of advertisements are used in OSPF:

• A Router Links advertisement provides information on a local router's (gateway)
connections in an area. This message is broadcast throughout the network.

• A Network Links advertisement provides a list of routers that are connected to a
network. It is also broadcast throughout the network.

• A Summary Links advertisement contains information about routes outside the area.
It is sent by border routers to their entire area.

• An Autonomous System Extended Links advertisement contains information on routes in
external autonomous systems. It is used by boundary routers but covers the entire
system.

OSPF maintains several tables for determining routes, including the protocol data
table (the high-level protocol in use in the autonomous system), the area data table
or backbone data table (which describes the area), the interface data table
(information on the router-to-network connections), the neighbor data table
(information on the router-to-router connections), and a routing data table (which
contains the route information for messages). Each table has a structure of its own,
the details of which are not needed for this level of discussion. Interested readers are
referred to the RFC for complete specifications.

OSPF Packets

As mentioned earlier, OSPF uses IP for the network layer. The OSPF specifications
provide for two reserved multicast addresses: one for all routers that support OSPF
(224.0.0.5) and one for a designated router and a backup router (224.0.0.6). The IP
protocol number 89 is reserved for OSPF. When IP sends an OSPF message, it uses
the protocol number and a Type of Service (TOS) field value of 0. The IP precedence
field is also usually set higher than for normal IP messages.
OSPF uses two header formats. Every OSPF message begins with the primary OSPF message
header, whose Version Number field identifies the version of the OSPF protocol in use
(version 2).
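As a small illustration of the reserved values quoted above, the sketch below classifies an IP datagram as OSPF traffic using the protocol number 89 and the two reserved multicast addresses. It is a classification helper only, not a parser of the OSPF message header, and the function name is hypothetical.

```python
# Recognise OSPF traffic from the reserved protocol number and multicast addresses.

OSPF_PROTOCOL_NUMBER = 89
ALL_OSPF_ROUTERS = "224.0.0.5"        # all routers that support OSPF
ALL_DESIGNATED_ROUTERS = "224.0.0.6"  # the designated router and its backup


def classify_ip_datagram(protocol: int, destination: str) -> str:
    if protocol != OSPF_PROTOCOL_NUMBER:
        return "not OSPF"
    if destination == ALL_OSPF_ROUTERS:
        return "OSPF message to all OSPF routers"
    if destination == ALL_DESIGNATED_ROUTERS:
        return "OSPF message to the designated router and its backup"
    return "OSPF message addressed to a specific neighbour"


print(classify_ip_datagram(89, "224.0.0.5"))  # OSPF message to all OSPF routers
```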

INTRODUCTION TO CLUSTER


Cluster

The basic idea of a cluster is multiple physical servers acting as a single virtual server.
Clustering is the grouping of computers so that they function as a single system. It
provides high availability and fault tolerance for a server, service, or application: if one
member of the cluster becomes unavailable, another computer takes over the load so that
the service or application remains available. The clustering technique was introduced in
Windows NT 4.0 Enterprise Edition.

A cluster consists of:

• Nodes
• Network
• Operating system
• Cluster middleware

Cluster classification:

• High performance clusters (HPC)
• High throughput clusters (HTC)
• High availability clusters (HA)
• Load balancing clusters
• Hybrid clusters

WINDOWS CLUSTER TYPES


WINDOWS CLUSTER

Types of Windows Cluster:

1. Server Cluster

A server cluster supports stateful applications: applications that share a common
database and keep long-running state in it, such as SQL Server or a mail server. Clusters
of this kind are called server clusters.

2. Network Load Balancing (NLB) Server

A Network Load Balancing (NLB) cluster supports stateless applications, in which each of
the two or more cluster hosts maintains its own individual copy of the data. Examples
include FTP, DHCP, and IIS clusters.

3. Component Load Balancing (CLB) Server

Component Load Balancing (CLB) was moved into Application Center 2000. It supports
COM+ applications, such as components written in Visual Basic or .NET.


SERVER CLUSTER ARCHITECTURE

Server Cluster Architecture:


NETWORK LOAD BALANCING (NLB) CLUSTER ARCHITECTURE

Network Load Balancing (NLB) Cluster Architecture:


COMPONENT LOAD BALANCING (CLB) CLUSTER ARCHITECTURE

Component Load Balancing cluster Architecture:


CLUSTER COMPONENTS


Cluster Service Components:


The cluster service runs on the Windows Server 2003 or Windows 2000 operating system,
using network drivers, device drivers, and resource instrumentation processes designed
specifically for server clusters. The closely related, cooperating components of the
cluster service are:
Checkpoint Manager—saves application registry keys in a cluster directory stored on
the quorum resource.
Database Manager—maintains cluster configuration information.
Event Log Replication Manager—replicates event log entries from one node to all other
nodes in the cluster.
Failover Manager—performs resource management and initiates appropriate actions,
such as startup, restart, and failover.
Global Update Manager—provides a global update service used by cluster components.
Log Manager—writes changes to recovery logs stored on the quorum resource.
Membership Manager—manages cluster membership and monitors the health of other
nodes in the cluster.
Node Manager—assigns resource group ownership to nodes based on group preference
lists and node availability.
Resource Monitors—monitor the health of each cluster resource using callbacks to
resource DLLs. Resource Monitors run in a separate process and communicate with the
cluster service through remote procedure calls (RPCs) to protect the cluster service from
individual failures in cluster resources.
Backup/Restore Manager—backs up, or restores, quorum log file and all checkpoint
files, with help from the Failover Manager and the Database Manager.

Figure 5 - Diagram of cluster service components


Node Manager
The Node Manager runs on each node and maintains a local list of nodes that belong to
the cluster. Periodically, the Node Manager sends messages—called heartbeats—to its
counterparts running on other nodes in the cluster to detect node failures. It is essential
that all nodes in the cluster always have exactly the same view of cluster membership.
In the event that one node detects a communication failure with another cluster node, it
multicasts a message to the entire cluster, causing all members to verify their view of the
current cluster membership. This is called a regroup event. The cluster service prevents
write operations to any disk devices common to all nodes in the cluster until the
membership has stabilized. If the Node Manager on an individual node does not respond,
the node is removed from the cluster and its active resource groups are moved to another
active node. To select the node to which a resource group should be moved, Node
Manager identifies the node on which a resource group prefers to run and the possible
owners (nodes) that may own individual resources. On a two-node cluster, the Node
Manager simply moves resource groups from a failed node to the surviving node. On a
cluster with three or more nodes, Node Manager selectively distributes resource groups
among the surviving nodes.
Node Manager also acts as a gatekeeper, allowing “joiner” nodes into the cluster, as well
as processing requests to add or evict a node.
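The resource-group placement decision described above can be sketched as follows. The group records, preferred-node lists, and node names are illustrative assumptions, not the cluster service's internal data structures.

```python
# A sketch of moving resource groups off a failed node.

def place_groups(failed_node, groups, surviving_nodes):
    """Return {group_name: new_owner} for groups owned by the failed node."""
    placement = {}
    for group in groups:
        if group["owner"] != failed_node:
            continue
        # Prefer nodes on the group's preference list that are alive and allowed.
        candidates = [n for n in group["preferred_nodes"]
                      if n in surviving_nodes and n in group["possible_owners"]]
        if not candidates:
            candidates = [n for n in surviving_nodes if n in group["possible_owners"]]
        if candidates:
            placement[group["name"]] = candidates[0]
    return placement


groups = [
    {"name": "SQL", "owner": "node1",
     "preferred_nodes": ["node1", "node3"], "possible_owners": ["node1", "node2", "node3"]},
    {"name": "FileShare", "owner": "node1",
     "preferred_nodes": ["node2"], "possible_owners": ["node1", "node2"]},
]
print(place_groups("node1", groups, ["node2", "node3"]))
# {'SQL': 'node3', 'FileShare': 'node2'}
```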
Database Manager
The Database Manager provides the functions needed to maintain the cluster
configuration database, which contains information about all of the physical and logical
entities in a cluster. These entities include the cluster itself, cluster node membership,
resource groups, resource types, and descriptions of specific resources, such as disks and
IP addresses.
Persistent and volatile information stored in the configuration database is used to track
the current and desired state of the cluster. Each Database Manager running on each node
in the cluster cooperates to maintain consistent configuration information across the
cluster. One-phase commits are used to ensure the consistency of the copies of the
configuration database on all nodes. The Database Manager also provides an interface for
use by the other cluster service components, such as the Failover Manager and the Node
Manager. This interface is similar to the registry interface exposed by the Win32
application programming interface (API) set. The key difference is that changes made to
cluster entities are recorded by the Database Manager in both the registry and the
quorum resource (changes are written to the quorum resource by the Log Manager).
Registry changes are then replicated to other nodes by the Global Update Manager.

The Database Manager supports transactional updates of the cluster hive and exposes the
interfaces only to internal cluster service components. This transactional support is
typically used by the Failover Manager and the Node Manager in order to get replicated
transactions.
Database Manager functions, with the exceptions of those for transactional support, are
exposed to clients by the cluster API. The primary clients for these Database Manager
APIs are resource DLLs that use the Database Manager to save private properties to the
cluster database. Other clients typically use the Database Manager to query the cluster
database.

Checkpoint Manager
To ensure that the cluster service can recover from a resource failure, the Checkpoint
Manager checks registry keys when a resource is brought online and writes checkpoint
data to the quorum resource when the resource goes offline. Cluster-aware applications
use the cluster configuration database to store recovery information. Applications that are
not cluster-aware store information in the local server registry.
The Checkpoint Manager also supports resources having application-specific registry
trees instantiated at the cluster node where the resource comes online (a resource can
have one or more registry trees associated with it). The Checkpoint Manager watches for
changes made to these registry trees if the resource is online. If it detects that changes
have been made, it creates a dump of the registry tree on the owner node of the resource
and then moves the file to the owner node of the quorum resource. The Checkpoint
Manager performs some amount of “batching” so that frequent changes to registry trees
do not place too heavy a load on the cluster service.
Log Manager
The Log Manager, along with the Checkpoint Manager, ensures that the recovery log on
the quorum resource contains the most recent configuration data and change checkpoints.
If one or more cluster nodes are down, configuration changes can still be made to the
surviving nodes. While these nodes are down, the Database Manager uses the Log
Manager to log configuration changes to the quorum resource.
As the failed nodes return to service, they read the location of the quorum resource from
their local cluster hives. Since the hive data could be stale, mechanisms are built in to
detect invalid quorum resources that are read from a stale cluster configuration database.
The Database Manager will then request the Log Manager to update the local copy of the
cluster hive using the checkpoint file in the quorum resource, and then replay the log file
in the quorum disk starting from the checkpoint log sequence number. The result is a
completely updated cluster hive.
Cluster hive snapshots are taken whenever the quorum log is reset and once every four
hours.
Failover Manager
The Failover Manager is responsible for stopping and starting resources, managing
resource dependencies, and for initiating failover of resource groups. To perform these
actions, it receives resource and system state information from Resource Monitors and
the cluster node.
The Failover Manager is also responsible for deciding which nodes in the cluster should
own which resource group. When resource group arbitration finishes, nodes that own an
individual resource group turn control of the resources within the resource group over to
Node Manager. When failures of resources within a resource group cannot be handled by
the node that owns the group, Failover Managers on each node in the cluster work
together to re-arbitrate ownership of the resource group.
If a resource fails, Failover Manager might restart the resource, or take the resource
offline along with its dependent resources. If it takes the resource offline, it will indicate
that the ownership of the resource should be moved to another node and be restarted
under ownership of the new node. This is referred to as failover.

Failover
Failover can occur automatically because of an unplanned hardware or application
failure, or can be triggered manually by the person who administers the cluster. The
algorithm for both situations is identical, except that resources are shut down in an
orderly fashion for a manually initiated failover, while their shut down may be sudden
and disruptive in the failure case.
When an entire node in a cluster fails, its resource groups are moved to one or more
available servers in the cluster. Automatic failover is similar to planned administrative
reassignment of resource ownership. It is, however, more complicated, because the
orderly steps of a normal shutdown may have been interrupted or may not have happened
at all. As a result, extra steps are required in order to evaluate the state of the cluster at
the time of failure.
Automatic failover requires determining what groups were running on the failed node and
which nodes should take ownership of the various resource groups. All nodes in the
cluster that are capable of hosting the resource groups negotiate among themselves for
ownership. This negotiation is based on node capabilities, current load, application
feedback, or the node preference list. The node preference list is part of the resource
group properties and is used to assign a resource group to a node. Once negotiation of the
resource group is complete, all nodes in the cluster update their databases and keep track
of which node owns the resource group.
In clusters with more than two nodes, the node preference list for each resource group can
specify a preferred server plus one or more prioritized alternatives. This enables
cascading failover, in which a resource group may survive multiple server failures, each
time cascading or failing over to the next server on its node preference list. Cluster
administrators can set up different node preference lists for each resource group on a
server so that, in the event of a server failure, the groups are distributed amongst the
cluster’s surviving servers.
An alternative to this scheme, commonly called N+I failover, sets the node preference
lists of all cluster groups. The node preference list identifies the standby cluster nodes to
which resources should be moved during the first failover. The standby nodes are servers
in the cluster that are mostly idle or whose own workload can be easily pre-empted in the
event a failed server’s workload must be moved to the standby node.
A key issue for cluster administrators when choosing between cascading failover and N+I
failover is the location of the cluster’s excess capacity for accommodating the loss of a
server. With cascading failover, the assumption is that every other server in the cluster
has some excess capacity to absorb a portion of any other failed server’s workload. With
N+I failover, it is assumed that the “+I” standby servers are the primary location of
excess capacity.
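The two preference-list styles described above can be contrasted with a small sketch. The node and group names are hypothetical: in cascading failover each group lists a chain of alternatives, while in N+I every group points at the standby node(s).

```python
# Illustrative comparison of cascading failover and N+I failover preference lists.

cascading_preferences = {
    "GroupA": ["node1", "node2", "node3", "node4"],  # fails over down the chain
    "GroupB": ["node2", "node3", "node4", "node1"],
}

n_plus_i_preferences = {
    # "+I" standby: node4 is mostly idle and absorbs any failed workload first.
    "GroupA": ["node1", "node4"],
    "GroupB": ["node2", "node4"],
}


def first_available(preferences, group, healthy_nodes):
    """Pick the first preferred node that is still healthy."""
    return next((n for n in preferences[group] if n in healthy_nodes), None)


healthy = {"node2", "node3", "node4"}        # node1 has failed
print(first_available(cascading_preferences, "GroupA", healthy))  # node2
print(first_available(n_plus_i_preferences, "GroupA", healthy))   # node4
```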

Failback
When a node comes back online, the Failover Manager can decide to move some
resource groups back to the recovered node. This is referred to as failback. The properties
of a resource group must have a preferred owner defined in order to failback to a
recovered or restarted node. Resource groups for which the recovered or restarted node is
the preferred owner will be moved from the current owner to the recovered or restarted
node.
Failback properties of a resource group may include the hours of the day during which
failback is allowed, plus a limit on the number of times failback is attempted. In this way
the cluster service provides protection against failback of resource groups at peak
processing times, or to nodes that have not been correctly recovered or restarted.
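A minimal sketch of the failback policy check described above follows. The property names (allowed hours, attempt limit) mirror the prose, not the real cluster API.

```python
# Permit failback only inside the allowed window and under the retry limit.
from datetime import datetime


def failback_allowed(now, allowed_hours, attempts_so_far, max_attempts):
    start_hour, end_hour = allowed_hours
    in_window = start_hour <= now.hour < end_hour
    return in_window and attempts_so_far < max_attempts


# Allow failback only between 01:00 and 05:00, with at most 3 attempts.
print(failback_allowed(datetime(2024, 1, 1, 2, 30), (1, 5), 0, 3))   # True
print(failback_allowed(datetime(2024, 1, 1, 14, 0), (1, 5), 0, 3))   # False
```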

Global Update Manager


The Global Update Manager (GUM) is used by internal cluster components such as the
Failover Manager, Node Manager, or Database Manager in order to replicate changes to
the cluster database across cluster nodes in an atomic (either all healthy nodes are
updated, or none are updated) and serial (total order is maintained) fashion. GUM
updates are typically initiated as a result of a cluster API call. When a GUM update is
initiated at a client node, it first requests a “locker” node to obtain a global (where
“global” means “across all cluster nodes”) lock. If the lock is not available the client will
wait for it.
When the lock becomes available, the locker will grant the lock to the client, and issue
the update locally (on the locker node). The client will then issue the update to all other
healthy nodes, including itself. If an update succeeds on the locker, but fails on some
other node, then that node will be removed from the current cluster membership. If the
update fails on the locker node itself, the locker merely returns the failure to the client.
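The locker-based update flow described above is sketched below. The threading, node list, and class and function names are simplified, hypothetical stand-ins for the real GUM machinery.

```python
# A sketch of a locker-based global update: locker first, then every healthy node.
import threading


class LockerNode:
    """The node currently holding the global lock used for GUM updates."""

    def __init__(self):
        self._lock = threading.Lock()

    def global_update(self, healthy_nodes, apply_change):
        with self._lock:                      # a client waits here if the lock is busy
            try:
                apply_change("locker")        # the update is issued on the locker first
            except Exception as error:
                return {"failed_on_locker": error, "evicted": []}
            evicted = []
            for node in healthy_nodes:        # then on every other healthy node, in order
                try:
                    apply_change(node)
                except Exception:
                    evicted.append(node)      # a node that misses the update is evicted
            return {"failed_on_locker": None, "evicted": evicted}


locker = LockerNode()
result = locker.global_update(["node1", "node2", "node3"],
                              lambda node: print(f"cluster database updated on {node}"))
print(result)  # {'failed_on_locker': None, 'evicted': []}
```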
Backup/Restore Manager
The cluster service exposes one API for cluster database backup, Backup Cluster
Database. Backup Cluster Database contacts the Failover Manager layer first, which then
forwards the request to the owner node of the quorum resource. The Database Manager
layer in the owner node is then invoked which then makes a backup of the quorum log
file and all checkpoint files.
Apart from the API, the cluster service also registers itself at startup as a backup writer
with the Volume Shadow Copy Service (VSS). When backup clients invoke the VSS to
perform system state backup, it invokes the cluster service to perform the cluster database
backup via a series of entry point calls. The server code in the cluster service directly
invokes the Failover Manager to perform the backup and the rest of the operation is
common with the Backup Cluster Database API.
The cluster service exposes another API, Restore Cluster Database, for restoring the
cluster database from a backup path. This API can only be invoked locally from one of
the cluster nodes. When this API is invoked, it first stops the cluster service, restores the
cluster database from the backup, sets a registry value that contains the backup path, and
then starts the cluster service. The cluster service at startup detects that a restore is
requested and proceeds to restore the cluster database from the backup path to the
quorum resource.

Eventlog Replication Manager


The cluster service interacts with the eventlog service in a cluster to replicate eventlog
entries to all cluster nodes. When the cluster service starts up on a node, it invokes a
private API in the local eventlog service and requests the eventlog service to bind back to
the cluster service. The eventlog service, in response, will bind to the clusapi interface
using LRPC. From then on whenever the eventlog service receives an event to be
logged, it will log it locally, and then it will drop that event into a persistent batch queue
and schedule a timer thread to fire within the next 20 seconds if there is no timer thread
active already. When the timer thread fires, it will drain the batch queue and send the
events as one consolidated buffer to the cluster service via the cluster API interface to
which the eventlog service has bound already.
Once the cluster service receives batched events from the eventlog service, it will drop
those events into a local “outgoing” queue and return from the RPC. An event
broadcaster thread in the cluster service will drain this queue and send the events in the
queue via intracluster RPC to all active remote cluster nodes. The server side API drops
the received events into an “incoming” queue. An eventlog writer thread then drains this
queue and requests the local eventlog service through a private RPC to write the events
locally.
The cluster service uses LRPC to invoke the eventlog private RPC interfaces. The
eventlog service also uses LRPC to invoke the cluster API interface for requesting the
cluster service to replicate events.
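The batching behaviour described above (queue locally, flush as one buffer at most every 20 seconds) can be sketched as follows. The replication callback stands in for the intracluster RPC, and the class name is hypothetical.

```python
# A sketch of batched event replication with a 20-second flush timer.
import queue
import threading

FLUSH_INTERVAL_SECONDS = 20


class EventBatcher:
    def __init__(self, replicate):
        self._queue = queue.Queue()
        self._replicate = replicate        # callable that sends a list of events
        self._timer = None
        self._lock = threading.Lock()

    def log_event(self, event):
        self._queue.put(event)             # the event is also logged locally (not shown)
        with self._lock:
            if self._timer is None:        # schedule a flush only if none is pending
                self._timer = threading.Timer(FLUSH_INTERVAL_SECONDS, self._flush)
                self._timer.start()

    def _flush(self):
        with self._lock:
            self._timer = None
        batch = []
        while not self._queue.empty():
            batch.append(self._queue.get())
        if batch:
            self._replicate(batch)         # one consolidated buffer to the other nodes


batcher = EventBatcher(lambda events: print(f"replicating {len(events)} events"))
batcher.log_event({"id": 1034, "source": "ClusSvc"})
```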

Membership Manager
The Membership Manager (also known as the Regroup Engine) is responsible for
maintaining a consistent view of which cluster nodes are currently up or down at a
particular moment in time. The heart of the component is a regroup algorithm that is
invoked whenever there is evidence that one or more nodes have failed. At the end of the
algorithm, all participating nodes will reach identical conclusions on the new cluster
membership.

QUORUM

Quorum:

Each cluster has a special resource known as the quorum resource. A quorum resource
can be any resource that does the following:

• Provides a means for arbitration leading to membership and cluster state decisions.

• Provides physical storage to store configuration information.

A quorum log is simply a configuration database for server clustering. It holds cluster
configuration information such as which servers are part of the cluster, what resources
are installed in the cluster, and what state those resources are in (for example, online or
offline). The quorum log is located by default at \MSCS\quolog.log.


QUORUM TYPES

Standard Quorum
As mentioned above, a quorum is simply a configuration database for Microsoft Cluster
Service, and is stored in the quorum log file. A standard quorum uses a quorum log file
that is located on a disk hosted on a shared storage interconnect that is accessible by all
members of the cluster.
Note: It is possible to configure server clusters to use the local hard disk on a server to
store the quorum, but this is only supported for testing and development purposes, and
should not be used in a production environment.

Each member connects to the shared storage by using some type of interconnect (such as
SCSI or Fibre Channel), with the storage consisting of either external hard disks (usually
configured as RAID disks), or a storage area network (SAN), where logical slices of the
SAN are presented as physical disks.
Note: It is important that the quorum uses a physical disk resource, as opposed to a disk
partition, as the entire physical disk resource is moved during failover.
Standard quorums are available in Windows NT 4.0, Enterprise Edition, Windows 2000,
Advanced Server, Windows 2000, Datacenter Edition, Windows Server 2003, Enterprise
Edition, and Windows Server 2003, Datacenter Edition, and are illustrated in Figure 7.

Figure 7 -- Diagram of a standard quorum in a four-node cluster

Majority Node Set Quorums


A majority node set (MNS) quorum is a single quorum resource from a server cluster
perspective. However, the data is actually stored by default on the system disk of each
member of the cluster. The MNS resource takes care to ensure that the cluster
configuration data stored on the MNS is kept consistent across the different disks.
Figure 8 depicts a four-node cluster with an MNS quorum configuration.
Majority node set quorums are available in Windows Server 2003 Enterprise Edition, and
Windows Server 2003 Datacenter Edition.

Figure 8 -- Diagram of an MNS quorum in a four-node cluster

While the disks that make up the MNS could, in theory, be disks on a shared storage
fabric, the MNS implementation that is provided as part of Windows Server 2003 uses a
directory on each node’s local system disk to store the quorum data. If the configuration
of the cluster changes, that change is reflected across the different disks.
This ensures that a majority of the nodes have an up-to-date copy of the data. The cluster
service itself will only start up, and therefore bring resources online, if a majority of the
nodes configured as part of the cluster are up and running the cluster service. If there are
fewer nodes, the cluster is said not to have quorum and therefore the cluster service waits
(trying to restart) until more nodes try to join. Only when a majority or quorum of nodes
are available, will the cluster service start up, and the resources be brought online. In this
way, since the up-to-date configuration is written to a majority of the nodes regardless of
node failures, the cluster will always guarantee that it starts up with the latest and most
up-to-date configuration.
In the case of a failure or split-brain, all partitions that do not contain a majority of nodes
are terminated. This ensures that if there is a partition running that contains a majority of
the nodes, it can safely start up any resources that are not running on that partition, safe in
the knowledge that it can be the only partition in the cluster that is running resources
(since all other partitions are terminated).
Given the differences in the way the shared disk quorum clusters behave compared to
MNS quorum clusters, care must be taken when deciding which model to choose. For
example, if you only have two nodes in your cluster, the MNS model is not
recommended, as failure of one node will lead to failure of the entire cluster, since a
majority of nodes is impossible.
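The majority rule described above can be made concrete with a short sketch; the node counts are examples only.

```python
# The cluster service only runs when more than half of the configured nodes are up.

def has_quorum(configured_nodes, nodes_up):
    majority = configured_nodes // 2 + 1
    return nodes_up >= majority


for total in (2, 3, 4, 5):
    tolerated = total - (total // 2 + 1)
    print(f"{total}-node MNS cluster survives {tolerated} node failure(s)")
# A two-node MNS cluster tolerates zero failures, which is why it is not recommended.
```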


HEARTBEAT

Heartbeat:

A heartbeat is the communication message exchanged between cluster nodes. It is much
like a ping, used to test whether a node is still available. If heartbeats fail, the failover
process occurs.

Heartbeat messages are divided into two types:

1. Unicast messages

2. Multicast messages
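A minimal sketch of heartbeat-based failure detection follows; the interval and the missed-heartbeat limit are illustrative values, not the cluster service's actual defaults.

```python
# Declare a node failed once it has been silent for several heartbeat intervals.
import time

HEARTBEAT_INTERVAL = 1.2   # seconds between heartbeats (illustrative)
MISSED_LIMIT = 5           # consecutive misses before a node is declared failed


def detect_failed_nodes(last_heartbeat_times, now=None):
    """Return the nodes considered failed, which triggers the failover process."""
    now = time.time() if now is None else now
    deadline = HEARTBEAT_INTERVAL * MISSED_LIMIT
    return [node for node, last_seen in last_heartbeat_times.items()
            if now - last_seen > deadline]


last_seen = {"node1": time.time(), "node2": time.time() - 30}
print(detect_failed_nodes(last_seen))  # ['node2'] -> node2 missed its heartbeats
```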


INSTALLATION & CONFIGURATION


Windows 2003 server Clustering Installation:


Important Preparation Steps:

• Double check to ensure that all the nodes are working properly and are configured
identically (hardware, software, drivers, etc.).
• Verify that none of the nodes has been configured as a Domain Controller.
• Check to verify that all drives are NTFS and are not compressed.
• Verify that you have disabled NetBIOS for all private network cards.
• Verify that there are no network shares on any of the shared drives.
• Check to verify that no antivirus software has been installed on the nodes.
Antivirus software can reduce the availability of clusters and must not be installed
on them. If you want to check for possible viruses on a cluster, you can always
install the software on a non-node and then run scans on the cluster nodes
remotely.

1. Open the cluster administrator from the administrative tools.


2. From the Action drop-down box, select Create New Cluster and click OK. This brings
up the New Server Cluster Wizard.
3. Click Next to begin the wizard.

4. Type the domain name and cluster name in the appropriate box.

5. Enter the computer name, or browse to select it.

6. Review the advanced configuration options if needed.


7. This step is very important. What the Cluster Wizard does is to verify that everything
is in place before it begins the actual installation of the cluster service on the node. As
you can see above, the wizard goes through many steps, and if you did all of your
preparation correctly, when the testing is done, you will see a green bar under Tasks
completed, and you will be ready to proceed. But if you have not done all the preliminary
steps properly, you may see yellow or red icons next to one or more of the many tested
steps, and a green or red bar under Tasks completed.
Note:
While the green bar does indicate that you can proceed, it does not guarantee that the
cluster installation will complete successfully or be configured the way you want. If you
see any yellow warning icons, you can drill down into them and see exactly what the
warning is. Read each warning very carefully. If the warning is something unimportant to
you, it can be ignored. But in most cases, the yellow warnings need to be addressed. This
may mean you will have to abort the cluster service installation at this time to fix the
problem. Then you can try to install it again.
If you get any red warning icons next to any of the test steps, then you will also get a red
bar at the bottom, which means that you have a major problem that needs to be corrected
before you can proceed. Drill down to see the message and act accordingly. Most likely,
you will have to abort the installation, fix the issue, and then try installation again.

8. Assuming that the analysis is green and you are ready to proceed, click Next. Then
type the IP address for the cluster.

9. Type the user name, password, and domain of the cluster service account.


Verifying the Nodes with Cluster Administrator

Once you have successfully installed the two nodes of your cluster, it is a good idea to
view the nodes from Cluster Administrator. When you bring up Cluster Administrator for
the first time after creating a cluster, you may have to tell it to Open a Connection to
Cluster, and type in the name of the virtual cluster you just created. Once you have done
this, the next time you open Cluster Administrator it will automatically open this cluster
for you by default.

After opening Cluster Administrator, you should see both nodes of your cluster listed.


TROUBLESHOOTING

TROUBLESHOOTING:
When the physical disks are not powering up or spinning, Cluster service cannot initialize
any quorum resources.

Cause: Cables are not correctly connected, or the physical disks are not configured to
spin when they receive power.
Solution: After checking that the cables are correctly connected, check that the physical
disks are configured to spin when they receive power.
The Cluster service fails to start and generates an Event ID 1034 in the Event log after
you replace a failed hard disk, or change drives for the quorum resource.
Cause: If a hard disk is replaced, or the bus is reenumerated, the Cluster service may not
find the expected disk signatures, and consequently may fail to mount the disk.
Solution: Write down the expected signature from the Description section of the Event
ID 1034 error message. Then follow these steps:
1. Back up the server cluster.
2. Set the Cluster service to start manually on all nodes, and then turn off all but one node.
3. If necessary, partition the new disk and assign a drive letter.
4. Use the confdisk.exe tool (available in the Microsoft Windows Server 2003 Resource Kit) to write that signature to the disk.
5. Start the Cluster service and bring the disk online.
6. If necessary, restore the cluster configuration information.
7. Turn on each node, one at a time.

Drive on the shared storage bus is not recognized.


Cause: Scanning for storage devices is not disabled on each controller on the shared
storage bus.
Solution: Verify that scanning for storage devices is disabled on each controller on the
shared storage bus.
Many times, the second computer you turn on does not recognize the shared storage bus
during the BIOS scan if the first computer is running. This situation can manifest itself in
a "Device not ready" error being generated by the controller, or in substantial delays
during startup.
To correct this, disable the option to scan for devices on the shared controller.
Note
• This symptom can manifest itself as one of several errors, depending on the attached
controller. It is normally accompanied with a one- to two-minute start delay and an
error indicating the failure of some device.
Configuration cannot be accessed through Disk Management.
Under normal cluster operations, the node that owns a quorum resource locks the drive
storing the quorum resource, preventing the other nodes from using the device. If you
find that the cluster node that owns a quorum resource cannot access configuration
information through Disk Management, the source of the problem and the solution might
be one of the following:
Cause: A device does not have physical connectivity and power.
Solution: Reseat controller cards, reseat cables, and make sure the drive spins up when
you start.
Cause: You attached the cluster storage device to all nodes and started all the nodes
before installing the Cluster service on any node.
Solution: After you attach all servers to the cluster drives, you must install the Cluster
service on one node before starting all the nodes. Attaching the drive to all the nodes
before you have the cluster installed can corrupt the file system on the disk resources on
the shared storage bus.
SCSI or fibre channel storage devices do not respond.
Cause: The SCSI bus is not properly terminated.
Solution: Make sure that the SCSI bus is not terminated early and that the SCSI bus is
terminated at both ends.
Cause: The SCSI or fibre channel cable is longer than the specification allows.
Solution: Make sure that the SCSI or fibre channel cable is not longer than the cable
specification allows.
Cause: The SCSI or fibre channel cable is damaged.
Solution: Make sure that the SCSI or fibre channel cable is not damaged. (For example,
check for bent pins and loose connectors on the cable and replace it if necessary.)

The Cluster service fails and the node cannot detect the network.
In this case, you probably have a configuration problem. Check the following:
• Cause: Have you made any configuration changes recently?
Solution: If the node was recently configured, or if you have installed some resource
that required you to restart the computer, make sure that the node is still properly
configured for the network.
• Cause: Is the node properly configured?
Solution: Check that the server is properly configured for TCP/IP. Also check that the
appropriate services are running. If the node recently failed, failover has occurred; but
if the other nodes are misconfigured as well, the failover will be inadequate and client
access will fail.
An IP address added to a group in the cluster fails.
• Cause: The Internet protocol (IP) address is not unique.
Solution: The IP address must be different from every other group IP address and
every other IP address on the network.
• Cause: The IP address is not a static IP address.
Solution: The IP addresses must be statically assigned outside of a DHCP scope, or
they must be reserved by the network administrator. A simple pre-flight check is
sketched below.
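The following is a hedged sketch of such a pre-flight check: the address should not duplicate another group IP, should sit outside any DHCP scope, and should not already answer on the network. The ping heuristic and the scope-list format (CIDR strings) are assumptions for illustration.

```python
# Validate a candidate cluster IP address before adding it to a group.
import ipaddress
import subprocess


def ip_is_acceptable(candidate, existing_cluster_ips, dhcp_scopes):
    ip = ipaddress.ip_address(candidate)
    if candidate in existing_cluster_ips:
        return False, "duplicates another group IP address"
    for scope in dhcp_scopes:
        if ip in ipaddress.ip_network(scope):
            return False, "falls inside a DHCP scope; use a static address"
    # One ping (Windows syntax); a reply suggests the address is already in use.
    reply = subprocess.run(["ping", "-n", "1", candidate], capture_output=True)
    if reply.returncode == 0:
        return False, "address already responds on the network"
    return True, "ok"


print(ip_is_acceptable("192.168.1.50", ["192.168.1.10"], ["192.168.1.128/25"]))
```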
An IP address resource is unresponsive when taken offline, for example you are unable to
query its properties.
• Cause: You may not have waited long enough for the resource to go offline.

Solution: If an IP Address resource is unresponsive when taken offline, make sure that
you wait long enough for the resource to go offline.
Certain resources take time to go offline. For example, it can take up to three minutes
for the IP Address resource to go fully offline.
A resource group has failed over but will not fail back.
• Cause: The hardware and network configurations may not be valid.
Solution: Make sure that the hardware and network configurations are valid.
If any interconnect fails, failover can occur because the Cluster service does not detect
a heartbeat, or it may not even register that the node was ever online. In this case, the
Cluster service fails over the resources to the other nodes in the server cluster, but it
cannot fail back because that node is still down.
• Cause: The resource group may not have been configured to fail back immediately, or
you are not troubleshooting the problem within the allowable failback hours for the
resource.
Solution: Make sure that the resource group is configured to fail back immediately, or
that you are troubleshooting the problem within the allowable failback hours for the
resource group.
A group can be configured to fail back only during specified hours. Often,
administrators prevent failback during peak business hours. To check this, use Cluster
Administrator to view the resource failback policy.
• Cause: You restarted the node to test the failover policy for the group instead of
pressing the reset button.
Solution: Make sure that you press the reset button on the node. The resource group
will not fail back to the preferred node if you shut down and then restart the node.

All nodes appear to be functioning correctly, but you cannot access all of the drives from
one node.
• Cause: The shared drive may not be functioning.
Solution: Confirm that the shared drive is still functioning.
Try to access the drive from another node. If you can do that, check the cable from the
device to the node from which you cannot access it. If the cable is not the problem,
restart the computer and then try again to access the device. If you cannot access the
drive, check your configuration.
• Cause: The drive has completely failed.
Solution: Determine (from another node) whether the drive is functioning at all. You
may have to restart the drive (by restarting the computer) or replace the drive.
The hard disk with the resource or a dependency for the resource may have failed. You
may have to replace a hard disk. You may also have to reinstall the cluster.


FEATURES:

1. High Availability

2. High Scalability

3. Fault Tolerance

4. Load Balancing

References:

1. http://technet.microsoft.com

2. www.petri.co.il

3. www.windowsnetworking.com
