
How to setup the NFS subsystem to mount the Siebel Filesystem [ID 759070.1]
Modified: Apr 5, 2012

Type: BULLETIN

Status: PUBLISHED

Priority: 1

In this Document
Purpose
Scope and Application
How to setup the NFS subsystem to mount the Siebel Filesystem

Applies to:
Siebel CRM - Version: 6.0.0 [2832] to 8.1.1 [21112] - Release: V6 to V8
Information in this document applies to any platform.
Purpose
This document describes important configuration prerequisites for setting up the Siebel filesystem correctly. It applies to all
types of shared filesystems and in particular covers the required setup when the filesystem server host is connected using an
NFS filesystem.
Scope and Application
Siebel administrators and network architects
How to setup the NFS subsystem to mount the Siebel Filesystem
The Siebel filesystem is generally installed on a central global file server that hosts the filesystem and shares it with all connected
clients. Clients in this context might be a Siebel server, a dedicated client, a Siebel document server, etc.
To make sure that a file on the filesystem accessed by one client cannot be opened simultaneously by another client,
the software utilizes the filesystem's internal lock mechanism.
Therefore the file serving product selected to host the Siebel filesystem needs to implement a global locking concept.
File locking is implemented differently for the various filesystem types.

If the file server is hosted on the Windows platform using the Windows SMB protocol, then file locking is enabled by default and no
additional steps need to be taken.
The same holds true if a Samba file server running on a Unix host is selected to provide multi-platform SMB access while the
Siebel server itself runs on the Windows platform.

However, in a pure Unix deployment the remote share is usually implemented using NFS. Here a few configuration
steps need to be verified:
For all Unix platforms it is mandatory that the NFS server's lockd and statd daemons are enabled in addition to the basic NFS
daemons that implement mounting and accessing a share.
On several platforms the number of threads used by these locking daemons needs to be tuned; otherwise they might not be able
to manage the high volume of concurrent lock requests that a large-scale Siebel system will generate.
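
As a quick sanity check, the rpcinfo utility can be used to confirm that the lock manager (nlockmgr) and status monitor (status)
services are registered on the file server in addition to the basic nfs and mountd services; the host name siebfs01 below is only a
placeholder:

rpcinfo -p siebfs01 | egrep 'nfs|mountd|nlockmgr|status'

If nlockmgr or status is missing from the output, the lockd and statd daemons are not running on the server.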

a) NFS server hosted on AIX:


# chssys -s rpc.lockd -a 511
# stopsrc -s rpc.lockd; startsrc -s rpc.lockd
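
To confirm that the new thread argument was stored in the subsystem definition and that the daemon is active again after the
restart, the standard SRC commands can be used (a verification sketch only):

# lssrc -S -s rpc.lockd
# lssrc -s rpc.lockd

The first command prints the subsystem definition including the command arguments (the new thread value should appear there);
the second shows whether rpc.lockd is currently active.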
b) NFS server hosted on Solaris:
/usr/lib/nfs/lockd [nthreads]
nthreads should initially be set to a value of 200. It can also be set by defining the LOCKD_SERVERS parameter in the nfs
configuration file.
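
On Solaris 10 and later the LOCKD_SERVERS parameter is typically defined in /etc/default/nfs and the lock manager runs under
SMF. A minimal sketch of the persistent change, using the initial value of 200 recommended above (edit an existing
LOCKD_SERVERS entry instead of appending a second one if the file already contains it):

echo "LOCKD_SERVERS=200" >> /etc/default/nfs
svcadm restart svc:/network/nfs/nlockmgr

Restarting the nlockmgr service makes the new thread count effective without rebooting the server.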


c) NFS server hosted on HP-UX:
lockd thread tuning is only available with HP-UX 11i v3 (Itanium). The syntax is the same as on Solaris. Therefore it is recommended
to use only HP-UX 11i v3 as the NFS file server platform.
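
Since the thread tuning is only available with HP-UX 11i v3, it is worth confirming the release level of the intended file server host
before planning the deployment (11i v3 reports itself as release B.11.31):

uname -r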

d) NFS server on Linux:


For Linux deployments it is strongly recommended that only the NFS v4 protocol is used.
NFS v4 implements a more robust lock subsystem and should therefore be used whenever possible.
Contact your Linux administrator to get NFS v4 properly enabled.
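
A minimal sketch of how the share could be mounted with NFS v4 enforced on a Linux client is shown below; the server name
siebfs01 and the mount points are placeholders only:

mount -t nfs -o vers=4 siebfs01:/export/siebelfs /siebelfs
nfsstat -m

The corresponding /etc/fstab entry would use the same vers=4 option. The nfsstat -m output shows the NFS version that was
actually negotiated for each mount.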

Please note that the local lock mount option (llock), which is available on the Solaris and HP-UX platforms, is not supported by Siebel.
The Siebel filesystem relies on global, system-wide file locking. Local locking will cause data inconsistencies, and it is therefore
not supported to mount a share using the llock option.
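
One way to verify on Solaris or HP-UX that no share has been mounted with local locking is to check the mount options recorded
for the active mounts; the following command should produce no output for the Siebel filesystem share:

grep llock /etc/mnttab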

As for the NFS clients, on the AIX platform it is recommended to enable lockd on every client machine as well, for better load
distribution.
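
On an AIX NFS client the lock daemons can be checked and, if necessary, started with the standard SRC commands (a sketch only;
statd is started before lockd, and a permanent configuration is normally handled through /etc/rc.nfs):

# lssrc -s rpc.statd; lssrc -s rpc.lockd
# startsrc -s rpc.statd; startsrc -s rpc.lockd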

When you consider implementing another file sharing technology such as GPFS, SAN or NAS, you need to make sure that a file locking
concept equivalent to the above is available. Please contact the vendor of this filesystem for details on how locking can be
implemented.

IMPORTANT NOTE:
In order to reduce the load on the remote file server's locking subsystem it is strongly recommended to set the anonymous
user's preference file to read-only. When a file is set to read-only, no write lock request is issued to the remote server
by the application. Since this file is not altered during the anonymous part of the login procedure, it is safe to revoke the
write permission.
Because the anonymous user preference file is accessed for each session login, setting this file to read-only can
significantly decrease the number of network lock requests.
For example, to set the preference file for the user GUESTCST to read-only, run the following commands:
cd filesystem/userpref
chmod a-w "GUESTCST&Siebel Universal Agent.spf"

Please note that the read-only setting will only remain in place when the corresponding user account is exclusively
dedicated to anonymous login. If this account is also used for a regular session login, the preference file will be
updated and its attributes reset to read-write.
Since it is best practice to have a dedicated account used only for anonymous login, this restriction usually does not
apply.
If the anonymous user preference file no longer exists in the userpref folder, it will be recreated with read-write
attributes, and the write attribute then needs to be removed manually again.
