
Authorization List

Create an authorization list -- CRTAUTL
Secure objects with the authorization list -- EDTOBJAUT (if you are using an authorization list to secure an object, change its public authority to *AUTL)
Add users to the authorization list -- EDTAUTL or ADDAUTLE
Check the authorization list -- DSPAUTL (displays all the user authorities to the list; F15 lists all the objects secured by the list)
Related object authority commands: 1. DSPOBJAUT 2. GRTOBJAUT 3. EDTOBJAUT
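As a minimal sketch of that sequence (the library PAYLIB, file PAYFILE, list PAYLST, and user JSMITH are hypothetical names):

CRTAUTL AUTL(PAYLST) TEXT('Payroll objects')                           /* create the list           */
ADDAUTLE AUTL(PAYLST) USER(JSMITH) AUT(*CHANGE)                        /* add a user to the list    */
GRTOBJAUT OBJ(PAYLIB/PAYFILE) OBJTYPE(*FILE) AUTL(PAYLST)              /* secure the object by list */
GRTOBJAUT OBJ(PAYLIB/PAYFILE) OBJTYPE(*FILE) USER(*PUBLIC) AUT(*AUTL)  /* public authority -> *AUTL */
DSPAUTL AUTL(PAYLST)                                                   /* verify; F15 shows objects */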

Group Profile
By creating a group profile, you can authorize one user profile to a number of programs or files and then have other users "inherit" those authorizations by making them members of that group profile. Group profiles centralize security by limiting access definitions to the lowest possible number of profiles (one), while retaining the flexibility to have any number of users share those security settings. To use a group profile, first create a user profile that will later serve as the group profile; other user profiles created on the system then name it as their group and inherit its authorities. If the group profile name has a hash character in it, the authority given to it is not picked up by the members of that group profile.
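As a rough illustration (ACCTGRP, AUDGRP, and the member profiles below are hypothetical names), a group profile is just an ordinary user profile that other profiles reference:

CRTUSRPRF USRPRF(ACCTGRP) PASSWORD(*NONE) TEXT('Accounting group profile')  /* group cannot sign on        */
CHGUSRPRF USRPRF(JSMITH) GRPPRF(ACCTGRP)                                    /* JSMITH inherits ACCTGRP     */
CHGUSRPRF USRPRF(MJONES) GRPPRF(ACCTGRP) SUPGRPPRF(AUDGRP)                  /* primary plus a supplemental */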

Logged in users in the server


WRKUSRJOB USER(*ALL) STATUS(*ACTIVE) JOBTYPE(*INTERACT) -- this command lists the logged-in users, because there is one interactive job for each user signed on to the system.

List printers available in the system


WRKCFGSTS CFGTYPE(*DEV) CFGD(*PRT)

Adopted Authority
-- It adds the authority of the program owner to the authority of the user running the program.
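A minimal sketch (MYLIB/MYPGM is a hypothetical program): adopted authority is controlled by the program's user profile attribute.

CHGPGM PGM(MYLIB/MYPGM) USRPRF(*OWNER)   /* the program now adopts its owner's authority while it runs */
DSPPGM PGM(MYLIB/MYPGM)                  /* the User profile attribute now shows *OWNER                */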

Life Cycle of a job in iSeries

Submit a job -- Submitting a job to an iSeries server is where the job is created and enters the system. At this time, the properties of the job are given to it from its job description. Once the properties have been defined, the job moves to the job queue, where it waits to enter the subsystem.
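A minimal sketch of this step (the library, program, and job description names are hypothetical):

SBMJOB CMD(CALL PGM(MYLIB/NIGHTLY)) JOB(NIGHTLY) JOBD(MYLIB/BATCHJOBD) JOBQ(QGPL/QBATCH)
/* the job takes its properties from BATCHJOBD and waits on QBATCH until the subsystem pulls it in */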

The job enters a job queue Job queues are work entry points for batch jobs to enter the system. They can be thought of as "waiting rooms" for the subsystem. A number of factors affect when the job is pulled off the job queue into the subsystem, like job priority on the job queue, the sequence number of the job queue, and the maximum active jobs. When all of these factors work together, the job will be pulled off the job queue to start running in the subsystem.

The job enters the subsystem When the job enters the subsystem it becomes active. Until a job gets its activity level and memory from a memory pool, it cannot run. The job uses several pieces of information before it can receive memory to run. The subsystem description, like the job description, carries information, such as the memory pool to use, the routing entry, the maximum active jobs, and the number of active jobs currently in the subsystem.

The memory pool allocates memory to the subsystem Memory is a resource from the memory pool that the subsystem uses to run the job. The amount of memory from a memory pool, as well as how many other jobs are competing for memory affect how efficiently a job runs. Subsystems use different memory pools to support different types of jobs that run within them. The subsystem gives the memory pool the information it needs to process the order in which jobs are allocated memory, and the memory pool allocates memory for the job to run to completion.

The job finishes and moves to the output queue When a job finishes, the printer output from that job is sent to an output queue where it waits to be sent to a printer device or file. The output queue is like the job queue, in that it controls how the output is made available to the print devices. The output queue allows the user to control what files are printed first.

Work Entries
Work entries identify the sources where jobs can enter a subsystem. Specific types of work entries are used for different types of jobs. Work entries are part of the subsystem description. The following information describes the different types of work entries and how to manage them. There are five types of work entries:
1. Autostart job entries
2. Communications entries
3. Job queue entries
4. Prestart job entries
5. Workstation entries

Routing Entries -- The routing entry identifies the main storage subsystem pool to use, the controlling program to run (typically the system-supplied program QCMD), and additional run-time information (stored in the class object). Routing entries are stored in the subsystem description. A routing entry can be likened to a single entry in a shopping mall directory: customers that cannot find the store they need may use the directory to send them in the right direction. The same is true on your system -- routing entries guide the job to the correct place. Routing entries in a subsystem description specify the program to be called to control a routing step for a job running in the subsystem, which memory pool the job will use, and from which class to get the run-time attributes. Routing data identifies a routing entry for the job to use. Together, routing entries and routing data provide information about starting a job in a subsystem. Routing entries consist of these parts:
1. The subsystem description
2. Class
3. Comparison data
4. Maximum active routing steps
5. Memory pool ID
6. Program to call
7. Thread resources affinity
8. Resources affinity group
9. The sequence number

Job related commands


1. CRTJOBQ - Create job queue
2. HLDJOBQ - Hold job queue
3. RLSJOBQ - Release job queue
4. CLRJOBQ - Clear job queue
5. WRKJOBQ - Work with job queue (for job queue entries)
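For instance (MYLIB/NIGHTQ is a hypothetical queue), a job queue only feeds a subsystem once a job queue entry ties them together:

CRTJOBQ JOBQ(MYLIB/NIGHTQ) TEXT('Nightly batch queue')
ADDJOBQE SBSD(QGPL/QBATCH) JOBQ(MYLIB/NIGHTQ) MAXACT(1) SEQNBR(20)   /* attach the queue to QBATCH */
SBMJOB CMD(CALL PGM(MYLIB/NIGHTLY)) JOBQ(MYLIB/NIGHTQ)               /* place a job on the queue   */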

Jobs are placed on a job queue with the SBMJOB command.
1. HLDJOB - Hold job
2. RLSJOB - Release job
3. ENDJOB - End job
4. CHGJOB - Change job
5. WRKJOB - Work with job (for spooled file output, etc.)

EXPDATE=*PERM parameter in SAVLIB command


The expiration date parameter tells the AS/400 how long to consider the backup to be an "active" file. If you save a library as *PERM, it will always be considered "active". The next time you do a save to the same tape, if your command says to check for active files, you would receive a message indicating that active files exist on the tape. Most likely, your backup routine either reinitializes the tape, or says not to check for active files. The best way to prove this is to do a DSPTAP OPTION(*PRINT) on the tape after the backup is complete, and check the dates next to each library. If the date is current, then the backup completed properly. If the date is an old date, then your backup has not been completing properly.
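A small example of checking this (PAYLIB and TAP01 are assumed names for a library and tape device):

SAVLIB LIB(PAYLIB) DEV(TAP01) EXPDATE(*PERM)   /* the saved files on this tape never expire    */
DSPTAP DEV(TAP01) OUTPUT(*PRINT)               /* print the tape contents and check the dates  */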

Run Priority Vs Job Priority


RP (run priority) - the priority of the job while it is executing
JP (job priority) - the relative order of the job while it waits on the job queue
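Both can be changed for a job already on the system; for example (the job identifier is hypothetical):

CHGJOB JOB(123456/QPGMR/NIGHTLY) JOBPTY(3)    /* move the job up the job queue            */
CHGJOB JOB(123456/QPGMR/NIGHTLY) RUNPTY(50)   /* lower its priority once it is executing  */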

DSPPGM Command
The Display Program (DSPPGM) command displays information about a program. The display includes information about the compiler, certain processing attributes of the program, the type of program (ILE or OPM), the size of the program, and the number of parameters that can be passed to the program when it is called.
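A simple usage example (MYLIB/MYPGM is a hypothetical program):

DSPPGM PGM(MYLIB/MYPGM)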

Pre start and auto start job


Pre start job - A prestart job is a batch job that starts running before a program on a remote
system initiates communications with the server. Prestart jobs use prestart job entries in the subsystem description to determine which program, class, and storage pool to use when the jobs are started. Within a prestart job entry, you must specify attributes for the subsystem to use to create and to manage a pool of prestart jobs. Prestart jobs increase performance when you initiate a connection to a server. Prestart job entries are defined within a subsystem. Prestart jobs become active

when that subsystem is started, or they can be controlled with the Start Prestart Job (STRPJ) and End Prestart Job (ENDPJ) commands. System information that pertains to prestart jobs (such as DSPACTPJ) uses the term 'program start request' exclusively to indicate requests made to start prestart jobs, even though the information may pertain to a prestart job that was started as a result of a sockets connection request.

Notes:
* Prestart jobs can be reused, but there is no automatic cleanup for the prestart job once it has been used and subsequently returned to the pool. The number of times the prestart job is reused is determined by the value specified for the maximum number of uses (MAXUSE) value of the ADDPJE or CHGPJE CL commands. This means that resources that are used by one user of the prestart job must be cleaned up before ending use of the prestart job. Otherwise, these resources will maintain the same status for the next user that uses the prestart job. For example, a file that is opened but never closed by one user of a prestart job remains open and available to the following user of the same prestart job.
* Use the Display Active Prestart Jobs (DSPACTPJ) command to monitor the prestart jobs. For example, to monitor prestart jobs for the signon server, you must know the subsystem your prestart jobs are in (QUSRWRK or a user-defined subsystem) and the program (for example, QZSOSIGN).
* If you move jobs from the default subsystem, you must:
1. Create your own subsystem description.
2. Add your own prestart job entries using the ADDPJE command. Set the STRJOBS parameter to *YES.
If you do not do this, your jobs will run in the default subsystem.
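A sketch of such a prestart job entry, assuming a hypothetical subsystem MYLIB/MYSBS (the IBM-supplied signon server program QZSOSIGN is used only as an example of the PGM value):

ADDPJE SBSD(MYLIB/MYSBS) PGM(QSYS/QZSOSIGN) INLJOBS(5) THRESHOLD(2) ADLJOBS(2) MAXUSE(200) STRJOBS(*YES)
DSPACTPJ SBS(MYSBS) PGM(QZSOSIGN)   /* monitor the active prestart jobs for that program */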

Auto Start job


- Autostart jobs are associated with iSeries host servers. The QSERVER subsystem has an autostart job defined for the file server and database server jobs. If this job is not running, the servers cannot start. The subsystem will not end when the job disappears. If a problem occurs with this job, you may want to end and restart the QSERVER subsystem. The QSYSWRK subsystem has an autostart job defined for all of the optimized servers. This job monitors for events sent when a STRTCP command has been issued. This way, the server daemon jobs can dynamically determine when TCP/IP has become active. The daemon jobs then begin to listen on the

appropriate ports. If the autostart job is not active, and TCP/IP is started while the host servers are active, the following sequence of commands must be issued in order to start using TCP/IP:
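The autostart job entry itself is defined in the subsystem description; a minimal sketch with hypothetical names:

ADDAJE SBSD(MYLIB/MYSBS) JOB(STARTUP) JOBD(MYLIB/STARTJOBD)   /* this job starts each time MYSBS is started */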

AS 400 Subsystem
-- The subsystem is where work is processed on the system. A subsystem is a single, predefined operating environment through which the system coordinates the work flow and resource use. The system can contain several subsystems, all operating independently of each other. Subsystems manage resources. All jobs, with the exception of system jobs, run within subsystems. Each subsystem can run unique operations. For instance, one subsystem may be set up to handle only interactive jobs, while another subsystem handles only batch jobs. Subsystems can also be designed to handle many types of work. The system allows you to decide the number of subsystems and what types of work each subsystem will handle. The run-time characteristics of a subsystem are defined in an object called a subsystem description. For example, if you want to permanently change the amount of work (number of jobs) coming from a job queue into a subsystem, you only need to change the job queue entry in the subsystem description.

The controlling subsystem
The controlling subsystem is the interactive subsystem that starts automatically when the system starts, and it is the subsystem through which the system operator controls the system via the system console. It is identified in the Controlling subsystem/library (QCTLSBSD) system value. IBM supplies two complete controlling subsystem descriptions: QBASE (the default controlling subsystem) and QCTL. Only one controlling subsystem can be active on the system at any time.

When the system is in the restricted condition, most of the activity on the system has ended, and only one workstation is active. The system must be in this condition for commands such as Save System (SAVSYS) or Reclaim Storage (RCLSTG) to run. Some programs for diagnosing equipment problems also require the system to be in a restricted condition. To end this condition, you must start the controlling subsystem again. Note: There is also a batch restricted state in which one batch job can be active. When all of the subsystems, including the controlling subsystem, are ended, a restricted condition is created. You can end each subsystem individually, or you can use ENDSBS SBS(*ALL) OPTION(*IMMED).

Important: The system cannot reach the restricted state until there is only one job remaining in the controlling subsystem. Sometimes it may appear as though there is a single job remaining, but the system does not go into the restricted state. In this case you need to verify that there are no suspended system request jobs, suspended group jobs, or disconnected jobs on the remaining active display. Use the Work with Active Jobs (WRKACTJOB) command and press F14=Include to display any suspended or disconnected jobs. If these jobs exist, you need to end them in order for the system to reach the restricted state. The ENDSYS and ENDSBS functions will send a CPI091C information message to the command issuer when this condition is detected.

Subsystem description
A subsystem description is a system object that contains information defining the characteristics of an operating environment controlled by the system. The system-recognized identifier for the object type is *SBSD. A subsystem description defines how, where, and how much work enters a subsystem, and which resources the subsystem uses to perform the work. An active subsystem takes on the simple name of the subsystem description. Like a set of detailed blueprints, each subsystem description is unique, containing the specific characteristics that describe the subsystem. The description includes where work can enter the subsystem, how much work the subsystem can handle, how much main storage (memory) will be used, and how quickly jobs in the subsystem can run. You can use a subsystem description supplied with your system (with or without making changes to it), or you can create your own.

Subsystem description attributes
Subsystem description attributes are common overall system attributes. When you create a subsystem, the first step is to define the subsystem attributes.

Work entries
Work entries identify the sources where jobs can enter a subsystem. Specific types of work entries are used for different types of jobs. Work entries are part of the subsystem description.

Routing entries
The routing entry identifies the main storage subsystem pool to use, the controlling program to run (typically the system-supplied program QCMD), and additional run-time information (stored in the class object). Routing entries are stored in the subsystem description.
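Putting these pieces together, a skeleton user subsystem might be built like this (all names are hypothetical and the pool layout is only illustrative):

CRTSBSD SBSD(MYLIB/MYSBS) POOLS((1 *BASE)) MAXJOBS(*NOMAX) TEXT('Sample subsystem')
ADDJOBQE SBSD(MYLIB/MYSBS) JOBQ(MYLIB/MYJOBQ) MAXACT(5)                        /* work entry: where jobs come from */
ADDRTGE SBSD(MYLIB/MYSBS) SEQNBR(9999) CMPVAL(*ANY) PGM(QSYS/QCMD) POOLID(1)   /* catch-all routing entry          */
STRSBS SBSD(MYLIB/MYSBS)
ENDSBS SBS(MYSBS) OPTION(*CNTRLD)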

AS400 Storage Management


OS/400 and disk drives: OS/400, the operating system that runs on an AS/400, does not need to deal directly with disk drives. Beneath the operating system, a level of software called System Licensed Internal Code (SLIC) "hides" the disk drives and manages the storage of objects on those drives. A virtual address space is mapped over the existing disk space and used for addressing objects rather than disk drive IDs, cylinders, and sectors. Needed objects are copied ("paged in") from this address space on disk into the address space of main memory. Because of the way the AS/400 manages disk data, you do not generally need to worry about partitioning high-growth databases, defragmenting disks, or disk striping on your Integrated Netfinity Server. The Integrated Netfinity Server uses device drivers to share the AS/400's disk drives. These device drivers send and receive disk data to the AS/400 storage management subsystem. AS/400 storage management handles the hard disks, including spreading the hard disk images across multiple drives and applying RAID and file mirroring (if configured). Disk defragmentation software manages logical file fragmentation of the hard disk images. Because AS/400 storage management handles these tasks, running a defragmentation program on the Integrated Netfinity Server helps only in cases where "critical file system structures" can be defragmented.

Storage pools: Administrators manage storage through the concept of storage pools. You can logically join disk units to form a storage pool and place objects into this pool. This pool is called an auxiliary storage pool (ASP). Every system has at least one ASP, the system ASP, which is ASP 1. You can configure additional user ASPs, numbered 2-16. You can use storage pools to distribute your AS/400 data over several groups of disks. You can also use this concept to move less important applications or data to your older, slower disk drives.

Disk protection: AS/400 disks can be protected in two ways:

RAID-5
The RAID-5 technique groups several disks together to form an array. Each disk holds checksum information about the other disks in the same array. If a disk fails, the RAID-5 disk controller can re-create the data of the failing disk with the help of the checksum information on the other disks. When you replace the failing disk with a new one, the AS/400 can rebuild the information from the failed disk on the new (and therefore empty) disk.

Mirroring
Mirroring keeps two copies of data on two different disks. The AS/400 performs write operations on both disks at the same time, and can simultaneously perform two different read operations on the two disks of a mirrored pair. If one disk fails, the AS/400 uses information from the second disk. When you replace the failing disk, the AS/400 copies the data from the intact disk to the new disk. To further increase the level of protection, you can attach the mirrored disks to two different disk controllers. Then if one controller fails, and with it one set of disks, the other controller can keep the system up. On larger models of the AS/400, you can attach controllers to more than one bus. Attaching the two disk controllers that form a mirrored pair to two different buses increases availability even more.

You can define storage pools on your AS/400 to have different levels of protection or no protection at all. Then you can put applications and data into a storage pool with the right amount of protection, depending on how important their availability is.
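To see how the configured disk units and storage pools look on a running system, the usual display commands are shown below:

WRKDSKSTS   /* disk units, their ASP assignment, percent used, and protection status */
WRKSYSSTS   /* main storage pools and their activity levels                          */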

Object Restoration
During an object restore, the following attributes are kept:
* Owner value
* Primary group profile value
* Authorization list value
* *PUBLIC authority value
Private authorities are lost during a save/restore process, even if the target system is the same as the source system.

The only way to avoid this is to proceed as follows:
* Restore the user profiles
* Restore the objects
* Run the RSTAUT command to reapply all the private authorities
This is typically done during a system restore or migration, but not when restoring a few objects or libraries to your system.
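A compact sketch of that sequence during a migration (PAYLIB and TAP01 are assumed names):

RSTUSRPRF DEV(TAP01) USRPRF(*ALL)   /* restore the user profiles first        */
RSTLIB SAVLIB(PAYLIB) DEV(TAP01)    /* restore the objects                    */
RSTAUT USRPRF(*ALL)                 /* reapply the saved private authorities  */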

Routing Data
Routing determines how jobs are processed after they reach the subsystem. Every job has routing data associated with it. For many jobs, routing data comes from the job description or from the routing data (RTGDTA) parameter on the command that started the job. For a communications job, the routing data can come from information that the program start request receives. You add a routing entry to a subsystem description with the ADDRTGE (Add Routing Entry) command. The subsystem compares the job's routing data against the compare values in the subsystem's routing entries; a match between the job's routing data and a routing entry determines, among other things, the pool the job will run in. We have now followed the trail of a particular job as it ends up in a given system storage pool. Before we leave the topic, note that you specify another important performance-related parameter on the ADDRTGE command: the name of the class object the job uses, which determines its RUNPTY (run priority) and its time slice. You issue the CRTCLS (Create Class) command to create a class object. Thus, the same path that determines the system storage pool in which the job runs also determines the job's priority and its time slice.

Add Routing Entry (ADDRTGE)
The Add Routing Entry (ADDRTGE) command adds a routing entry to the specified subsystem description. To create a routing entry, type ADDRTGE on a command line and press F4 to prompt the command. You will then see this display:

Add Routing Entry (ADDRTGE)
Subsystem description . . . . .             Name
  Library . . . . . . . . . . .   *LIBL     Name, *LIBL, *CURLIB
Routing entry sequence number .             1-9999
Comparison data:
  Compare value . . . . . . . .
  Starting position . . . . . .   1         1-80
Program to call . . . . . . . .             Name, *RTGDTA
  Library . . . . . . . . . . .   *LIBL     Name, *LIBL, *CURLIB
Class . . . . . . . . . . . . .   *SBSD     Name, *SBSD
  Library . . . . . . . . . . .             Name, *LIBL, *CURLIB
Maximum active routing steps  .   *NOMAX    0-1000, *NOMAX
Storage pool identifier . . . .   1         1-10

Under "Subsystem description" type the name of the subsystem you are setting up the routing for, and under "Library" the library the subsystem is in. Under "Routing entry sequence number" put the sequence number to check against. Under "Starting position" type the position in the routing data at which the comparison with the compare value starts. Under "Program to call" type the name of the program to run in the first routing step, and under "Library" the library that program is in. Under "Class" type the name of the class to use, and under "Library" the library the class is in.

To remove a routing entry, type RMVRTGE on a command line and press F4 to prompt the command. You will then get this display:

Remove Routing Entry (RMVRTGE)
Subsystem description . . . . .             Name
  Library . . . . . . . . . . .   *LIBL     Name, *LIBL, *CURLIB
Routing entry sequence number .             1-9999

Under "Subsystem description" type the name of the subsystem the routing entry is in, and under "Library" the library the subsystem is in (or use the default). Under "Routing entry sequence number" type the number of the routing sequence you wish to remove. A routing entry is usable as soon as it is created.

Extended information: The Add Routing Entry (ADDRTGE) command adds a routing entry to the specified subsystem description. Each routing entry specifies the parameters used to start a routing step for a job. For example, the routing entry specifies the name of the program to run when routing data that matches the compare value in this routing entry is received.

Expanded information: You would want to set up routing entries for at least four different classes -- *HIGH, *NORMAL, *LOW, and *ANY (the catch-all, in case no match is made to the previous three). This way the job won't blow up.

Example:
Add Routing Entry (ADDRTGE)
Subsystem description . . . . .   WHSESBS01   Name
  Library . . . . . . . . . . .   WHSE01      Name, *LIBL, *CURLIB
Routing entry sequence number .   20          1-9999
Comparison data:
  Compare value . . . . . . . .
  Starting position . . . . . .   1           1-80
Program to call . . . . . . . .   QCMD        Name, *RTGDTA
  Library . . . . . . . . . . .   QSYS        Name, *LIBL, *CURLIB
Class . . . . . . . . . . . . .   WHSEH       Name, *SBSD
  Library . . . . . . . . . . .   WHSE01      Name, *LIBL, *CURLIB
Maximum active routing steps  .   *NOMAX      0-1000, *NOMAX
Storage pool identifier . . . .   1           1-10

This example creates a routing entry for warehouse number 1 in the warehouse number 1 subsystem. You might use this, for instance, for routing entries for warehouse jobs such as pull lists, picking labels, inventory sheets, etc.

The following is a list of routing-related commands:
ADDRTGE - Add routing entry
CHGRTGE - Change routing entry
RMVRTGE - Remove routing entry

RISC vs CISC
RISC - Reduced instruction set computer. CISC - Complex instruction set computer.

The simplest way to examine the advantages and disadvantages of RISC architecture is by contrasting it with its predecessor: CISC (Complex Instruction Set Computer) architecture.

Multiplying two numbers in memory
Consider the storage scheme of a generic computer: the main memory is divided into locations numbered from (row) 1: (column) 1 to (row) 6: (column) 4. The execution unit is responsible for carrying out all computations. However, the execution unit can only operate on data that has been loaded into one of the six registers (A, B, C, D, E, or F). Let's say we want to find the product of two numbers - one stored in location 2:3 and another stored in location 5:2 - and then store the product back in the location 2:3.

The CISC approach
The primary goal of CISC architecture is to complete a task in as few lines of assembly as possible. This is achieved by building processor hardware that is capable of understanding and executing a series of operations. For this particular task, a CISC processor would come prepared with a specific instruction (we'll call it "MULT"). When executed, this instruction loads the two values into separate registers, multiplies the operands in the execution unit, and then stores the product in the appropriate register. Thus, the entire task of multiplying two numbers can be completed with one instruction:

MULT 2:3, 5:2

MULT is what is known as a "complex instruction." It operates directly on the computer's memory banks and does not require the programmer to explicitly call any loading or storing functions. It closely resembles a command in a higher-level language. For instance, if we let "a" represent the value of 2:3 and "b" represent the value of 5:2, then this command is identical to the C statement "a = a * b." One of the primary advantages of this system is that the compiler has to do very little work to translate a high-level language statement into assembly. Because the length of the code is relatively short, very little RAM is required to store instructions. The emphasis is put on building complex instructions directly into the hardware.

The RISC approach
RISC processors only use simple instructions that can be executed within one clock cycle. Thus, the "MULT" command described above could be divided into three separate commands: "LOAD," which moves data from the memory bank to a register, "PROD," which finds the product of two operands located within the registers, and "STORE," which moves data from a register to the memory banks. In order to perform the exact series of steps described in the CISC approach, a programmer would need to code four lines of assembly:

LOAD A, 2:3
LOAD B, 5:2
PROD A, B
STORE 2:3, A

Operating System version of iseries


DSPSFWRSC -> F11 - here you will get all the information
GO LICPGM - here we can see lots of options; select the appropriate one
DSPDTAARA DTAARA(QUSRSYS/QSS1MRI)
DSPPTFGRP - displays all the PTF groups that have been installed on our server

Storage Capacity of server


WRKSYSSTS / DSPSYSSTS

SYSTEM COMMANDS
View/change a user beginning with ADM (with option 12 it is possible to see which objects are owned by this user):
wrkusrprf ADM*
wrkusrprf ADM* -> Option 12
Show messages for user QSYSOPR / work with messages for user QSYSOPR (e.g. to reply):
dspmsg qsysopr
wrkmsg qsysopr
Show/change system values:
WRKSYSVAL *ALL
Change the system value for the date:
WRKSYSVAL QDATE
Display the AS400 version:
GO LICPGM [Enter] -> Option 10 [F11]

Display network information:
NETSTAT
Display the contents of an audit log. Here we search for user profile changes (CP) done by ADMUSER in the given journal receiver:
DSPAUDJRNE ENTTYP(CP) USRPRF(ADMUSER) JRNRCV(AUDITSYS/AUDRCV0009) OUTPUT(*)
Show disk utilization by library:
PRTDSKINF

JOB COMMANDS
Work with a job. In this example, check when the daily saves ran and whether they finished correctly by checking the output spool files:
wrkjob $JOBNAME
wrkjob SAVDAILY
Work with output queues (job outputs):
wrkoutq [Enter] -> Select output queue
wrkoutq [Enter] -> QPRINT (system output) -> Option 5 (Work with) -> Select output and display with Option 5
List and work with active jobs:
wrkactjob
Check and create scheduled jobs (like cronjobs):
wrkjobscde
Launch a job in the "background" so the terminal isn't blocked:
sbmjob [F4] -> Enter your job/command

FILE/OBJECT/LIBRARY COMMANDS
Show whether a selected object is currently in use (and by which tasks):
wrkobjlck [F4] -> $name of obj -> Library: $libraryname -> Type: $objtype
wrkobjlck [F4] -> MYOBJ -> Library: MYLIB -> Type: *PGM
WRKOBJLCK OBJ(MYLIB/MYOBJ) OBJTYPE(*FILE)
The same for a library, not an object:
WRKOBJLCK OBJ(QSYS/MYLIB) OBJTYPE(*LIB)
Show permissions of a library and an object:
DSPOBJAUT OBJ(QSYS/MYLIB) OBJTYPE(*LIB)
DSPOBJAUT OBJ(QGPL/MYOBJ) OBJTYPE(*FILE)

Change object permissions (grant authorities):
GRTOBJAUT OBJ(QSYS/OLYFEOY) OBJTYPE(*ALL) USER(MYUSER) AUT(*ALL)
GRTOBJAUT OBJ(OLYFEOY/*ALL) OBJTYPE(*ALL) USER(MYUSER) AUT(*ALL)

BACKUP/RESTORE AND TAPE COMMANDS
Save an object into a save file (SAVF):
SAVOBJ [F4]
  Objects: name of the object to save
  Library: from which library
  Device: *SAVF [F10]
  Save file: name of the save file
  Library: where to store the SAVF
Display the contents of a SAVF:
DSPSAVF [F4]
  SAVF: name of your SAVF
  Library: where your SAVF is located
Restore an object from a SAVF:

RSTOBJ [F4]
  Objects: object to restore
  Saved library: name of the saved library in the save file
  Device: *SAVF [F10]
  Save file: name of the SAVF
  Library: library of the save file
  Restore to library: library to temporarily restore the object to
Save an object on a tape device (here the tape device's name is TAP01):
savobj [F4]
  Objects: name of the object to save
  Library: from which library
  Device: TAP01

Show the contents of the tape in tape device TAP01 and save the info in a file:
DSPTAP [F4]
  Device: TAP01
  Volume identifier: *MOUNTED
  Output: *OUTFILE [F10]
  File to receive output: TAPLIST
  Library: QGPL
Another way to show the contents of the tape on screen (one sequence number at a time):
DSPTAP DEV(TAP01)
Restore a library (MYLIB) from tape:
RSTLIB SAVLIB(MYLIB) DEV(TAP01) VOL(*MOUNTED) SEQNBR(0000000148) RSTLIB(MYLIB)
Eject the tape:
CHKTAP DEV(TAP01) ENDOPT(*UNLOAD)
Initialize the tape (like a format; all data on the tape will be deleted and the tape will be given a new volume name, BKP1):
INZTAP DEV(TAP01) NEWVOL(BKP1) CHECK(*NO)

List all objects owned by a profile


DSPUSRPRF USRPRF(name) TYPE(*OBJOWN) OUTPUT(*PRINT)
WRKOBJOWN USRPRF(name)

Primary group authority


Primary group authority has the same effect as granting the group a private authority.

Note: The system verifies a user's authority to an object in the following order:
1. The object's authority - fast path
2. The user's *ALLOBJ special authority
3. The user's specific authority to the object
When a user attempts to perform an operation on an object, the system verifies that the user has adequate authority for the operation. The system first checks authority to the library or directory path that contains the object. If the authority to the library or directory path is adequate, the system checks authority to the object itself. In the case of database files, authority checking is done at the time the file is opened, not when each individual operation to the file is performed. During the authority-checking process, when any authority is found (even if it is not adequate for the requested operation), authority checking stops and access is granted or denied. The adopted authority function is the exception to this rule: adopted authority can override any specific (and inadequate) authority found.
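Returning to primary group authority, a minimal sketch of assigning it (PAYLIB/PAYFILE and ACCTGRP are hypothetical names):

CHGOBJPGP OBJ(PAYLIB/PAYFILE) OBJTYPE(*FILE) NEWPGP(ACCTGRP)              /* make ACCTGRP the object's primary group   */
GRTOBJAUT OBJ(PAYLIB/PAYFILE) OBJTYPE(*FILE) USER(ACCTGRP) AUT(*CHANGE)   /* the group's authority is stored with the object */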

Safely deleting a user profile

Is the user profile used to run a regularly scheduled batch job or a server job? In OS/400, most recurring jobs are scheduled through the job scheduler, and that requires the scheduler entry to designate which user profile the job should run under. This can create a problem when you delete a user profile, because a scheduled job will fail if it is submitted to run under a user profile that doesn't exist. So if you're deleting a popular user profile (such as the former head of the IT department, who may have scheduled many jobs), you need to determine whether there are any jobs scheduled to run under his user profile and switch those jobs to run under another profile name. As far as I know, there is no automated procedure in iSeries Navigator (OpsNav) to scan for scheduled jobs that run under a particular user profile, but you can find this information on a 5250 green screen by using the Work with Job Schedule Entries (WRKJOBSCDE) command to create a printout containing information about all your scheduled jobs:

WRKJOBSCDE OUTPUT(*PRINT) PRTFMT(*FULL)

This command creates a detailed report of every job scheduler entry on your system, including the name of the user profile each job will run under. To find all the jobs that run under a particular user profile, display the printout created from the WRKJOBSCDE command and search for your target user profile name. When you find the user profile name in a particular job scheduler entry, you can again use the WRKJOBSCDE command to change the User parameter in that entry to an active user profile that has all the right authorities to run your job. The other gotcha in this technique is that some jobs (along with their associated user profile names) are submitted from within CL programs or during an OS/400 IPL. In these cases, you may also want to check your IPL startup program code and the startup process for any server job that is currently running under the name of the terminated user profile.

Does the user profile own any objects in the system? This step is optional because you can also delete owned objects or transfer their ownership to another user profile as you delete the profile. The important point is that OS/400 will not delete a user profile that owns objects. So another key to successfully deleting a user profile is to either change the ownership of any objects it owns or delete the objects along with the user profile. The iSeries Navigator doesn't provide an easy way to perform these functions; you can view but not work with a user's owned objects inside OpsNav. So you have to go back to the green screen once again and use the Work with Objects by

Owner (WRKOBJOWN) command to view and change ownership for each object that a user profile owns:

WRKOBJOWN USRPRF(user_profile)

This command displays all the objects owned by the target user. The WRKOBJOWN screen gives you the option to change an object's owner (9=Change owner) or to delete an owned object (4=Delete). You can perform mass ownership changes by placing a 9 in front of all the owned objects and then specifying the user profile name of the new owner in the New Owner (NEWOWN) parameter on the command line, like this:

NEWOWN(new user_profile)

When you hit the Enter key, all the owned objects marked with a 9 will be changed to use the user profile specified in the NEWOWN parameter as their new owner. If you want to delete all the objects this user owns, simply put a 4 (delete) instead of a 9 in front of each object and perform the same routine.

Is the user profile that you're going to delete a group profile? If other user profiles depend on the soon-to-be-deleted user profile for authorities by listing the profile as their primary or secondary group, you need to find those user profiles and change their group profile or supplemental groups parameters (GRPPRF and SUPGRPPRF) to another group. To find all the user profiles that are members of a group profile, use the Display User Profile (DSPUSRPRF) command with the group member option, like this:

DSPUSRPRF USRPRF(user profile) TYPE(*GRPMBR)

This shows all the user profiles that list the terminating user profile in their group profile or supplemental group parameters. Before you can delete your user profile, then, you need to change these parameters in each of the depended-on user profiles.

Now that you've done the upfront work, you're ready to delete your user profile. You can delete profiles by using either iSeries Navigator or the green screen. For iSeries Navigator, open the Users and Groups and All Users nodes, and highlight the profile that you want to delete. Right-click the profile and choose Delete from the pop-up menu that appears. A Delete User panel will appear, with three radio buttons that tell OS/400 what to do with objects the user profile owns. The Do not delete if user owns objects button tells OS/400 to leave any owned objects alone and to retain the user profile if that user owns any objects. The

Delete objects that user owns button tells OS/400 to delete all owned objects as it axes the user profile. And the Transfer objects to another user button tells OS/400 to transfer ownership to another user profile that you can select from a list. Once you make your selection and press the OK button, the user profile will be deleted and the owned objects will be changed, depending on which parameters you selected.

The user profile deletion process is similar to deleting profiles on the green screen. The big difference is that you use the Delete User Profile (DLTUSRPRF) command in one of the following three configurations:

DLTUSRPRF USRPRF(user profile) OWNOBJOPT(*NODLT)
DLTUSRPRF USRPRF(user profile) OWNOBJOPT(*DLT)
DLTUSRPRF USRPRF(user profile) OWNOBJOPT(*CHGOWN new user_profile)

In the first example, the user profile will be deleted, provided that it doesn't own any objects. The second example will delete the user profile and all objects that it owns, while the third example will delete the user profile and transfer any objects it owns to another user profile. At this point, you're finished. The user profile has been safely deleted and most objects that were affected by the profile have been modified to account for the profile's absence. This technique will take care of most user profile deletion scenarios.

Now let's consider an example where John is trying to open a database file for update. That means that to open the file successfully, John needs to have *CHANGE authority to the file -- from somewhere. Here are the steps that OS/400 and i5/OS go through to see if John has sufficient authority:

If John has *ALLOBJ special authority, then processing stops and he can do the file update. If John does not have *ALLOBJ, the system looks to see if John has a private authority to the file.
  o If John has a private authority of *CHANGE or greater, processing stops and he can do the file update.
  o If John has *USE authority, that is not sufficient. Processing stops, but John is prevented from updating the file.
  o If no private authorities are found, processing continues.
If no authority is found so far, the system looks to see if the file is secured with an authorization list.
  o If it is and John has *CHANGE authority or greater to the authorization list, processing stops and he can update the file.
  o If John has *USE authority to the authorization list, that is not sufficient. Processing stops, but John is prevented from updating the file.
  o If no private authorities to the authorization list are found or the object is not secured by an authorization list, processing continues.
If no authority is found so far and John is a member of a group profile and his group profile has *ALLOBJ, processing stops and John can update the file.

If John's group does not have *ALLOBJ, the system looks to see if the object has a primary group. If it does, it looks to see if John's group is the primary group for the object.
  o If John's group has primary group authority of *CHANGE or greater, processing stops and he can update the file.
  o If John's group has primary group authority of *USE, that is not sufficient. Processing stops, but John is prevented from updating the file.
  o If no primary group authority is found or John's group is not the group that has the primary group authority, processing continues.
If John's group does not have primary group authority, the system looks to see if John's group has a private authority to the file.
  o If John's group has a private authority of *CHANGE or greater, processing stops and he can do the file update.
  o If John's group has *USE authority, that is not sufficient. Processing stops, but John is prevented from updating the file.
  o If no private authorities are found, processing continues.
If no authority is found so far, the system looks to see if the file is secured with an authorization list.
  o If it is and John's group has *CHANGE authority or greater to the authorization list, processing stops and he can do the file update.
  o If John's group has *USE authority to the authorization list, that is not sufficient. Processing stops, but John is prevented from updating the file.
  o If no private authorities to the authorization list are found or the object is not secured by an authorization list, processing continues.
If no other authority is found, the object's *PUBLIC authority is used.
  o If *PUBLIC authority is *CHANGE or greater, processing stops and John can do the file update.
  o If *PUBLIC authority is *USE, that is not sufficient. Processing stops, but John is prevented from updating the file.
If *PUBLIC authority is *AUTL, the system looks at the authorization list securing the object.
  o If *PUBLIC authority for the authorization list is *CHANGE or greater, processing stops and John can do the file update.
  o If *PUBLIC authority for the authorization list is *USE, that is not sufficient. Processing stops, but John is prevented from updating the file.

Groups vs. Autls -- which is better?


Group profiles and authorization lists are management tools provided by i5/OS and OS/400. They are supplied to make your life as an administrator easier. You don't have to use either one. Their use is totally up to you.

If you have a set of users and they need the same set of authorities or capabilities (special authorities), then it makes a lot of sense to use a group profile. You simply specify the group profile name in the users' group profile parameter. Now, if you grant a group profile authority to an object (a file or library, for example) or give the group profile a special authority (such as *JOBCTL), then all of the users in the group get those authorities. Groups are often created by department or "role" (such as tellers, accounting clerks, payroll department, etc.). Authorization lists can be used to simplify the security administration of objects that need the same authorities. Authorization lists are often created by application. For example, you have an accounts payable application and each month a new set of files is created, but the same users need to be granted the same authorities each month as the new files are created. You may want to consider creating an authorization list and securing both the existing and newly created files with the list. Then, you grant the users' authorities to the authorization list, not the files themselves. By virtue of having authority to the authorization list, the users have that authority to anything the authorization list secures. Now to answer the question "Which is better -- the group profile approach or the authorization list approach?" The answer is: neither. It's not a matter of which is "better." It's a matter of which one makes more sense, given the problem you're trying to solve. In fact, many of my clients will grant a group profile authority to an authorization list, giving authority to every member of the group to all of the objects secured by the authorization list.
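As a rough sketch of that combined approach (the names APGRP, APLIST, and APLIB are hypothetical):

CRTAUTL AUTL(APLIST) TEXT('Accounts payable files')
ADDAUTLE AUTL(APLIST) USER(APGRP) AUT(*CHANGE)         /* the whole group gets *CHANGE through the list   */
GRTOBJAUT OBJ(APLIB/*ALL) OBJTYPE(*FILE) AUTL(APLIST)  /* secure existing (and later, new) files with it  */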

Invalid Login attempts


DSPAUDJRNE ENTTYP(PW) OUTPUT(*PRINT)
or
DSPJRN JRN(QAUDJRN) ENTTYP(PW) OUTPUT(*PRINT)
or
DSPLOG PERIOD((*AVAIL 102411) (*AVAIL *CURRENT)) MSGID(CPF2234 CPIB682 CPF1393)

Restore From an iSeries Tape


1. Select the tape drive you wish to use for the library restoration.
2. Mount or load the tape you wish to restore the library from.
3. Sign on to the iSeries processor using your user profile and password. The "Main" menu will be displayed at your workstation.
4. Enter the command "rstlib" on the command line and press the F4 key to prompt the system for additional parameters. The Additional Parameters screen will appear at your workstation.
5. Tab to the "SAVLIB" parameter and type the name of the library you wish to restore in the SAVLIB field.
6. Tab to the "DEV" (device) parameter. Type the device name of the tape drive you previously mounted the tape on. As a typical example, most autoconfigured tape drives on the iSeries begin with a "TAP" prefix, such as TAP01 or TAP02.
7. Tab to the "VOL" (volume identifier) parameter. If you know the volume ID of the tape, type it into the field; if not, simply type *MOUNTED in the field. Once the restore command is executed, the iSeries processor will determine the volume ID on its own.
8. Tab to the "SEQNBR" (sequence number) parameter. If you know the sequence number of the library on the tape, type it into the field; if not, simply type *SEARCH. Once the restore command is executed, the iSeries processor will locate the library on its own.
9. Tab to the "ENDOPT" (end of tape options) parameter. Type *UNLOAD into the field. Once the restore operation has been completed, the iSeries processor will automatically rewind the tape and eject it from the tape drive.
10. Press the "Enter" key at your workstation. This will initiate the restore process. After the library has been successfully restored, you will receive a confirmation message at the bottom of your screen. Once the restore has finished, check the system operator message queue for more detailed information regarding the restoration of the library. To begin this process, type "wrkmsgq qsysopr" on the command line and press Enter. The system operator message queue will be displayed.
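The same restore can be run non-interactively as a single command (MYLIB and TAP01 are assumed names):

RSTLIB SAVLIB(MYLIB) DEV(TAP01) VOL(*MOUNTED) SEQNBR(*SEARCH) ENDOPT(*UNLOAD)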

iSeries Console

You can interact with your server through any of these consoles: the iSeries Operations Console, the twinaxial console, the Hardware Management Console (HMC), or the Thin Console for System i5. The iSeries Operations Console offers more flexibility than the twinaxial console because the server can be accessed locally and remotely. The HMC, available on some servers, supports access through your Ethernet connection. It also provides some basic command-line functions. The Thin Console connects to select IBM System i models and provides a 5250 system console for the i5/OS operating system.

Operations Console

Operations Console allows you to use your PC to access and control the console and control panel locally and remotely. To access the server, a PC or local console must be configured. A local console configuration can be any of the following: a local console directly attached to the server, a local console directly attached to the server with remote access allowed, a local console on a network, or a local console through dial-up support. Once you configure a local console directly attached to the server with remote access allowed, you can establish connections from remote PCs or a remote console through the local console. Use Operations Console as a system console to access and administer your server. Operations Console is an installable component of iSeries Access for Windows. It allows you to use one or more PCs to access and control, either remotely or locally, the iSeries console and control panel functions.

VNC (Virtual Network Computing) - provides desktop sharing to remotely connect to and control your iSeries console(s).
RDP (Remote Desktop Protocol) - provides remote display and input over the network.
Operations Console (Ops Con) - allows you to use a local or remote PC to access and control your iSeries console and iSeries Remote Control Panel.

Twinaxial Console
The twinaxial console uses a basic command-line interface to access and manage your server, and it does not require the use of a personal computer to act as a console. You access the server through a console screen, keyboard, and twinaxial cables.

Hardware Management Console (HMC)
The HMC supports configuration and management of some IBM servers. You can manage logical partitions, manage Capacity on Demand, power systems on and off, delete and report changes in hardware, and more.

Thin Console
The Thin Console connects to select IBM System i models and provides a 5250 system console for the i5/OS operating system. IBM eServer iSeries models 270 and 8xx are not supported. The Thin Console connects directly to a Hardware Management Console (HMC) port on the system.
