
Use the Integrated Virtualization Manager with Linux on POWER
The IBM® Integrated Virtualization Manager (IVM) is a new component
of the Virtual I/O Server, which is included with the Advanced POWER
Virtualization feature. With IVM, customers can now manage
partitions on an IBM POWER5™ server without a Hardware
Management Console (HMC). This paper presents an overview of the
functionality of IVM, lists some of the differences between IVM and
the HMC, and illustrates how to use IVM to create and manage Linux®
on POWER™ partitions.

Introduction

With IBM POWER5 and POWER5+™ processor-based systems and
Advanced POWER Virtualization (APV), there are many opportunities
for consolidation and simplification of the IT environment. Multiple
solutions can be created that take advantage of the benefits of
virtualization. Some examples of solutions that benefit from
virtualization are:

• Server consolidation
• Rapid deployment
• Application development and testing
• Support for multiple operating system environments

The Advanced POWER Virtualization hardware feature includes
software, firmware, and hardware enablement, which provide support
for logical partitioning (LPAR), virtual LAN, and virtual I/O. In addition,
servers featuring the POWER5 processor can utilize Micro-Partitioning™,
which provides the capability to configure up to 10 logical partitions
per processor.
A key component of APV is the Virtual I/O Server (VIOS). The Virtual I/O
Server provides sharing of physical resources between partitions,
including virtual SCSI and virtual networking. This allows more efficient
utilization of physical resources through sharing between partitions
and helps facilitate server consolidation.
To exploit the capabilities of APV, a system management interface is
required. This function is often provided by a Hardware Management
Console (HMC). The HMC is a dedicated workstation that runs
integrated system management software. In some system installations,
an HMC may not be required, or for some businesses it may not be a
cost-effective solution.
The Integrated Virtualization Manager (IVM) is a browser-based
management interface that is used to manage a single IBM System
p5™, IBM eServer® p5, or IBM OpenPower™ server. It can be used to
create logical partitions, manage the virtual storage and virtual
Ethernet, and view service information related to the server. IVM
provides the required system management capabilities to small and
mid-sized businesses, as well as larger businesses with distributed
environments.
IVM is provided as part of VIOS, starting with Version 1.2. The
functionality of IVM is supported in system firmware level SF235 or
later. When you install VIOS on a supported system that does not have
an HMC present or previously installed, IVM is automatically enabled on
that server. Therefore, IVM provides a simplified, cost-effective solution
for partitioning and virtualization management.
This article illustrates how to set up and use IVM and how to create and
work with Linux partitions.


IVM overview

POWER5 system configurations

IBM POWER5™ processor-based systems are manufactured in the factory default, or unmanaged, configuration. In this configuration, the system has a single predefined partition. This configuration allows the system to be used as a standalone server with all of the resources allocated to the single partition. After activating the virtualization feature, an HMC can be attached to the system's service processor to convert the unmanaged system into an HMC-managed system. As an HMC-managed system, the system can exploit virtualization, and the system's resources can be divided across multiple logical partitions.

When the system does not have an HMC available, an IVM-managed system can be created that can still utilize the virtualization and LPAR capabilities of the system. To convert the unmanaged configuration into an IVM-managed system, the VIOS is installed in the first partition on the unmanaged system. This VIOS partition owns all of the physical I/O resources of the system. Client partitions can then be created using the IVM interface. All of the client partition I/O is virtualized through the VIOS.


IVM components

System administrators can work with LPAR configurations through IVM's Web browser-based or command line interface. These interfaces are used to manage and configure the client partitions and the virtual I/O resources. The browser-based interface provides an intuitive, easy-to-use method of connecting to the VIOS partition of the managed system using standard network access. The command line interface uses an interactive console, or a Telnet session, with the VIOS partition. Since IVM is not connected to the service processor, it uses a Virtual Management Channel (VMC) to communicate with the POWER5 Hypervisor. Figure 1 depicts the VIOS and IVM components, along with their administration interfaces.

Figure 1. VIOS and IVM components
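As a taste of the command line interface, IVM implements an HMC-compatible command set in the VIOS restricted shell. The following is a minimal sketch, assuming a padmin login session; the exact options available can vary with the VIOS level:

$ lssyscfg -r sys     # display the managed system name, type, and serial number
$ lssyscfg -r lpar    # list the logical partitions and their current states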

Partitions created in an IVM managed system can utilize the following virtualization support:

• Shared processor partitions or dedicated processor partitions
• Micro-Partitioning support in shared processor partitions, providing shared processor pool usage for capped and uncapped partitions
• Uncapped partitions can utilize shared processor pool idle processing units, based on an uncapped weight
• Virtual Ethernet support allowing logical partitions to share a physical Ethernet adapter
• Virtual networks with bridges between the virtual networks and the physical Ethernet adapters
• Virtual SCSI support allowing logical partitions to share a physical SCSI adapter
• Assignment of physical disks, partial disks, or external logical unit numbers (LUNs) to client partitions
• Virtual optical support allowing logical partitions to share a CD or DVD drive
• Virtual console support, with virtual terminal console access from the VIOS partition

IVM limitations

The Integrated Virtualization Manager provides a subset of the HMC functionality. One should carefully consider business needs when deciding whether to deploy IVM or an HMC. Some of the limitations when using IVM include the following:

• Full dynamic LPAR support is only available for the VIOS partition
• All physical I/O is owned by the VIOS partition
• There is no support for redundant or multiple VIOS partitions
• Client partitions can have a maximum of 2 virtual Ethernet adapters
• Client partitions can have only one virtual SCSI adapter
• No call-home service support is provided

As these limitations suggest, there are instances where an HMC managed system may be required or more desirable. Some examples include systems that require complex logical partitioning, client partitions with dynamic LPAR support or dedicated physical I/O adapters, redundant VIOS support, and complete HMC-based service support. The Appendix provides a table that compares IVM with the HMC.


Before you begin

This article assumes that the reader has a working knowledge of Linux, POWER5 processor-based server hardware, partitioning concepts, and virtualization concepts. To set up and use IVM to create a Linux-based solution requires the following system and software components:

1. One of the following IBM POWER5 processor-based servers:
   o System p5 505, 520, and 550
   o eServer p5 510, 520, and 550
   o OpenPower 710 and 720
2. System microcode, Version SF235_160 or later.

   Notes:
   1. The Microcode update files CD-ROM image can be downloaded from IBM. (See Resources.)
   2. If a diagnostic CD is needed to install the new firmware level, the Standalone Diagnostics CD-ROM image can be downloaded from IBM. (See Resources.)

3. The virtualization feature for the System p5, eServer p5, or OpenPower server:
   o The Advanced POWER Virtualization feature, for the System p5 505 Express; the eServer p5 510, 520, and 550 and their Express models; and the System p5 520 Express, 550 Express, and 550Q Express
   o The Advanced OpenPower Virtualization feature, for the OpenPower 710 and 720
4. Virtual I/O Server Version 1.2 or greater installation CD
5. A supported Linux distribution:
   o SUSE Linux Enterprise Server for POWER (SLES9)
   o Red Hat Enterprise Linux AS for POWER (RHEL3), Update or later. Note: RHEL3 is not supported on System p5 Express servers.
   o Red Hat Enterprise Linux AS for POWER (RHEL4)
6. A PC with a serial terminal application (for example, Linux Minicom or Windows HyperTerminal) or a serial terminal
7. A 9-pin serial crossover connection (null modem) cable
8. A network connected Web browser:
   o Netscape 7.1, or higher
   o Microsoft Internet Explorer 6.0, or higher
   o Mozilla 1.7.X
   o Firefox 1.0, or higher


Setup and configuration of an IVM managed system

Setting up and configuring an IVM managed system requires a serial terminal application (Minicom on Linux or HyperTerminal on Windows, for example) running on a PC that is plugged into the system's serial port 1 by a 9-pin serial crossover, or null modem, cable. In addition, you will need to connect the flexible service processor's Link HMC1 Ethernet connection to your network. With the following steps, the flexible service processor (FSP) can be initialized and the VIOS installed.

Note: The following setup steps assume that the system is in the manufacturing default configuration, or unmanaged system state. If necessary, the system can be reset to the manufacturing default configuration by using the service processor's System Service Aids menu, Factory Configuration option.

Initialize the service processor

1. Set the terminal application's connection to 19200 bits per second, 8 data bits, no parity, 1 stop bit.
2. Power on the system and press a key on the terminal to receive the service processor prompt.
3. Log in with the User ID admin and the default password admin. When prompted to change users' passwords, change the admin password.
4. From Network Services > Network Configuration > Configure interface Eth0, set the static mode, the IP address, and the subnet mask. Other interface settings can optionally be set.

Listing 1. Interface settings

MAC address: 00:02:55:2F:FC:04
Type of IP address: Static

1. Host name (Currently: OP710)
2. Domain name (Currently: company.com)
3. IP address (Currently: 10.10.10.109)
4. Subnet mask (Currently: 255.255.255.0)
5. Default gateway (Currently: 10.10.10.1)
6. IP address of first DNS server
7. IP address of second DNS server
8. IP address of third DNS server
9. Save settings
98. Return to previous menu
99. Log out

S1>

5. Select Save settings and confirm the changes to reset the service processor.
6. Open a Web browser and connect to the IP address set on the FSP using HTTPS protocol (for example, https://10.10.10.109). The Advanced Systems Management interface (ASMI) will be displayed.

The Advanced Systems Management interface (ASMI) can now be accessed from either the serial console or the Web interface. The following steps illustrate how to set the date and time and enable virtualization of the machine through the Web ASMI. You can perform the same tasks using the FSP menu through the serial console. I recommend the use of the Web interface to access the ASMI to perform FSP-related tasks, for the following reasons:

• In order to access the FSP menus through the serial console, the machine must be in a power-off state while the FSP is powered on.
• To use the FSP menus, you are required to be physically near the machine, since a connection to a PC through a serial (null modem) cable is required.
• It is more convenient to use the Web interface to access the ASMI, and use the serial console to bring up the System Management Services (SMS) menus to install the VIO server.
7. Log in to the ASMI with User ID admin and the changed password, as shown in Figure 2.

Figure 2. ASMI login page

8. Select System Configuration > Time Of Day in the navigation area. Enter the date and time based on the UTC time. Click Save Settings.

Note: You can find the current UTC time at http://tycho.usno.navy.mil/cgi-bin/timer.pl.

Figure 3. Setting system configuration -- Time of day

9. Select Power/Restart Control > Power On/Off System. In Boot to system server firmware, select Standby, and click Save settings and power on.

Figure 4. Powering on to boot system server firmware to standby mode

10. Wait several minutes for the system to power on. If you re-display the Power On/Off System page, the current system server firmware state should be at "standby". Select On Demand Utilities > CoD Activation. Enter the CoD Activation code for the Advanced POWER Virtualization feature, and click Continue.

Figure 5. Entering the CoD activation code

Install Virtual I/O Server

Now that the FSP is set up, VIOS can be installed and configured. This will again require the use of the serial terminal application on the PC connected to the system serial port.

1. Insert the VIOS Version 1.2 disk into the system CD/DVD drive.
2. From the ASMI Web interface, select Power/Restart Control > Power On/Off System. Select Running for Boot to system server firmware, and click Save settings and power on.
3. Wait for the prompt on the serial terminal and then press 0 to select the active console.
4. Wait for the boot screen to appear on the serial console. Immediately press 1 after the word Keyboard is displayed to go to the SMS (System Management Services) Menu.

Listing 2. Boot screen on serial terminal

IBM IBM IBM IBM IBM IBM IBM IBM IBM IBM IBM IBM IBM IBM IBM IBM IBM IBM
IBM IBM IBM IBM IBM IBM IBM IBM IBM IBM IBM IBM IBM IBM IBM IBM IBM IBM
IBM IBM IBM IBM IBM IBM IBM IBM IBM IBM IBM IBM IBM IBM IBM IBM IBM IBM
IBM IBM IBM IBM IBM IBM IBM IBM IBM IBM IBM IBM IBM IBM IBM IBM IBM IBM
IBM IBM IBM IBM IBM IBM IBM IBM IBM IBM IBM IBM IBM IBM IBM IBM IBM IBM
IBM IBM IBM IBM IBM IBM IBM IBM IBM IBM IBM IBM IBM IBM IBM IBM IBM IBM
IBM IBM IBM IBM IBM IBM IBM IBM IBM IBM IBM IBM IBM IBM IBM IBM IBM IBM
IBM IBM IBM IBM IBM IBM IBM IBM IBM IBM IBM IBM IBM IBM IBM IBM IBM IBM
IBM IBM IBM IBM IBM IBM IBM IBM IBM IBM IBM IBM IBM IBM IBM IBM IBM IBM
IBM IBM IBM IBM IBM IBM IBM IBM IBM IBM IBM IBM IBM IBM IBM IBM IBM IBM
IBM IBM IBM IBM IBM IBM IBM IBM IBM IBM IBM IBM IBM IBM IBM IBM IBM IBM
IBM IBM IBM IBM IBM IBM IBM IBM IBM IBM IBM IBM IBM IBM IBM IBM IBM IBM
IBM IBM IBM IBM IBM IBM IBM IBM IBM IBM IBM IBM IBM IBM IBM IBM IBM IBM

1 = SMS Menu                 5 = Default Boot List
8 = Open Firmware Prompt     6 = Stored Boot List

Memory    Keyboard    Network    SCSI    Speaker

5. Enter the admin user account's password when prompted for it.
6. After the SMS Main Menu is displayed, select the following SMS options: Select Boot Options > Select Install/Boot Device > List all Devices > IDE CD-ROM (use SCSI CD-ROM if the drive is a SCSI CD or DVD drive).
7. Select Normal Mode Boot, followed by Yes to exit System Management Services.
8. The system begins to boot the VIOS disk image. After several minutes (possibly a long time on some systems with many I/O devices), the "Welcome to the Virtual I/O Server" boot image information will be displayed. When asked to define the system console, enter the number that is displayed as directed -- the number 2 in this example.

Listing 3. Define system console

******* Please define the System Console. *******

Type a 2 and press Enter to use this terminal as the
system console.
Pour definir ce terminal comme console systeme, appuyez
sur 2 puis sur Entree.
Taste 2 und anschliessend die Eingabetaste druecken, um
diese Datenstation als Systemkonsole zu verwenden.
Premere il tasto 2 ed Invio per usare questo terminal
come console.
Escriba 2 y pulse Intro para utilizar esta terminal como
consola del sistema.
Escriviu 1 2 i premeu Intro per utilitzar aquest
terminal com a consola del sistema.
Digite um 2 e pressione Enter para utilizar este terminal
como console do sistema.

9. Enter 1 to choose English during the install.
10. When asked to choose the installation preferences, enter 1 to choose Start Install Now with Default Settings.
11. On the System Installation Summary, make sure hdisk0 is the only disk selected and enter 1 to Continue with install.
12. The installation progress will be displayed.

Listing 4. Installation progress

Installing Base Operating System

Please wait...

Approximate          Elapsed time
% tasks complete     (in minutes)
      57                  18        67% of mksysb data restored.

13. When the installation completes, the system will reboot. Log in as user padmin with the default password padmin. When prompted, change the password.
14. To view the VIOS license agreement, enter:

license -view

15. To accept the license, enter:

license -accept

16. To create the VIOS virtual Ethernet interfaces, enter:

mkgencfg -o init

17. To find the Ethernet interface(s) that will be used for the server's external connection(s) to the network, enter:

lsdev | grep ent

The two marked with "2-Port 10/100/1000 Base-TX PCI-X Adapter" (ent0 and ent1) are the onboard Ethernet adapters.

Listing 5. Onboard Ethernet adapters

$ lsdev |grep ent
ent0 Available 2-Port 10/100/1000 Base-TX PCI-X Adapter (1410890
ent1 Available 2-Port 10/100/1000 Base-TX PCI-X Adapter (1410890
ent2 Available Gigabit Ethernet-SX PCI-X Adapter (14106802)
ent3 Available Gigabit Ethernet-SX PCI-X Adapter (14106802)
ent4 Available Virtual I/O Ethernet Adapter (l-lan)
ent5 Available Virtual I/O Ethernet Adapter (l-lan)
ent6 Available Virtual I/O Ethernet Adapter (l-lan)
ent7 Available Virtual I/O Ethernet Adapter (l-lan)
ibmvmc0 Available Virtual Management Channel
$
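If it is unclear which physical port is cabled to the network, the adapter's vital product data, including its physical location code, can help. A hedged example using the VIOS lsdev command (option support can vary by VIOS level):

$ lsdev -dev ent0 -vpd    # show vital product data and location code for ent0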

18. Enter the mktcpip command to configure the network interface(s) for the Ethernet adapter(s) that the VIOS will use. In this example, the network cable is plugged into the ent0 adapter and is connected to the 10.10.10.0 network. The interface en0, which is the corresponding VIOS network interface for the physical Ethernet device ent0, will be configured with the IP address 10.10.10.110.

$ mktcpip -hostname IBMOP_VIO -inetaddr 10.10.10.110 -interface en0 \
-netmask 255.255.255.0 -gateway 10.10.10.1 -start

19. Optionally, additional customization of VIOS can be done at this point. The system is now ready for use as an IVM-managed system using the IVM Web interface.


Using IVM to create a Linux partition

Now that VIOS is installed and configured, IVM can be used to manage the system and its resources. When creating a partition, storage should first be allocated for the partition. This storage will be used as a virtual disk for a Linux partition. A partition can then be created using a wizard. Finally, a Linux distribution can be installed into that partition. The Linux installation can be performed with a network or a CD-based install. This section demonstrates a CD-based installation.

Create default storage pool and space for a virtual disk

1. Open a Web browser with network access to the Ethernet adapter that was configured for VIOS, and connect to the IP address of the VIOS -- http://10.10.10.110. Log in to IVM using the User ID padmin and the password that was created for padmin.

Figure 6. Log in to IVM

2. The IVM View/Modify Partitions page will display. Examine the System Overview information and the Partition Details. Notice that the only partition that is currently displayed is the VIOS partition, with a default partition name that is based on the system serial number.

Figure 7. System overview and partition details

3. When creating a partition, storage must be allocated for the partition. This storage comes from a shared storage pool, which is managed by VIOS. The default storage pool is set to rootvg. When VIOS is installed, rootvg is set to hdisk0, which is the same disk that contains VIOS. It is considered a good practice on systems with several disks to create a new storage pool and define it as the default storage pool. See Resources for more information on the use of the storage pool and advanced storage configurations.

To create a storage pool with other drives on the system, from the Storage Management menu in the navigation area, click Create Devices, and click on the Advanced Create Devices tab. Then click on Create Storage Pool.

Figure 8. Create Storage Pool

4. From the Create Storage Pool window, enter the storage pool name LinuxPoolvg. Select hdisk1 and hdisk2. Click OK to create the pool.

Figure 9. Create storage pool

5. From the Storage Management menu, click Advanced View/Modify Devices. Then on the Storage Pools tab, select LinuxPoolvg. Click on the Assign as default storage pool task at the bottom of the page.

Figure 10. Assign as default storage pool

6. On the Assign as default storage pool page, click OK.
7. From the Storage Management menu, click Advanced View/Modify Devices. Then click the Advanced Create Devices tab. Click on Create Logical Volume.

Figure 11. Create logical volume

8. On the Create Logical Volume window, enter the Logical Volume Name Linux01LV, select the Storage Pool Name LinuxPoolvg, enter Logical Volume Size 20, and select GB. Then click OK.

Figure 12. Enter logical volume name and size

9. From the Storage Management menu Advanced View/Modify Devices page, the newly created Linux01LV volume can now be seen.
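The same storage objects can also be created from the padmin command line. The following is a minimal sketch, assuming the standard VIOS mkvg and mklv syntax; check the help output on your VIOS level before relying on these exact options:

$ mkvg -vg LinuxPoolvg hdisk1 hdisk2    # create the storage pool (volume group)
$ mklv -lv Linux01LV LinuxPoolvg 20G    # create a 20 GB logical volume in the pool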

Create the Linux partition

1. Create the Linux partition with the Create Partition Wizard. From the Partition Management menu, click Create Partitions. Click Start Wizard.

Figure 13. Create the Linux partition

2. The Create Partition Wizard window will open:
   1. On the Create Partition: Name window, enter Partition ID and Partition name Linux01. Click Next.
   2. On the Create Partition: Memory window, enter Assigned memory 768 and select MB. Click Next.
   3. On the Create Partition: Processors window, select Assigned processors and select Shared. Click Next.
   4. On the Create Partition: Virtual Ethernet window, select Virtual Ethernet 1 for Adapter 1. Click Next.
   5. On the Create Partition: Storage Type window, you can either create a virtual disk from the default storage pool or assign an existing virtual disk or physical volume. To utilize the 20GB Linux01LV logical volume that was already created, select Assign existing virtual disks and physical volumes. Click Next.
   6. On the Create Partition: Storage window, select the available virtual disk Linux01LV. Click Next.
   7. On the Create Partition: Optical window, to assign the DVD drive to this partition, select cd0. Click Next.
   8. Review the Create Partition: Summary. Then click Finish.

Figure 14. Review the create partition summary

3. The IVM View/Modify Partitions page will now contain the Linux01 partition.

Figure 15. IVM view/modify partitions

4. A virtual Ethernet bridge is required to provide access from the partition's virtual Ethernet to the external network. From the Virtual Ethernet Management menu, click View/Modify Virtual Ethernet. Click on the Virtual Ethernet Bridge tab. For the Virtual Ethernet ID 1, select the Physical Adapter ent0. Click Apply.

Figure 16. Virtual Ethernet bridge
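Behind the scenes, this bridge corresponds to a Shared Ethernet Adapter on the VIOS. For reference, a roughly equivalent command line sketch is shown here; the virtual adapter name ent4 is an assumption and should be confirmed against the lsdev output on your system:

$ mkvdev -sea ent0 -vadapter ent4 -default ent4 -defaultid 1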

Install Linux into the partition

1. To install Linux, insert the Linux distribution installation CD in the CD/DVD drive.
2. Start a virtual console running in the VIO server. Telnet to 10.10.10.110, log in as padmin, and enter the command mkvt -id 2 (since 2 is the partition ID for the Linux01 partition). The console will then wait until the partition is activated.

Listing 6. Virtual console

# telnet 10.10.10.110
Trying 10.10.10.110...
Connected to 10.10.10.110.
Escape character is '^]'.

telnet (IBMOP_VIO)

IBM Virtual I/O Server

login: padmin
padmin's Password:
Last login: Wed Sep 28 15:34:44 CDT 2005 on /dev/pts/0 from 10.10.10.20
$ mkvt -id 2

3. Linux can now be installed by activating the partition and using System Management Services (SMS) from the virtual console. From IVM's Partition Management menu View/Modify Partitions, select the partition Linux01, and then select the Activate task at the bottom of the page.

Figure 17. Activate the Linux partition

4. On the Activate Partitions page, click OK.
5. Switch back to the telnet session with the virtual console. As the partition boots up, press 0 to select this session as the console.
6. Wait for the boot screen to appear on the virtual console. Immediately press 1 after the word Keyboard is displayed, to go to the SMS Menu.
7. After the SMS Main Menu is displayed, select the following SMS options: Select Boot Options > Select Install/Boot Device > List all Devices > SCSI CD-ROM.

Note: Even if the physical drive is an IDE DVD drive, the virtual optical driver reports it back to SMS and Linux as a SCSI CD/DVD drive.

8. Select Normal Mode Boot, followed by Yes to exit System Management Services.
9. The system begins to boot the Linux installation CD. Proceed at this point with the standard Linux installation process.
10. When the installation is complete, the virtual console can be closed by entering ~. (tilde period).

Note: You can force a virtual console closed from the VIO Server with the command rmvt -id <partition id> (for example, rmvt -id 2).

11. Now that Linux is running in the partition, the IVM View/Modify Partitions page will display a status of "Running" and a "Linux ppc64" indicator as a reference code for the Linux01 partition.

Note: The Linux ppc64 indicator is dependent on the Linux distribution installed.

Figure 18. Status: Running


Modifying partition resources

IVM can be used to modify the system resources that are available to the partitions. The memory and the processing resources for the VIOS (partition ID 1) can be modified dynamically. This is accomplished by selecting the Partition Management menu View/Modify Partitions page, and then selecting the Properties task. Figure 19 shows an example of modifying the Processing Units assigned setting for the VIOS partition with the Properties task.

Figure 19. Modify the processing resources for the VIOS partition

Resources for the Linux partitions can also be modified from the Partition Management menu View/Modify Partitions > Properties task. However, changes to the memory, processing, and virtual Ethernet resources can only be made when the Linux partition is not active. In addition, after making the resource change on the Properties task, Linux must be rebooted. Figure 20 shows an example of changing the processing units assigned to a Linux partition.

Figure 20. Modifying the processing resources for a Linux partition

To add additional storage to a Linux partition, first create another virtual disk from the Storage Management menu Create Devices page. Click Create Virtual Disk, and enter the new virtual disk information. Then click OK. Figure 21 shows an example of creating a new virtual disk.

Figure 21. Create a virtual disk

The View/Modify Partitions > Properties task can be used to assign storage to a Linux partition. Changes can be made to the storage and optical devices properties while the partition is active. However, the storage device must not be in use by another partition. Figure 22 shows the assigning of the new virtual disk Linux01disk, created above, to the Linux01 partition.

Figure 22. Assign a virtual disk to a partition's storage

With SLES9 SP2, the SCSI bus can be rescanned with the bash shell script /bin/rescan-scsi-bus.sh to make the virtual disk available. However, with RHEL3 and RHEL4, the partition must be rebooted for the operating system to pick up the new SCSI device.

In SLES9 SP2, the lsscsi command can be used to list all SCSI devices and find the new SCSI device. In SLES or RHEL, the fdisk -l command can be used to display information about the virtual disks. The new virtual disk can then be partitioned with the fdisk command. The mkfs command can be used to build a Linux file system, and then the disk partition can be mounted with the mount command.
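Putting those steps together, the following is a minimal sketch of bringing the new virtual disk online; the device name /dev/sdb and the mount point are assumptions that depend on how the disk is enumerated in your partition:

# /bin/rescan-scsi-bus.sh    # SLES9 SP2: rescan the SCSI bus for the new disk
# lsscsi                     # confirm that the new SCSI device appeared
# fdisk -l /dev/sdb          # display information about the new virtual disk
# fdisk /dev/sdb             # create a disk partition, for example /dev/sdb1
# mkfs -t ext3 /dev/sdb1     # build a Linux file system on the new partition
# mkdir /data
# mount /dev/sdb1 /data      # mount the new storage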
An alternative method of adding more storage to a Linux partition is to extend a current virtual disk. This can be done by using the Storage Management menu View/Modify Devices, and then selecting the Extend task. Figure 23 shows an example of using the Extend Virtual Disk task to increase the Linux01LV storage space by 10GB. After the disk is extended, the Linux partition must be shut down. IVM should then be used to detach the virtual disk, and re-attach it. When the Linux partition is re-booted, the drive will be larger, due to the extended storage. Disk partitions can then be allocated with fdisk.

Figure 23. Extend an existing virtual disk
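After the reboot, the larger disk can be verified and the new space allocated from within Linux; /dev/sdb is again an assumed device name:

# fdisk -l /dev/sdb    # the reported size should now include the additional 10GB
# fdisk /dev/sdb       # allocate a new partition in the extended space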


Summary

Logical partitioning can be an integral component of a successful server consolidation strategy. With the Integrated Virtualization Manager, coupled with the performance of POWER5 systems, businesses can leverage an easy-to-use, intuitive, and cost-effective solution for creating partitions and working with virtualization resources. IVM provides a system management solution that is especially suited for small and mid-sized businesses, as well as larger businesses with distributed environments. This article discussed some of the capabilities and limitations of IVM, as well as how it can be used to work with Linux partitions. For more information about using IVM for system management, refer to Resources below.


Appendix

Comparison of IVM and HMC

IVM and the HMC can be compared across the following areas:

• Physical footprint
• Installation
• Managed operating systems supported
• Virtual console support
• User security
• Network security
• Supported hardware
• Multiple system support
• Redundancy
• Maximum number of partitions supported
• Uncapped partition support
• Dynamic Resource Movement (DLPAR)
• I/O support for AIX and Linux
• I/O support for i5/OS
• Maximum number of virtual LANs
• Fix/update process for the manager
• Adapter microcode updates
• Firmware updates
• I/O concurrent maintenance
• Scripting and automation
• Capacity on Demand
• Workload Management (WLM) groups supported
• LPAR configuration data backup and restore
• Support for multiple profiles per partition
• Serviceable event management
• Hypervisor and service processor dump support
• Remote support

Resources

Learn

• The "Virtual I/O Server: Integrated Virtualization Manager" Redpaper provides an introduction to IVM, describing its architecture and showing how to install and configure a partitioned server using its capabilities.
• "IBM Integrated Virtualization Manager: Lowering the cost of entry into POWER5 virtualization" provides an in-depth overview of IVM with some deployment examples.
• IBM eServer Hardware Information Center: Partitioning with the Integrated Virtualization Manager provides information on how to create logical partitions on a single managed system, manage the virtual storage and virtual Ethernet on the managed system, and view service information related to the managed system.
• IBM eServer Hardware Information Center provides information to familiarize you with the hardware and software required for logical partitions and to prepare you to plan for and create logical partitions on your server.
• Look at these:
  o Linux on Power Architecture Developer's Corner: Learn more about Linux on Power. Find technical documentation, education, downloads, product information, and more.
  o Linux on POWER ISV Resource Center: PartnerWorld offers a range of benefits for Business Partners who support Linux.
  o Learn about Linux at IBM
  o Search IBM Redbooks
