Topics: AIX, LVM, System Administration

LVM command history

Want to know which LVM commands were run on a system? Simply run the following command to get a list of the LVM command history:

# alog -o -t lvmcfg
To filter the output so that only the actual commands are shown:
# alog -o -t lvmcfg | grep -v -E "workdir|exited|tellclvmd"
[S 06/11/13-16:52:02:236 lvmstat.c 468] lvmstat -v testvg
[S 06/11/13-16:52:02:637 lvmstat.c 468] lvmstat -v rootvg
[S 07/20/13-15:02:15:076 extendlv.sh 789] extendlv testlv 400
[S 07/20/13-15:02:33:199 chlv.sh 527] chlv -x 4096 testlv
[S 08/22/13-12:29:16:807 chlv.sh 527] chlv -e x testlv
[S 08/22/13-12:29:26:150 chlv.sh 527] chlv -e x fslv00
[S 08/22/13-12:29:46:009 chlv.sh 527] chlv -e x loglv00
[S 08/22/13-12:30:55:843 reorgvg.sh 590] reorgvg
The LVM command history is stored in /var/adm/ras/lvmcfg.log. You can check the location and size of this circular log by running:
# alog -t lvmcfg -L
#file:size:verbosity
/var/adm/ras/lvmcfg.log:51200:3
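Should the default size of 51,200 bytes prove too small on a busy system, the attributes of this circular log can be changed with alog -C; a sketch (assuming you want to double the size, and that the -C and -s flags behave this way on your AIX level):
# alog -C -t lvmcfg -s 102400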
More detail can also be found in the lvmt log by running:
# alog -t lvmt -o

Topics: AIX, Backup & restore, LVM, System Administration

Use dd to backup raw partition

The savevg command can be used to backup user volume groups. All logical volume information is archived, as well as JFS and JFS2 mounted filesystems. However, this command cannot be used to backup raw logical volumes.

Save the contents of a raw logical volume to a file using:

# dd if=/dev/lvname of=/file/system/lvname.dd
This will create a copy of logical volume "lvname" in a file "lvname.dd" in file system /file/system. Make sure that the location you write your output file to (/file/system in the example above) has enough disk space available to hold a full copy of the logical volume. If the logical volume is 100 GB in size, you'll need 100 GB of file system space for the copy.
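To compare sizes up front, you can check the logical volume's LP count and PP size with lslv, and the free space of the target file system with df; a quick sketch, using the same placeholder names as above (the grep filters on the field names as printed by lslv):
# lslv lvname | grep -E "LPs|PP SIZE"
# df -g /file/system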

If you want to test how this works, you can create a logical volume with a file system on top of it, and create some files in that file system. Then unmount the file system, and use dd to copy the logical volume as described above.
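As a sketch of such a test setup (the names testvg and testlv, the mount point /testfs, and the output location /backup are just examples):
# mklv -y testlv -t jfs2 testvg 4
# crfs -v jfs2 -d testlv -m /testfs -A no
# mount /testfs
# touch /testfs/file1 /testfs/file2
# umount /testfs
# dd if=/dev/testlv of=/backup/testlv.dd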

Then, throw away the file system using "rmfs -r", and after that has completed, recreate the logical volume and the file system. If you now mount the file system, you will see that it is empty. Unmount the file system, and use the following dd command to restore your backup copy:
# dd if=/file/system/lvname.dd of=/dev/lvname
Then, mount the file system again, and you will see that the contents of the file system (the files you've placed in it) are back.

Topics: AIX, LVM, System Administration

Renaming disk devices

Getting disk devices named the same way on, for example, the two nodes of a PowerHA cluster can be really difficult. For us humans, though, it is very useful to have the disks named identically on all nodes, so we can recognize the disks a lot faster and don't have to worry about picking the wrong disk.

The way to get around this usually involved either creating dummy disk devices or running the configuration manager against a specific adapter, for example: cfgmgr -vl fcs0. This complicated procedure is no longer needed as of AIX 6.1 TL6 and AIX 7.1, because a new command called rendev has been made available, which makes renaming devices very easy:

# lspv
hdisk0  00c8b12ce3c7d496  rootvg  active
hdisk1  00c8b12cf28e737b  None

# rendev -l hdisk1 -n hdisk99

# lspv
hdisk0  00c8b12ce3c7d496  rootvg  active
hdisk99 00c8b12cf28e737b  None

Topics: AIX, LVM, Storage, System Administration

VGs (normal, big, and scalable)

The original VG type, commonly known as standard or normal, allows a maximum of 32 physical volumes (PVs), no more than 1016 physical partitions (PPs) per PV, and an upper limit of 256 logical volumes (LVs) per VG. Subsequently, a new VG type was introduced, referred to as big VG. A big VG allows up to 128 PVs and a maximum of 512 LVs.

AIX 5L Version 5.3 introduced a new VG type called the scalable volume group (scalable VG). A scalable VG allows a maximum of 1024 PVs and 4096 LVs. The maximum number of PPs applies to the entire VG and is no longer defined on a per-disk basis. This opens up the prospect of configuring VGs with a relatively small number of disks and fine-grained storage allocation options through a large number of small PPs. The scalable VG can hold up to 2,097,152 (2048 K) PPs. As with the older VG types, the PP size is specified in units of megabytes and must be a power of 2. The range of PP sizes starts at 1 (1 MB) and goes up to 131,072 (128 GB). This is more than two orders of magnitude above 1024 (1 GB), which is the maximum PP size for both normal and big VG types in AIX 5L Version 5.2. The new maximum PP size provides architectural support for 256-petabyte disks.

The table below shows the variation of configuration limits with different VG types. Note that the maximum number of user definable LVs is given by the maximum number of LVs per VG minus 1 because one LV is reserved for system use. Consequently, system administrators can configure 255 LVs in normal VGs, 511 in big VGs, and 4095 in scalable VGs.

VG type       Max PVs   Max LVs   Max PPs per VG          Max PP size
Normal VG     32        256       32,512 (1016 * 32)      1 GB
Big VG        128       512       130,048 (1016 * 128)    1 GB
Scalable VG   1024      4096      2,097,152               128 GB

The scalable VG implementation in AIX 5L Version 5.3 provides configuration flexibility with respect to the number of PVs and LVs that can be accommodated by a given instance of the new VG type. The configuration options allow any scalable VG to contain 32, 64, 128, 256, 512, 768, or 1024 disks and 256, 512, 1024, 2048, or 4096 LVs. You do not need to configure the maximum values of 1024 PVs and 4096 LVs at the time of VG creation to account for potential future growth. You can always increase the initial settings at a later date as required.

The System Management Interface Tool (SMIT) and the Web-based System Manager graphical user interface fully support the scalable VG. Existing SMIT panels, which are related to VG management tasks, have been changed and many new panels added to account for the scalable VG type. For example, you can use the new SMIT fast path _mksvg to directly access the Add a Scalable VG SMIT menu.

The user commands mkvg, chvg, and lsvg have been enhanced in support of the scalable VG type.
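As a sketch (the names datavg and oldvg, the disks and the 64 MB PP size are just examples; verify the exact flags and limits against the mkvg and chvg documentation for your AIX level): a scalable VG can be created directly with the -S flag, and an existing normal or big VG can be converted to scalable, while varied off, with chvg -G:
# mkvg -S -y datavg -s 64 hdisk2 hdisk3
# varyoffvg oldvg
# chvg -G oldvg
# varyonvg oldvg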

For more information:
http://www.ibm.com/developerworks/aix/library/au-aix5l-lvm.html.

Topics: AIX, LVM, System Administration

bootlist: Multiple boot logical volumes found

This describes how to resolve the following error when setting the bootlist:

# bootlist -m normal hdisk2 hdisk3
0514-229 bootlist: Multiple boot logical volumes found on 'hdisk2'.
Use the 'blv' attribute to specify the one from which to boot.
To resolve this, clear the boot records from the disks:
# chpv -c hdisk2
# chpv -c hdisk3
Verify that the disks can no longer be used to boot from by running:
# ipl_varyon -i
Then re-run bosboot on both disks:
# bosboot -ad /dev/hdisk2
bosboot: Boot image is 38224 512 byte blocks.
# bosboot -ad /dev/hdisk3
bosboot: Boot image is 38224 512 byte blocks.
Finally, set the bootlist again:
# bootlist -m normal hdisk2 hdisk3
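To double-check the result, the configured boot list can be displayed with the -o flag:
# bootlist -m normal -o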
Another way around it is by specifying hd5 using the blv attribute:
# bootlist -m normal hdisk2 blv=hd5 hdisk3 blv=hd5
This will set the correct boot logical volume, but the error will show up if you ever run the bootlist command again without the blv attribute.

Topics: AIX, LVM, System Administration

Mirrorvg without locking the volume group

When you run the mirrorvg command, it will (by default) lock the volume group it is run against. As a result, you have no way of knowing what the status is of the sync process that occurs after mirrorvg has run the mklvcopy commands for all the logical volumes in the volume group. Especially with very large volume groups, this can be a problem.

The solution, however, is easy: run the mirrorvg command with the -s option to prevent it from running the sync. Then, when mirrorvg has completed, run syncvg yourself with the -P option.

For example, if you wish to mirror the rootvg from hdisk0 to hdisk1:

# mirrorvg -s rootvg hdisk1
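Since this example mirrors the rootvg, also make sure a boot image is created on the newly added disk (bosboot is used the same way in the bootlist article above):
# bosboot -ad /dev/hdisk1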
Of course, make sure the new disk is included in the boot list for the rootvg:
# bootlist -m normal hdisk0 hdisk1
Now rootvg is mirrored, but not yet synced; run "lsvg -l rootvg" and you'll see the logical volume copies are still marked stale. So run the syncvg command yourself. With the -P option you can specify the number of threads that should be started to perform the sync process; usually, you can specify at least 2 to 3 times the number of cores in the system. Using the -P option has an extra benefit: there will be no lock on the volume group, allowing you to run "lsvg rootvg" in another window to check the status of the sync process.
# syncvg -P 4 -v rootvg
And in another window:
# lsvg rootvg | grep STALE | xargs
STALE PVs: 1 STALE PPs: 73
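If you want to wait for the synchronization to finish from a script, a simple polling loop will do; a sketch, assuming the "STALE PPs" count is the last field on its line in the lsvg output (as in the default layout):
# until [ "$(lsvg rootvg | awk '/STALE PPs/ {print $NF}')" = "0" ] ; do
sleep 60
done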

Topics: AIX, LVM, System Administration

File system creation time

To determine the time and date a file system was created, you can use the getlvcb command. First, figure out which logical volume is used for a particular file system. For example, if you want to know this for the /opt file system:

# lsfs /opt
Name         Nodename Mount Pt VFS   Size    Options Auto Accounting
/dev/hd10opt --       /opt     jfs2  4194304 --      yes  no
So file system /opt is located on logical volume hd10opt. Then run the getlvcb command:
# getlvcb -AT hd10opt
  AIX LVCB
  intrapolicy = c
  copies = 2
  interpolicy = m
  lvid = 00f69a1100004c000000012f9dca819a.9
  lvname = hd10opt
  label = /opt
  machine id = 69A114C00
  number lps = 8
  relocatable = y
  strict = y
  stripe width = 0
  stripe size in exponent = 0
  type = jfs2
  upperbound = 32
  fs = vfs=jfs2:log=/dev/hd8:vol=/opt:free=false:quota=no
  time created  = Thu Apr 28 20:26:36 2011
  time modified = Thu Apr 28 20:40:38 2011
You can clearly see the "time created" for this file system in the example above.
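If you are only interested in the creation time, the two steps can be combined into a single line; a sketch that assumes the lsfs output layout shown above (a header line followed by one data line):
# getlvcb -AT $(lsfs /opt | awk 'NR==2 {print $1}' | sed 's#^/dev/##') | grep "time created"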

Topics: AIX, LVM, System Administration

Logical volumes with customized owner / group / mode

Some applications, for example Oracle when using raw logical volumes, may require specific access to logical volumes. Oracle will require that the raw logical volume is owned by the oracle account, and it may or may not require custom permissions.

The default values for a logical volume are: dev_uid=0 (owned by user root), dev_gid=0 (owned by group system) and dev_perm=432 (mode 660). You can check the current settings of a logical volume by using the readvgda command:

# readvgda vpath51 | egrep "lvname|dev_|Logical"
lvname:         testlv (i=2)
dev_uid:        0
dev_gid:        0
dev_perm:       432
If the logical volume was created with, or has been modified to use, customized owner/group/mode values, the dev_* fields will show the current uid/gid/perm values, for example:
# chlv -U user -G staff -P 777 testlv
# ls -als /dev/*testlv
   0 crwxrwxrwx 1 user staff 57, 3 Mar 10 14:39 /dev/rtestlv
   0 brwxrwxrwx 1 user staff 57, 3 Mar 10 14:39 /dev/testlv
# readvgda vpath51 | egrep "lvname|dev_|Logical"
lvname:         testlv (i=2)
dev_uid:        3878
dev_gid:        1
dev_perm:       511
When the volume group is exported and re-imported, this information is lost:
# errpt
# exportvg testvg
# importvg -y testvg vpath51
testvg
# ls -als /dev/*testlv
   0 crw-rw---- 1 root system 57, 3 Mar 10 15:11 /dev/rtestlv
   0 brw-rw---- 1 root system 57, 3 Mar 10 15:11 /dev/testlv
To prevent this from happening, make sure to use the -R option of importvg, which will restore any customized settings:
# chlv -U user -G staff -P 777 testlv
# ls -als /dev/*testlv
   0 crwxrwxrwx 1 user staff 57, 3 Mar 10 15:11 /dev/rtestlv
   0 brwxrwxrwx 1 user staff 57, 3 Mar 10 15:11 /dev/testlv
# readvgda vpath51 | egrep "lvname|dev_|Logical"
lvname:         testlv (i=2)
dev_uid:        3878
dev_gid:        1
dev_perm:       511
# varyoffvg testvg
# exportvg testvg
# importvg -Ry testvg vpath51
testvg
# ls -als /dev/*testlv
   0 crwxrwxrwx 1 user staff 57, 3 Mar 10 15:14 /dev/rtestlv
   0 brwxrwxrwx 1 user staff 57, 3 Mar 10 15:14 /dev/testlv
Never use the chown/chmod/chgrp commands to change these same settings on the logical volume devices. It will appear to work; however, the updates will not be written to the VGDA, and as soon as the volume group is exported and re-imported on the system, the updates will be gone:
# chlv -U root -G system -P 660 testlv
# ls -als /dev/*testlv
   0 crw-rw---- 1 root system 57, 3 Mar 10 15:14 /dev/rtestlv
   0 brw-rw---- 1 root system 57, 3 Mar 10 15:14 /dev/testlv
# chown user.staff /dev/testlv /dev/rtestlv
# chmod 777 /dev/testlv /dev/rtestlv
# ls -als /dev/*testlv
   0 crwxrwxrwx 1 user staff 57, 3 Mar 10 15:14 /dev/rtestlv
   0 brwxrwxrwx 1 user staff 57, 3 Mar 10 15:14 /dev/testlv
# readvgda vpath51 | egrep "lvname|dev_|Logical"
lvname:         testlv (i=2)
dev_uid:        0
dev_gid:        0
dev_perm:       360
Notice above how the chlv command changed the owner to root, the group to system, and the permissions to 660. Even after the chown and chmod commands are run, and the changes are visible on the device files in /dev, the changes are not seen in the VGDA. This is confirmed when the volume group is exported and imported, even when using the -R option:
# varyoffvg testvg
# exportvg testvg
# importvg -Ry testvg vpath51
testvg
# ls -als /dev/*testlv
   0 crw-rw---- 1 root system 57, 3 Mar 10 15:23 /dev/rtestlv
   0 brw-rw---- 1 root system 57, 3 Mar 10 15:23 /dev/testlv
So, when you have customized user/group/mode settings for logical volumes, and you need to export and import the volume group, always make sure to use the -R option when running importvg.

Also, make sure never to use the chmod/chown/chgrp commands on logical volume block and character devices in /dev, but use the chlv command instead, to make sure the VGDA is updated accordingly.

Note: A regular volume group does not store any customized owner/group/mode settings in the VGDA; they are only stored for big and scalable volume groups. If you are using a regular volume group with customized owner/group/mode settings for logical volumes, you will have to use the chmod/chown/chgrp commands to set them again, especially after exporting and re-importing the volume group.

Topics: AIX, Backup & restore, LVM, Performance, Storage, System Administration

Using lvmstat

One of the best tools to look at LVM usage is lvmstat. It can report the bytes read from and written to logical volumes. Using that information, you can determine which logical volumes are used the most.

Gathering LVM statistics is not enabled by default:

# lvmstat -v data2vg
0516-1309 lvmstat: Statistics collection is not enabled for
        this logical device. Use -e option to enable.
As you can see from the output, it is not enabled, so you need to enable it for each volume group prior to running the tool:
# lvmstat -v data2vg -e
The following command takes a snapshot of LVM information every second for 10 intervals:
# lvmstat -v data2vg 1 10
This view shows the most utilized logical volumes on your system since you started the data collection. This is very helpful when drilling down to the logical volume layer when tuning your systems.
# lvmstat -v data2vg

Logical Volume    iocnt   Kb_read  Kb_wrtn   Kbps
  appdatalv      306653  47493022   383822  103.2
  loglv00            34         0     3340    2.8
  data2lv           453    234543   234343   89.3         
What are you looking at here?
  • iocnt: Reports back the number of read and write requests.
  • Kb_read: Reports back the total data (kilobytes) from your measured interval that is read.
  • Kb_wrtn: Reports back the amount of data (kilobytes) from your measured interval that is written.
  • Kbps: Reports back the amount of data transferred in kilobytes per second.
You can use the -d option for lvmstat to disable the collection of LVM statistics.
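For example, to stop collecting statistics for the data2vg volume group again:
# lvmstat -v data2vg -d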

Topics: AIX, Backup & restore, LVM, Performance, Storage, System Administration

Spreading logical volumes over multiple disks

A common issue on AIX servers is that logical volumes are configured on only a single disk, sometimes causing high disk utilization on a small number of disks in the system, and impacting the performance of the application running on the server.

If you suspect that this might be the case, first try to determine which disks are saturated on the server. Any disk that is in use more than 60% of the time should be considered. You can use commands such as iostat, sar -d, nmon and topas to determine which disks show high utilization (see the example after the lspv command below). If they do, check which logical volumes are defined on that disk, for example on an IBM SAN disk:

# lspv -l vpath23
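To spot the saturated disks in the first place, interval-based reporting works well, for example with a 60-second interval repeated three times (interval and count are arbitrary):
# iostat -d 60 3
# sar -d 60 3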
It is always a good idea to spread the logical volumes on a disk over multiple disks. That way, the logical volume manager will spread the disk I/O over all the disks that are part of the logical volume, utilizing the queue_depth of all those disks and greatly improving performance where disk I/O is concerned.

Let's say you have a logical volume called prodlv of 128 LPs, which is sitting on one disk, vpath408. To see the allocation of the LPs of logical volume prodlv, run:
# lslv -m prodlv
Let's also assume that you have a large number of disks in the volume group, in which prodlv is configured. Disk I/O usually works best if you have a large number of disks in a volume group. For example, if you need to have 500 GB in a volume group, it is usually a far better idea to assign 10 disks of 50 GB to the volume group, instead of only one disk of 512 GB. That gives you the possibility of spreading the I/O over 10 disks instead of only one.

To spread the disk I/O for prodlv over 8 disks instead of just one, you can create an extra logical volume copy on these 8 disks, and then later on, when the logical volume is synchronized, remove the original logical volume copy (the one on the single disk vpath408). So, divide 128 LPs by 8, which gives you 16 LPs. You can assign 16 LPs for logical volume prodlv on each of the 8 disks, giving it a total of 128 LPs.

First, check if the upper bound of the logical volume is set to at least 9. Check this by running:
# lslv prodlv
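To see just that value, you can filter the lslv output on the UPPER BOUND field:
# lslv prodlv | grep -i "upper bound"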
The upper bound determines on how many disks a logical volume can be created. You'll need the 1 disk, vpath408, on which the logical volume is already located, plus the 8 other disks that you're creating a new copy on. Never create a copy on the same disk: if that single disk fails, both copies of your logical volume will fail as well. It is usually a good idea to set the upper bound of the logical volume a lot higher, for example to 32:
# chlv -u 32 prodlv
Next, determine whether you actually have 8 disks with at least 16 free LPs each in the volume group. You can do this by running:
# lsvg -p prodvg | sort -nk4 | grep -v vpath408 | tail -8
vpath188  active  959   40  00..00..00..00..40
vpath163  active  959   42  00..00..00..00..42
vpath208  active  959   96  00..00..96..00..00
vpath205  active  959  192  102..00..00..90..00
vpath194  active  959  240  00..00..00..48..192
vpath24   active  959  243  00..00..00..51..192
vpath304  active  959  340  00..89..152..99..00
vpath161  active  959  413  14..00..82..125..192
Note how in the command above the original disk, vpath408, was excluded from the list.

Each of the disks listed by the command above should have at least 1/8th of the size of the logical volume free before you can create a logical volume copy for prodlv on it.

Now create the logical volume copy. The magical option you need to use is "-e x" with the logical volume commands; it will spread the logical volume over all available disks. If you want to make sure that the logical volume is spread over only 8 specific disks, and not all the available disks in the volume group, make sure you specify those 8 disks:
# mklvcopy -e x prodlv 2 vpath188 vpath163 vpath208 \
vpath205 vpath194 vpath24 vpath304 vpath161
Now check again with "lslv -m prodlv" if the new copy is created correctly:
# lslv -m prodlv | awk '{print $5}' | grep vpath | sort -dfu | \
while read pv ; do
result=`lspv -l $pv | grep prodlv`
echo "$pv $result"
done
The output should look similar to this:
vpath161 prodlv  16  16  00..00..16..00..00  N/A
vpath163 prodlv  16  16  00..00..00..00..16  N/A
vpath188 prodlv  16  16  00..00..00..00..16  N/A
vpath194 prodlv  16  16  00..00..00..16..00  N/A
vpath205 prodlv  16  16  16..00..00..00..00  N/A
vpath208 prodlv  16  16  00..00..16..00..00  N/A
vpath24  prodlv  16  16  00..00..00..16..00  N/A
vpath304 prodlv  16  16  00..16..00..00..00  N/A
Now synchronize the logical volume:
# syncvg -l prodlv
And remove the original logical volume copy:
# rmlvcopy prodlv 1 vpath408
Then check again:
# lslv -m prodlv
Now, what if you have to extend the logical volume prodlv later on with another 128 LPs, and you still want to maintain the spreading of the LPs over the 8 disks? Again, you can use the "-e x" option when running the logical volume commands:
# extendlv -e x prodlv 128 vpath188 vpath163 vpath208 \
vpath205 vpath194 vpath24 vpath304 vpath161
You can also use the "-e x" option with the mklv command to create a new logical volume from the start with the correct spreading over disks.
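A sketch of such a mklv command (the logical volume name newlv, the jfs2 type and the size of 128 LPs are just examples):
# mklv -e x -t jfs2 -y newlv prodvg 128 vpath188 vpath163 vpath208 \
vpath205 vpath194 vpath24 vpath304 vpath161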
