Topics: Backup & restore, TSM

Start a backup from the TSM server

You can start a backup from the TSM server itself by defining a client action. For example, to start an incremental backup on a node, run:

define clientaction node_name action=incremental
You can use wild cards like * in the node name, for example:
def clienta node* act=i
You can monitor the schedule event, using the following command:
q ev * @1
You may cancel this schedule by running:
delete schedule @1
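These commands can also be collected in a TSM administrative macro and run through dsmadmc. A hypothetical sketch (the macro file name and node pattern are made up):

```
/* incr.mac: start incrementals on all NODE* clients and check the event */
define clientaction node* action=incremental
query event * @1
```

You would run it with: dsmadmc -id=admin -password=admin macro incr.mac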

Topics: Backup & restore, TSM

Tail TSM console log

The following command can be used to tail the TSM console log:

dsmadmc -console
This will allow you to continuously follow what is happening on the TSM server.

Topics: Backup & restore, TSM

Show configuration of a TSM server

To save the complete configuration of a TSM server to a file, run:

dsmadmc -id=admin -password=admin show config > /tmp/config
This assumes that you have an admin account named admin with the password admin, and it will write the output to /tmp/config.

If you wish to have comma-separated output, add the -commadelimited option (which may be abbreviated to -comma).
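Comma-delimited output is convenient for post-processing with awk. A small sketch, using made-up sample lines since the actual columns depend on the query you run:

```shell
# Hypothetical comma-delimited dsmadmc output lines (node, action, status).
sample='NODE1,Backup,Completed
NODE2,Backup,Missed'

# Print the first field of every line whose status is not "Completed".
result=$(printf '%s\n' "$sample" | awk -F',' '$3 != "Completed" {print $1}')
echo "$result"
# prints: NODE2
```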

To just display the status of the TSM server, run (this is included in the output of show config):
q status

Topics: Backup & restore, TSM

Register a new TSM admin

To register a new TSM admin, run:

register admin adminname password contact="Contact details of the new admin"
Next, grant system privilege authority to the new admin:
grant authority adminname class=sys
To remove a TSM admin, run:
remove admin adminname
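The registration steps above can also be put in a single dsmadmc macro. A hypothetical sketch (the macro file name, admin name and password are placeholders):

```
/* newadmin.mac: register a new admin and grant system authority */
register admin adminname password contact="Contact details of the new admin"
grant authority adminname class=sys
```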

Topics: Red Hat, System Administration

Red Hat: Creating a backup to ISO images

The following procedure describes how to create a full system backup to ISO images using MondoRescue. The images can later be burnt to DVD and used to recover the entire system.

First, set up the yum repository for MondoRescue:

# cd /etc/yum.repos.d/
# wget
Install MondoRescue:
# yum install mondo
Answer "y" to everything.

You will need a destination to put the ISO files in. For example, a remote NFS mount on a separate server is a good choice, so the backup is not stored locally on the same system.

Edit /etc/mindi/mindi.conf to allow for a larger RAM disk. Mindi is used by Mondo; without this change, Mindi will exit saying it ran out of space. Add to mindi.conf:
Now run the MondoRescue backup:
# mondoarchive -O -V -i -s 4480m -d /target -I / -T /tmp
You can also add the -E option to tell MondoRescue to exclude certain folders.

The -s option tells MondoRescue to make ISO images of DVD size 4480m.

The command logs to /var/log/mondoarchive.log; a /var/log/mindi.log is also written. It will also indicate the number of media images to be created. Let it run to completion, and your backup is done.

Topics: AIX, System Administration

Configuring dsh

The dsh (distributed shell) is a very useful (and powerful) utility that can be used to run commands on multiple servers at the same time. By default it is not installed on AIX, but you can install it yourself:

First, install the dsm filesets. DSM is short for Distributed Systems Management, and these filesets include the dsh command. They can be found on the AIX installation media. Install the following filesets:

# lslpp -l | grep -i dsm
  dsm.core  COMMITTED  Distributed Systems Management
  dsm.dsh   COMMITTED  Distributed Systems Management
Next, we'll need to set up some environment variables that are being used by dsh. The best way to do it, is by putting them in the .profile of the root user (in ~root/.profile), so you won't have to bother setting these environment variables manually every time you log in:
# cat .profile
alias bdf='df -k'
alias cls="tput clear"
stty erase ^?
export TERM=vt100

# For DSH
export DSH_NODE_RSH=/usr/bin/ssh
export DSH_NODE_LIST=/root/hostlist
export DSH_NODE_OPTS="-q"
export DSH_REMOTE_CMD=/usr/bin/ssh
export DCP_NODE_RCP=/usr/bin/scp
In the output from .profile above, you'll notice that the variable DSH_NODE_LIST is set to /root/hostlist; you can change this to any file name you like. DSH_NODE_LIST points to a text file with server names in it (one per line), and every hostname in that file will be used by the dsh command. So, if you put 3 hostnames in the file and then run a dsh command, that command will be executed on those 3 hosts in parallel.

Note: You may also use the environment variable WCOLL instead of DSH_NODE_LIST.

So, create file /root/hostlist (or any file that you've configured for environment variable DSH_NODE_LIST), and add hostnames in it. For example:
# cat /root/hostlist
Next, you'll have to set up the ssh keys for every host in the hostlist file. The dsh command uses ssh to run commands, so you'll have to enable password-less ssh communication from the host where you've installed dsh on (let's call that the source host), to all the hosts where you want to run commands using dsh (and we'll call those the target hosts).

To set this up, follow these steps:
  • Run "ssh-keygen -t rsa" as user root on the source and all target hosts.
  • Next, copy the contents of ~root/.ssh/id_rsa.pub from the source host into file ~root/.ssh/authorized_keys on all the target hosts.
  • Test if you can ssh from the source host to all the target hosts by running "ssh host1 date" for each target host. If you're using DNS and have fully qualified domain names configured for your hosts, test with the fully qualified domain name instead, because dsh will also resolve hostnames through DNS and thus use those instead of the short host names. The first time you run ssh from the source host to a target host you will be asked a question; answer "yes" to add an entry to the known_hosts file.
Now, log out from the source host and log back in again as root. Since you've set the environment variables in .profile for user root, .profile needs to be read again, which happens at login.

At this point, you should be able to issue a command on all the target hosts, at the same time. For example, to run the "date" command on all the servers:
# dsh date
Also, you can now copy files using dcp (notice the similarity between ssh and dsh, and scp and dcp), for example to copy a file /etc/exclude.rootvg from the source host to all the target hosts:
# dcp /etc/exclude.rootvg /etc/exclude.rootvg
Note: dsh and dcp are very powerful for running commands on, or copying files to, multiple servers. However, keep in mind that they can be just as destructive: a command such as "dsh halt -q" will halt all the servers at the same time. So triple-check any dsh or dcp command before actually running it. That is, if you value your job, of course.
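The hostlist-driven loop that dsh performs can be sketched locally in plain shell, with echo standing in for ssh so the sketch runs anywhere; the hostnames below are made up:

```shell
# Create a throwaway hostlist like the one DSH_NODE_LIST points to.
hostlist=$(mktemp)
printf 'host1\nhost2\nhost3\n' > "$hostlist"

# dsh effectively does this for each host, but via ssh and in parallel:
result=""
while read -r host; do
    # real dsh would run: ssh "$host" date
    result="$result$host "
done < "$hostlist"
rm -f "$hostlist"

echo "would run 'date' on: $result"
```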

Topics: Red Hat

Using Wodim to write an ISO image to DVD

Wodim is an easy tool to write an ISO image to DVD, and it's included with Red Hat.

In order to write an ISO image to DVD, first determine the device name of the DVD burner. Most often, it is /dev/sr0. To validate this, run:

# ls -als /dev/sr0
If that's the correct device, all you need is an ISO image. Let's say, your ISO image is located in /path/to/image.iso. In that case, use the following command to write the ISO image to DVD:
# wodim dev=/dev/sr0 -v -data /path/to/image.iso
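Before burning, it can be useful to record the image's checksum so the burned disc can be verified afterwards. A sketch, assuming sha256sum is available; it uses a stand-in temp file instead of a real ISO so it is self-contained:

```shell
# Stand-in for /path/to/image.iso, so the sketch is self-contained.
iso=$(mktemp)
printf 'demo image contents' > "$iso"

# Record the checksum of the image before burning it.
hash=$(sha256sum "$iso" | awk '{print $1}')
echo "sha256: $hash"
rm -f "$iso"
```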

Topics: Red Hat

Red Hat Cluster Suite commands

Red Hat cluster controls the startup and shutdown of all application components on all nodes within a cluster. Red Hat cluster's standard commands can be used to check the status of the cluster, and to start, stop or fail over resource groups.

Following is a list of some cluster commands.

  • To check cluster status: clustat
  • To start cluster manager: service cman start (run on both nodes within 60 seconds of each other)
  • To start cluster LVM daemon: service clvmd start (do on both nodes)
  • To start Resource group manager: service rgmanager start (do on both nodes)
  • To enable and start a user service: clusvcadm -e service_name (check with clustat for available service names in your cluster)
  • To disable and stop a user service: clusvcadm -d service_name (check with clustat for available service names in your cluster)
  • To stop Resource group manager: service rgmanager stop
  • To stop cluster LVM daemon: service clvmd stop
  • To stop cluster manager: service cman stop (Do not stop CMAN at the same time on all nodes)
  • To relocate a user service: clusvcadm -r service_name (check with clustat for available service names in your cluster)

Topics: Red Hat

How to Mount and Unmount an ISO Image in RHEL

An ISO image or .iso (International Organization for Standardization) file is an archive file that contains a disk image in the ISO 9660 file system format. The .iso extension takes its name from the ISO 9660 file system, which is primarily used with CD/DVD-ROMs. In simple words, an iso file is a disk image.

Typically an ISO image contains an installation of software, such as an operating system, a game, or any other application. Sometimes we need to access files and view content from these ISO images, but without wasting disk space and time burning them to CD/DVD.

This article describes how to mount and unmount an ISO image on RHEL to access and list the content of ISO images.

To mount an ISO image, you must be logged in as the root user. Run the following command from a terminal to create a mount point:

# mkdir /mnt/iso
Once you have created the mount point, use the mount command to mount the iso file. We'll use a file called rhel-server-6.6-x86_64-dvd.iso for our example.
# mount -t iso9660 -o loop /tmp/rhel-server-6.6-x86_64-dvd.iso /mnt/iso/
After the ISO image is mounted successfully, go to the mount point at /mnt/iso and list the content of the ISO image. It is mounted read-only, so none of the files can be modified.
# cd /mnt/iso
# ls -l
You will see the list of files of the ISO image that was mounted with the above command.

To unmount an ISO image, run the following command from the terminal as root:
# umount /mnt/iso
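Running umount against a path that isn't mounted produces an error; a small guard avoids that. A sketch, assuming a Linux system where /proc/mounts is available:

```shell
# Unmount a path only if it is actually a mounted filesystem.
safe_umount() {
    if grep -q " $1 " /proc/mounts; then
        umount "$1"
    else
        echo "$1 is not mounted; nothing to do"
    fi
}

safe_umount /mnt/iso
```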

Topics: AIX, System Administration

Copy printer configuration from one AIX system to another

The following procedure can be used to copy the printer configuration from one AIX system to another AIX system. This has been tested using different AIX levels, and has worked great. This is particularly useful if you have more than just a few printer queues configured, and configuring all printer queues manually would be too cumbersome.

  1. Create a full backup of your system, just in case something goes wrong.
  2. Run lssrc -g spooler and check if qdaemon is active. If not, start it with startsrc -s qdaemon.
  3. Copy /etc/qconfig from the source system to the target system.
  4. Copy /etc/hosts from the source system to the target system, but be careful to not lose important entries in /etc/hosts on the target system (e.g. the hostname and IP address of the target system should be in /etc/hosts).
  5. On the target system, refresh the qconfig file by running: enq -d
  6. On the target system, remove all files in /var/spool/lpd/pio/@local/custom, /var/spool/lpd/pio/@local/dev and /var/spool/lpd/pio/@local/ddi.
  7. Copy the contents of /var/spool/lpd/pio/@local/custom on the source system to the target system into the same folder.
  8. Copy the contents of /var/spool/lpd/pio/@local/dev on the source system to the target system into the same folder.
  9. Copy the contents of /var/spool/lpd/pio/@local/ddi on the source system to the target system into the same folder.
  10. Create the following script (any file name will do), and run it:
    let counter=0
    cp /usr/lpp/printers.rte/inst_root/var/spool/lpd/pio/@local/smit/* \
       /var/spool/lpd/pio/@local/smit
    cd /var/spool/lpd/pio/@local/custom
    chmod 775 /var/spool/lpd/pio/@local/custom
    for FILE in `ls` ; do
       let counter="$counter+1"
       chmod 664 $FILE
       QNAME=`echo $FILE | cut -d':' -f1`
       DEVICE=`echo $FILE | cut -d':' -f2`
       echo $counter : chvirprt -q $QNAME -d $DEVICE
       chvirprt -q $QNAME -d $DEVICE
    done
  11. Test and confirm printing is working.
  12. Remove the script file created in step 10.
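The queue:device split that the script's loop relies on can be sketched in isolation, using a made-up file name of the form queue:device, as found in @local/custom:

```shell
# Hypothetical custom file name: <queue name>:<device name>.
FILE="hpq1:lp0"
QNAME=$(echo "$FILE" | cut -d':' -f1)
DEVICE=$(echo "$FILE" | cut -d':' -f2)
echo "queue=$QNAME device=$DEVICE"
# prints: queue=hpq1 device=lp0
```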