NIM

NIMADM - NIM Alternate Disk Migration on AIX

NIM alternate disk migration (nimadm) is the approach AIX administrators use to migrate from one AIX version to another, for example:

1. AIX 5.3 to 6.1
2. AIX 6.1 to 7.1
3. AIX 7.1 to 7.2

Generally there are two approaches to migration. The first is the traditional one, where the administrator migrates using the AIX DVD; the second is NIMADM (NIM alternate disk migration), where the migration is initiated from the NIM server. I will explain the second approach, which requires minimal downtime.

Here are some advantages of NIMADM:

1. Less downtime.
2. Easy backout if the migration fails.
3. The migration runs while the applications keep running; there is no need to stop them.
4. After a successful migration, the administrator can schedule the reboot at a convenient time.

Overall it takes less time than a DVD-based migration.

So let's look at the NIMADM approach, which applies to the following migrations:

1. AIX 5.3 to 6.1
2. AIX 6.1 to 7.1
3. AIX 7.1 to 7.2

*** While doing a NIMADM migration, make sure the NIM master is at the same or a higher OS level than the NIM client. The approach below applies to migrations from AIX 5.3 to 6.1 or 7.1, and from 7.1 to 7.2.
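The master-versus-client level check can be scripted. A minimal sketch, assuming `oslevel -s` style strings (e.g. `6100-03-01-0921`) and GNU `sort -V`; run `oslevel -s` on both the master and the client and feed the results to this helper:

```shell
# Returns success (0) if level $1 is the same as or higher than level $2.
# Assumes "oslevel -s" style strings such as "7100-04-02-1614" and GNU sort -V.
level_ge() {
  [ "$(printf '%s\n%s\n' "$1" "$2" | sort -V | tail -n 1)" = "$1" ]
}

# Example: master at 6100-03, client at 5300-12 -> master is high enough.
level_ge "6100-03-01-0921" "5300-12-04-1119" && echo "master level OK"
```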
===============================================================
Migrating from AIX 5.3 to AIX 6.1 with nimadm
1. /usr/lpp/bos/pre_migration        // pre-migration script
The pre-migration script runs before the migration starts; it reports anything that is missing so you can fix it before the migration begins.
2. Copy the script to the /tmp folder:
 # cp /mnt/usr/lpp/bos/pre_migration /tmp/pre_migration
3. Run the script with the following command on the client LPAR:
 # /tmp/pre_migration

Check and save the output for reference.
========================================================================
4. To allow nimadm to do its job, temporarily enable rshd on the client LPAR:
 # chsubserver -a -v shell -p tcp6 -r inetd
 # refresh -s inetd
 # cd /
 # rm .rhosts
 # vi .rhosts    // add a line containing "+"
  // A "+" entry trusts any host in the AIX environment; if the admin wants to restrict access, he can instead list only the NIM master's hostname in the .rhosts file.
 # chmod 600 .rhosts
5. From the NIM master, now rsh to the client and run a command as root:
 # rsh aix1 whoami
 root      // If the output is "root" without a password prompt, the rsh configuration is OK.
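The manual check in step 5 can also be wrapped in a small guard so a wrapper script aborts before nimadm starts if rsh is broken. A sketch; `aix1` is the client name from the example above:

```shell
# Succeeds only if passwordless rsh to the client returns exactly "root".
# Run from the NIM master before launching nimadm.
check_rsh_root() {
  client="$1"
  [ "$(rsh "$client" whoami 2>/dev/null)" = "root" ]
}

# Usage (aborts the calling script on failure):
# check_rsh_root aix1 || { echo "rsh to aix1 not working" >&2; exit 1; }
```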
6. On the NIM master, create a new volume group (VG) named nimadmvg. This VG must have enough free space to hold a copy of the client's rootvg.
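A sketch of the VG creation, with `hdisk2` as a hypothetical free disk on the master (pick one that `lspv` reports as `None`); the dry-run guard only prints the command so it is safe to review before executing:

```shell
# DRY_RUN=1 only prints the commands; set DRY_RUN=0 to actually run them on the master.
DRY_RUN=1
run() { if [ "$DRY_RUN" = 1 ]; then echo "+ $*"; else "$@"; fi; }

# hdisk2 is an assumed free disk; adjust for your system.
run mkvg -y nimadmvg hdisk2
```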
========================================================================
7. Verify the disk layout on both systems.
On the master (nim1):
 # lsvg -l nimadmvg
nimadmvg:
LV NAME  TYPE       LPs     PPs     PVs  LV STATE      MOUNT POINT
On the client (aix1):
 # lspv
 hdisk0       rootvg          active
 hdisk1       None
8. Check that the bos.alt_disk_install.rte fileset is installed on the NIM master:
# lslpp -l bos.alt_disk_install.rte
  Fileset                      Level  State      Description
  ----------------------------------------------------------------------------
Path: /usr/lib/objrepos
  bos.alt_disk_install.rte   6.1.3.1  APPLIED    Alternate Disk Installation
9. Also check that the SPOT resource contains bos.alt_disk_install.rte. For the AIX 6.1 TL3 SP1 SPOT:
# nim -o showres 'spotaix61031'  | grep bos.alt_disk_install.rte
  bos.alt_disk_install.rte   6.1.3.1    C     F    Alternate Disk Installation

10. Then execute the nimadm command from the NIM master:
 # nimadm -j nimadmvg -c aix1 -s spotaix61031 -l lppsourceaix61031 -d "hdisk1" -Y


The migration runs in 12 phases. In short, this is what happens:
1. The client's rootvg is replicated to a same-size disk on the client LPAR.
2. The NIM master copies the client's rootvg data to nimadmvg via rsh.
3. The rootvg data is migrated to the target AIX level via cacheFS on the NIM master.
4. The NIM master copies the migrated rootvg data to the client LPAR's alternate disk.



*** Double-check that you have chosen the correct PV on the client; if you pick the wrong disk, the command above will overwrite all data on that PV.
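The safety check above can be automated by parsing `lspv` output: a disk that belongs to no VG shows `None` in the VG column. A sketch; the sample output below stands in for a real `lspv` run on the client:

```shell
# Succeeds only if the given disk belongs to no volume group ("None" in column 3).
# Reads lspv-style lines on stdin: "<disk> <pvid> <vg> [state]".
disk_is_free() {
  awk -v d="$1" '$1 == d { if ($3 == "None") ok = 1 } END { exit ok ? 0 : 1 }'
}

# Sample lspv output for illustration; on the client, pipe the real 'lspv' instead.
sample='hdisk0  00c8a12b3f4d5e60  rootvg  active
hdisk1  00c8a12b3f4d5e61  None'
printf '%s\n' "$sample" | disk_is_free hdisk1 && echo "hdisk1 is safe to use"
```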
Where:
-j specifies the VG on the master that will be used for the migration
-c is the client name
-s is the SPOT name
-l is the lpp_source name
-d is the hdisk name for the alternate root volume group (altinst_rootvg)
-Y agrees to the software license agreements for software that will be installed during the migration.
11. After the migration is complete, confirm that the bootlist is set to the new disk.
 # lspv | grep rootvg
 hdisk0       rootvg          active
 hdisk1       altinst_rootvg  active
 # bootlist -m normal -o
 hdisk1 blv=hd5
At an agreed time, reboot the LPAR and confirm that the system is OK and running the correct version.
 # shutdown -Fr
; system reboots here...
 # oslevel -s
6100-03-01-0921
 # instfix -i | grep AIX
12. Perform some general AIX system health checks to ensure that the system is configured and running as expected. There is also a post_migration script; after the migration, you can find it in /usr/lpp/bos.
Sometimes after migration, openssh and openssl need to be updated to match the new OS level.
13. The rsh daemon can now be disabled after the migration.
 # chsubserver -d -v shell -p tcp6 -r inetd
 # refresh -s inetd
 # cd /
 # rm .rhosts
 # ln -s /dev/null .rhosts
14. With the migration finished, the applications are started and the application support team verifies that the applications and databases are working fine.
Once everything looks good from the OS and application side, re-mirror rootvg:
# lspv | grep old_rootvg
hdisk0      old_rootvg
# alt_rootvg_op -X old_rootvg
# extendvg -f rootvg hdisk0
# mirrorvg rootvg hdisk0
# bosboot -a -d /dev/hdisk0
# bosboot -a -d /dev/hdisk1
# bootlist -m normal hdisk0 hdisk1
# bootlist -m normal -o
 hdisk0 blv=hd5
hdisk1 blv=hd5
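The final boot list check can also be scripted. A sketch using sample `bootlist -m normal -o` output for illustration; on AIX, capture the real command's output instead:

```shell
# Verifies that every mirror disk appears in the normal boot list.
# Sample text stands in for: bootlist_out=$(bootlist -m normal -o)
bootlist_out='hdisk0 blv=hd5
hdisk1 blv=hd5'

for d in hdisk0 hdisk1; do
  if printf '%s\n' "$bootlist_out" | grep -q "^$d "; then
    echo "$d present in bootlist"
  else
    echo "WARNING: $d missing from bootlist" >&2
  fi
done
```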
15. If there was an issue with the migration, the administrator can easily back out to the previous release of AIX. Instead of re-mirroring rootvg (above), change the boot list to point at the previous rootvg disk (old_rootvg) and reboot the LPAR.
 # lspv | grep old_rootvg
 hdisk0       old_rootvg
 # bootlist -m normal hdisk0
 # bootlist -m normal -o
 hdisk0 blv=hd5
 # shutdown -Fr

                    
If I find anything new, I will add it to this post.
Thanks!
