Thursday, November 22, 2018

sshd[22107]: SSH Authentication Refused: Bad Ownership or Modes for Directory

Hello everyone,

Let's discuss how to troubleshoot the following error when trying to log in remotely using passwordless SSH:
"sshd[22107]: Authentication refused: bad ownership or modes for directory"

One user from the development team was trying to run a Perl script; while executing it he got "permission denied" and was prompted for a password, even though the environment is set up for passwordless authentication.

When the user executed the Perl script, he got the following message:

please authenticate for oracle|Authenticated with 
partial success|Permission denied (keyboard-interactive,password

So I decided to check the /var/log/authlog file for any clue, and found the following line:

sshd[20856]: Authentication refused: bad ownership or modes for directory /home/oracle

This clue indicates that the ownership or mode of the home directory is not set correctly. When I checked, the ownership was correct but the permissions on the home directory were not: /home/oracle was group-writable, and that is what caused the error "sshd[22107]: Authentication refused: bad ownership or modes for directory".

Here are the steps I performed to correct this error:

#chmod g-w /home/oracle

#ls -ld /home/oracle


drwxr-x--- 2 oracle dba 4096 Nov  3  2017 /home/oracle

The important point here is that the home directory must not be group-writable.

After removing the group write permission, the development user was able to execute the Perl script successfully and log in without a password.

Other prerequisites for passwordless SSH configuration are listed below (a verification sketch follows the list):

1. User home directory: permission 755 (no group/other write) and correct ownership
2. .ssh directory: 700 and correct ownership
3. authorized_keys: 600 and correct ownership
4. The correct public key present on both source and destination servers
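A minimal verification sketch for these prerequisites, assuming the user is oracle, the group is dba (as in the example above), and a hypothetical destination host named destserver:

#chown oracle:dba /home/oracle
#chmod 755 /home/oracle                          // home directory: no group/other write
#chmod 700 /home/oracle/.ssh
#chmod 600 /home/oracle/.ssh/authorized_keys
#ls -ld /home/oracle /home/oracle/.ssh /home/oracle/.ssh/authorized_keys    // verify ownership and modes
#su - oracle -c "ssh destserver hostname"        // should print the hostname without asking for a password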

Thanks !!!!!!!!!!!





Extend file system when max primary partition limit reached on Linux.


Example:

VG01 - /dev/sdb    // sdb already has 4 primary partitions
   - /dev/sdb1
   - /dev/sdb2
   - /dev/sdb3
   - /dev/sdb4

VG01 contains the filesystem /oracle/log of size 500 GB, and the Linux admin wants to increase it by 100 GB.
There is no free space available in VG01, so the admin decides to expand the existing disk /dev/sdb.
After analysis he finds that /dev/sdb already has 4 primary partitions, so a 5th primary partition cannot be created.

Why is expanding the existing disk not possible?
Because 4 primary partitions have already been created on the disk, and Linux (MBR partitioning) does not allow a 5th primary partition on a single disk.

The solution in this situation is to add a new disk to the virtual machine, create the required partition on it, create a PV, and add it to the volume group where the filesystem resides. After adding the PV to volume group VG01, the total free space in VG01 will be 100 GB, and the Linux admin can then add 100 GB to /oracle/log (a command sketch follows below).
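A minimal command sketch of that flow, assuming the new disk shows up as /dev/sdc and the logical volume is /dev/mapper/VG01-lvora_log (the names used later in this post); adjust to your environment:

#fdisk /dev/sdc                          // create a partition /dev/sdc1 of the required size
#pvcreate /dev/sdc1                      // initialize it as an LVM physical volume
#vgextend VG01 /dev/sdc1                 // add the PV to the volume group
#vgs VG01                                // confirm roughly 100 GB of free space in VG01
#lvextend -L +100G /dev/mapper/VG01-lvora_log
#resize2fs /dev/mapper/VG01-lvora_log    // grow the ext3 filesystem (use xfs_growfs for XFS)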

How to detect the new disk on SUSE Linux?

Run rescan-scsi-bus.sh; this adds new SCSI devices to the Linux virtual machine without a reboot.
Take lsscsi output before and after running rescan-scsi-bus.sh; the difference will show the new disk.

Output of lsscsi before running rescan-scsi-bus.sh
#lsscsi
[0:0:0:0]    disk    VMware   Virtual disk     1.0   /dev/sda

[0:0:1:0]    disk    VMware   Virtual disk     1.0   /dev/sdb

After running rescan-scsi-bus.sh

#lsscsi
 [0:0:0:0]    disk    VMware   Virtual disk     1.0   /dev/sda
 [0:0:1:0]    disk    VMware   Virtual disk     1.0   /dev/sdb    
 [0:0:2:0]    disk    VMware   Virtual disk     1.0   /dev/sdc       // /dev/sdc is new disk .
Then create a partition of the required size on the new disk, add it to VG01, and expand the filesystem:


#lvextend -L +100G /dev/mapper/VG01-lvora_log
#lvs    

#resize2fs /dev/mapper/VG01-lvora_log   //resizing EXT3 Filesystem

Check with df -hT /oracle/log whether the filesystem has been resized.

For an XFS filesystem, the resize command is:

#xfs_growfs /dev/mapper/VG01-lvora_log

An extended partition is the usual way around the 4-primary-partition limit, but in Linux environments where creating extended partitions is not allowed, this approach of adding a new disk is useful.





Thanks !!!!



Understanding the last two fields of /etc/fstab

Have you ever wondered what exactly the last two fields of /etc/fstab in Linux do?
Let's see what they mean.

The fstab syntax is like below:

file-system                   mount-point   type   options            dump   pass

/dev/vg00/lvsysmgmt           /sysmgmt      ext3   defaults           1      2
NFS-server-hostname:/share    /mnt          nfs    defaults,bg,intr   0      0

Here I am going to discuss what exactly the last two fields, dump and pass, tell the operating system during boot.

dump - 0 or 1
pass - 0, 1 or 2

DUMP
Whether the dump utility should back up the filesystem:
0 - disabled
1 - enabled

PASS
Whether and in what order fsck checks the filesystem at boot:
0 - fsck disabled
1 - fsck enabled, checked first (normally only /)
2 - fsck performed on the other filesystems after / is done; simply the fsck order: 1 for / and 2 for all other filesystems

Dump - tells the OS whether to include this filesystem in dump backups.
Pass - tells the OS to run fsck on this filesystem after the fsck of / is done. It defines the fsck order; / generally has priority over the other filesystems and has the value 1 if you look at a typical Linux /etc/fstab configuration file.

If you look at fstab you will find that / has pass value 1 and the other filesystems have 2, so this is the order in which fsck runs on the filesystems while the Linux operating system boots (see the example below).
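An illustrative fstab fragment (the device names are assumptions) showing the usual ordering:

/dev/vg00/lvroot       /          ext3   defaults           1   1
/dev/vg00/lvsysmgmt    /sysmgmt   ext3   defaults           1   2
NFS-server:/share      /mnt       nfs    defaults,bg,intr   0   0

Here / has pass 1 (checked first), /sysmgmt has pass 2 (checked after /), and the NFS share has 0 0 (no dump, no fsck).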

Then what about NFS filesystems, which also have entries in /etc/fstab?

For NFS filesystems the dump and pass values are always 0; 0 means dump and fsck are disabled during boot.

Why do NFS entries always have dump and pass set to 0?
Because the NFS filesystem resides on a remote server; consistency checking and dump backups are the responsibility of the NFS server, not the client.



Thanks !!!



Running fsck on AIX filesystem

Hello Friends,
Today I am sharing how to run fsck on an AIX non-rootvg filesystem.

When an AIX LPAR is rebooted without a proper application stop and filesystem unmount, some non-rootvg filesystems may fail to mount when the LPAR comes back up. The following error can be seen when the admin tries to mount such a filesystem.

#mount /oracle
mount: 0506-324 Cannot mount /dev/oralv01 on /oracle: The media is not formatted or the format is not correct.
0506-342 The superblock on /dev/oralv01 is dirty.  Run a full fsck to fix


The solution to this type of error is to run a full fsck:

#fsck /dev/oralv01

After fsck completes successfully, mount the filesystem using the following commands.

#mount /oracle
#df -gt /oracle   // check whether mounted or not
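A minimal end-to-end recovery sketch, assuming the logical volume is /dev/oralv01 and the mount point is /oracle as above; the -y flag, which answers yes to all repair prompts, is an assumption about how you want to run the check:

#umount /oracle             // make sure it is not mounted (it usually failed to mount anyway)
#fsck -y /dev/oralv01       // full check, automatically answering yes to repairs
#mount /oracle
#df -gt /oracle             // confirm the filesystem is mounted again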


Thanks !!!!!! 

Thursday, November 1, 2018

How to shrink a file system on an AIX cluster and assign the free space to another FS in the same volume group

Hello Friends, today I am going to discuss how to shrink a filesystem on an AIX HACMP cluster and assign that free space to another filesystem in the same volume group.




Prerequisites before shrinking a filesystem (see the sketch after this list):

1. Make sure there is enough free space available on the FS you have decided to shrink.
2. Also make sure there are no errors on the filesystem you are shrinking.
3. Make sure both filesystems are in the same volume group; after shrinking, the freed space can only be used to extend a filesystem that resides in the same VG.
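A minimal pre-check sketch, assuming the filesystems are /share/log and /share/oracle as in the scenario below (replace <vgname> with your volume group name):

#df -gt /share/log /share/oracle     // confirm current sizes and free space
#lsfs /share/log /share/oracle       // note the underlying logical volumes
#lsvg -l <vgname>                    // confirm both logical volumes belong to the same volume group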

Let's say we have a scenario like the one below:

In the AIX cluster there is a filesystem /share/log of size 500 GB. We want to shrink it by 100 GB and then assign the freed space to the filesystem /share/oracle.

Step 1: Shrink /share/log by 100 GB

#cd /usr/sbin/cluster/sbin 
#./cl_chfs -a size=-100G /share/log

After shrinking with cl_chfs, confirm that the FS was actually shrunk using the following command:

#df -gt /share/log 

Step 2:
Add the space to the filesystem /share/oracle using the following commands.

#cd /usr/sbin/cluster/sbin 
#./cl_chfs -a size=+100G /share/oracle

After extending the FS, confirm using the following command:

#df -gt /share/oracle

This trick helps when there is no free space in the AIX volume group and we need to expand a filesystem immediately, on both cluster and non-cluster setups.

Thanks !!!!!!!!!!!!!


0516-404 allocp: This system cannot fulfill the allocation request

How do you resolve the following error when an AIX admin runs into it?

root@aixnode1:/usr/sbin/cluster/sbin : ./cl_chfs -a size=+100G /oracle/log
cl_chfs: Error executing chfs  -a size="+209715200" /oracle/log on node aixnode1
Error detail:
    aixnode1: 0516-404 allocp: This system cannot fulfill the allocation request.
    aixnode1:        There are not enough free partitions or not enough physical volumes
    aixnode1:        to keep strictness and satisfy allocation requests.  The command
    aixnode1:        should be retried with different allocation characteristics.
    aixnode1: RETURN_CODE=1
    aixnode1: cdsh: cl_rsh: (RC=1) /usr/es/sbin/cluster/cspoc/cexec  chfs  -a size="+209715200" /oracle/log


This issue was encountered because there were no free PPs available on the AIX cluster PVs where the LV resides.

In this situation the filesystem /oracle/log resides on the LV oralv, and this LV has an upper bound value of 4. When we checked how many physical volumes this LV maps to, we found it points to 4 physical volumes, and none of those PVs has any free PPs. The solution for this situation is to extend each physical volume by 50 GB and then grow the filesystem with ./cl_chfs -a size=+100G /oracle/log. The reason for increasing each disk by 50 GB: the filesystem is mirrored and the user asked to increase it by 100 GB, so we need twice 100 GB, i.e. 200 GB, and in this cluster setup we distribute that space equally across the 4 disks, which is why we decided to extend each disk by 50 GB (a check sketch follows below).
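A minimal sketch of the checks that lead to this conclusion, assuming the LV is named oralv and the VG is vg0 (names from the narrative above):

#lslv oralv          // shows COPIES (mirroring) and UPPER BOUND
#lslv -l oralv       // shows which physical volumes the LV is spread across
#lsvg -p vg0         // shows free PPs per physical volume (output below)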

If you observe the output below, there are no free PPs available on hdisk5, hdisk6, hdisk7 and hdisk8.
root@aixnode1:/usr/sbin/cluster/sbin : lsvg -p vg0
vg0:
PV_NAME   PV STATE   TOTAL PPs   FREE PPs   FREE DISTRIBUTION
hdisk5    active     33783       0          00..00..00..00..00
hdisk6    active     33783       0          00..00..00..00..00
hdisk7    active     33783       0          00..00..00..00..00
hdisk8    active     33783       0          00..00..00..00..00


So here we need to provide the LUN ID details to the storage team, along with how much each disk needs to be extended.

root@aixnode1:/root : ssod
DISK         SIZE              ID                                                             VG

hdisk5      4000 GB XXXXXXXXXXXXXXXXXXXXXXA         vg0
hdisk6      4000 GB XXXXXXXXXXXXXXXXXXXXXXB         vg0
hdisk7      4000 GB XXXXXXXXXXXXXXXXXXXXXXC          vg0
hdisk8      4000 GB XXXXXXXXXXXXXXXXXXXXXXD          vg0

In this environment the LV is mirrored, so we need to work out the PV increase using the following calculation.

The request is to extend the FS /oracle/log by 100 GB:

100 * 1024 * 1024 * 2 = 209715200     (100 GB expressed in KB, doubled because the LV is mirrored)

(209715200 / 1024) / 1024 = 200 GB

VG0 has 4 disks in total, so we need to extend each disk by an equal share.

If we increase each disk by 50 GB, a total of 200 GB will be added to the VG.


After the disks have been extended from the storage side, execute chvg -g vg0 so AIX picks up the new disk sizes, and then re-check the disk sizes:

root@aixnode1:/root : ssod
DISK         SIZE              ID                                                             VG

hdisk5      4050 GB XXXXXXXXXXXXXXXXXXXXXXA         vg0
hdisk6      4050 GB XXXXXXXXXXXXXXXXXXXXXXB         vg0
hdisk7      4050 GB XXXXXXXXXXXXXXXXXXXXXXC          vg0
hdisk8      4050 GB XXXXXXXXXXXXXXXXXXXXXXD          vg0



Finally, grow the filesystem:

#./cl_chfs -a size=+100G /oracle/log

Thanks  !!!!!!!!!!



Monday, October 22, 2018

HACMP file system export from AIX server

Hello everyone, today I am going to share how to export an AIX HACMP filesystem and the issues I faced while exporting it from AIX to a Linux client.

There are 2 ways to export a filesystem from an AIX HACMP cluster node to a Linux client.

1. Edit /usr/es/sbin/cluster/etc/exports and activate the changes using the following command:
#exportfs -a -f /usr/es/sbin/cluster/etc/exports

2. Use smitty nfs to export the filesystem.

#smitty nfs 
Network File System (NFS)
Change / Show Attributes of an Exported Directory
Pathname of exported directory                     [/oracle/log] 
* Version of exported directory to be changed        [3] 

Then the following screen will appear, where the AIX admin needs to enter the exported FS name and also the client details to whom this filesystem needs to be exported. The important part is to specify the pathname of the alternate exports file, which looks like below:
Pathname of alternate exports file                 [/usr/es/sbin/cluster/etc/exports]   // this is for an AIX HACMP server; for a non-HACMP server the exports file is /etc/exports

At the final smitty screen, enter values like below:

Type or select values in entry fields.
Press Enter AFTER making all desired changes.

[TOP]                                                   [Entry Fields]
* Pathname of directory to export                     /oracle/log
* Version of exported directory to be changed         3
  Anonymous UID                                      [-2]
  Public filesystem?                                 [no]                                                                             +
* Change export now, system restart or both           both                                                                            +
  Pathname of alternate exports file                 [ /usr/es/sbin/cluster/etc/exports]
  Allow access by NFS versions                       [3]
  External name of directory (NFS V4 access only)    []
  Referral locations (NFS V4 access only)            []
  Replica locations                                  []
  Ensure primary hostname in replica list             yes                                                                             +
  Allow delegation?                                  []
  Scatter                                             none                                                                            +
  Security method 1                                  [sys]                                                                            +
      Mode to export directory                       [read-write]                                                                     +
      Hostname list. If exported read-mostly         []
      Hosts & netgroups allowed client access        [linuxclient]

      Hosts allowed root access                      [linuxclient]

After editing all mandatory values press ENTER; when the OK screen appears, the filesystem has been exported successfully.

Once exported, execute the following command on the Linux client machine to mount it:

# mount  aixhacmpserver:/oracle/log  /oracle/log
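If the mount fails, a quick sanity check from the Linux client (assuming the NFS server hostname is aixhacmpserver, as above) is to confirm the export is visible:

#showmount -e aixhacmpserver      // the export list should contain /oracle/log and the allowed clients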



The issue I faced during the mount was the following error:

"mount.nfs: access denied by server while mounting"

This occurred when I exported the AIX FS using the 1st method, i.e. by editing /usr/es/sbin/cluster/etc/exports and activating the changes with #exportfs -a -f /usr/es/sbin/cluster/etc/exports. When I used smitty nfs to export instead, we did not face that error.

One difference I observed when re-checking the export in smitty was the following:

Hosts & netgroups allowed client access        [ ]              // the client hostname was missing here

Hosts allowed root access                      [linuxclient]



Thanks !!!



Tuesday, October 16, 2018

Linux file system types and features


EXT2
Second extended file system
No journaling
Maximum individual file size: 16 GB to 2 TB (depending on block size)
Maximum file system size: 2 TB to 32 TB
Introduced in 1993


EXT3
Third extended file system
Has journaling
Maximum individual file size: 16 GB to 2 TB
Maximum file system size: 2 TB to 32 TB
Introduced in 2001


EXT4
Fourth extended file system
Has journaling
Maximum individual file size: 16 GB to 16 TB
Maximum file system size: 1 EB
Maximum files: 4 billion
Does not support transparent compression
Does not support snapshots


XFS
Extended File System
Has journaling
Maximum individual file size: 8 EB
Maximum file system size: 8 EB
Maximum files: 2^64
Uses B+ trees for directories and file allocation
Supports Guaranteed Rate I/O (GRIO)
Does not support transparent compression

BTRFS
B-tree FS
Has journaling
Maximum individual file size: 16 EB
Maximum file system size: 16 EB
Maximum files: 2^64
Supports dynamic inode allocation
Provides support for RAID striping
Supports transparent compression
Supports snapshots


Thanks !!!!!!











TAR backup without absolute paths, and extracting to any location



Today I am discussing how to take a backup of a directory without specifying the absolute path, and the difference between creating a tar backup with and without an absolute path.


Sometimes strange things happen when a Linux/Unix admin extracts a tar backup and it does not end up in the expected location.


Why was the backup not extracted to the correct location?


The answer lies in the command that was used when creating the tar backup; if the admin uses the right form of the command when creating the backup, he will not face this problem.





We will see what happens after extraction and where the backup ends up, and also how to handle soft links and preserve permissions in a tar backup.

A Linux/Unix admin sometimes needs to create a tar backup and restore it on another server where the data must be extracted at a different path.

So let's see how the admin can create a tar backup using the correct command.

There are 2 ways to create a tar backup:

1. With an absolute path
2. Without an absolute path

If you want to be able to extract the backup at a different path, do not use an absolute path when creating it. When a tar backup is created with an absolute path, tar stores the whole directory structure starting from the parent directory, so when it is extracted at the target location the entire path hierarchy is recreated under the extraction directory (GNU tar strips the leading / by default, so the "absolute" path is rebuilt relative to wherever you extract).

So what is the solution?

The solution is: when creating the backup, change to the directory that contains the target directory and create the tar backup from there:

#cd /required-path       // directory where the test dir is present
#tar -zcvf test.tar.gz test




How to extract this backup on the destination server at any path:

On the destination, go to the target path and execute the following command:

#tar -zxvf test.tar.gz                              // extract to the current directory
#tar -zxvf test.tar.gz -C /destination_directory    // extract to a specific directory




Some more tar options:

If the admin wants to handle symbolic links and preserve the original permissions, the "h" and "p" flags can be used:

h - follow (dereference) symbolic links, so the files they point to are archived instead of the links themselves
p - preserve the original permissions when extracting

Example: tar -cvhf test.tar test




*** If the admin creates the tar backup with an absolute path, then on the target server the backup is extracted relative to the current directory (GNU tar strips the leading /), recreating the same directory structure under the current path rather than at the original absolute location.
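A minimal sketch contrasting the two approaches, assuming a directory /required-path/test as in the examples above and a hypothetical extraction directory /restore:

#tar -zcvf /tmp/abs.tar.gz /required-path/test        // created with an absolute path
#tar -ztvf /tmp/abs.tar.gz                            // listing shows required-path/test/... (GNU tar strips the leading /)
#cd /restore && tar -zxvf /tmp/abs.tar.gz             // recreates /restore/required-path/test/...

#cd /required-path && tar -zcvf /tmp/rel.tar.gz test  // created without the absolute path
#cd /restore && tar -zxvf /tmp/rel.tar.gz             // recreates /restore/test/...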

Monday, October 1, 2018

Crontab for housekeeping old files


MM HH DOM Month DOW CommandToBeExecuted

MM    Minutes (0-59)
HH    Hour (0-23)
DOM   Day of month (1-31)
Month (1-12)
DOW   Day of week (0-6), Sunday = 0



Hello friends, today I am going to write about how to housekeep a filesystem on Linux/Unix using crontab. Filesystem housekeeping is needed when there is not enough space on the filesystem or it has reached its threshold value.

Let's see one scenario where the admin needs to housekeep gzip files that are older than 100 days. How is he going to do this task?

In this situation the admin first needs to identify files older than 100 days and then remove them from the desired filesystem. Here we can use the following find command to identify .gz files older than 100 days:

#find /oracle/orausr -type f -mtime +100  -name "*.gz"

How do we remove these files? Use the following command:

#find /oracle/orausr -type f -mtime +100  -name "*.gz" -exec rm {} \; 

So far we have a way of finding and deleting *.gz files older than 100 days, but the admin needs to automate this using crontab. Also, what if there are 1000s of servers? The admin cannot log in to each server and delete these files by hand, so crontab is useful in that situation.

The crontab entry for scheduling the above job looks like this:

5 6 * * * find /oracle/orausr -type f -mtime +100  -name "*.gz" -exec rm {} \;   

This cron job runs every day at 6:05 am, finds any .gz files older than 100 days, and deletes them (a variant with logging is sketched below).
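A variant of the same entry that also logs what was deleted, assuming a hypothetical log file /var/log/gz-housekeeping.log:

5 6 * * * find /oracle/orausr -type f -mtime +100 -name "*.gz" -print -exec rm {} \; >> /var/log/gz-housekeeping.log 2>&1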


** Before scheduling this cron job, please make sure you have tested it in a test environment and specified the correct path for file deletion.


Thanks !!!



Thursday, September 13, 2018

NFS "not responding, still trying" error on Linux

"NFS error: not responding, still trying" ........ for a filer from storage.
df -hT in hung state ..........
dsmc q sched not working.........

We received all of the above alerts from the monitoring software, and at first we did not understand what exactly was wrong, because there were multiple errors on the Linux server. On the affected server we checked /etc/fstab and found around 6 NFS shares mounted. When we ran ls and cd against these NFS shares, 2 of them showed the problem.

For those 2 NFS shares, ls and cd would not complete, so we decided to check with the storage admin.
We provided the affected server IP and filer details and asked them to verify that everything was shared correctly from their side. The storage admin answered that all permissions were OK and the filer was correctly shared to the Linux server. We decided to re-export the same filer; after re-exporting, we were able to access the NFS shares and the df -hT command worked again.

But the joy of solving the issue was not permanent; the same issue occurred again on this Linux server.
So what next?

Now we started thinking there must be some issue at the network level causing this, and in the Linux server log file "messages" we found the following entries:

18:22:04 linuxclient1 kernel:  nfs: server netappnfsfiler.server.net not responding, still trying
Sep  6 18:22:42 linuxclient1 kernel: nfs: server netappnfsfiler.server.net not responding, still trying
Sep  6 18:23:22 linuxclient1 kernel: nfs: server netappnfsfiler.server.net OK
Sep  6 18:23:22 linuxclient1 kernel: nfs: server netappnfsfiler.server.net OK


From the above logs we found that the filer was not responding to the Linux server's requests, so there was a possibility that a firewall was blocking communication between the NFS filer and the Linux client.
We provided all required details to the network team, but after analysis they found no issue on their side. The remaining team was the VMware team who created this VM, but they also said the VM configuration was correct.

After their answer we did a Google search and found one interesting lead for this type of error: MTU. MTU is the maximum transmission unit of an Ethernet interface; an incorrect MTU configuration causes performance and connectivity issues on Linux/UNIX and Windows servers alike. On the affected Linux server the MTU was 9000. We also checked the MTU on other servers in the same IP range and found they all used 1500, while the affected server was at 9000. We took downtime to change the MTU to 1500 on the primary interface and rebooted the Linux server, and after the reboot everything worked perfectly: df -hT, dsmc q sched, and ls and cd on the NFS shares.

In the end we can say that the incorrect MTU configuration caused the NFS share hang, and that hang in turn affected the execution of df -hT, dsmc q sched, ls, and cd (a sketch of the MTU check and fix follows below).
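A minimal sketch of checking and changing the MTU, assuming the primary interface is eth0 (the interface name and config-file locations are assumptions and differ between distributions):

#ip link show eth0                 // current MTU is shown in the output
#ip link set dev eth0 mtu 1500     // change it at runtime (not persistent)

To make it persistent, set MTU='1500' in /etc/sysconfig/network/ifcfg-eth0 on SUSE, or MTU=1500 in /etc/sysconfig/network-scripts/ifcfg-eth0 on RHEL-style systems, then restart networking or reboot.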

There may be multiple causes of an NFS hang:

1. NFS server hung or down.
2. Firewall blocking communication between the NFS server and the Linux client.
3. Incorrect MTU configuration on the client or server side.
4. Overloaded NFS server causing timeouts for client requests.


Thanks !!!


NFS export from AIX HACMP CLUSTER


NFS export from an AIX HACMP cluster to Linux servers

Scenario: Export /hacmp/export from aixhanode1 to the Linux hosts linuxnode1 and linuxnode2.

On a PowerHA cluster, /usr/es/sbin/cluster/etc/exports is used for exporting filesystems.

Step 1: On a cluster setup, edit /usr/es/sbin/cluster/etc/exports when doing the NFS export from a cluster node. Do not use /etc/exports; that file is used in a non-cluster environment.
Step 2:
Find the stanza for the directory to be exported and add the hostnames to which it needs to be exported.
Here, find the stanza for "/hacmp/export" and append the client names at the end without disturbing the current hostname list.
Example:
Find the stanza for /hacmp/export in /usr/es/sbin/cluster/etc/exports, add the hostnames linuxnode1 and linuxnode2, and save the file.
#vi /usr/es/sbin/cluster/etc/exports
/hacmp/export -rw,root=linuxnode1:linuxnode2
Step 3:
Activate the changes made in /usr/es/sbin/cluster/etc/exports by executing the following command:
#exportfs -a -f /usr/es/sbin/cluster/etc/exports
Step 4:
Check on both clients whether the NFS directory is exported, using the command below:
#showmount -e nfsservername
#showmount -e aixhanode1
Step 5:
Mount the shared directory using the mount command:
#mount aixhanode1:/hacmp/export /mnt      // mounting on a temporary mount point
Or mount it on a mount point with the same name using the command below.
Before doing that, make sure /hacmp/export exists on both NFS clients.
#mount aixhanode1:/hacmp/export /hacmp/export

Verify whether the directory/FS is mounted by executing the command below:
#df -hT /hacmp/export

Thanks!!!

Friday, August 31, 2018

Expand AIX disk using chvg



How do you expand volume group space without adding a new disk to the volume group? This post is helpful for AIX setups where adding a new disk is time consuming; instead of adding a new disk, it is often the better option to expand an existing disk from the storage end and then expand the volume group at the AIX level using the chvg -g command. Consider a situation where there are no free PPs available in a volume group and the AIX admin wants to add free PPs to it immediately.

There are 2 options to add space to an AIX volume group:
1. Add a new disk to the volume group.
2. Expand the size of an existing disk in the volume group.
If the situation demands immediate space addition to the volume group, the 2nd option is the better one.

Which details does the storage admin need to expand an existing disk on AIX?
The storage admin needs the LUN ID and LUN name to expand the disk from the storage end.

Storage-specific commands to find LUN details:

EMC storage  - powermt display dev=hdiskpowerX
IBM storage  - mpio_get_config -Av | grep hdiskX
XIV storage  - xiv_devlist | grep hdiskX

The admin can also use lspv -u to see PV details. There are different types of storage, so use the command that matches your storage type to identify the LUN ID.

After finding the LUN ID, the AIX admin provides it to the storage admin and asks them to expand that disk by the required size. The next step is to execute chvg -g vgname on the server and confirm the volume group expansion with:
#lsvg vgname
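A minimal before/after sketch, assuming the volume group is named datavg (the name used in the HACMP steps below):

#lsvg datavg | grep -i "free pp"     // note the free PPs before the storage-side expansion
#chvg -g datavg                      // rescan the disk sizes after storage has grown the LUN
#lsvg datavg | grep -i "free pp"     // total and free PPs should have increased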

What if the chvg -g vgname command does not pick up the new disk size?
In that case the admin needs to varyoffvg the VG and then varyonvg it again. This needs downtime, because before varying off the VG the admin first has to unmount the corresponding filesystems.

How do you handle the following error message?

0516-1380 chvg: Re-sizing of the disks is not supported for the rootvg.
0516-732 chvg: Unable to change volume group rootvg.

This "chvg -g not supported in the rootvg" error appears on AIX 5.3 and on AIX 6.1 with a TL lower than 03.

chvg -g rootvg is supported on AIX V6.1 TL3 and higher (6100-03-00), where resizing rootvg disks is supported; on lower versions this command will not work for the rootvg volume group.

The next question is how the AIX admin expands a volume group disk in an AIX HACMP environment:
Step 1: First provide the LUN details to the storage team.
Step 2: Run /usr/es/sbin/cluster/sbin/cl_chvg -g datavg.
Step 3: Observe whether the volume group size has increased, using the command
#lsvg vgname


Thanks!!!!

Wednesday, August 29, 2018

VIO update using alternate clone method





Hello everyone, today I am sharing the VIO update procedure using the alternate disk cloning method, in a dual-VIO environment. The alternate cloning method first clones the VIO OS onto an alternate disk and then applies the update on the cloned disk. In a dual-VIO environment the admin needs to switch the SEA (primary, operational) over to the secondary VIO's SEA, do the update on the primary VIO, and after a successful update on the primary VIO do the same for the 2nd VIO.

First verify VIO fileset consistency:
#errpt | more     - check errpt alerts
#lppchk -v        - check for inconsistencies
#installp -C      - clean up after interrupted or failed installations
Check whether the following fileset is installed on the VIO; it is needed during the update:
#lslpp -l | grep -i alt_disk_install.rte
Also, most importantly, take a VIO backup before starting the update.
Commands for taking the VIO backup:
#su - padmin
$backupios -file /tmp/VIOImage
$viosbr -backup -file /home/padmin/`hostname`
Once the backup is complete, transfer it to the backup server; it is needed in case recovery is required. Before starting the VIO update, make sure a spare disk is available that is the same size as the VIO server's rootvg and is not part of any volume group. It is always good to switch network traffic to the secondary VIO while updating the primary. For a single-VIO setup the admin needs downtime, and all client LPARs must be shut down before starting the update.

For an SEA setup, fail over to the secondary VIO using the following commands and then proceed with the VIO update. The SEA adapter mentioned below changes according to your SEA setup, so first identify the SEA in your dual-VIO environment and then change its attribute to "standby".

Step 1:
# lsdev -C | grep -i  "Shared Ethernet Adapter"
ent11          Available      Shared Ethernet Adapter

$entstat -d ent11 |grep -i state 
State: PRIMARY
LAN State: Operational
LAN State: Operational

# lsattr -El ent11 | grep ha_mode
ha_mode       auto     High Availability Mode    
                                        
# chdev -l ent11 -a ha_mode=standby
# lsattr -El ent11 | grep ha_mode
ha_mode       standby  High Availability Mode
After changing ha_mode to standby, proceed to the next step.
Step 2:
$updateios -commit     // commit the currently applied IOS level
Commit all filesets in the applied state before starting the update.

Step 3:
Start the update using the alt_root_vg command:
$alt_root_vg -target hdisk1 -bundle update_all -location /tmp/update

Step 4:
Once the update is done, check whether the bootlist has been updated to the new disk:
#bootlist -m normal -o
If the bootlist points to the new disk, the admin needs to reboot:
$shutdown -restart
Step 5:
After rebooting from the new disk, check the VIOS OS level using the ioslevel command:
$license -accept     // accept the license after the VIO update
$ioslevel            // check the VIOS OS level after the update
#lppchk -v
Step 6:
After a successful reboot of the VIO, toggle the SEA setting back to the original. Use the following commands to set the SEA back to primary mode.

On Primary VIO
# chdev -l ent11 -a ha_mode=auto
# lsattr -El ent11 | grep ha_mode
ha_mode       auto  High Availability Mode

How do you verify the VIO OS level after the update, without a reboot, using the chroot method? Use chroot to wake up the cloned disk and change the shell prompt, then verify the ioslevel and oslevel of the VIO server:

#chroot /alt_inst /usr/bin/ksh     // start a shell on the altinst_rootvg clone
#oslevel -s                        // check the oslevel of altinst_rootvg
#lppchk -m3 -v                     // check for inconsistencies
#installp -c all                   // commit the filesets
#lppchk -vm3                       // check for inconsistencies again
#su - padmin                       // switch to padmin
$ioslevel                          // check the IOS level of the VIOS

For the second VIO, follow the same approach when doing its update.

Thanks!!!!