Sunday, July 18, 2021
AIX LPAR not connecting: sshd process issue troubleshooting
Got an incident for an AIX LPAR that was not reachable over SSH from outside.
The following approach was used to solve this issue.
First, as a pre-check, tried to connect from the AIX NIM server and from CyberArk, and found that the LPAR was not reachable from either.
A direct PuTTY session also failed.
So, decided to take a console session from the HMC. Upon checking, found that the "sshd" subsystem was in an inoperative state, and it would not start using the following commands. :(
#startsrc -s sshd
#refresh -s sshd
So decided to check which process was holding port 22, using the lsof command:
#lsof -i :22
This command showed a process ID whose socket was stuck in the "CLOSE_WAIT" state,
so decided to kill that process:
#kill -9 PID
Then started the "sshd" subsystem again, and finally it worked.
The AIX LPAR was again accessible from CyberArk, PuTTY and the NIM server.
Finally we concluded that the sshd process was in a "hung" state; refresh, stop and start of the subsystem did not help, and the issue was resolved only after killing the PID that was still holding port 22 and starting sshd again.
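A minimal recap of the recovery sequence described above (sshd here is the SRC subsystem name; PID stands for whatever process ID lsof reports on your system):
#lssrc -s sshd           // check subsystem state, showed "inoperative" in this case
#stopsrc -s sshd         // try a clean stop/start first
#startsrc -s sshd
#lsof -i :22             // if sshd still will not start, see which PID is holding port 22
#kill -9 PID             // last resort: kill the stale process stuck in CLOSE_WAIT
#startsrc -s sshd        // start sshd again
#lssrc -s sshd           // confirm it is now "active"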
Thanks !!!!
AIX HACMP, GPFS and Veritas cluster state check commands
AIX HACMP resource group state check command
#clRGinfo
How to check whether the cluster is in a stable state:
#lssrc -ls clstrmgrES | grep -i state
If the output shows "ST_STABLE", the cluster is in a stable state.
#lssrc -g cluster // Show process status in cluster group
GPFS cluster information check command
#mmgetstate -aLs
The following command shows information about the GPFS cluster:
#mmlscluster
Veritas Cluster state check command
#hastatus -summ
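A small consolidated sketch of the three checks (assuming the standard install paths /usr/es/sbin/cluster/utilities for HACMP, /usr/lpp/mmfs/bin for GPFS and /opt/VRTSvcs/bin for VCS are in PATH):
#clRGinfo                               // HACMP resource group states per node
#lssrc -ls clstrmgrES | grep -i state   // expect ST_STABLE
#mmgetstate -aLs                        // GPFS node states, expect "active" with quorum achieved
#mmlscluster                            // GPFS cluster configuration details
#hastatus -summ                         // VCS system and service group summary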
Thanks !!!!!
How to replace a faulty disk on AIX VIO rootvg using the DIAG Hot Plug Task?
An AIX admin can replace a faulty hard drive using the "DIAG" procedure; the faulty disk must support hot swap operation.
Before replacement we need to do some pre-checks (a consolidated sketch follows after this list):
1. Is the disk part of rootvg, and is rootvg mirrored?
Ans: If yes, identify that disk and remove it from the rootvg mirror using the following commands:
#unmirrorvg rootvg hdisk1
#reducevg rootvg hdisk1
2. Identify the disk location using the lscfg command:
#lscfg -vpl hdisk1
3. Make sure the disk has been removed from rootvg.
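A consolidated pre-check sketch (hdisk1 is the example faulty disk used in this post; substitute your actual disk name):
#lsvg -p rootvg            // confirm the faulty disk is part of rootvg
#unmirrorvg rootvg hdisk1  // remove the mirror copy from the faulty disk
#reducevg rootvg hdisk1    // remove the disk from rootvg
#lspv | grep hdisk1        // VG column should now show "None", i.e. the disk is free
#lscfg -vpl hdisk1         // note the physical location code for the DIAG task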
Once all pre-checks are done, hdisk1 can be identified using the following procedure.
PART I
DIAG ---> Certify Media Task ---> Hot Plug Task ---> SCSI and SCSI RAID Hot Plug Manager ---> Identify Disk
Select hdisk1, make sure the location and disk are correct, and press the Enter key.
Example:
XXX.XXX.XX.P2-C9-D5 // disk location
Once Enter is pressed, the disk LED will blink, which sets the disk to identify mode.
Once the disk is identified, you can exit this screen by pressing "Enter" and return to the previous menu with "Esc+3".
PART II
Next, remove and replace hdisk1
===============================
DIAG ---> Certify Media Task ---> Hot Plug Task ---> SCSI and SCSI RAID Hot Plug Manager ---> Replace/Remove a Device Attached to an SCSI Hot Swap Enclosure Device
Select hdisk1, make sure the location and disk are correct, and press the Enter key.
XXX.XXX.XX.P2-C9-D5
Now the disk is ready for removal and replacement.
At this point you can ask the remote engineer to perform the physical disk replacement.
PART III
Once the disk is replaced, you need to detect it on the AIX LPAR.
Follow the procedure below; it will configure the newly replaced disk on the LPAR.
DIAG ---> Certify Media Task ---> Hot Plug Task ---> SCSI and SCSI RAID Hot Plug Manager ---> Configure Added/Replaced Devices
Identify the new disk with the lspv and lscfg commands:
#lscfg -vpl hdisk1 // it will show the new serial number.
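A short post-replacement check, plus the re-mirroring steps that would typically follow if the disk was unmirrored from rootvg in the pre-check (a sketch, assuming the replaced disk comes up as hdisk1 again and hdisk0 is the other rootvg disk):
#lspv                              // the replaced disk should appear without a VG assigned yet
#lscfg -vpl hdisk1                 // verify the new serial number and location code
#extendvg rootvg hdisk1            // add the new disk back to rootvg
#mirrorvg rootvg hdisk1            // re-create the rootvg mirror
#bosboot -ad /dev/hdisk1           // rebuild the boot image on the new disk
#bootlist -m normal hdisk0 hdisk1  // update the normal boot list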
Thanks !!!!
Monday, December 9, 2019
How to delete millions of files in a certain directory
When an administrator gets a request to delete a very large number of files, a simple "rm" command will not work (the shell expands the wildcard and the argument list becomes too long). The solution to this problem is the following command.
find . -type f -name "*.bak" -exec rm -f {} \;
The above command finds files with the ".bak" extension and deletes them. If the administrator wants to target a specific path, that path can be specified as well, like below.
find /backup -type f -name "*.bak" -exec rm -f {} \;
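For millions of files, "-exec rm {} \;" forks one rm process per file, which is slow; a faster alternative is the "+" terminator (POSIX find) or, with GNU find, the built-in -delete action:
find /backup -type f -name "*.bak" -exec rm -f {} +   // batches many files per rm invocation
find /backup -type f -name "*.bak" -delete            // GNU find only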
Thursday, July 25, 2019
How to change timezone from CEST to UTC on SUSE Linux Enterprise Server
CEST => UTC
Sometimes a Linux admin needs to reconfigure the timezone of a Linux host; the reason may be a requirement from the application or database team, or simply a wrong timezone configured earlier by the Linux admin.
So, let's see how to change the timezone from CEST to UTC.
Procedure:
Step 1: Before changing the timezone, note the current timezone details using the following commands.
#date
root@linuxhost:/root : date
Thu Jul 18 16:30:38 CEST 2019
root@linuxhost:/root : cd /etc
root@linuxhost:/etc : ls -lrt /etc/localtime
lrwxrwxrwx 1 root root 33 Jul 3 08:57 localtime -> /usr/share/zoneinfo/Europe/Berlin
Step 2:
Now we need to change the timezone to UTC. Before changing this setting, note down the previous setting so it can be restored if needed.
Step 3:
Execute the following commands to change the timezone to UTC.
#rm /etc/localtime
#ln -sf /usr/share/zoneinfo/UTC /etc/localtime
Step 4:
Confirm that the change has been made using the following command.
#ls -lrt /etc/localtime
lrwxrwxrwx 1 root root 23 May 6 12:05 /etc/localtime -> /usr/share/zoneinfo/UTC
The date command also confirms that the timezone has changed from CEST to UTC.
#date
Thu Jul 18 02:40:24 UTC 2019
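On systemd-based SLES releases (SLES 12 and later), the same change can usually be made with timedatectl instead of editing the symlink by hand; a sketch, assuming systemd is in use:
#timedatectl set-timezone UTC
#timedatectl              // verify the "Time zone:" line now shows UTC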
Thanks !!!
Tuesday, July 23, 2019
Auto mount issue on AIX HACMP cluster
Today I am discussing an issue I faced while unmounting an NFS mount on an AIX HACMP node. A NetApp filer volume was mounted on the HACMP node, and we received a request to unmount that NFS mount permanently.
We executed the "umount" command to complete this ticket and updated the responsible team, but after some time found that the mount point was being mounted again automatically. We thought somebody had remounted it, so we unmounted it again, but the mount point kept coming back.
So we decided to check the automounter configuration file /etc/auto.direct on the AIX cluster node, and found that there was an entry for that mount point.
/etc/auto.direct entry:
/share/data -bg,intr,soft,rw nfsServername:/vol/xxxxx_cifs_nfs_vol017/share_data
After commenting out the entry:
#/share/data -bg,intr,soft,rw nfsServername:/vol/xxxxx_cifs_nfs_vol017/share_data
To sort out the auto mount issue, we commented out the NFS mount entry in /etc/auto.direct and then unmounted /share/data.
*** Always take a backup of a configuration file before editing it.
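A minimal sketch of the whole fix, following the backup advice above (the map file and mount point are the ones from this example; whether automountd picks up map changes immediately can depend on the AIX level, hence the final verification step):
#cp -p /etc/auto.direct /etc/auto.direct.bkp   // backup of the configuration file first
#vi /etc/auto.direct                           // comment out the /share/data entry
#umount /share/data                            // unmount the NFS share
#mount | grep share/data                       // verify the mount does not come back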
Thanks !!!!!!!!!!!