Monday, October 22, 2018

HACMP file system export from AIX server

Hello everyone, today I am going to share how to export an AIX HACMP file system and the issues I faced while exporting that file system from AIX to a Linux client.

There are 2 ways to export a file system from an AIX HACMP cluster node to a Linux client.

1. Edit /usr/es/sbin/cluster/etc/exports and activate the changes by using the following command.
#exportfs -a -f /usr/es/sbin/cluster/etc/exports

2. Use smitty nfs to export the file system.

#smitty nfs 
Network File System (NFS)
Change / Show Attributes of an Exported Directory
Pathname of exported directory                     [/oracle/log] 
* Version of exported directory to be changed        [3] 

The following screen will then appear, where the AIX admin needs to enter the exported FS name and also the client details to which this file system needs to be exported. The important part is to specify the pathname of the alternate exports file, which looks like below:
Pathname of alternate exports file                 [/usr/es/sbin/cluster/etc/exports]   // this is for an AIX HACMP server; for a non-HACMP server the exports file name is /etc/exports

At the final smitty window, enter values like below:

Type or select values in entry fields.
Press Enter AFTER making all desired changes.

[TOP]                                                   [Entry Fields]
* Pathname of directory to export                     /oracle/log
* Version of exported directory to be changed         3
  Anonymous UID                                      [-2]
  Public filesystem?                                 [no]                                                                             +
* Change export now, system restart or both           both                                                                            +
  Pathname of alternate exports file                 [/usr/es/sbin/cluster/etc/exports]
  Allow access by NFS versions                       [3]
  External name of directory (NFS V4 access only)    []
  Referral locations (NFS V4 access only)            []
  Replica locations                                  []
  Ensure primary hostname in replica list             yes                                                                             +
  Allow delegation?                                  []
  Scatter                                             none                                                                            +
  Security method 1                                  [sys]                                                                            +
      Mode to export directory                       [read-write]                                                                     +
      Hostname list. If exported read-mostly         []
      Hosts & netgroups allowed client access        [linuxclient]

      Hosts allowed root access                      [linuxclient]

After entering all mandatory values, press ENTER. When the OK screen appears, it means the file system has been exported successfully.

Once exported, execute the following command on the Linux client machine to mount it:

# mount  aixhacmpserver:/oracle/log  /oracle/log
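
Since the export above allows only NFS version 3, it can help to request v3 explicitly on the client; the options below are a common choice rather than the only valid ones, and they assume the same server and path as above:

# mount -t nfs -o vers=3,hard aixhacmpserver:/oracle/log /oracle/log
# df -hT /oracle/log     // verify the share is mounted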



When I exported the AIX FS using the 1st method, i.e. by editing the file /usr/es/sbin/cluster/etc/exports and activating the changes with #exportfs -a -f /usr/es/sbin/cluster/etc/exports, the mount on the Linux client failed with the following error:

"mount.nfs: access denied by server while mounting "

When I used smitty nfs to export instead, we did not face that error.

One difference I observed when exporting again using smitty was the following:

Hosts & netgroups allowed client access        [ ]   // with the 1st method, the client hostname was missing here

Hosts allowed root access                      [linuxclient]
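
One plausible explanation (my assumption, based on the standard AIX exports file syntax, not something I verified from the logs) is that the manually edited entry granted root access but no client access list. An entry that names the client in both access= and root= would look like this:

/oracle/log -vers=3,rw,access=linuxclient,root=linuxclient

After editing the file, re-run #exportfs -a -f /usr/es/sbin/cluster/etc/exports to activate the change.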



Thanks !!!



Tuesday, October 16, 2018

Linux File System Types and Features


EXT2
Second extended file system
Does not have journaling
Individual file size can be from 16 GB to 2 TB (depending on block size)
File system size can be from 2 TB to 32 TB
Introduced in 1993


EXT3
Third extended file system
Has journaling
Individual file size can be from 16 GB to 2 TB
File system size can be from 2 TB to 32 TB
Introduced in 2001


EXT4
Fourth extended file system
Has journaling
Max individual file size from 16 GB to 16 TB
Maximum ext4 file system size is 1 EB
Max files: 4 billion
Does not support transparent compression
Does not support snapshots
Introduced in 2008


XFS
eXtended File System, originally from SGI
Has journaling
Max individual file size 8 EB
Maximum file system size 8 EB
Max files: 2^64
Uses B+ trees for directories and file allocation
Supports Guaranteed Rate I/O (GRIO)
Does not support transparent compression

BTRFS
B-Tree FS
Copy-on-write (CoW) design rather than a classic journal
Max individual file size 16 EB
Maximum file system size 16 EB
Max files: 2^64
Supports dynamic inode allocation
Provides support for RAID striping
Supports transparent compression
Supports snapshots
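
To check which of these file systems a Linux server is actually using, the following standard commands are handy:

#df -Th        // mounted file systems with their types
#lsblk -f      // file system type per block device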


Thanks !!!!!!











TAR backup without absolute path and extraction to any location.



Today I am discussing how to take a backup of a directory without specifying the absolute path. A tar backup can be created with or without the absolute path, and there are different ways of creating it.


Sometimes strange things happen when a Linux/Unix admin extracts a tar backup and it does not get extracted to the correct location.


Why was the backup not extracted to the correct location?


The answer to this query is the command that was used while creating the tar backup; if the admin used the correct command while creating it, he will not face any problem.





What will happen after extracting, and where will the backup be extracted? We will also see how to include soft links in a tar backup and preserve permissions.

A Linux/Unix admin sometimes needs to create a tar backup and restore it on another server where the data needs to be extracted at a different path.

So let's see how an admin can create a tar backup using the correct command.

What are the correct ways of creating a tar backup?

There are 2 ways to create a tar backup:

1. With absolute path 
2. Without Absolute path

While creating a tar backup, don't mention the absolute path if you want to extract it at a different path. If the admin creates the tar backup using the absolute path, then while extracting at the target location the OS will try to find the same path; if it cannot find that path, it will extract to the current directory, and because the full path of the directory was specified at creation time, it will recreate the whole directory structure from the parent directory.

So what will be the solution for this?

The solution is: while creating the backup, go to the parent path of the target directory and execute the command to create the tar backup.

#cd /required-path       // where the test dir is present

#tar -zcvf test.tar.gz test
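
If changing directory first is not convenient, GNU tar can do the same thing in one step with the -C option (this assumes GNU tar; the path and archive name are from the example above):

#tar -zcvf /tmp/test.tar.gz -C /required-path test   // archives "test" with a relative path, no cd needed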




How to extract this backup on the destination server at any path:

At the destination, go to the target path and execute the following command:


#tar -zxvf test.tar.gz            // extract to the current directory
#tar -zxvf test.tar.gz -C /destination_directory   // extract to a specific directory




Some examples of the tar command:


If the admin wants tar to follow symbolic links (archiving the files they point to) and to preserve original permissions on extraction, he can use the "h" and "p" flags.


h - follow symbolic links and archive the files they point to (by default tar stores the link itself)
p - preserve original permissions


ex.  tar -cvhf
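
A minimal round trip using both flags might look like this (the archive and destination names are just examples, assuming GNU tar):

#tar -cvhf backup.tar test                        // create, following symlinks inside "test"
#tar -xvpf backup.tar -C /destination_directory   // extract, preserving original permissions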




*** If the admin creates a tar backup with the absolute path, then the same absolute path must exist on the target server. If the absolute path does not exist, tar will extract the backup to the current path where the admin is extracting the tar file and create the same directory structure there.
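
Before extracting, you can check whether an archive was created with absolute or relative paths by listing its members (note that GNU tar strips the leading "/" from member names by default, so behavior can differ between tar implementations):

#tar -tzf test.tar.gz | head   // relative members look like "test/file1"; absolute ones start from the full path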

Monday, October 1, 2018

Crontab for housekeeping old files


MM HH DOM Month DOW command-to-execute

MM    Minutes (0-59)
HH    Hour (0-23)
DOM   Day of Month (1-31)
Month Month of Year (1-12)
DOW   Day of Week (0-6), Sunday=0



Hello friends, today I am going to write about how to housekeep file systems on Linux/Unix using crontab. File system housekeeping is needed when there is not enough space on a file system or it has reached its threshold value.

Let's see one scenario where an admin needs to housekeep gzip files which are older than 100 days. So how is he going to do this task?

In this situation the admin first needs to identify files which are older than 100 days and then remove them from the desired file system. Here we can use the following find command to identify .gz files which are older than 100 days:

#find /oracle/orausr -type f -mtime +100  -name "*.gz"

How to remove these files?

The answer is to use the following command:

#find /oracle/orausr -type f -mtime +100  -name "*.gz" -exec rm {} \; 
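
As a side note, find offers two equivalent forms that avoid spawning one rm process per file; the first is POSIX, the second is specific to GNU find:

#find /oracle/orausr -type f -mtime +100  -name "*.gz" -exec rm -f {} +    // batches many files into each rm call
#find /oracle/orausr -type f -mtime +100  -name "*.gz" -delete             // GNU find built-in delete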

Till now we found a way of finding and deleting *.gz files older than 100 days, but the admin needs to automate this using crontab. Also, what if there are thousands of servers? The admin can't log in to each server and delete these files manually, so in that situation crontab is useful.

The crontab entry for scheduling the above cron job is like below:

5 6 * * * find /oracle/orausr -type f -mtime +100  -name "*.gz" -exec rm {} \;   

This cron job executes every day at 6:05 am, finds any .gz files which are older than 100 days, and deletes them.


** Before scheduling this cron job, please make sure that you have tested it in a test environment and mentioned the correct path for file deletion.
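
A cautious way to do that testing is to run the find without -exec first, and to make the scheduled job log what it deletes (the log file path below is just an example):

#find /oracle/orausr -type f -mtime +100  -name "*.gz"    // dry run: only lists what would be deleted
5 6 * * * find /oracle/orausr -type f -mtime +100 -name "*.gz" -print -exec rm {} \; >> /var/log/gz-housekeep.log 2>&1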


Thanks !!!



Thursday, September 13, 2018

NFS error not responding still trying error on Linux

"NFS error not responding still trying" ........ for filer from storage.
df -hT in a hang state ..........
dsmc q sched not working.........

We received all the above error alerts from the alert monitoring software, and at first we did not get what exactly was wrong, because there were multiple errors on the Linux server. On the affected Linux server we checked /etc/fstab and found that around 6 NFS shares were mounted. When we did ls and cd into these NFS shares, we found issues with 2 of them.

For those 2 NFS shares the ls and cd commands were not succeeding, so we decided to check with the storage admin.
We provided the affected server IP and filer details to them and asked them to check whether everything was correctly shared from their side or not. The storage admin answered that all permissions were OK and the filer was correctly shared to the Linux server. We decided to re-export the same filer again. After re-exporting, we were able to access the NFS shares and the df -hT command was also working.

But this joy of solving the issue was not permanent: the same issue occurred again on this Linux server.
So what's next..........

Now we were thinking that there must be some issue at the network level causing this, and in the Linux server log file "messages" we found the following entries:

18:22:04 linuxclient1 kernel:  nfs: server netappnfsfiler.server.net not responding, still trying
Sep  6 18:22:42 linuxclient1 kernel: nfs: server netappnfsfiler.server.net not responding, still trying
Sep  6 18:23:22 linuxclient1 kernel: nfs: server netappnfsfiler.server.net OK
Sep  6 18:23:22 linuxclient1 kernel: nfs: server netappnfsfiler.server.net OK


From the above logs we found that the filer server was not responding to the Linux server's requests, so there was a possibility that a firewall was blocking communication between the NFS filer server and the Linux client server.
We provided all the required details to the network team, but after analysis they found no issue from their side either. The next pending team was the VMware team who created this VM, but the VM team also said that all the VM configuration was correct.

After their answer we did a Google search and found one interesting thing for this type of error, and that was the MTU. The MTU is the maximum transmission unit of an Ethernet interface on the network, and an incorrect MTU configuration causes performance issues on any Linux/Unix or Windows server. On the affected Linux server the MTU was 9000. We also checked the MTU value on other servers in the same IP range and found that the MTU for those servers was 1500, while on the affected server it was 9000. We decided to take downtime for changing the MTU value. We changed the MTU to 1500 on the primary interface, rebooted the Linux server, and guess what: after the reboot everything was working perfectly. df -hT, dsmc q sched, and also the ls and cd commands worked perfectly on these NFS shares.
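
For reference, this is roughly how the MTU can be checked and changed on a typical Linux box (the interface name eth0 is an example; the permanent-change file path is RHEL-style and varies by distribution):

#ip link show eth0                 // current MTU is shown in the output
#ip link set dev eth0 mtu 1500     // temporary change, lost on reboot
// for a permanent change on RHEL-style systems, set MTU=1500 in /etc/sysconfig/network-scripts/ifcfg-eth0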

In the end we can say that the NFS share hang occurred because of the incorrect MTU configuration, and this NFS hang affected the execution of df -hT, dsmc q sched, and the ls and cd commands.

There may be multiple causes of an NFS hang:

1. NFS server hung or down.
2. Firewall blocking communication between the NFS server and the Linux client.
3. Incorrect MTU configuration on the client or server side.
4. Overloaded NFS server causing timeouts for client requests.
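
A quick triage for each of these causes could look like the following (the server name is from the logs above; the ping size 8972 assumes a 9000-byte MTU minus 28 bytes of IP/ICMP headers):

#showmount -e netappnfsfiler.server.net        // is the export visible at all?
#rpcinfo -p netappnfsfiler.server.net          // are the NFS/mountd RPC services reachable?
#ping -M do -s 8972 netappnfsfiler.server.net  // does a full-size, non-fragmented frame survive the path?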


Thanks !!!


NFS export from AIX HACMP CLUSTER


Doing an NFS export from an AIX HACMP cluster to a Linux server

Scenario: Export /hacmp/export from aixhanode1 to linux host linuxnode1 and linuxnode2

On a PowerHA cluster, /usr/es/sbin/cluster/etc/exports is used for exporting file systems.

Step 1: For a cluster setup, while doing an NFS export from a cluster node, edit the file /usr/es/sbin/cluster/etc/exports. Don't use the /etc/exports file; /etc/exports is used in a NON-CLUSTER environment.
Step 2:
Find the export directory stanza and add the hostnames to which the directory needs to be exported.
Here, find the stanza "/hacmp/export" and append the client names without affecting the current hostname list.
Example:
Find the stanza for /hacmp/export in the file /usr/es/sbin/cluster/etc/exports, add the hostnames linuxnode1 and linuxnode2, and save the file:
#vi /usr/es/sbin/cluster/etc/exports
/hacmp/export -rw,root=linuxnode1:linuxnode2
Step 3:
Activate the changes made in the file /usr/es/sbin/cluster/etc/exports by executing the following command:
#exportfs -a -f /usr/es/sbin/cluster/etc/exports
Step 4:
Check on both clients whether the NFS directory is exported or not by using the below command:
#showmount -e nfsservername
#showmount -e aixhanode1
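
If the export is active, the output lists the directory together with the allowed clients; a hypothetical result for this scenario might look like:

Export list for aixhanode1:
/hacmp/export linuxnode1,linuxnode2
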
Step 5:
Mount the shared directory using the mount command on each client (linuxnode1 and linuxnode2):
#mount aixhanode1:/hacmp/export /mnt      // mounting on a temp mount point
Or you can also mount on a mount point with the same name using the below command.
Before executing it, make sure that /hacmp/export exists on both NFS clients:
#mount aixhanode1:/hacmp/export  /hacmp/export

Verify whether the directory/FS is mounted or not by executing the below command:
#df -hT /hacmp/export

Thanks!!!

Friday, August 31, 2018

Expand AIX disk using chvg



How do you expand volume group space without adding a new disk to the volume group? This post is helpful for AIX setups where the addition of a new disk is time consuming; instead of adding a new disk, it is often a good option to expand the existing disk from the storage end and then grow the volume group at the AIX level using the chvg -g command. Consider a situation where there are no free PPs available in a volume group and the AIX admin wants to immediately add additional free PPs to it.

There are 2 options to add space to an AIX volume group:
1. Add a new disk to the volume group.
2. Expand an existing disk size in the volume group.
If the situation demands immediate space addition to the volume group, then the 2nd option is the best.

Which details does the storage admin need to expand an existing disk on AIX?
The storage admin needs the LUN ID and LUN name to expand the disk from the storage end.

Storage-specific commands to find LUN details:

EMC storage - powermt display dev=hdiskpowerX
IBM storage - mpio_get_config -Av | grep hdiskX
XIV storage - xiv_devlist | grep hdiskX
Also, the admin can use the lspv -u command to see PV details.
There may be different types of storage, so use the command according to the storage type to identify the LUN ID details.

After finding the LUN ID, the AIX admin provides it to the storage admin and asks them to expand that disk by the required size. The next step is to execute chvg -g vgname on the server, then confirm the volume group expansion with:
#lsvg vgname
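
A simple way to confirm the growth is to compare the PP counts before and after (the vgname datavg is just an example):

#lsvg datavg | grep -E "TOTAL PPs|FREE PPs"   // run before and after chvg -g; both counts should increase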

What if the chvg -g vgname command does not pick up the new disk size?
The answer is that the admin needs to varyoff the VG and then vary it back on. Downtime is needed here, because before varying off the VG the admin first has to unmount the corresponding file systems.
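
A sketch of that sequence, assuming a VG named datavg with a single file system /data mounted from it:

#umount /data            // unmount every file system that lives in the VG
#varyoffvg datavg
#varyonvg datavg
#chvg -g datavg          // retry the resize
#lsvg datavg             // confirm the new TOTAL PPs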

How to handle the following error message?
"chvg -g not supported in the rootvg"
0516-1380 chvg: Re-sizing of the disks is not supported for the rootvg.
0516-732 chvg: Unable to change volume group rootvg.
This error message appears on AIX 5.3 and on AIX 6.1 with a TL lower than 03.

The supported versions for the chvg -g rootvg command are as follows:
On AIX V6.1 TL3 and higher (6100-03-00), resizing rootvg disks is supported. On lower versions this command will not work for the AIX rootvg volume group.

The next query is: how does an AIX admin expand a volume group disk in an AIX HACMP environment?
Step 1: First provide the LUN details to the storage team.
Step 2: Run /usr/es/sbin/cluster/sbin/cl_chvg -g datavg.
Step 3: Verify that the volume group size has increased by using the command:
#lsvg vgname


Thanks!!!!