Tuesday, October 16, 2018

Linux File System Types and Features


EXT2
Second extended file system
Does not have journaling
Maximum individual file size: 16 GB to 2 TB
Maximum file system size: 2 TB to 32 TB
Introduced in 1993


EXT3
Third extended file system
Has journaling
Maximum individual file size: 16 GB to 2 TB
Maximum file system size: 2 TB to 32 TB
Introduced in 2001


EXT4
Fourth extended file system
Has journaling
Maximum individual file size: 16 GB to 16 TB
Maximum file system size: 1 EB
Maximum number of files: 4 billion
Does not support transparent compression
Does not support snapshots


XFS
High-performance journaling file system developed by SGI
Has journaling
Maximum individual file size: 8 EB
Maximum file system size: 8 EB
Maximum number of files: 2^64
Uses B+ trees for directories and file allocation
Supports Guaranteed Rate I/O (GRIO)
Does not support transparent compression

BTRFS
B-tree file system
Uses copy-on-write (CoW) for crash consistency instead of a traditional journal
Maximum individual file size: 16 EB
Maximum file system size: 16 EB
Maximum number of files: 2^64
Supports dynamic inode allocation
Provides support for RAID striping
Supports transparent compression
Supports snapshots
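
As a quick aside, to check which file system type an existing mount is using, the following commands work on most Linux systems:

#df -Th        // show mounted file systems along with their types
#lsblk -f      // show block devices with their file system types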


Thanks !!!!!!











TAR backup without absolute paths, and how to extract it to any location.



Today I am discussing how to take a backup of a directory without specifying the absolute path, and the difference between creating a tar backup with and without an absolute path. There are different ways of creating a tar backup.


Sometimes strange things happen when a Linux/Unix admin extracts a tar backup and it does not get extracted to the expected location.


Why was the backup not extracted to the correct location??


The answer lies in the command that was used while creating the tar backup; if the admin used the correct command while creating it, he will not face any problem.





What will happen after extracting, and where will the backup land?? We will also see how to include soft links in a tar backup and how to preserve permissions.

A Linux/Unix admin sometimes needs to create a tar backup and restore it on another server where the data must be extracted at a different path.

So let's see how an admin can create a tar backup using the correct command.

What are the correct ways of creating a tar backup??

There are 2 ways to create a tar backup:

1. With absolute path 
2. Without Absolute path

While creating a tar backup, don't mention the absolute path if you want to extract the backup at a different path later. If the admin creates the tar backup using the absolute path, the archive members carry the whole directory structure starting from the parent directory (GNU tar strips the leading '/' but keeps the rest of the path), so on extraction that entire structure is recreated under the current directory instead of the files landing directly where the admin expects.

So what is the solution for this??

The solution is: while creating the backup, first cd to the path where the target directory is present,

and then execute the command to create the tar backup:

#cd /required-path       // where the test dir is present

#tar -zcvf test.tar.gz test
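
For contrast, here is a minimal sketch of what creating the same backup with an absolute path looks like (the warning shown is GNU tar's; other tar implementations may behave differently):

#tar -zcvf test-abs.tar.gz /required-path/test
tar: Removing leading `/' from member names

The archive members are now stored as required-path/test/..., so extracting this archive recreates that whole structure under the current directory.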




How to extract this backup on the destination server at any path??

On the destination, go to the target path and execute one of the following commands:


#tar -zxvf test.tar.gz            // extract to the current directory
#tar -zxvf test.tar.gz -C /destination_directory   // extract to a specific directory
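
Before extracting, it is a good habit to list the archive contents first, so you can see whether the member names are relative or carry a full path:

#tar -ztvf test.tar.gz       // list archive contents without extracting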




Some more examples of tar flags.


If the admin wants symbolic links followed (so the files they point to are archived) and original permissions preserved, he can use the "h" and "p" flags.


h - follow symbolic links and archive the files they point to (dereference)
p - preserve original permissions


ex.  #tar -cvhpf test.tar test




*** If the admin creates a tar backup with an absolute path, then on the target server the extraction will recreate the same directory structure under the current path where he extracts the tar file, because the full path (minus the leading '/') is stored in the archive.

Monday, October 1, 2018

Crontab for housekeeping old files


MM HH DOM Month DOW command-to-execute

MM    Minutes (0-59)
HH    Hour (0-23)
DOM   Day of month (1-31)
Month Month (1-12)
DOW   Day of week (0-6), Sunday = 0



Hello friends, today I am going to write about how to housekeep a file system on Linux/Unix using crontab. File system housekeeping is needed when there is not enough space on the file system or it has reached its threshold value.

Let's see one scenario where an admin needs to housekeep gzip files that are older than 100 days. So how is he going to do this task??

In this situation the admin first needs to identify files that are older than 100 days and then remove them from the desired filesystem. Here we can use the following find command to identify .gz files older than 100 days:

#find /oracle/orausr -type f -mtime +100  -name "*.gz"

How to remove these files??

The answer is to use the following command:

#find /oracle/orausr -type f -mtime +100  -name "*.gz" -exec rm {} \; 

So far we have found a way of finding and deleting *.gz files older than 100 days, but the admin needs to automate this using crontab. Also, what if there are 1000s of servers?? The admin can't log in to each server and delete these files manually, and in that situation crontab is useful.

The crontab entry for scheduling the above job looks like this:

5 6 * * * find /oracle/orausr -type f -mtime +100  -name "*.gz" -exec rm {} \;   

This cron job executes every day at 6:05 am, finds any .gz files older than 100 days, and deletes them.
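
A slightly safer variant (a sketch; the log path /var/log/gz-housekeeping.log is an assumption, pick one that suits your server) prints each file name before deleting it and appends the output to a log, so you can audit what the job removed:

5 6 * * * find /oracle/orausr -type f -mtime +100 -name "*.gz" -print -exec rm {} \; >> /var/log/gz-housekeeping.log 2>&1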


** Before scheduling this cron job, please make sure that you have tested it in a test environment and mentioned the correct path for file deletion.


Thanks !!!



Thursday, September 13, 2018

"NFS server not responding, still trying" error on Linux

"NFS error not responding still trying" ........ for filer from storage.
df -hT in hang sate ..........
dsmc q sched not working.........

We received all of the above error alerts from the monitoring software, and at first we didn't get what exactly was wrong, because there were multiple errors on the Linux server. On the affected Linux server we checked /etc/fstab and found around 6 NFS shares mounted. When we ran ls and cd against these NFS shares, we found the issue on 2 of them.

For those 2 NFS shares the ls and cd commands were not succeeding, so we decided to check with the storage admin.
We provided the affected server IP and filer details and asked them to check whether everything was correctly shared from their side. The storage admin answered that all permissions were OK and the filer was correctly shared to the Linux server. We decided to re-export the same filer again. After re-exporting, we were able to access the NFS shares and the df -hT command was also working.

But the joy of solving the issue was not permanent; the same issue occurred again on this Linux server.
So what's next..........

Now we were thinking there must be some issue at the network level causing this, and in the Linux server log file "messages" we found the following entries:

Sep  6 18:22:04 linuxclient1 kernel: nfs: server netappnfsfiler.server.net not responding, still trying
Sep  6 18:22:42 linuxclient1 kernel: nfs: server netappnfsfiler.server.net not responding, still trying
Sep  6 18:23:22 linuxclient1 kernel: nfs: server netappnfsfiler.server.net OK
Sep  6 18:23:22 linuxclient1 kernel: nfs: server netappnfsfiler.server.net OK

From the above logs we found that the filer was not responding to the Linux server's requests, so there was a possibility that a firewall was blocking communication between the NFS filer and the Linux client.
We provided all required details to the network team, but after analysis they found no issue from their side either. The remaining team was the VMware team who created this VM, but the VM team also said that all VM configuration was correct.

After their answer we did a Google search and found one interesting lead for this type of error, and that was the MTU. The MTU is the maximum transmission unit of an Ethernet interface on the network. An incorrect MTU configuration causes performance issues on any Linux/UNIX or Windows server. On the affected Linux server the MTU was 9000. We also checked the MTU value on other servers in the same IP range and found that it was 1500 on those servers, while on the affected server it was 9000. We decided to take downtime for changing the MTU value. We changed the MTU of the primary interface to 1500 and rebooted the Linux server, and guess what, after the reboot everything was working perfectly: df -hT, dsmc q sched, and also the ls and cd commands on these NFS shares.
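
For reference, a minimal sketch of checking and changing the MTU on a modern Linux box (the interface name eth0 is an assumption; a runtime change like this does not survive a reboot, so the value also has to be set in the distribution's network configuration files):

#ip link show eth0 | grep mtu      // show the current MTU of the interface
#ip link set dev eth0 mtu 1500     // change the MTU at runtime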

In the end we can say that the NFS share hang occurred because of the incorrect MTU configuration, and this hang affected the execution of df -hT, dsmc q sched, and the ls and cd commands.

There may be multiple causes of an NFS hang (a few quick checks are shown after this list):

1. NFS server hung or down.
2. Firewall blocking communication between the NFS server and the Linux client.
3. Incorrect MTU configuration on the client or server side.
4. Overloaded NFS server causing timeouts for client requests.
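
A few quick checks to narrow down which cause you are hitting (the filer hostname comes from the example above; the ping payload of 8972 bytes assumes a 9000-byte MTU path, i.e. 9000 minus 28 bytes of IP/ICMP headers):

#showmount -e netappnfsfiler.server.net        // is the filer exporting at all?
#rpcinfo -p netappnfsfiler.server.net          // are the NFS RPC services reachable, or firewalled?
#ping -M do -s 8972 netappnfsfiler.server.net  // do jumbo frames survive the path without fragmentation?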


Thanks !!!


NFS export from AIX HACMP CLUSTER


How to do an NFS export from an AIX HACMP cluster to a Linux server.

Scenario: Export /hacmp/export from aixhanode1 to the Linux hosts linuxnode1 and linuxnode2.

On a PowerHA cluster, /usr/es/sbin/cluster/etc/exports is used for exporting filesystems.

Step 1: In a cluster setup, while doing an NFS export from a cluster node, edit the file /usr/es/sbin/cluster/etc/exports. Don't use the /etc/exports file; that one is used in a NON-CLUSTER environment.
Step 2:
Find the stanza of the directory to be exported and add the hostnames of the clients it needs to be exported to.
Here, find the stanza "/hacmp/export" and append the client names at the end without disturbing the current hostname list.
Example:
Find the stanza for /hacmp/export in the file /usr/es/sbin/cluster/etc/exports, add the hostnames linuxnode1 and linuxnode2, and save the file.
#vi /usr/es/sbin/cluster/etc/exports
/hacmp/export -rw,root=linuxnode1:linuxnode2
Step 3:
Activate the changes made in the file /usr/es/sbin/cluster/etc/exports by executing the following command:
#exportfs -a -f /usr/es/sbin/cluster/etc/exports
Step 4:
Check on both clients whether the NFS directory is exported or not by using the command below:
#showmount -e nfsservername
#showmount -e aixhanode1
Step 5:
Mount the shared directory using the mount command (run it on each client):
#mount aixhanode1:/hacmp/export /mnt      // mounting on a temp mount point
Or you can also mount on a mount point of the same name using the command below.
Before executing, make sure that /hacmp/export exists on both NFS clients.
#mount aixhanode1:/hacmp/export /hacmp/export

Verify whether the directory/FS is mounted or not by executing the command below:
#df -hT /hacmp/export
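
To make the mount persistent across reboots on the Linux clients, an /etc/fstab entry along these lines can be added (the mount options here are a minimal assumption; tune them for your environment):

aixhanode1:/hacmp/export  /hacmp/export  nfs  defaults  0 0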

Thanks!!!

Friday, August 31, 2018

Expand AIX disk using chvg



How to expand volume group space without adding a new disk to the volume group?? This post is helpful for AIX setups where the addition of a new disk is time consuming; instead of adding a new disk, it is always a good option to expand the existing disk from the storage end and then grow the volume group at the AIX level using the chvg -g command. Consider a situation where there are no free PPs available in a volume group and the AIX admin wants to immediately add additional free PPs to it.

There are 2 options to add space to an AIX volume group:
1. Add a new disk to the volume group.
2. Expand the size of an existing disk in the volume group.
If the situation demands immediate space addition to the volume group, then the 2nd option is the best one.

Which details does the storage admin need to expand an existing disk on AIX??
The storage admin needs the LUN ID and LUN name for expanding the disk from the storage end.

Storage-specific commands to find LUN details:

EMC storage - powermt display dev=hdiskpowerX
IBM storage - mpio_get_config -Av | grep hdiskX
XIV storage - xiv_devlist | grep hdiskX
The admin can also use the lspv -u command to see PV details.
There may be different types of storage, so use the command matching your storage type to identify the LUN ID details.

After finding the LUN ID, the AIX admin provides it to the storage admin and asks them to expand that same disk by the required size. The next step is for the AIX admin to execute chvg -g vgname on the server, then confirm the volume group expansion with the command
#lsvg vgname

What if the chvg -g vgname command does not pick up the expanded disk space??
The answer is that the admin needs to varyoff the VG and then varyon it again. Downtime is needed here, because before varying off the VG the admin first has to umount the corresponding filesystems, as sketched below.
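
A minimal sketch of that workaround, assuming a volume group named datavg with a single filesystem /data (both names are placeholders):

#umount /data           // umount every filesystem that lives in the VG
#varyoffvg datavg       // deactivate the volume group
#varyonvg datavg        // activate the volume group again
#chvg -g datavg         // re-read the grown disk size
#lsvg datavg            // confirm the additional free PPs
#mount /data            // remount the filesystem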

How to handle the following error message??
"chvg -g not supported in the rootvg"
0516-1380 chvg: Re-sizing of the disks is not supported for the rootvg.
0516-732 chvg: Unable to change volume group rootvg.
This error message will appear on AIX 5.3 and on AIX 6.1 TLs lower than 03.

The supported versions for the chvg -g rootvg command are as follows:
On AIX V6.1 TL3 and higher (6100-03-00), resizing rootvg disks is supported. On lower versions this command will not work for the AIX rootvg volume group.

The next query is how an AIX admin expands a volume group disk in an AIX HACMP environment??
Step 1: First provide the LUN details to the storage team.
Step 2: Execute /usr/es/sbin/cluster/sbin/cl_chvg -g datavg.
Step 3: Observe whether the volume group size increased by using the command
#lsvg vgname


Thanks!!!!

Wednesday, August 29, 2018

VIO update using alternate clone method





Hello everyone, today I am sharing the VIO update procedure using the alternate disk cloning method. In this method we will do the update in a dual-VIO environment. The alternate cloning method first clones the VIO OS onto an alternate disk and then applies the update to the cloned disk. In a dual-VIO environment the admin needs to fail the SEA (primary, operational) over to the secondary VIO's SEA, do the update on the primary VIO, and after a successful update on the primary VIO do the same for the 2nd VIO.

First verify VIO fileset consistency:
#errpt | more    - Check errpt alerts.
#lppchk -v       - Check for fileset inconsistencies.
#installp -C     - Clean up inconsistent filesets.
Check whether the following fileset is installed on the VIO; it is needed while doing the update:
#lslpp -l | grep -i alt_disk_install.rte
Also, most importantly, take a VIO backup before starting the update.
Commands for taking the VIO backup:
#su - padmin
$backupios -file /tmp/VIOImage
$viosbr -backup -file /home/padmin/`hostname`
Once the backup is completed, transfer it to a backup server; it will be needed in case recovery is required. Before starting the VIO update, make sure there is a spare disk available which is the same size as the VIO server's rootvg disk and is not part of any volume group. It is always good to switch network traffic to the secondary VIO while doing the VIO update on the primary. For a single-VIO setup the admin needs downtime, and all client LPARs must be shut down before starting the update.
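
A quick way to confirm that the spare disk is really free (hdisk1 here is an assumption; a disk that belongs to no volume group shows "None" in the VG column):

$lspv | grep hdisk1       // the VG column should read "None" for a free disk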

For an SEA setup, fail over to the secondary VIO using the following commands and then proceed with the VIO update. The SEA adapter mentioned below changes according to your SEA setup, so first identify the SEA in your dual-VIO environment and then change its attribute to "standby".

Step 1:
# lsdev -C | grep -i  "Shared Ethernet Adapter"
ent11          Available      Shared Ethernet Adapter

$entstat -d ent11 |grep -i state 
State: PRIMARY
LAN State: Operational
LAN State: Operational

# lsattr -El ent11 | grep ha_mode
ha_mode       auto     High Availability Mode    
                                        
# chdev -l ent11 -a ha_mode=standby
# lsattr -El ent11 | grep ha_mode
ha_mode       standby  High Availability Mode
After changing "ha_mode" to standby, proceed to the next step.
Step 2:
$updateios -commit    // Commit the current IOS level
Commit all filesets in the applied state before starting the update.

Step 3:
Start the update by using the "alt_root_vg" command:
$alt_root_vg -target hdisk1 -bundle update_all -location /tmp/update

Step 4:
Once the update is done, check whether the bootlist has been updated to the new disk or not:
#bootlist -m normal -o
If the bootlist has changed to the new disk, the admin can proceed with the reboot:
$shutdown -restart
Step 5:
After rebooting from the new disk, check the VIOS OS level by using the ioslevel command:
$license -accept   // Accept the license after the VIO update
$ioslevel          // Check the VIOS OS level after the update
#lppchk -v
Step 6:
After a successful reboot of the VIO, toggle the SEA setting back to the original.
Use the following commands to set the SEA back to primary mode.

On the primary VIO:
# chdev -l ent11 -a ha_mode=auto
# lsattr -El ent11 | grep ha_mode
ha_mode       auto  High Availability Mode

How to verify the VIO OS level after the update, without a reboot, using the chroot method??
Use the chroot command to wake up the cloned disk and change the shell prompt,
to verify the IOS level and oslevel of the VIO server:

#chroot /alt_inst /usr/bin/ksh    // start a shell on the altinst_rootvg clone
#oslevel -s                       // check the oslevel of altinst_rootvg
#lppchk -m3 -v                    // check for inconsistencies
#installp -c all                  // commit the filesets
#lppchk -vm3                      // check for inconsistencies again
#su - padmin                      // switch to padmin
$ioslevel                         // check the IOS level of the VIOS

For the second VIO, follow the same approach while doing the update.

Thanks!!!!