Monday, October 22, 2018

HACMP file system export from AIX server

Hello everyone, today I am going to share how to export an AIX HACMP file system, and the issues I faced while exporting that file system from AIX to a Linux client.

There are 2 ways to export a file system from an AIX HACMP cluster node to a Linux client.

1. Edit /usr/es/sbin/cluster/etc/exports and activate the changes with the following command:
#exportfs -a -f /usr/es/sbin/cluster/etc/exports
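For reference, a minimal sketch of what the corresponding entry in the HACMP exports file might look like. The options shown (rw, root=, access=, vers=) follow the standard AIX exports-file syntax; the hostname linuxclient is just this post's example client, not a value your cluster will have:

```
# /usr/es/sbin/cluster/etc/exports (HACMP) -- a non-HACMP AIX server uses /etc/exports
/oracle/log -vers=3,rw,root=linuxclient,access=linuxclient
```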

2. Use smitty nfs to export the file system.

#smitty nfs 
Network File System (NFS)
Change / Show Attributes of an Exported Directory
Pathname of exported directory                     [/oracle/log] 
* Version of exported directory to be changed        [3] 

The following screen will then appear, where the AIX admin needs to enter the exported file system name and also the details of the client to which the file system should be exported. The important part is to specify the pathname of the alternate exports file, like below:
Pathname of alternate exports file                 [/usr/es/sbin/cluster/etc/exports]   // This is for an AIX HACMP server; on a non-HACMP server the exports file is /etc/exports

In the final smitty window, enter values like below:

Type or select values in entry fields.
Press Enter AFTER making all desired changes.

[TOP]                                                   [Entry Fields]
* Pathname of directory to export                     /oracle/log
* Version of exported directory to be changed         3
  Anonymous UID                                      [-2]
  Public filesystem?                                 [no]                                                                             +
* Change export now, system restart or both           both                                                                            +
  Pathname of alternate exports file                 [ /usr/es/sbin/cluster/etc/exports]
  Allow access by NFS versions                       [3]
  External name of directory (NFS V4 access only)    []
  Referral locations (NFS V4 access only)            []
  Replica locations                                  []
  Ensure primary hostname in replica list             yes                                                                             +
  Allow delegation?                                  []
  Scatter                                             none                                                                            +
  Security method 1                                  [sys]                                                                            +
      Mode to export directory                       [read-write]                                                                     +
      Hostname list. If exported read-mostly         []
      Hosts & netgroups allowed client access        [linuxclient]

      Hosts allowed root access                      [linuxclient]

After filling in all mandatory values, press ENTER; when the OK status appears, it means the file system has been exported successfully.

Once exported, execute the following command on the Linux client machine to mount it:

# mount  aixhacmpserver:/oracle/log  /oracle/log



When I exported the AIX file system using the 1st method (editing /usr/es/sbin/cluster/etc/exports and activating the changes with #exportfs -a -f /usr/es/sbin/cluster/etc/exports), the mount on the Linux client failed with the following error:

"mount.nfs: access denied by server while mounting "

When I used smitty nfs to export instead, we did not face that error.

One difference I observed when exporting again via smitty was the following:

Hosts & netgroups allowed client access        [ ]   // with the 1st method, the client hostname was missing here

Hosts allowed root access                      [linuxclient]



Thanks !!!



Tuesday, October 16, 2018

Linux File System Types and Features


EXT2
Second extended file system
Does not have journaling
Individual file size can be from 16 GB to 2 TB
File system size can be from 2 TB to 32 TB
Introduced in 1993


EXT3
Third extended file system
Has journaling
Individual file size can be from 16 GB to 2 TB
File system size can be from 2 TB to 32 TB
Introduced in 2001


EXT4
Fourth extended file system
Has journaling
Max individual file size from 16 GB to 16 TB
Maximum ext4 file system size is 1 EB
Max files: 4 billion
Does not support transparent compression
Does not support snapshots


XFS
High-performance 64-bit journaling file system, originally developed by SGI
Has journaling
Max individual file size 8 EB
Maximum file system size 8 EB
Max files: 2^64
Uses B+ trees for directories and file allocation
Supports Guaranteed Rate I/O (GRIO)
Does not support transparent compression

BTRFS
B-Tree file system
Uses copy-on-write (CoW) rather than a traditional journal
Max individual file size 16 EB
Maximum file system size 16 EB
Max files: 2^64
Supports dynamic inode allocation
Provides support for RAID striping
Supports transparent compression
Supports snapshots
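To check which of these file system types a given mount point is actually using, here is a small sketch (the path "/" is just an example; any mounted path works):

```shell
#!/bin/sh
# df -T adds a "Type" column (ext2, ext3, ext4, xfs, btrfs, ...) to the usual df output.
df -T /

# GNU stat can also report the type of the file system holding a given file.
stat -f -c %T /
```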


Thanks !!!!!!











TAR backup without absolute path and extract to any location.



Today I am discussing how to take a backup of a directory without specifying the absolute path. A tar backup can be created with or without the absolute path, and there are different ways of creating one.


Sometimes strange things happen when a Linux/Unix admin extracts a tar backup and it does not get extracted to the correct location.


Why was the backup not extracted to the correct location?


The answer to this question lies in the command that was used while creating the tar backup; if the admin used the correct command while creating it, he will not face any problem.





What will happen after extracting, and where will the backup be extracted? We will also see how to handle soft links in a tar backup and how to preserve permissions.

A Linux/Unix admin sometimes needs to create a tar backup and restore it on another server where the data must be extracted at a different path.

So let's see how an admin can create a tar backup using the correct command.

What are the correct ways of creating a tar backup?

There are 2 ways to create a tar backup:

1. With absolute path 
2. Without Absolute path

While creating a tar backup, don't mention the absolute path if you want to extract the backup at a different path. If the backup is created with an absolute path, GNU tar strips the leading "/" but keeps the rest of the path in the member names, so at extraction time it recreates the whole directory structure, starting from the top-level directory, under the directory where the backup is being extracted.

So what is the solution for this?

The solution is: while creating the backup, go to the parent directory of the target directory

and execute the command to create the tar backup:

#cd /required-path       // the parent directory where the test dir is present

#tar -zcvf test.tar.gz test




How do you extract this backup on the destination server at any path?

On the destination server, go to the target path and execute one of the following commands:

#tar -zxvf test.tar.gz            // extract to the current directory
#tar -zxvf test.tar.gz -C /destination_directory   // extract to a specific directory
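Putting the create and extract steps above together, here is a runnable sketch (all paths under /tmp/tardemo are invented for the demo):

```shell
#!/bin/sh
set -e
rm -rf /tmp/tardemo

# Build a sample directory to back up.
mkdir -p /tmp/tardemo/src/test
echo "hello" > /tmp/tardemo/src/test/data.txt

# Create the archive from inside the parent directory, so no
# absolute path is stored in the member names.
cd /tmp/tardemo/src
tar -zcf /tmp/tardemo/test.tar.gz test

# Extract to a different location with -C.
mkdir -p /tmp/tardemo/dest
tar -zxf /tmp/tardemo/test.tar.gz -C /tmp/tardemo/dest

# The directory lands directly under the destination.
cat /tmp/tardemo/dest/test/data.txt   # prints: hello
```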




Some examples of the tar command:


By default, tar archives a symbolic link as a link. If the admin wants tar to follow symbolic links (archiving the files they point to) and to preserve original permissions, he can use the "h" and "p" flags.

h - dereference symbolic links (archive the file the link points to instead of the link itself)
p - preserve original permissions (when extracting)

ex.  tar -cvhf
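The difference the "h" flag makes can be seen in a small sandbox sketch (the /tmp/linkdemo paths and file names are made up for the demo):

```shell
#!/bin/sh
set -e
rm -rf /tmp/linkdemo
mkdir -p /tmp/linkdemo/src
cd /tmp/linkdemo/src
echo "real" > file.txt
ln -sf file.txt link.txt

# Without -h the archive stores link.txt as a symbolic link;
# with -h it stores a regular copy of the file the link points to.
tar -cf  /tmp/linkdemo/aslink.tar link.txt
tar -chf /tmp/linkdemo/deref.tar  link.txt

tar -tvf /tmp/linkdemo/aslink.tar   # shows "link.txt -> file.txt"
tar -tvf /tmp/linkdemo/deref.tar    # shows a plain file entry
```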




*** If the admin creates a tar backup with an absolute path, then on the target server the backup will be extracted relative to the current path where he is extracting the tar file, recreating the same directory structure from the top-level directory inside it, rather than landing directly at the intended location.
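The absolute-path behavior can be demonstrated in a sandbox (all /tmp/absdemo paths are invented for the demo; this shows GNU tar's default of stripping the leading "/" while keeping the rest of the path):

```shell
#!/bin/sh
set -e
rm -rf /tmp/absdemo
mkdir -p /tmp/absdemo/data
echo "x" > /tmp/absdemo/data/f.txt

# Archive using an absolute path; GNU tar strips the leading "/"
# but keeps the rest of the path in the member names.
tar -czf /tmp/absdemo/abs.tar.gz /tmp/absdemo/data 2>/dev/null
tar -tzf /tmp/absdemo/abs.tar.gz    # members start with tmp/absdemo/data/

# Extracting elsewhere recreates the whole tree under the current directory.
mkdir -p /tmp/absdemo/restore
cd /tmp/absdemo/restore
tar -xzf /tmp/absdemo/abs.tar.gz
ls tmp/absdemo/data/f.txt           # the file is buried under the full path
```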

Monday, October 1, 2018

Crontab for housekeeping old files


MM HH DOM Month DOW CommandToExecute

MM    Minutes (0-59)
HH    Hour (0-23)
DOM   Day of Month (1-31)
Month (1-12)
DOW   Day of Week (0-6), Sunday=0



Hello friends, today I am going to write about how to housekeep file systems on Linux/Unix using crontab. File system housekeeping is needed when there is not enough space on a file system or it has reached its threshold value.

Let's see one scenario where an admin needs to housekeep gzip files which are older than 100 days. So how is he going to do this task?

In this situation, the admin first needs to identify the files which are older than 100 days and then remove them from the desired file system. Here we can use the following find command to identify .gz files which are older than 100 days:

#find /oracle/orausr -type f -mtime +100  -name "*.gz"

How do we remove these files?

The answer is to use the following command:

#find /oracle/orausr -type f -mtime +100  -name "*.gz" -exec rm {} \; 
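Before pointing this at a real path like /oracle/orausr, the find expression can be tried safely in a sandbox (the /tmp/hkdemo directory and file names are made up for the demo; "200 days ago" just needs to be older than the +100 cutoff):

```shell
#!/bin/sh
set -e
rm -rf /tmp/hkdemo
mkdir -p /tmp/hkdemo

# One fresh .gz file and one with its mtime pushed 200 days back.
touch /tmp/hkdemo/new.gz
touch -d "200 days ago" /tmp/hkdemo/old.gz

# -mtime +100 matches only files modified more than 100 days ago.
find /tmp/hkdemo -type f -mtime +100 -name "*.gz"                  # lists only old.gz
find /tmp/hkdemo -type f -mtime +100 -name "*.gz" -exec rm {} \;

ls /tmp/hkdemo                                                     # only new.gz remains
```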

Till now we have found a way of finding and deleting *.gz files older than 100 days, but the admin needs to automate this using crontab. Also, what if there are 1000s of servers? The admin can't log in to each server and delete these files manually, so in that situation crontab is useful.

The crontab entry for scheduling the above cron job looks like below:

5 6 * * * find /oracle/orausr -type f -mtime +100  -name "*.gz" -exec rm {} \;   

This cron job executes every day at 6:05 am, finds any .gz files which are older than 100 days, and deletes them.


** Before scheduling this cron job, please make sure that you have tested it in a test environment and mentioned the correct path for file deletion.


Thanks !!!