Monday, December 9, 2019

How to delete millions of files in a directory


When an administrator gets a request to delete a very large number of files, a plain "rm *.bak" will not work (the shell expands the glob and hits the "Argument list too long" limit). The following command solves the problem.

find . -type f -name "*.bak" -exec rm -i {} \;

The above command finds files with the ".bak" extension and deletes them. If the administrator wants to search a specific path instead of the current directory, that path can be passed to find, like below.

find /backup -type f -name "*.bak" -exec rm -i {} \;
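
Note that -exec rm -i {} \; starts one rm process per file and the -i flag asks for confirmation on every single file, which is impractical for millions of files. Two commonly used, faster variants (a sketch; try them on a small test directory first):

#find /backup -type f -name "*.bak" -exec rm -f {} +     // batches many filenames into a single rm call
#find /backup -type f -name "*.bak" -delete              // GNU find removes the matches itself, no rm needed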

Thursday, July 25, 2019

How to change timezone from CEST to UTC on SUSE Linux Enterprise Server

Sometimes a Linux admin needs to change the timezone on a host. The request may come from the application or database team, or the timezone may simply have been configured incorrectly in the first place.

So, let's see how to change the timezone from CEST to UTC.

Procedure:

Step 1: Before changing the timezone, note the current settings using the following commands.

#date
root@linuxhost:/root : date
Thu Jul 18 16:30:38 CEST 2019

root@linuxhost:/root : cd /etc
root@linuxhost:/etc : ls -lrt /etc/localtime
lrwxrwxrwx 1 root root        33 Jul  3 08:57 localtime -> /usr/share/zoneinfo/Europe/Berlin

Step 2:

Now we need to change the timezone to UTC. Before changing this setting, keep a note of the current configuration somewhere safe so it can be restored if needed.

Step 3:

Execute the following commands to change the timezone to UTC.

#rm /etc/localtime
#ln -sf /usr/share/zoneinfo/UTC /etc/localtime

Step 4:

Confirm that the change was applied using the following command.
#ls -lrt /etc/localtime
lrwxrwxrwx 1 root root 23 May  6 12:05 /etc/localtime -> /usr/share/zoneinfo/UTC

The date command also confirms that the timezone has changed from CEST to UTC.

#date

Thu Jul 18 02:40:24 UTC 2019
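
On SLES 12 and later (systemd based), the same change can usually also be made with timedatectl instead of re-creating the symlink by hand; a quick sketch:

#timedatectl set-timezone UTC     // updates the /etc/localtime symlink for you
#timedatectl                      // shows local time, universal time and the configured timezone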


Thanks !!!






Tuesday, July 23, 2019

Auto mount issue on an AIX HACMP cluster

Hello everyone,

Today I am discussing an issue I faced while unmounting an NFS mount on an AIX HACMP node. A NetApp filer share was mounted on the node, and we received a request to unmount that NFS mount permanently.

We executed "#umount"  command to perform this ticket and updated to responsible team, but after some time found that mount point auto mounting. we thought that somebody has mounted and did umount again, but that mount point mounting again.

So we decided to check the configuration file /etc/auto.direct on the AIX cluster node and found an entry for that mount point.

/etc/auto.direct entry:

/share/data -bg,intr,soft,rw  nfsServername:/vol/xxxxx_cifs_nfs_vol017/share_data

After commenting it out:

#/share/data -bg,intr,soft,rw  nfsServername:/vol/xxxxx_cifs_nfs_vol017/share_data


To stop the automatic remount we commented out the NFS entry in /etc/auto.direct and then unmounted /share/data.

*** Always take a backup of a configuration file before editing it.
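
For reference, a minimal sketch of this kind of change (the automountd subsystem name is the usual one on AIX; confirm it with lssrc before restarting anything):

#cp -p /etc/auto.direct /etc/auto.direct.$(date +%Y%m%d)     // dated backup of the map file
#vi /etc/auto.direct                                         // put a leading # on the /share/data line
#lssrc -s automountd                                         // confirm the automounter subsystem name and state
#stopsrc -s automountd ; startsrc -s automountd              // restart it so the commented entry is reread
#umount /share/data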

Thanks !!!!!!!!!!!

Unmount an AIX NFS mount point and remove its entry from /etc/filesystems

Sometimes, while unmounting an NFS mount point, the AIX admin also needs to remove its entry from /etc/filesystems. Let's see how to do this using two different methods.

1. Using the rmnfsmnt command.
2. Using the umount command and then removing the entry manually from /etc/filesystems.

In the first method we execute the following command.

Let's assume the NFS mount point is named "/nfs_aix", so the command looks like this:

#rmnfsmnt -f /nfs_aix -B

-f  specifies the mount point name
-B  removes the /etc/filesystems entry for the specified mount point.

The second method is as follows:

1. umount /nfs_aix
2. Take a backup of /etc/filesystems.
3. Find the entry for /nfs_aix in /etc/filesystems and remove it.
4. Confirm that the entry has been removed from the configuration file.
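
A rough sketch of the second method (entries in /etc/filesystems are whole stanzas, so it is safest to delete the block with vi rather than trying to script the removal):

#umount /nfs_aix
#cp -p /etc/filesystems /etc/filesystems.$(date +%Y%m%d)     // backup before editing
#vi /etc/filesystems                                         // delete the complete "/nfs_aix:" stanza
#grep -p "/nfs_aix:" /etc/filesystems                        // AIX grep -p prints the whole stanza; no output means it is gone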

Thanks !!!

Tuesday, June 4, 2019

Umask value modification for a specific user account on a Linux/UNIX server


                                                                 
While doing user administration we received a request for a permanent umask value change.

First, let's understand what umask is and what exactly this value does when an admin creates a file or directory on a UNIX or Linux system.

Answer: the umask value determines the final permission bits of a newly created directory or file.
By changing this value the admin can adjust these permissions according to the security requirements.
The default umask value is 022.

How are directory and file permissions calculated from the umask?
Ans:

Directory:  777 - 022 = 755    final permission for a new directory after subtracting the umask.
File:       666 - 022 = 644    final permission for a new file after subtracting the umask.
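
A quick way to see this calculation in practice is to create a test file and directory under each umask (a small sketch, run in any scratch directory):

#umask 022
#touch f022 ; mkdir d022
#ls -ld f022 d022        // f022 is -rw-r--r-- (644), d022 is drwxr-xr-x (755)
#umask 002
#touch f002 ; mkdir d002
#ls -ld f002 d002        // f002 is -rw-rw-r-- (664), d002 is drwxrwxr-x (775)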


Let's see what the scenario was.

Scenario:

For the user oracle, change the umask value from 022 to 002.

Solution: simply edit the user's .profile with the vi editor, add the line "umask 0002", and save the file. After saving, log out and log back in to the account for the change to take effect, or reload .profile using one of the commands below.

Reload .profile using any one of the following commands.

1. oracle@/home/oracle : . $HOME/.profile
2. oracle@/home/oracle : . .profile


After reloading .profile, executing the "umask" command will show the value changed from 022 to 002. This method is useful when the change needs to be persistent.

Thanks.

Wednesday, March 13, 2019

How to configure and change SMTP server on AIX


Hello everyone, today I am going to share how an AIX admin can modify an existing SMTP server configuration.
When an AIX admin migrates an LPAR from one data center to another, the SMTP mail server is usually different after the migration. The new mail server must be set in the "sendmail.cf" file; otherwise mail alerts from the applications will not be generated, or will get stuck in the queue.

We migrated an AIX LPAR, performed the post-checks and handed the system over to the development/application team, but after the handover they complained that they were not receiving mail alerts on their team mail ID from the migrated AIX server. Analysis showed that the mail server configuration had been missed and needed to be corrected. After replacing the mail server with the correct name, everything worked fine and the team received mail again.

Steps we followed for the SMTP server change:
Step1 : vi /etc/sendmail.cf
sendmail.cf -> /etc/mail/sendmail.cf   // /etc/sendmail.cf is a link to this file

Step 2: Find the "smart relay" (DS) entry in the configuration file, like below:
DSoldsmtp.server.com    //old SMTP server

Replace the old SMTP name with the new SMTP server that suits the current environment after the AIX LPAR migration:
DSnewsmtp.server.com    //new SMTP server

Step 3:
Stop the sendmail service and start it again using the following commands.
#stopsrc -s sendmail
# startsrc -s sendmail -a "-bd -q30m"

-bd     starts the sendmail process in the background as a daemon.
-q30m   processes the mail queue every 30 minutes.
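
After the restart it is worth confirming that mail actually leaves the box; a small sketch (the recipient address below is only a placeholder):

#echo "SMTP relay test" | mail -s "relay test from $(hostname)" teamalias@example.com     // send a test mail
#mailq                                                                                    // queue should drain; stuck entries point to relay problems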

Thanks!!!

Tuesday, March 12, 2019

mount.nfs Remote I/O error


Yesterday I exported /aix/share from an AIX 7.1 server with NFS version 3, and when I tried to mount it on a SUSE Linux 12 client with the following command I got the error "mount.nfs: Remote I/O error".
root# mount aixNFSserver:/aix/share /linux/share
mount.nfs: Remote I/O error

At first I did not understand why this error occurred, because everything was fine on the NFS server side: the permissions and the client hostname entry in /etc/exports on the NFS server were correct. The message looked like an I/O error on the exported filesystem, but on the NFS server the shared filesystem was healthy and showed no I/O errors. Then I checked the default NFS version on SUSE Linux 12, found that it is NFS version 4, and noticed that specific mount options were being used for other AIX NFS shares already mounted on this SUSE Linux client.

Observations:
Environment:
Source NFS server: AIX 7.1
Exported NFS version: NFSv3
Target Linux client: SUSE Linux 12
Default NFS version on the client: NFSv4
The /etc/fstab entry for another share already mounted from AIX to this Linux client used the following options:

aixNFSserver2:/aix/soft /linux/soft nfs     defaults,nfsvers=3,rw,intr 0 0

After this observation I added an entry for /aix/share as below, executed the mount command, and guess what, it was successful:

aixNFSserver:/aix/share /linux/share nfs    defaults,nfsvers=3,rw,intr 0 0

Options and their meaning:
intr : allows NFS requests to be interrupted if the server goes down or cannot be reached.
nfsvers= : specifies which version of the NFS protocol to use. If the admin does not choose a version, NFS uses the highest version supported by both sides.

So what caused the earlier mount failure with the error "mount.nfs: Remote I/O error"?

Answer:
Previously we did not specify the NFS version while mounting the share; we executed the mount command directly, like below.

root# mount aixNFSserver:/aix/share /linux/share

We did not specify the mount option "nfsvers=3". The default NFS version on SUSE 12 is NFSv4, while the share exported from the AIX NFS server is NFSv3. Because we did not mention any NFS version, the Linux client chose the highest version it supports, NFSv4, and that mismatch caused the Remote I/O error while mounting.

We can also specify the mount option directly on the mount command line. The example below shows how to pass the option while mounting the NFS share from AIX to SUSE Linux to avoid the error "mount.nfs: Remote I/O error".

#mount -t nfs aixNFSserver:/aix/share /linux/share -o nfsvers=3

Or

First make an entry in /etc/fstab like below and then execute the command #mount -a.
aixNFSserver:/aix/share /linux/share nfs    defaults,nfsvers=3,rw,intr 0 0

Finally, we conclude that specifying the correct NFS version while mounting the NFS share avoids the error "mount.nfs: Remote I/O error".
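
One way to avoid guessing the version is to ask the server what it actually offers before mounting; a quick check from the Linux client (a sketch):

#rpcinfo -p aixNFSserver | grep nfs     // lists the registered NFS program versions (only 2/3 listed -> mount with nfsvers=3)
#showmount -e aixNFSserver              // lists the paths the server exports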


Thanks !!!!!

Monday, March 11, 2019

How to deallocate an alias IP from an AIX HACMP cluster.


Scenario:

We had a request to deallocate a service IP from an AIX HACMP node and assign that same IP to a server on the Linux platform.
The application team was moving their application from AIX to Linux, so they wanted the same IP address that was currently assigned on the AIX HACMP node.

Solution :

How do we deallocate a service IP from an AIX HACMP node? The answer is simply to bring down the resource group to which the service IP belongs.

Steps followed for doing this activity:


IP address before bringing down the resource group:


root@:/root : ifconfig -a
en0: BROADCAST,NOTRAILERS,RUNNING,SIMPLEX,MULTICAST,GROUPRT,64BIT,CHECKSUM_OFFLOAD(ACTIVE),LARGESEND,CHAIN>
        inet 192.168.1.XX netmask broadcast
        inet 10.1.XXX.XXX netmask broadcast
         tcp_sendspace 262144 tcp_recvspace 262144 rfc1323 1


Here the service IP is 10.1.XXX.XXX, so that is the address we need to bring down.


Step 1:

root@/root : clRGinfo
--------------------------------------------------------------------------------------
Group Name                   Group State      Node
---------------------------------------------------------------------------------------
rg_app1                   ONLINE           node1     // IP is in rg_app1 Resource group
                          OFFLINE          node2


Step 2:

Stop the resource group using smitty clstop.


                                                             Stop Cluster Services

Type or select values in entry fields.
Press Enter AFTER making all desired changes.

                                                        [Entry Fields]
* Stop now, on system restart or both                 now                                                                                      +
  Stop Cluster Services on these nodes               [node1]                                                                         +
  BROADCAST cluster shutdown?                         false                                                                                    +
* Select an Action on Resource Groups                 Bring Resource Groups Offline


Step 3:

Once stopped, confirm using the following command.

root@/root : clRGinfo
--------------------------------------------------------------------------------------
Group Name                   Group State      Node
---------------------------------------------------------------------------------------
rg_app1                   OFFLINE           node1     // here rg_app1 is OFFLINE.
                          OFFLINE          node2



Step 4:

After the resource group goes offline, the alias IP is also deallocated from interface en0.
If you look at the ifconfig -a output you will see that the alias service IP is gone after the resource group has been stopped.

root@:/root : ifconfig -a
en0: BROADCAST,NOTRAILERS,RUNNING,SIMPLEX,MULTICAST,GROUPRT,64BIT,CHECKSUM_OFFLOAD(ACTIVE),LARGESEND,CHAIN>
        inet 192.168.1.XX netmask  broadcast
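
To double-check that the service IP has really been released before it is reused on the Linux side, a quick verification sketch (10.1.XXX.XXX stands for the anonymized service IP above):

#netstat -in | grep 10.1        // on the HACMP node: the alias should no longer be listed against en0

From any other host, a ping of 10.1.XXX.XXX should now get no reply until the address is configured on the new Linux server.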


What about a non-HACMP AIX server? In that situation you need to bring down the AIX host, or bring down the interface from the HMC console.
Why the HMC console?

Because after the interface goes down the AIX admin still needs access to the LPAR; if the admin is using a PuTTY SSH session, the server will no longer be reachable over SSH once the interface is down. It will only be accessible from the HMC console.

     
     



Thanks !!!!

AIX NFS export property change using smitty nfs on an AIX cluster.

Hello everyone,

Today I am going to share how to change the properties of an existing NFS share on an AIX cluster setup, when the existing exports file (/usr/es/sbin/cluster/etc/exports) is too large to edit by hand and editing it introduces unreadable characters.

Scenario:

We had a scenario where we needed to add a new hostname to an already exported NFS share on an AIX cluster setup. The hostname itself was the same but the domain had changed, and because of this the NFS share was showing an error. The application team therefore requested that we delete the previous hostname, add the hostname with its FQDN, and re-export.

Let's see how we implemented and solved the above scenario and which issue we faced.

Solution:

When we tried to edit the existing exports file (/usr/es/sbin/cluster/etc/exports) on the AIX cluster node, we found that it contained too many hostname entries to edit comfortably, and editing it also generated some unreadable characters. So we decided to use smitty nfs to edit this file instead.

While changing an existing export, make sure you enter the correct pathname for the "HA-NFS" config file, which is /usr/es/sbin/cluster/etc/exports.


* Pathname of directory to export                  [/clnfs]
  Anonymous UID                                              [-2]
  Public filesystem?                                        no
* Export directory now, system restart or both              both

  Pathname of alternate exports file                        [/usr/es/sbin/cluster/etc/exports]

Steps we followed to remove the old hostname and add the same hostname with the new domain:

1. smitty nfs

2. Change characteristics of an existing NFS share.

3. Choose the NFS share name, here /clnfs.

4. The version of NFS is 3.

5. Remove the old hostname with the old domain and add the same hostname with the new domain.

  HOSTS allowed root access                          [add hostname]
  HOSTS & NETGROUPS allowed client access            [add hostname]
  PATHNAME of Exports file if using HA-NFS           [/usr/es/sbin/cluster/etc/exports]

6. Press Enter.

After the successful re-export, make sure that the NFS share is exported correctly.

7. At the client side, remount the NFS export.


*** The issue we faced while doing the re-export was the following error:

"exportfs: 1831-189 hostname: unknown host "

This error means the NFS server was unable to resolve the newly added hostname.

Solution:


We added the new hostname to the /etc/hosts file on the AIX NFS server in the following format:

IP      FQDN          HOSTNAME

and the problem was resolved.
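
After fixing /etc/hosts it is worth verifying both the name resolution and the export on the AIX NFS server; a small sketch (newhost.newdomain.com is only a placeholder for the re-added client):

#host newhost.newdomain.com          // should now resolve to the client IP
#exportfs | grep /clnfs              // confirm /clnfs is exported with the new hostname in its access list
#showmount -e localhost              // another view of what is currently exported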




Thanks  !!!









PARTED on SUSE Linux 12


To create a new partition on SUSE Linux 12, admins generally use the "parted" command. One very important thing to highlight here: a Linux admin cannot create a partition larger than 2 TB using fdisk, and that is where "parted" comes into the picture.

The parted command is also useful when a Linux admin wants to extend an existing disk on SUSE 12: when trying to create a second (or any additional) partition on an extended disk, "fdisk" will not work and throws the following error on SUSE 12.

Error :

Value out of range. 

So the solution to this issue is to always use the "parted" command on SUSE 12 for partitioning an existing disk.

Now I will discuss how to create a partition on a new disk on SUSE 12.

Scenario:

Create a new partition on /dev/sdb of size 100 GB.

root@/root : parted /dev/sdb
GNU Parted 3.1
Using /dev/sdb
Welcome to GNU Parted! Type 'help' to view a list of commands.
(parted)
(parted) mklabel gpt                        // assign the "gpt" label type to the disk
(parted) mkpart primary 1049K 100%          // create a primary partition using 100% of the free space
(parted) set 1 lvm on                       // set the lvm flag on partition 1
(parted) p
Model: VMware Virtual disk (scsi)
Disk /dev/sdb: 100GB
Sector size (logical/physical): 512B/512B
Partition Table: gpt
Disk Flags:

Number  Start   End     Size    File system  Name     Flags

 1      1049kB  100GB   100GB                primary  lvm

(parted) q

Information: You may need to update /etc/fstab.

root@:/root : partx -a /dev/sdb   // add newly created partitions.

root@:/root : pvcreate /dev/sdb1    // create new PV 


This is how we created a 100 GB partition on SUSE Linux using parted. The admin can use the newly created PV to expand an existing volume group or to create a new volume group.

Example.

1.
Add /dev/sdb1  to existing volume group vg01

#vgextend vg01 /dev/sdb1

Or
2.

Create new volume group vg02 using /dev/sdb1 PV.

#vgcreate vg02 /dev/sdb1
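
As a side note, the same label/partition/flag sequence can also be run non-interactively with parted's -s (script) option, which is handy when several disks have to be prepared; a sketch for the /dev/sdb example above (destructive, so double-check the device name):

#parted -s /dev/sdb mklabel gpt
#parted -s /dev/sdb mkpart primary 1MiB 100%
#parted -s /dev/sdb set 1 lvm on
#partx -a /dev/sdb
#pvcreate /dev/sdb1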


Thanks !!!!!



Sunday, January 13, 2019

Suse Linux SLES 12 fdisk error "Value out of range"

Hello everyone, I am writing this post for Linux admins who have faced (or will face) the following error when using the fdisk command for partitioning on SUSE Linux 12.

"Error : Value out of range"

Task details 
==============

Increase the size of the following file system to 100 GB.

Procedure followed :
=================
Filesystem details 

# df -hT /data
Filesystem                 Type      Size    Used      Avail  Use% Mounted on

/dev/mapper/vg01-lvdata      50G   1.0G        49G   2% /data

# vgs
  VG     #PV #LV #SN Attr     VSize    VFree
  vg00     1       3     0      wz--n- 19.47g   1.47g

  vg01     1        8   0       wz--n-  50.0g   0.00 

Disk layout on which "/data" resides:

sdb                          8:16   0  50G  0 disk
`-sdb1                       8:17   0  50G  0 part

  |-vg01-lvdata    254:3    0   50G  0 lvm  /data

SCSI details

root#  lsscsi

[0:0:1:0]    disk    VMware   Virtual disk     1.0   /dev/sdb

There was no free space in vg01, so we decided to extend the existing disk to 100 GB. We provided the SCSI details to the VMware team and requested an increase of /dev/sdb from 50 GB to 100 GB. After the VMware team increased the size, we rescanned the disk using the following command:

#echo 1 >/sys/block/sdb/device/rescan

After the rescan, the output of lsblk is:

#lsblk |grep -i sdb
sdb                          8:16   0  100G  0 disk

The next step was to create a new partition using the fdisk utility. I ran #fdisk /dev/sdb and printed the existing partition table before creating a new one; there was already one partition, /dev/sdb1, so the new partition would be /dev/sdb2.

Steps followed for partitioning using fdisk :

#fdisk /dev/sdb

by pressing "n" started new partition creation process.

First sector: press Enter (it takes the default value, which is where the extended space begins).

End sector, +sectors or +size{K,M,G}: +50G    // pressing Enter gave the following error

  "Error : Value out of range".  

After searching on Google and taking a suggestion from a senior, the conclusion was: always use "parted" for partitioning on SLES 12 in a situation like this, where the disk already has one primary partition and a second partition such as /dev/sdb2 has to be created.

After this I used parted to create the partition on /dev/sdb:

Procedure for partitioning using parted:
*** Please use the parted command carefully; it can erase all data on the disk if used incorrectly.



Linux#: parted /dev/sdb

GNU Parted 3.1
Using /dev/sdb
Welcome to GNU Parted! Type 'help' to view a list of commands.
(parted) print      //print partition details on /dev/sdb
Error: The backup GPT table is not at the end of the disk, as it should be.  This might mean that another operating system
believes the disk is smaller.  Fix, by moving the backup to the end (and removing the old backup)?
Fix/Ignore/Cancel? fix     // enter fix

Warning: Not all of the space available to /dev/sdb appears to be used, you can fix the GPT to use all of the space (an
extra 629145600 blocks) or continue with the current setting?
Fix/Ignore? Fix   // enter fix

Model: VMware Virtual disk (scsi)
Disk /dev/sdb: 100GB
Sector size (logical/physical): 512B/512B
Partition Table: gpt_sync_mbr
Disk Flags:

Number  Start   End    Size   File system  Name     Flags
 1      1049kB  50GB   50GB                primary  lvm    // existing /dev/sdb1 partition

(parted) mkpart primary 50GB 100%      // create /dev/sdb2, starting at 50GB and using 100% of the remaining space
(parted) print

Model: VMware Virtual disk (scsi)
Disk /dev/sdb: 100GB
Sector size (logical/physical): 512B/512B
Partition Table: gpt_sync_mbr
Disk Flags:
Number  Start   End    Size   File system  Name     Flags
 1      1049kB  50GB   50GB                  primary        lvm
 2       50GB    100GB   50GB                 primary  

(parted) set 2 lvm                                // setting flag to "lvm" for 2nd partition /dev/sdb2
New state?  [on]/off? on
(parted) print

Model: VMware Virtual disk (scsi)
Disk /dev/sdb: 100GB
Sector size (logical/physical): 512B/512B
Partition Table: gpt_sync_mbr
Disk Flags:

Number  Start   End    Size   File system  Name     Flags
 1      1049kB  50GB  50GB               primary  lvm
 2      50GB   100GB  50GB               primary  lvm

(parted) quit
Information: You may need to update /etc/fstab.

#partx -a /dev/sdb      // Add the specified partition
#pvcreate /dev/sdb2
#vgextend vg01 /dev/sdb2

#vgs
VG       #PV #LV #SN Attr   VSize  VFree
  vg00     1      3      0     wz--n- 19.47g   1.47g
  vg01     1      1      0     wz--n-  100.0g    50.0g

#lvextend -L +50G /dev/mapper/vg01-lvdata
#xfs_growfs  /dev/mapper/vg01-lvdata

# df -hT /data
Filesystem                 Type      Size    Used      Avail  Use% Mounted on

/dev/mapper/vg01-lvdata      100G   1.0G   99G   1% /data


This is how we resized the xfs filesystem on SUSE Linux 12: when fdisk partitioning failed, we used parted for the partitioning, added the new PV to vg01, and increased /data.
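
As a side note, newer LVM versions can grow the filesystem in the same step as the logical volume with the -r (--resizefs) option, so the last two commands can usually be combined; a sketch:

#lvextend -r -L +50G /dev/mapper/vg01-lvdata     // extends the LV and grows the xfs filesystem in one step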

Thanks !!!

How to change an existing user ID on SUSE Linux using the usermod command

Hello friends,

Today I am going to share how a Linux admin can change an existing user ID. While doing day-to-day user administration tasks we got a ticket to change an existing user ID.

Request Details:

Change the oracle user ID from 3600 to 3800.

Prerequisite checks before changing the user ID:

1. Make sure that the new user ID we are going to use is not already in use by another user.
Here 3800 is the new user ID, so we can check whether it is in use with the following command.

#cat /etc/passwd |grep -i 3800

2. No process owned by user ID 3600 should be running while changing the user ID.
If any process is still running under the old user ID (3600) while changing it, usermod throws the following error:

usermod: user oracle is currently used by process 17340


After checking all the prerequisites we executed the command below to change the user ID.

Syntax :  usermod -u [new_user_id]  [user_name]

root@Linux:/root : usermod -u 3800 oracle

-u  the new user ID for the oracle user.

After executing the above command we got an error message like below:

"usermod: user oracle is currently used by process 17340"

After checking the process details we found it was an oracle process running under the oracle user ID (3600), so we checked with the DBA team whether we should kill that process. After their confirmation we killed the process with "#kill -9 17340", re-ran the usermod command, and this time it executed successfully.

*** In a production environment, double-check the impact of killing any existing process; it can affect the currently running application/database.

Check the modified UID using the command below.

#cat /etc/passwd |grep -i oracle
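
One follow-up worth remembering: usermod -u re-owns the files in the user's home directory, but files elsewhere that still carry the old numeric UID 3600 have to be re-owned manually; a hedged sketch (adjust the search paths to your environment):

#find / -xdev -uid 3600                              // list anything still owned by the old UID
#find / -xdev -uid 3600 -exec chown oracle {} \;     // re-own it to the oracle user
#find / -xdev -uid 3600                              // run again; no output means nothing was missed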

Thanks !!!