Wednesday, March 13, 2019

How to configure and change SMTP server on AIX


Hello everyone, today I am going to share how an AIX admin can modify an existing SMTP server configuration.
When an AIX LPAR is migrated from one data center to another, the SMTP mail server is often different after the migration. The new mail server must be updated in the "sendmail.cf" file, otherwise mail alerts for applications will not be generated, or they will get stuck in the queue.

We migrated an AIX LPAR, performed post-migration checks, and handed the server over to the development/application team. After handover, they complained that they were not receiving mail alerts on their team mail ID from the migrated AIX server. On analysis, we found that the mail server configuration had been missed and needed to be corrected. After replacing the mail server entry with the correct name, everything worked fine and the team started receiving mail.

Steps we followed for the SMTP server change:
Step 1: vi /etc/sendmail.cf
/etc/sendmail.cf -> /etc/mail/sendmail.cf   // /etc/sendmail.cf is a symbolic link to the actual file in /etc/mail

Step 2: Find the “smart relay” host entry (the DS line) in this configuration file, like below:
DSoldsmtp.server.com    //old SMTP server

Replace the old SMTP name with the new SMTP server that suits the environment after the AIX LPAR migration:
DSnewsmtp.server.com    //new SMTP server
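
A quick sanity check before and after the edit is to search for the DS line directly (just a convenience; the pattern assumes a single DS entry in the file):

# grep '^DS' /etc/mail/sendmail.cf    // prints the currently configured smart relay
DSnewsmtp.server.com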

Step 3:
Stop the sendmail service and restart it using the following commands:
#stopsrc -s sendmail
# startsrc -s sendmail -a "-bd -q30m"

-bd   Start the sendmail process in the background as a daemon.
-q30m   Process the mail queue every 30 minutes.
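
Once sendmail is back up, it is worth confirming that mail actually flows through the new relay. A minimal test (the recipient address below is a placeholder):

# echo "SMTP relay test" | mail -s "test after relay change" user@yourdomain.com
# mailq    // the queue should drain; messages stuck here point to a relay problem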

Thanks!!!

Tuesday, March 12, 2019

mount.nfs Remote I/O error


Yesterday I exported /aix/share from an AIX 7.1 server with NFS version 3, and when I tried to mount it on a SUSE Linux 12 client with the following command, I got the error “mount.nfs: Remote I/O error”:
root# mount aixNFSserver:/aix/share /linux/share
mount.nfs: Remote I/O error

At first, I did not understand why this error occurred, because everything was fine on the NFS server side: permissions and the client hostname entry in /etc/exports on the NFS server were correct. Then I thought it looked like an I/O error on the exported file system, but on the NFS server the shared file system was healthy and showed no I/O errors. I then checked the default NFS version for SUSE Linux 12, found that it is NFS version 4, and noticed that a specific mount option was being used for other AIX NFS shares already mounted on this SUSE client.

Observations:
Environment:
Source NFS server: AIX 7.1
Export version: NFSv3
Target Linux client: SUSE Linux 12
Default NFS version on client: NFSv4
We found an /etc/fstab entry with the following options for another share already mounted from AIX to this Linux client:

aixNFSserver2:/aix/soft /linux/soft nfs     defaults,nfsvers=3,rw,intr 0 0

After this observation, I added an entry for /aix/share as below, executed the mount command, and it was successful:

aixNFSserver:/aix/share /linux/share nfs    defaults,nfsvers=3,rw,intr 0 0

Options and their meaning:
intr : Allows NFS requests to be interrupted if the server goes down or cannot be reached.
nfsvers= : Specifies which version of the NFS protocol to use. If the admin does not choose a version, NFS uses the highest version supported by both client and server.
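
If you are not sure which NFS versions the server actually offers, you can query it from the client before mounting. For example (note that NFSv4 often does not register with rpcbind, so this check is most useful for confirming v2/v3 availability):

root# rpcinfo -p aixNFSserver | grep nfs
    100003    2   udp   2049  nfs
    100003    3   udp   2049  nfs
    100003    3   tcp   2049  nfs    // only v2/v3 listed here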

What was the reason the earlier mount failed with the error “mount.nfs: Remote I/O error”?

Answer:
Previously, we did not specify the NFS version while mounting the share; we directly executed the mount command like below.

root# mount aixNFSserver:/aix/share /linux/share

We did not specify the mount option “nfsvers=3”. The default NFS version on SUSE 12 is NFSv4, while the version exported by the AIX NFS server is NFSv3. Since we did not mention any NFS version while mounting, the Linux side chose the highest supported version, NFSv4, and this mismatch caused the Remote I/O error.

We can also specify the option directly on the mount command line. The example below shows how to pass the mount option while mounting an NFS share from AIX to SUSE Linux to avoid the error "mount.nfs: Remote I/O error":

#mount -t nfs aixNFSserver:/aix/share /linux/share -o nfsvers=3

Or

First make an entry in /etc/fstab like below and then execute the command #mount -a.
aixNFSserver:/aix/share /linux/share nfs    defaults,nfsvers=3,rw,intr 0 0
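
After mounting, you can confirm which NFS version was actually negotiated on the SUSE client (output trimmed; the exact flags vary by distribution):

root# nfsstat -m
/linux/share from aixNFSserver:/aix/share
 Flags: rw,vers=3,...    // vers=3 confirms the v3 mount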

Finally, we conclude that specifying the correct NFS version while mounting an NFS share avoids the error “mount.nfs: Remote I/O error”.


Thanks !!!!!

Monday, March 11, 2019

How to deallocate alias IP from AIX HACMP cluster.


Scenario:

We had a request to deallocate a service IP from an AIX HACMP node and assign that same IP to a server on the Linux platform.
The application team was moving their app from AIX to Linux, so they wanted the same IP address that was currently assigned on the AIX HACMP node.

Solution :

How do we deallocate a service IP from an AIX HACMP node? The answer is simple: bring down the resource group to which the service IP belongs.

Steps followed for doing this activity:


IP addresses before bringing down the resource group:


root@:/root : ifconfig -a
en0: BROADCAST,NOTRAILERS,RUNNING,SIMPLEX,MULTICAST,GROUPRT,64BIT,CHECKSUM_OFFLOAD(ACTIVE),LARGESEND,CHAIN>
        inet 192.168.1.XX netmask broadcast
        inet 10.1.XXX.XXX netmask broadcast
         tcp_sendspace 262144 tcp_recvspace 262144 rfc1323 1


Here the service IP is 10.1.XXX.XXX, so we need to bring 10.1.XXX.XXX down.
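
Before touching anything, you can also confirm which resource group and network a service label belongs to. On many PowerHA/HACMP levels the cllsif utility lists this mapping (the path can vary by version):

root@/root : /usr/es/sbin/cluster/utilities/cllsif
// lists each service/boot label with its network, node and IP address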


Step 1:

root@/root : clRGinfo
--------------------------------------------------------------------------------------
Group Name                   Group State      Node
---------------------------------------------------------------------------------------
rg_app1                   ONLINE           node1     // IP is in rg_app1 Resource group
                          OFFLINE          node2


Step 2:

Stop resource group using smitty clstop


                                                             Stop Cluster Services

Type or select values in entry fields.
Press Enter AFTER making all desired changes.

                                                        [Entry Fields]
* Stop now, on system restart or both                 now                                                                                      +
  Stop Cluster Services on these nodes               [node1]                                                                         +
  BROADCAST cluster shutdown?                         false                                                                                    +
* Select an Action on Resource Groups                 Bring Resource Groups Offline
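
If you only want the resource group offline without stopping cluster services, many PowerHA levels also allow this from the command line (verify the flags on your version):

root@/root : /usr/es/sbin/cluster/utilities/clRGmove -g rg_app1 -n node1 -d    // -d brings rg_app1 offline on node1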


Step 3:

Once stopped, confirm using the following command:

root@/root : clRGinfo
--------------------------------------------------------------------------------------
Group Name                   Group State      Node
---------------------------------------------------------------------------------------
rg_app1                   OFFLINE           node1     // here rg_app1 is OFFLINE.
                          OFFLINE          node2



Step 4:

After the resource group goes offline, the alias IP is also deallocated from interface en0.
If you look at the ifconfig -a output, you will see that the alias service IP is gone after the resource group stop:

root@:/root : ifconfig -a
en0: BROADCAST,NOTRAILERS,RUNNING,SIMPLEX,MULTICAST,GROUPRT,64BIT,CHECKSUM_OFFLOAD(ACTIVE),LARGESEND,CHAIN>
        inet 192.168.1.XX netmask  broadcast


What about a non-HACMP AIX server? In that situation you need to bring down the AIX host, or the other option is to bring down that interface from the HMC console.
Why the HMC console?

Because the AIX admin still needs access to the LPAR after the interface goes down. If the admin is using a PuTTY SSH session, the server will no longer be reachable over SSH once the interface is down; it will only be accessible from the HMC console.
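
Note that if the address to remove is only an alias (as in the cluster case above) and not the interface's primary IP, it can usually be deleted without downing the whole interface, so the SSH session on the primary address survives. A sketch on AIX:

# ifconfig en0 delete 10.1.XXX.XXX    // removes only the alias; the primary address on en0 stays up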

     
     



Thanks !!!!

AIX nfs export property change using smitty nfs on AIX cluster.

Hello Everyone ,

Today I am going to share how to change the properties of an existing NFS share on an AIX cluster setup, when the existing exports file (/usr/es/sbin/cluster/etc/exports) is too large to edit by hand and editing it introduces unreadable characters.

Scenario:

We had a scenario where we needed to add a new hostname to an already exported NFS share on an AIX cluster setup. The hostname was the same but the domain had changed, and because of this the NFS share was showing an error. The application team therefore requested us to delete the previous hostname, add the hostname with its FQDN, and re-export the share.

Let's see how we implemented and solved the above scenario, and which issue we faced.

Solution:

When we tried to edit the existing exports file (/usr/es/sbin/cluster/etc/exports) on the AIX cluster node, we found that the file had too many hostname entries, which made it difficult to edit, and editing it also introduced some unreadable characters. So we decided to use smitty nfs to edit it instead.

While changing an existing export, make sure that you enter the correct pathname for the "HA-NFS" exports file, which is /usr/es/sbin/cluster/etc/exports:


* Pathname of directory to export                  [/clnfs]
  Anonymous UID                                              [-2]
  Public filesystem?                                        no
* Export directory now, system restart or both              both

  Pathname of alternate exports file                        [/usr/es/sbin/cluster/etc/exports]

Steps we followed to remove the old hostname and add the same hostname with the new domain:

1. smitty nfs

2. Change the characteristics of the existing NFS share.

3. Choose the NFS share name, here /clnfs.

4. The version of NFS is 3.

5. Remove the old hostname with the old domain and add the same hostname with the new domain:

  HOSTS allowed root access                          [add hostname]
  HOSTS & NETGROUPS allowed client access            [add hostname]
  PATHNAME of Exports file if using HA-NFS           [/usr/es/sbin/cluster/etc/exports]

6. Press Enter.

After a successful re-export, make sure that the NFS share is exported correctly.
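
A quick way to verify is to list the current exports on the AIX NFS server, or query them from a client (the server name below is a placeholder):

# exportfs                     // on the server: lists directories currently exported, with options
# showmount -e aixNFSserver    // from a client: lists the exports the server advertises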


7. At the client side, remount the NFS export.
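
On the client that could look like the following (hypothetical server name and mount point):

# umount /mnt/clnfs
# mount -t nfs clusternode:/clnfs /mnt/clnfs -o nfsvers=3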


*** The issue we faced while doing the re-export was the following error:

"exportfs: 1831-189 hostname: unknown host"

This error means that the NFS server was unable to resolve the newly added hostname.

Solution:


We added the new hostname to the /etc/hosts file on the AIX NFS server in the following format:

IP      FQDN          HOSTNAME
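
For example, with hypothetical values:

10.1.XXX.XXX      host1.newdomain.com      host1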

and the problem was resolved.




Thanks  !!!









PARTED on SUSE Linux 12


To create a new partition on SUSE 12, Linux admins generally use the "parted" command. One very important thing to highlight here: a Linux admin cannot create a partition larger than 2 TB using fdisk, and in that situation the "parted" command comes into the picture.

The parted command is also useful when a Linux admin wants to extend an existing disk on SUSE 12, because when the admin tries to extend an existing disk and create a second (or any new) partition, "fdisk" will not work. On SUSE 12 it throws the error below:

Error:

Value out of range.

So the solution to this issue is to always use the "parted" command on SUSE 12 when partitioning an existing disk.
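
Before partitioning, it helps to check the disk size and current partition table, so you know whether you are past the 2 TB MBR limit (the device name below is just the one used in this example):

root# parted /dev/sdb print    // shows disk size, label type (gpt/msdos) and existing partitions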

Now I will discuss how to create a partition on a new disk on SUSE 12.

Scenario:

Create a new partition on /dev/sdb of size 100 GB.

root@/root : parted /dev/sdb
GNU Parted 3.1
Using /dev/sdb
Welcome to GNU Parted! Type 'help' to view a list of commands.
(parted)
(parted) mklabel gpt         // assign the "gpt" label type to the disk.
(parted) mkpart primary 1049K 100%          // create a primary partition using 100% of the free space.
(parted) set 1 lvm on                              // set the lvm flag on partition 1.
(parted) p
Model: VMware Virtual disk (scsi)
Disk /dev/sdb: 100GB
Sector size (logical/physical): 512B/512B
Partition Table: gpt
Disk Flags:

Number  Start   End     Size    File system  Name     Flags

 1      1049kB  100GB   100GB                primary  lvm

(parted) q

Information: You may need to update /etc/fstab.

root@:/root : partx -a /dev/sdb   // add newly created partitions.

root@:/root : pvcreate /dev/sdb1    // create new PV 


This is how we created a 100 GB partition on SUSE Linux using parted. The admin can now use the newly created PV to expand an existing volume group or to create a new one.

Example.

1. Add /dev/sdb1 to the existing volume group vg01:

#vgextend vg01 /dev/sdb1

Or

2. Create a new volume group vg02 using the /dev/sdb1 PV:

#vgcreate vg02 /dev/sdb1
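
From here, a typical next step is to carve a logical volume out of the new volume group and put a filesystem on it. A minimal sketch (the LV name, size, filesystem and mount point are hypothetical):

#lvcreate -n lv01 -L 50G vg02          // 50 GB logical volume named lv01 in vg02
#mkfs.xfs /dev/vg02/lv01               // create an XFS filesystem on it
#mount /dev/vg02/lv01 /mnt/data        // mount it (create /mnt/data first)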


Thanks !!!!!