November 18

Linux: Expand an ext3 partition after the disk has already been enlarged

This process is similar to expanding an LVM partition, but the server must be rebooted.

Run fdisk -l to list the partitions (normally two). Note the starting sector of the second partition, e.g. 137216 (verify and write down this number before proceeding).
fdisk /dev/sda

d  to delete a partition
2  to select the second partition

The 2nd partition is now deleted.

n  to create a new partition
Accept the defaults for a primary, second partition, but enter the correct starting sector:
137216 (this should be whatever number you wrote down above)

If you make a mistake, quit without saving changes with q;
otherwise, w will write the new table to disk.

sudo reboot
sudo resize2fs /dev/sda2
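
Put together, the whole sequence looks roughly like this (a sketch; /dev/sda2 and the start sector 137216 are examples and must match what you recorded on your own system):

# List the partitions and record the start sector of the partition to grow
fdisk -l /dev/sda

# Delete and recreate partition 2 with the SAME start sector and the new, larger default end
fdisk /dev/sda    (keys: d, 2, n, defaults for primary/partition 2, start 137216, default end, w)

# Reboot so the kernel reads the new table, then grow the ext3 filesystem
sudo reboot
sudo resize2fs /dev/sda2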

November 18

Linux: The general process of expanding live LVM partitions in Linux

Overview

    The following information outlines the drive expansion process. No reboot is required, and it does not require LVM partition spanning.

    Please note: It is important to complete all of the steps below in their entirety. This procedure involves deleting an active partition and recreating it while the server is running.

Short version:
Live Expansion of LVM Partitions in VMWARE:

In vCenter, go to the server > Edit Settings.
Expand Hard disk 1 to 60 GB. Note: VMs cannot have snapshots at this point.

*After the expansion has completed, take a snapshot of the server. Make certain to include "Snapshot Memory".

- Note: Commands follow 

1. Log into the server via SSH or the Console. Before starting, execute the following command:
 echo 1 > /sys/class/block/sda/device/rescan

- This forces a rescan of the hard drives
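
If the extra space still does not appear, a broader rescan of every SCSI host is a common fallback (a sketch assuming the standard /sys/class/scsi_host layout):

 # Ask each SCSI host adapter to rescan its bus for size and device changes
 for host in /sys/class/scsi_host/host*/scan; do echo "- - -" > "$host"; done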

2. Verify which partition you are working with
 fdisk -l /dev/sda

3. Display the partitions
 cat /proc/partitions | grep sd

4. Display the physical drive information
 pvs

5. Modify the partition table
 fdisk /dev/sda

---------------------------------------
List the partitions
p
Delete the partition
d
2
Create the partition
n
p
2
Enter
Enter
Set the partition type
t
2
8e
Write/commit your changes
w
---------------------------------------

- Ignore the partition table re-read error; the next step handles it

6. Let the OS know there have been partition table changes
 partprobe 

You may get an error stating that the server needs to be rebooted. Do not reboot; just execute the next command: partx -u /dev/sda

7. Verify that the in-memory kernel partition table has been updated with the new size
 cat /proc/partitions | grep sd

- This should look larger than in step 3

8. Resize the LVM's physical volume
 pvresize /dev/sda2
 1 physical volume(s) resized / 0 physical volume(s) not resized

9. Display the physical drive information
 pvs
  PV         VG   Fmt  Attr PSize   PFree
  /dev/sda2  rhel lvm2 a--  <60.00g 2.00g

- This should be larger than in step 4.

10. Display the names of the Logical Volumes
 lvdisplay
The format should look like:
  --- Logical volume ---
  LV Path                /dev/rhel/usr

11. Extend each volume to the appropriate size (this may vary per server, as they do not all seem to be set up the same). An alternative that uses all remaining free space is sketched after the commands below.
 lvextend -L+16G /dev/rhel/usr
 lvextend -L+10G /dev/rhel/var
 lvextend -L+4G /dev/rhel/home
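
If instead you want a single volume to absorb all remaining free space in the volume group, lvextend can size by percentage of free extents, and its -r option grows the filesystem in the same step, making step 12 unnecessary for that volume (a sketch; /dev/rhel/var is only an example volume):

 # Give all remaining free space in the VG to one LV and resize its filesystem at the same time
 lvextend -r -l +100%FREE /dev/rhel/var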

12. Grow the filesystem within the logical volume to fill out the new space
 xfs_growfs /dev/rhel/usr
 xfs_growfs /dev/rhel/var
 xfs_growfs /dev/rhel/home

13. Verify the drive space has increased
 df -h

14. You are finished

-------------------------

Long Version:
Live Expansion of LVM Partitions in VMWARE:

In vCenter, go to the server > Edit Settings.
Expand Hard disk 1 to 60 GB. Note: VMs cannot have snapshots at this point.

*After the expansion has completed, take a snapshot of the server. Make certain to include "Snapshot Memory".

- Note: Commands follow 

1. Log into the server via SSH or the Console. Before starting, execute the following command:
 echo 1 > /sys/class/block/sda/device/rescan

- This forces a rescan of the hard drives

2. Verify which partition you are working with
fdisk -l /dev/sda

Disk /dev/sda: 60.9 GB, 60899345920 bytes, 118944035 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk label type: dos
Disk identifier: 0x000b1786

   Device Boot      Start         End      Blocks   Id  System
/dev/sda1   *        2048     4196351     2097152   83  Linux
/dev/sda2         4196352    62914559    29359104   8e  Linux LVM


3. Display the partitions
 cat /proc/partitions | grep sd
   8        0   83886080 sda
   8        1    2097152 sda1
   8        2   29359104 sda2
   8       16   41943040 sdb

4. Display the physical drive information
 pvs
  PV         VG   Fmt  Attr PSize   PFree
  /dev/sda2  rhel lvm2 a--  <28.00g    0 

5. Modify the partition table

---------------------------------------
 fdisk /dev/sda
Welcome to fdisk (util-linux 2.23.2).

Changes will remain in memory only, until you decide to write them.
Be careful before using the write command.


Command (m for help): p

Disk /dev/sda: 60.9 GB, 60899345920 bytes, 118944035 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk label type: dos
Disk identifier: 0x000b1786

   Device Boot      Start         End      Blocks   Id  System
/dev/sda1   *        2048     4196351     2097152   83  Linux
/dev/sda2         4196352    62914559    29359104   8e  Linux LVM

Command (m for help): d
Partition number (1,2, default 2): 2
Partition 2 is deleted
Command (m for help): n
Partition type:
   p   primary (1 primary, 0 extended, 3 free)
   e   extended
Select (default p): p
Partition number (2-4, default 2): 2
First sector (4196352-167772159, default 4196352): 
Using default value 4196352
Last sector, +sectors or +size{K,M,G} (4196352-118944035, default 118944035): 
Using default value 118944035
Partition 2 of type Linux and of size 60 GiB is set

Command (m for help): t
Partition number (1,2, default 2): 2
Hex code (type L to list all codes): 8e
Changed type of partition 'Linux' to 'Linux LVM'

Command (m for help): w
The partition table has been altered!

Calling ioctl() to re-read partition table.

WARNING: Re-reading the partition table failed with error 16: Device or resource busy.
The kernel still uses the old table. The new table will be used at
the next reboot or after you run partprobe(8) or kpartx(8)
Syncing disks.
---------------------------------------

- Ignore the "Device or resource busy" warning; the next step handles it

6. Let the OS know there have been partition table changes
 partprobe 

You may get an error stating that the server needs to be rebooted. Do not reboot; just execute the next command: partx -u /dev/sda

7. Verify that the in-memory kernel partition table has been updated with the new size
 cat /proc/partitions | grep sd
   8        0   83886080 sda
   8        1    2097152 sda1
   8        2   62914560 sda2
   8       16   41943040 sdb

- This should look larger than in step 3

8. Resize the LVM's physical volume
 pvresize /dev/sda2
  Physical volume "/dev/sda2" changed
  1 physical volume(s) resized / 0 physical volume(s) not resized

9. Display the physical drive information
 pvs
  PV         VG   Fmt  Attr PSize   PFree
  /dev/sda2  rhel lvm2 a--  <60.00g 2.00g

- This should be larger than in step 4.

10. Display the names of the Logical Volumes
 lvdisplay
  --- Logical volume ---
  LV Path                /dev/rhel/root
  LV Name                root
  VG Name                rhel
  LV UUID                KGvb1z-IsoD-SIiz-D72J-YDal-aNtM-WFNETB
  LV Write Access        read/write
  LV Creation host, time localhost, 2018-04-14 05:58:59 -0400
  LV Status              available
   open                 1
  LV Size                10.00 GiB
  Current LE             2560
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     8192
  Block device           253:0
   
  --- Logical volume ---
  LV Path                /dev/rhel/var
  LV Name                var
  VG Name                rhel
  LV UUID                uq7DEy-595R-dZin-HXiy-f0qS-JDAW-NUpN7m
  LV Write Access        read/write
  LV Creation host, time localhost, 2018-04-14 05:59:00 -0400
  LV Status              available
   open                 1
  LV Size                10.00 GiB
  Current LE             2560
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     8192
  Block device           253:3
   
  --- Logical volume ---
  LV Path                /dev/rhel/swap
  LV Name                swap
  VG Name                rhel
  LV UUID                6zieq1-xgWt-vZVc-4bcJ-oYv6-dAa4-T313Qk
  LV Write Access        read/write
  LV Creation host, time localhost, 2018-04-14 05:59:00 -0400
  LV Status              available
   open                 2
  LV Size                3.00 GiB
  Current LE             768
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     8192
  Block device           253:1
   
  --- Logical volume ---
  LV Path                /dev/rhel/home
  LV Name                home
  VG Name                rhel
  LV UUID                Q2tfVr-f3A7-s5ca-1jGs-TkH0-TT5h-GjJnfX
  LV Write Access        read/write
  LV Creation host, time localhost, 2018-04-14 05:59:00 -0400
  LV Status              available
   open                 1
  LV Size                1.00 GiB
  Current LE             256
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     8192
  Block device           253:4
   
  --- Logical volume ---
  LV Path                /dev/rhel/usr
  LV Name                usr
  VG Name                rhel
  LV UUID                2Ll4Ef-M0zU-rgB8-Fy2u-A9gx-VTFG-b3vomw
  LV Write Access        read/write
  LV Creation host, time localhost, 2018-04-14 05:59:01 -0400
  LV Status              available
   open                 1
  LV Size                <4.00 GiB
  Current LE             1023
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     8192
  Block device           253:2


11. Extend each volume to the appropriate size (this may vary per server, as they do not all seem to be set up the same).

 lvextend -L+16G /dev/rhel/usr
  Size of logical volume rhel/usr changed from <4.00 GiB (1023 extents) to <20.00 GiB (5119 extents).
  Logical volume rhel/usr successfully resized.

 lvextend -L+10G /dev/rhel/var
  Size of logical volume rhel/var changed from 10.00 GiB (2560 extents) to 20.00 GiB (5120 extents).
  Logical volume rhel/var successfully resized.

 lvextend -L+4G /dev/rhel/home
  Size of logical volume rhel/home changed from 1.00 GiB (256 extents) to 5.00 GiB (1280 extents).
  Logical volume rhel/home successfully resized.

12. Grow the filesystem within the logical volume to fill out the new space
 xfs_growfs /dev/rhel/usr
meta-data=/dev/mapper/rhel-usr   isize=512    agcount=4, agsize=261888 blks
         =                       sectsz=512   attr=2, projid32bit=1
         =                       crc=1        finobt=0 spinodes=0
data     =                       bsize=4096   blocks=1047552, imaxpct=25
         =                       sunit=0      swidth=0 blks
naming   =version 2              bsize=4096   ascii-ci=0 ftype=1
log      =internal               bsize=4096   blocks=2560, version=2
         =                       sectsz=512   sunit=0 blks, lazy-count=1
realtime =none                   extsz=4096   blocks=0, rtextents=0
data blocks changed from 1047552 to 5241856

 xfs_growfs /dev/rhel/var
meta-data=/dev/mapper/rhel-var   isize=512    agcount=4, agsize=655360 blks
         =                       sectsz=512   attr=2, projid32bit=1
         =                       crc=1        finobt=0 spinodes=0
data     =                       bsize=4096   blocks=2621440, imaxpct=25
         =                       sunit=0      swidth=0 blks
naming   =version 2              bsize=4096   ascii-ci=0 ftype=1
log      =internal               bsize=4096   blocks=2560, version=2
         =                       sectsz=512   sunit=0 blks, lazy-count=1
realtime =none                   extsz=4096   blocks=0, rtextents=0

 xfs_growfs /dev/rhel/home
meta-data=/dev/mapper/rhel-home  isize=512    agcount=4, agsize=65536 blks
         =                       sectsz=512   attr=2, projid32bit=1
         =                       crc=1        finobt=0 spinodes=0
data     =                       bsize=4096   blocks=262144, imaxpct=25
         =                       sunit=0      swidth=0 blks
naming   =version 2              bsize=4096   ascii-ci=0 ftype=1
log      =internal               bsize=4096   blocks=2560, version=2
         =                       sectsz=512   sunit=0 blks, lazy-count=1
realtime =none                   extsz=4096   blocks=0, rtextents=0
data blocks changed from 262144 to 1310720

13. Verify the drive space has increased
 df -h

Filesystem              Size  Used Avail Use% Mounted on
/dev/mapper/rhel-root    10G  122M  9.9G   2% /
devtmpfs                3.9G     0  3.9G   0% /dev
tmpfs                   3.9G     0  3.9G   0% /dev/shm
tmpfs                   3.9G  373M  3.5G  10% /run
tmpfs                   3.9G     0  3.9G   0% /sys/fs/cgroup
/dev/mapper/rhel-usr     20G  3.8G   17G  19% /usr
/dev/sdb                 40G  3.3G   37G   9% /data
/dev/sda1               2.0G  269M  1.8G  14% /boot
/dev/mapper/rhel-var     20G  988M   20G   5% /var
/dev/mapper/rhel-home   5.0G   41M  5.0G   1% /home
sagshared:/linuxshared  306G   30G  276G  10% /mnt/Linuxmnt
tmpfs                   783M     0  783M   0% /run/user/1000
tmpfs                   783M  4.0K  783M   1% /run/user/0
//Lshare/eip            306G   30G  276G  10% /mnt/Linuxmnt
tmpfs                   783M   12K  783M   1% /run/user/42


14. You are finished
November 18

Linux: Expanding a raw xfs drive

Example:
------------------------------------------------
/data is mapped to a second attached hard drive /dev/sdb

[root@servnamed-a ~]# cat /etc/fstab
#
# /etc/fstab
# Created by anaconda on Mon Mar  5 08:08:09 2018
#
# Accessible filesystems, by reference, are maintained under '/dev/disk'
# See man pages fstab(5), findfs(8), mount(8) and/or blkid(8) for more info
#
/dev/mapper/rhel-root   /                       xfs     defaults        0 0
UUID=90afc952-90eb-48bd-9cf4-f1790f23e159 /boot                   xfs     defaults        0 0
/dev/mapper/rhel-usr    /usr                    xfs     defaults        0 0
/dev/mapper/rhel-var    /var                    xfs     defaults        0 0
/dev/mapper/rhel-swap   swap                    swap    defaults        0 0
/dev/sdb		/data			xfs	defaults	0 0

Using the command pvs, we determined that LVM was not used on this volume

------------------------------------------------
1. Verify the drive mappings and space

[root@servname~]# df -h
Filesystem             Size  Used Avail Use% Mounted on
devtmpfs               5.8G     0  5.8G   0% /dev
tmpfs                  5.8G     0  5.8G   0% /dev/shm
tmpfs                  5.8G   34M  5.8G   1% /run
tmpfs                  5.8G     0  5.8G   0% /sys/fs/cgroup
/dev/mapper/rhel-root   15G  1.9G   14G  13% /
/dev/mapper/rhel-usr   5.0G  3.8G  1.3G  76% /usr
/dev/sdb               100G   92G  8.6G  92% /data
/dev/sda1              3.0G  288M  2.8G  10% /boot
/dev/mapper/rhel-var    15G  2.4G   13G  16% /var
tmpfs                  1.2G     0  1.2G   0% /run/user/0
tmpfs                  1.2G     0  1.2G   0% /run/user/990
tmpfs                  1.2G   12K  1.2G   1% /run/user/42
tmpfs                  1.2G     0  1.2G   0% /run/user/1000

2. Determine if LVM was used on the target drive /dev/sdb
[root@servername ~]# pvs
  PV         VG   Fmt  Attr PSize   PFree
  /dev/sda2  rhel lvm2 a--  <37.00g    0 

In our case it was not used, as you can see there is no reference to /dev/sdb above.


3. Determine which file system was installed on the drive /dev/sdb
[root@servername ~]# blkid
/dev/mapper/rhel-var: UUID="0d05bcc2-b292-4e9f-a34a-b93539fbd8c0" TYPE="xfs" 
/dev/sda2: UUID="NQlV2B-EhAF-O7j2-hUDj-q2U8-EVCY-8obFEc" TYPE="LVM2_member" 
/dev/sda1: UUID="90afc952-90eb-48bd-9cf4-f1790f23e159" TYPE="xfs" 
/dev/sdb: UUID="93f4b905-330e-4a89-ad9f-454067886d70" TYPE="xfs" 
/dev/mapper/rhel-root: UUID="557b5ecd-0c1c-4c41-af04-827b5427e90b" TYPE="xfs" 
/dev/mapper/rhel-swap: UUID="a742138f-ae4a-4801-b8e0-d76a5260775a" TYPE="swap" 
/dev/mapper/rhel-usr: UUID="507a540b-3edd-4cee-9440-cfe7187bb43e" TYPE="xfs" 

4. Verify the drive size currently seen by the OS
[root@servname~]# fdisk -l /dev/sdb
Disk /dev/sdb: 107.4 GB, 107374182400 bytes, 209715200 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes


5. Expand the drive in vCenter if this is a virtual machine. Make certain to select the correct drive.

6. Rescan the target drive to pick up the new size
[root@servname~]# echo 1 > /sys/class/block/sdb/device/rescan

7. Check the drive to verify the OS sees the new drive size
[root@servname~]# fdisk -l /dev/sdb
Disk /dev/sdb: 161.1 GB, 161061273600 bytes, 314572800 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes

8. Expand the xfs filesystem to use all of the additional space.
[root@servname~]# xfs_growfs /dev/sdb
meta-data=/dev/sdb               isize=512    agcount=4, agsize=6553600 blks
         =                       sectsz=512   attr=2, projid32bit=1
         =                       crc=1        finobt=0 spinodes=0
data     =                       bsize=4096   blocks=26214400, imaxpct=25
         =                       sunit=0      swidth=0 blks
naming   =version 2              bsize=4096   ascii-ci=0 ftype=1
log      =internal               bsize=4096   blocks=12800, version=2
         =                       sectsz=512   sunit=0 blks, lazy-count=1
realtime =none                   extsz=4096   blocks=0, rtextents=0
data blocks changed from 26214400 to 39321600

9. Verify that the drive was expanded
[root@servname~]# df -h
Filesystem             Size  Used Avail Use% Mounted on
devtmpfs               5.8G     0  5.8G   0% /dev
tmpfs                  5.8G     0  5.8G   0% /dev/shm
tmpfs                  5.8G   34M  5.8G   1% /run
tmpfs                  5.8G     0  5.8G   0% /sys/fs/cgroup
/dev/mapper/rhel-root   15G  1.9G   14G  13% /
/dev/mapper/rhel-usr   5.0G  3.8G  1.3G  76% /usr
/dev/sdb               150G   92G   59G  62% /data
/dev/sda1              3.0G  288M  2.8G  10% /boot
/dev/mapper/rhel-var    15G  2.4G   13G  16% /var
tmpfs                  1.2G     0  1.2G   0% /run/user/0
tmpfs                  1.2G     0  1.2G   0% /run/user/990
tmpfs                  1.2G   12K  1.2G   1% /run/user/42
tmpfs                  1.2G     0  1.2G   0% /run/user/1000
November 18

Linux: Windows AD integration

This is now done with sssd and realmd.

Use the following command to join an AD domain:
realm join companyname.com

Configuration file located at:
/etc/sssd/sssd.conf
[sssd]
domains = companyname.com
config_file_version = 2
services = nss, pam

[domain/companyname.com]
ad_domain = companyname.com
krb5_realm = COMPANYNAME.COM
realmd_tags = joined-with-adcli
cache_credentials = True
id_provider = ad
krb5_store_password_if_offline = True
default_shell = /bin/bash
ldap_id_mapping = True
use_fully_qualified_names = True
fallback_homedir = /home/%u@%d
simple_allow_users = $, username, otherusername
access_provider = simple

To reload any sssd.conf changes, run: systemctl restart sssd.service
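
A quick way to confirm the join and that AD accounts resolve through sssd (a sketch; the user and domain names are placeholders):

# Show the joined realm and its settings
realm list

# Confirm an AD account resolves (fully qualified, per use_fully_qualified_names = True above)
id username@companyname.com
getent passwd username@companyname.com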

November 18

Linux: snmp snmpwalk command

Make certain snmp is installed:
apt-get install snmp

Make certain the MIBs are installed:
apt-get install snmp-mibs-downloader

Update the MIBs:
download-mibs

Comment out the "mibs :" line in /etc/snmp/snmp.conf

snmpwalk -Os -v 1 -c communityname servername iso.3.6.1.2.1.1.1.0
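
The same sysDescr value can also be fetched with a single snmpget, and with the MIBs installed a symbolic name can replace the numeric OID (a sketch; communityname and servername are placeholders):

snmpget -v 1 -c communityname servername sysDescr.0
snmpwalk -v 2c -c communityname servername system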
