There's a scenario where the /storage partition is full. This is the partition that hosts the database used by vRSLCM 8.x.
In this scenario the application is unavailable, and you cannot connect to the Postgres database either.
What do you do then?
Execute the command pvdisplay.
pvdisplay shows the attributes of one or more physical volumes, such as size, physical extent size, space used for the volume group descriptor area, and so on. (A more compact alternative using pvs is shown after the output below.)
root@lcm [ ~ ]# pvdisplay
--- Physical volume ---
PV Name /dev/sdb
VG Name data_vg
PV Size <150.00 GiB / not usable 0
Allocatable yes (but full)
PE Size 4.00 MiB
Total PE 38399
Free PE 0
Allocated PE 38399
PV UUID zyOl4J-z4kj-sG5N-9ZRM-2zgR-Nk0L-bAj0HG
--- Physical volume ---
PV Name /dev/sdc
VG Name stroage_vg
PV Size <10.00 GiB / not usable 0
Allocatable yes (but full)
PE Size 4.00 MiB
Total PE 2559
Free PE 0
Allocated PE 2559
PV UUID nfAT0l-8gek-gJBm-hEop-DWd1-kGxp-tBm6Kl
--- Physical volume ---
PV Name /dev/sdd
VG Name swap_vg
PV Size <8.00 GiB / not usable 0
Allocatable yes (but full)
PE Size 4.00 MiB
Total PE 2047
Free PE 0
Allocated PE 2047
PV UUID DN2RgK-3Fn5-jEf0-BvkK-3utn-enX2-z40mZK
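If you only need a quick summary of the physical volumes and volume groups, the compact LVM reporting commands work just as well; a minimal sketch (the exact columns depend on your LVM version):
root@lcm [ ~ ]# pvs   # one line per PV: device, VG, size, free space
root@lcm [ ~ ]# vgs   # one line per VG: size and free space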
Execute the command lvdisplay.
The lvdisplay command displays logical volume properties (such as size, layout, and mapping) in a fixed format. (A compact lvs alternative is shown after the output below.)
root@lcm [ ~ ]# lvdisplay
--- Logical volume ---
LV Path /dev/data_vg/data
LV Name data
VG Name data_vg
LV UUID GfSxfD-juGP-HAtH-liAq-0h37-6zcR-HN4n3E
LV Write Access read/write
LV Creation host, time photon-machine, 2021-10-08 18:55:37 +0000
LV Status available
# open 1
LV Size <150.00 GiB
Current LE 38399
Segments 1
Allocation inherit
Read ahead sectors auto
- currently set to 256
Block device 254:1
--- Logical volume ---
LV Path /dev/stroage_vg/storage
LV Name storage
VG Name stroage_vg
LV UUID X5D8wB-ua1P-A8qe-ge3O-2Vu8-STpm-zQydUx
LV Write Access read/write
LV Creation host, time photon-machine, 2021-10-08 18:56:27 +0000
LV Status available
# open 1
LV Size <10.00 GiB
Current LE 2559
Segments 1
Allocation inherit
Read ahead sectors auto
- currently set to 256
Block device 254:2
--- Logical volume ---
LV Path /dev/swap_vg/swap1
LV Name swap1
VG Name swap_vg
LV UUID U3Syes-Ud5j-N47U-yLCf-OaC4-qkb0-QfPL0p
LV Write Access read/write
LV Creation host, time photon-machine, 2021-10-08 18:56:46 +0000
LV Status available
# open 2
LV Size <8.00 GiB
Current LE 2047
Segments 1
Allocation inherit
Read ahead sectors auto
- currently set to 256
Block device 254:0
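To see at a glance which physical device backs each logical volume, lvs can include a devices column; a small sketch (not output captured from this appliance):
root@lcm [ ~ ]# lvs -o lv_name,vg_name,lv_size,devices
This is the quickest way to confirm that the storage LV in stroage_vg sits on /dev/sdc.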
You can also inspect the disk layout of the appliance from vCenter.
Going by the data above, /storage lives on /dev/sdc. That means that to increase the size of /storage, all I have to do is increase that disk's size in vCenter and then reboot the appliance.
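If you want to double-check that mapping from the mount point back to the virtual disk before touching anything in vCenter, findmnt and lsblk make it explicit (a quick sketch):
root@lcm [ ~ ]# findmnt /storage      # shows the backing device: /dev/mapper/stroage_vg-storage
root@lcm [ ~ ]# lsblk /dev/sdc        # shows the chain sdc -> stroage_vg-storage -> /storage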
Let's try the procedure.
Before starting, here's the current disk space available:
root@lcm [ ~ ]# df -h
Filesystem Size Used Avail Use% Mounted on
devtmpfs 2.9G 0 2.9G 0% /dev
tmpfs 3.0G 20K 3.0G 1% /dev/shm
tmpfs 3.0G 1.1M 3.0G 1% /run
tmpfs 3.0G 0 3.0G 0% /sys/fs/cgroup
/dev/sda4 8.7G 3.3G 5.0G 40% /
tmpfs 3.0G 28M 2.9G 1% /tmp
/dev/sda2 119M 28M 85M 25% /boot
/dev/mapper/stroage_vg-storage 9.8G 521M 8.8G 6% /storage
/dev/mapper/data_vg-data 148G 42G 100G 30% /data
tmpfs 595M 0 595M 0% /run/user/0
Procedure
In vCenter, increase the disk size from the current 10 GB to 15 GB.
Once you have changed the disk size in vCenter, reboot the vRSLCM appliance.
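If you prefer the command line over the vSphere Client for the resize step, the govc CLI can change a virtual disk's size. This is only a sketch: the VM name and disk label below are placeholders, and it assumes govc is installed and GOVC_URL/GOVC_USERNAME/GOVC_PASSWORD point at your vCenter.
# placeholder VM name and disk label; pick the disk that backs /dev/sdc
govc vm.disk.change -vm lcm-appliance -disk.label "Hard disk 3" -size 15G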
Execute journalctl -x to inspect the resizing mechanism:
Sep 29 00:53:03 lcm.cap.org vaos[943]: Thu Sep 29 00:53:03 UTC 2022 Disk Util: INFO: LV Resizing /dev/stroage_vg/storage
Sep 29 00:53:03 lcm.cap.org vaos[943]: Size of logical volume stroage_vg/storage changed from <10.00 GiB (2559 extents) to <15.00 GiB (3839 extents).
Sep 29 00:53:04 lcm.cap.org vaos[943]: Logical volume stroage_vg/storage successfully resized.
Sep 29 00:53:04 lcm.cap.org vaos[943]: resize2fs 1.45.5 (07-Jan-2020)
Sep 29 00:53:04 lcm.cap.org kernel: EXT4-fs (dm-2): resizing filesystem from 2620416 to 3931136 blocks
Sep 29 00:53:05 lcm.cap.org rsyslogd[713]: imjournal: journal files changed, reloading... [v8.2202.0 try https://www.rsyslog.com/e/0 ]
Sep 29 00:53:05 lcm.cap.org launch-blackstone-spring[944]: 2022-09-29 00:53:05,329 main ERROR Unable to locate appender "MaskedLogs" for logger config "root"
Sep 29 00:53:05 lcm.cap.org launch-blackstone-spring[944]: . ____ _ __ _ _
Sep 29 00:53:05 lcm.cap.org launch-blackstone-spring[944]: /\\ / ___'_ __ _ _(_)_ __ __ _ \ \ \ \
Sep 29 00:53:05 lcm.cap.org launch-blackstone-spring[944]: ( ( )\___ | '_ | '_| | '_ \/ _` | \ \ \ \
Sep 29 00:53:05 lcm.cap.org launch-blackstone-spring[944]: \\/ ___)| |_)| | | | | || (_| | ) ) ) )
Sep 29 00:53:05 lcm.cap.org launch-blackstone-spring[944]: ' |____| .__|_| |_|_| |_\__, | / / / /
Sep 29 00:53:05 lcm.cap.org launch-blackstone-spring[944]: =========|_|==============|___/=/_/_/_/
Sep 29 00:53:05 lcm.cap.org launch-blackstone-spring[944]: :: Spring Boot :: (v2.1.16.RELEASE)
Sep 29 00:53:07 lcm.cap.org launch-blackstone-spring[944]: 2022-09-29 00:53:07.150 INFO lcm.cap.org --- [ main] c.v.b.BlackstoneExternalApplication [logStarting] : Starting BlackstoneExternalApplication on lcm.cap.org with PID 962 (/opt/vmware/vlcm/blackstone/spring-common/blac>
Sep 29 00:53:07 lcm.cap.org launch-blackstone-spring[944]: 2022-09-29 00:53:07.167 DEBUG lcm.cap.org --- [ main] c.v.b.BlackstoneExternalApplication [logStarting] : Running with Spring Boot v2.1.16.RELEASE, Spring v5.1.17.RELEASE
Sep 29 00:53:07 lcm.cap.org launch-blackstone-spring[944]: 2022-09-29 00:53:07.171 INFO lcm.cap.org --- [ main] c.v.b.BlackstoneExternalApplication [logStartupProfileInfo] : No active profile set, falling back to default profiles: default
Sep 29 00:53:07 lcm.cap.org kernel: EXT4-fs (dm-2): resized filesystem to 3931136
Sep 29 00:53:07 lcm.cap.org vaos[943]: Filesystem at /dev/mapper/stroage_vg-storage is mounted on /storage; on-line resizing required
Sep 29 00:53:07 lcm.cap.org vaos[943]: old_desc_blocks = 1, new_desc_blocks = 1
Sep 29 00:53:07 lcm.cap.org vaos[943]: The filesystem on /dev/mapper/stroage_vg-storage is now 3931136 (4k) blocks long.
*
*
*
*
Sep 29 00:53:07 lcm.cap.org vaos[943]: + log '/etc/bootstrap/everyboot.d/20-autogrow-disk done, status: 0'
Sep 29 00:53:07 lcm.cap.org vaos[943]: ++ date '+%Y-%m-%d %H:%M:%S'
Sep 29 00:53:07 lcm.cap.org vaos[943]: + echo '2022-09-29 00:53:07 /etc/bootstrap/everyboot.d/20-autogrow-disk done, status: 0'
Sep 29 00:53:07 lcm.cap.org vaos[943]: 2022-09-29 00:53:07 /etc/bootstrap/everyboot.d/20-autogrow-disk done, status: 0
Sep 29 00:53:07 lcm.cap.org vaos[943]: + log 'main bootstrap everyboot done'
Sep 29 00:53:07 lcm.cap.org vaos[943]: ++ date '+%Y-%m-%d %H:%M:%S'
Sep 29 00:53:07 lcm.cap.org vaos[943]: + echo '2022-09-29 00:53:07 main bootstrap everyboot done'
Sep 29 00:53:07 lcm.cap.org vaos[943]: 2022-09-29 00:53:07 main bootstrap everyboot done
When you reboot, the autogrow script (/etc/bootstrap/everyboot.d/20-autogrow-disk) detects the larger disk and automatically grows the logical volume and the filesystem on it.
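Going by the log above, the autogrow script appears to wrap the standard LVM growth steps. If you ever need to grow /storage without a reboot, a rough manual equivalent would look like the following; this is an assumption based on the log, not the appliance's actual script, so verify device names with lsblk first.
# make the kernel re-read the new size of the /storage disk (sdc here)
echo 1 > /sys/class/block/sdc/device/rescan
# grow the PV, then the LV, then the mounted ext4 filesystem (online resize)
pvresize /dev/sdc
lvextend -l +100%FREE /dev/stroage_vg/storage
resize2fs /dev/mapper/stroage_vg-storage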
Now if I go ahead and execute df -h, I should see the new disk size:
root@lcm [ ~ ]# df -h
Filesystem Size Used Avail Use% Mounted on
devtmpfs 2.9G 0 2.9G 0% /dev
tmpfs 3.0G 20K 3.0G 1% /dev/shm
tmpfs 3.0G 1.1M 3.0G 1% /run
tmpfs 3.0G 0 3.0G 0% /sys/fs/cgroup
/dev/sda4 8.7G 3.3G 5.0G 40% /
tmpfs 3.0G 9.3M 2.9G 1% /tmp
/dev/sda2 119M 28M 85M 25% /boot
/dev/mapper/stroage_vg-storage 15G 521M 14G 4% /storage
/dev/mapper/data_vg-data 148G 42G 100G 30% /data
tmpfs 595M 0 595M 0% /run/user/0
Now that I have made some room for the application to start properly, I'll go ahead and perform whatever maintenance is needed on the database or any other component.
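Before calling it done, it is worth finding out what filled /storage in the first place and confirming the database is reachable again. A quick sketch; the directory layout under /storage and the local Postgres account can differ between vRSLCM versions, so treat these as examples:
# largest consumers under /storage, two levels deep
du -xh --max-depth=2 /storage | sort -h | tail -20
# verify the database accepts local connections again (assumes a local postgres OS account)
su - postgres -c "psql -c 'SELECT 1;'"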