Increasing the size of an LVM Physical Volume (PV) while running multipathd — without rebooting

If you’re using the Linux Logical Volume Manager (LVM) to manage your disk space, it’s easy to enlarge a logical volume while a server is up and running. It’s also easy to add new drives to an existing volume group.

But if you’re using a SAN, the underlying physical drives can have different performance characteristics because they’re assigned to different QoS bands on the SAN. If you want to keep performance optimized, it’s important to know which physical volume a logical volume is assigned to — otherwise you can split a single logical volume across multiple physical volumes and end up degrading system performance. If you run out of space on a physical volume and then enlarge a logical volume, the LV will be split across two or more PVs. To prevent this from happening you need to enlarge the LUN, tell multipathd about the change, then enlarge the PV, then the LV, and finally the file system.
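
In outline, the whole sequence looks like this (a sketch only: sdX, <map_name>, dm-N, and the VG/LV names are placeholders, and every step is covered in detail below):

    # echo 1 > /sys/block/sdX/device/rescan
    # multipathd -k
    multipathd> del path sdX
    multipathd> add path sdX
    multipathd> resize map <map_name>
    # pvresize /dev/dm-N
    # lvresize -L +10G /dev/<vg>/<lv>
    # resize2fs /dev/<vg>/<lv>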

I have three SANs at the company where I work (two Pillar Axioms and a Xyratex) which are attached to two fibrechannel switches and several racks of blade servers. Each blade runs an Oracle database with multiple physical volumes (PVs) grouped into a single LVM volume group. The PVs are tagged, and as logical volumes (LVs) are added, each is assigned to the physical volume whose tag matches the logical volume’s name. That way we can assign the PV to a higher or lower performance band on the SAN and optimize the database’s performance. Oracle tablespaces that contain frequently-accessed data get assigned to a PV with a higher QoS band on the SAN. Archival data gets put on a PV with a lower QoS band.

We run OpenSUSE 11.x with multipathd to manage the multiple fiber paths available between each blade and a SAN. Each blade has 2 fiber ports for redundancy, attached to two fiber switches, each of which is cross-connected to 2 ports on 2 different controllers on the SAN, so there are 4 different fiber paths that data can take between the blade and the SAN. If any path fails, or one port on a fiber card fails, or one fiber switch fails, multipathd re-routes the data over the remaining paths and everything keeps working. If a blade fails we switch to another blade.
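
For reference, the behavior described above (priority groups per LUN plus queue-if-no-path) corresponds roughly to settings like these in /etc/multipath.conf. This is a minimal sketch, not our exact production configuration:

    defaults {
        user_friendly_names  no
        path_grouping_policy group_by_prio
        failback             immediate
        no_path_retry        queue
    }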

If we run out of space on a PV I can log into the SAN’s administrative interface and enlarge the underlying LUN, but getting the operating system on the blade to recognize that more physical disk space is available is tricky. LVM’s pvresize command would claim to be enlarging the PV, but nothing would happen unless the server was rebooted and pvresize was run again. I wanted to be able to enlarge physical volumes without taking a database off-line and rebooting its server. Here’s how I did it:

  • First log into the SAN’s administrative interface and enlarge the LUN in question.
  • Open two xterm windows on the host as root.
  • Gather information – you will need the physical device name, the multipath block device names, and the multipath map name. (Since our setup gives us 4 data paths for each LUN there are 4 multipath block device names.)
  • List the physical volumes and their associated tags with pvs -o +tags:
    # pvs -o +tags
      PV         VG     Fmt  Attr PSize   PFree   PV Tags                
      /dev/dm-1  switch lvm2 a-   500.38G 280.38G db024-lindx,lindx      
      /dev/dm-10 switch lvm2 a-     1.95T 801.00G db024-ldata,ldata      
      /dev/dm-11 switch lvm2 a-    81.50G      0  db024-mindx,mindx      
      /dev/dm-12 switch lvm2 a-   650.00G 100.00G db024-reports,reports  
      /dev/dm-13 switch lvm2 a-    51.25G  31.25G db024-log,log          
      /dev/dm-14 switch lvm2 a-   450.12G  50.12G db024-home,home        
      /dev/dm-15 switch lvm2 a-     1.76T 342.00G db024-q_backup,q_backup
      /dev/dm-16 switch lvm2 a-     1.00G 640.00M db024-control,control  
      /dev/dm-2  switch lvm2 a-   301.38G 120.38G db024-dbs,dbs          
      /dev/dm-3  switch lvm2 a-   401.88G 101.88G db024-cdr_data,cdr_data
      /dev/dm-5  switch lvm2 a-   450.62G 290.62G db024-archlogs,archlogs
      /dev/dm-6  switch lvm2 a-    40.88G  22.50G db024-boot,boot        
      /dev/dm-7  switch lvm2 a-    51.25G   1.25G db024-rbs,rbs          
      /dev/dm-8  switch lvm2 a-    51.25G  27.25G db024-temp,temp        
      /dev/dm-9  switch lvm2 a-   201.38G 161.38G db024-summary,summary
  • Find the device that corresponds to the LUN you just enlarged, e.g. /dev/dm-11.
  • Run multipath -ll and find the device name in the listing. The large hex number at the start of the line is the multipath map name, and the sdX block devices listed beneath it are the multipath block devices. So in this example the map name is 2000b080112002142 and the block devices are sdy, sdan, sdj, and sdbc:
    2000b080112002142 dm-11 Pillar,Axiom 500                 
    [size=82G][features=1 queue_if_no_path][hwhandler=0][rw] 
    \_ round-robin 0 [prio=100][active]                      
     \_ 0:0:5:9  sdy        65:128 [active][ready]           
     \_ 1:0:4:9  sdan       66:112 [active][ready]           
    \_ round-robin 0 [prio=20][enabled]                      
     \_ 0:0:4:9  sdj        8:144  [active][ready]           
     \_ 1:0:5:9  sdbc       67:96  [active][ready]
  • Next get multipath to recognize that the device is larger:
    • For each block device do echo 1 > /sys/block/sdX/device/rescan:
      # echo 1 > /sys/block/sdy/device/rescan
      # echo 1 > /sys/block/sdan/device/rescan
      # echo 1 > /sys/block/sdj/device/rescan
      # echo 1 > /sys/block/sdbc/device/rescan
    • In the second root window, pull up a multipathd command line with multipathd -k.
    • Delete and re-add the first block device from each group. Since multipathd provides multiple paths to the underlying SAN, the device will remain up and on-line during this process. Make sure that you get an ‘ok’ after each command. If you see ‘fail’ or anything else besides ‘ok’, STOP WHAT YOU’RE DOING and follow the recovery steps below.
      multipathd> del path sdy                                             
      ok                                                                   
      multipathd> add path sdy                                             
      ok                                                                   
      multipathd> del path sdj                                             
      ok                                                                   
      multipathd> add path sdj                                             
      ok
    • If you got a ‘fail’ response:
      • Type exit to get back to a command line.
      • Type multipath -r on the command line. This should recover/rebuild all block device paths.
      • Type multipath -ll | less again and verify that the block devices were re-added.
      • At this point multipath may actually recognize the new device size (you can see the size in the multipath -ll output). If everything looks good, skip ahead to the pvresize step.
    • In the first root window run multipath -ll again and verify that the block devices were re-added:
      2000b080112002142 dm-11 Pillar,Axiom 500                 
      [size=82G][features=1 queue_if_no_path][hwhandler=0][rw] 
      \_ round-robin 0 [prio=100][active]                      
       \_ 1:0:4:9  sdan       66:112 [active][ready]           
       \_ 0:0:5:9  sdy        65:128 [active][ready]           
      \_ round-robin 0 [prio=20][enabled]                      
       \_ 1:0:5:9  sdbc       67:96  [active][ready]           
       \_ 0:0:4:9  sdj        8:144  [active][ready]
    • Delete and re-add the remaining two block devices in the second root window:
      multipathd> del path sdan
      ok                       
      multipathd> add path sdan
      ok                       
      multipathd> del path sdbc
      ok                       
      multipathd> add path sdbc
      ok
    • In the first root window run multipath -ll again and verify that the block devices were re-added.
    • Tell multipathd to resize the block device map using the map name:
      multipathd> resize map 2000b080112002142
      ok
    • Press Ctrl-D to exit the multipathd command line.
  • In the first root window run multipath -ll again to verify that multipath sees the new physical device size. The device below went from 82G to 142G:
    2000b080112002142 dm-11 Pillar,Axiom 500
    [size=142G][features=1 queue_if_no_path][hwhandler=0][rw]
    \_ round-robin 0 [prio=100][active]
     \_ 0:0:5:9  sdy        65:128 [active][ready]
     \_ 1:0:4:9  sdan       66:112 [active][ready]
    \_ round-robin 0 [prio=20][enabled]
     \_ 0:0:4:9  sdj        8:144  [active][ready]
     \_ 1:0:5:9  sdbc       67:96  [active][ready]
  • Finally, get the LVM volume group to recognize that the physical volume is larger using pvresize:
    # pvresize /dev/dm-11
      Physical volume "/dev/dm-11" changed
      1 physical volume(s) resized / 0 physical volume(s) not resized
    # pvs -o +tags
      PV         VG     Fmt  Attr PSize   PFree   PV Tags
      /dev/dm-1  switch lvm2 a-   500.38G 280.38G db024-lindx,lindx
      /dev/dm-10 switch lvm2 a-     1.95T 801.00G db024-ldata,ldata
      /dev/dm-11 switch lvm2 a-   141.50G  60.00G db024-mindx,mindx
      /dev/dm-12 switch lvm2 a-   650.00G 100.00G db024-reports,reports
      /dev/dm-13 switch lvm2 a-    51.25G  31.25G db024-log,log
      /dev/dm-14 switch lvm2 a-   450.12G  50.12G db024-home,home
      /dev/dm-15 switch lvm2 a-     1.76T 342.00G db024-q_backup,q_backup
      /dev/dm-16 switch lvm2 a-     1.00G 640.00M db024-control,control
      /dev/dm-2  switch lvm2 a-   301.38G 120.38G db024-dbs,dbs
      /dev/dm-3  switch lvm2 a-   401.88G 101.88G db024-cdr_data,cdr_data
      /dev/dm-5  switch lvm2 a-   450.62G 290.62G db024-archlogs,archlogs
      /dev/dm-6  switch lvm2 a-    40.88G  22.50G db024-boot,boot
      /dev/dm-7  switch lvm2 a-    51.25G   1.25G db024-rbs,rbs
      /dev/dm-8  switch lvm2 a-    51.25G  27.25G db024-temp,temp
      /dev/dm-9  switch lvm2 a-   201.38G 161.38G db024-summary,summary

    pvs shows that /dev/dm-11 is now 141.5G.

At this point you can use lvresize to enlarge any logical volume residing on the underlying physical volume without splitting it across multiple (non-contiguous) physical volumes, then enlarge the file system with the file system’s own tools, e.g. resize2fs.
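
Here’s a sketch, assuming the LV behind the db024-mindx tag is named mindx in the switch volume group (the LV name and the size are illustrative). Passing the PV as a final argument tells lvresize to allocate the new extents only from that PV:

    # lvresize -L +60G /dev/switch/mindx /dev/dm-11
    # resize2fs /dev/switch/mindx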

If you did run out of space and an LV was split across multiple PVs, you can use pvmove to consolidate the LV’s extents back onto a single physical volume.
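
For example (with hypothetical names), to move the extents belonging to an LV named mindx off /dev/dm-9 and back onto /dev/dm-11:

    # pvmove -n mindx /dev/dm-9 /dev/dm-11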

Hope you find this useful.

19 thoughts on “Increasing the size of an LVM Physical Volume (PV) while running multipathd — without rebooting”

  1. This is very useful. It helped me re-size my LV from 700GB to 1000GB without going through the hassle of backing up data, re-creating a 1000GB Vdisk, and restoring there. Thanks for the stuff.

  2. Great article!! It helped me re-size my clustered gfs2 file system on CentOS 5.4. This filesystem is created on an LVM Logical Volume.

  3. Hi,

    Is there a reason you didn’t use multipathd -k'resize map /dev/dm-11' instead of adding and deleting the paths?

    • io: I tried a number of different things before I found a series of steps that worked. I’ll try your method the next time I need to adjust a LUN’s size and see if it works for my setup.

      I also found a recovery step with “multipath -r” which will save your bacon if you get any errors when re-adding a path. I’ll be adding that step to the tutorial right now.

    • io: I tried your suggestion, but all I get is “fail”:

      # multipathd -k'resize map /dev/dm-12'
      fail

      I tried this both before and after typing the “echo 1 > /sys/block/sdab/device/rescan” commands. Got “fail” both times.

  4. Thanks, great article!

    Googling “resizing online PV without rebooting” gave me numerous articles only talking about LV resizing…

    I knew how to rescan drives online with Linux: “/sys/class/scsi_host/host*/scan”
    but “/sys/block/*/rescan” is way better and did it!
    No reboot, no downtime; I only had to “pvresize” ;-)
    Thank you so much

  5. Very nice article! Your steps worked perfectly for me. Thanks for posting it for all to use!!!

  6. Pingback: How to expand the VG after the LUNs expanded?

  7. Thank you for your article, but when I try pvresize /dev/dm-65 I get this message:

    No physical volume label read from /dev/dm-65
    Failed to read physical volume “/dev/dm-65”
    0 physical volume(s) resized / 0 physical volume(s) not resized

    I followed your guide exactly but it doesn’t work. Can you help?

    • It sounds like dm-65 was not initialized with pvcreate, although if it wasn’t, then it shouldn’t be part of an LVM volume group.

      Try running:

      multipath -r

      multipath -ll | less

      Look for dm-65. What size does multipath show?

      Then run:

      pvs -o +tags

      What does pvs show for dm-65?

  8. Pingback: Need to grow an LV on LVM2 on KVM instance using storage on an array via iSCSI +LVM

  9. Thank you, great article, I just expanded my storage on FC with Citrix XenServer 6.2 without any trouble.

  10. Terrific. The del path and add path really helps to recognize the new extended size of the LUN.
    Thanks so much.

  11. Thank you very much :D this helped me detect the new size of the LUN on my server

    really appreciate this :0

  12. Hi,

    Thanks for your work.

    I know this process, but I have LUKS on the multipath device.

    pvresize does indicate:
    1 physical volume resized
    But when I use the pvs command, it doesn’t show the new size.

    Any idea ?

    • If you’re using LUKS you need to think about at what level LUKS is installed. On my servers I typically install LUKS on a physical disk partition and install LVM on top of LUKS, so in my case I’d need to enlarge LUKS first with “cryptsetup resize” before I could use any LVM commands, such as pvresize. In my case LUKS “contains” the physical volume:

      Physical disk > Disk Partition > LUKS > PV > VG > LV > FS

      However, it’s also possible to install LUKS on a LV:

      Physical disk > Disk Partition > PV > VG > LV > LUKS > FS

      Or install LUKS on a disk with no LVM at all:

      Physical disk > Disk Partition > LUKS > FS

      When enlarging your filesystem, you want to enlarge each level, left to right.

      When shrinking your filesystem, you want to shrink each level, right to left.
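
      For your case (LUKS directly on the multipath device) the order would be: rescan the paths and resize the multipath map as described in the article, then grow the LUKS mapping, then the PV. A sketch, where crypt_pv is an illustrative name for the LUKS mapping:

      # cryptsetup resize crypt_pv
      # pvresize /dev/mapper/crypt_pv

      cryptsetup resize without an explicit size grows the mapping to fill the underlying device.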
