
How To Create LVM Using vgcreate in Linux



Logical Volume Management (LVM) creates a layer of abstraction over physical storage, allowing you to create logical storage volumes. With LVM in place, you no longer need to worry about physical disk sizes: the hardware storage is hidden from the software, so it can be resized and moved without stopping applications or unmounting file systems. You can think of LVM volumes as dynamic partitions.

For example, if you are running out of disk space on your server, you can just add another disk and extend the logical volume on the fly.
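As a sketch, extending a logical volume on the fly with a newly added disk might look like this (the device name /dev/sdb is hypothetical, and the commands require root):

```shell
# Initialize the new disk as a physical volume (hypothetical device)
pvcreate /dev/sdb

# Grow the existing volume group with the new PV
vgextend Vg_LinuxVM1 /dev/sdb

# Extend the logical volume by 50 GB; -r also resizes the
# file system on it in the same step
lvextend -r -L +50G /dev/Vg_LinuxVM1/lv_LinuxVM1
```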
Below are some advantages of using logical volumes over using physical storage directly:
•Resizable storage pools: You can extend a logical volume, or reduce it, without reformatting the disks.
•Flexible storage capacity: You can add more space by adding more disks to the pool of physical storage, giving you flexible storage capacity.
•Striped, mirrored and snapshot volumes: A striped logical volume, which stripes data across two or more disks, can dramatically increase throughput. Mirrored logical volumes provide a convenient way to keep a mirror of your data. And you can take device snapshots for backups, or to test the effect of changes without affecting the real data.

LVM is built on three concepts, each with its own creation command (pvcreate, vgcreate, lvcreate):
•Physical Volume (PV): a whole disk or a partition of a disk.
•Volume Group (VG): a pool made up of one or more PVs.
•Logical Volume (LV): a portion of a VG. An LV can belong to only one VG. It is on an LV that we create a file system.
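The three layers map directly onto the three commands; a minimal end-to-end sketch (the device name /dev/sdb1 and the names my_vg/my_lv are hypothetical):

```shell
# 1. Physical Volume: initialize a partition for LVM use
pvcreate /dev/sdb1

# 2. Volume Group: pool one or more PVs together
vgcreate my_vg /dev/sdb1

# 3. Logical Volume: carve a 10 GB volume out of the VG
lvcreate -L 10G -n my_lv my_vg

# The LV is now available as a block device:
#   /dev/my_vg/my_lv  (also /dev/mapper/my_vg-my_lv)
```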


With LVM, we can create logical partitions that span one or more physical hard drives. First, the hard drives are divided into physical volumes, then those physical volumes are combined into a volume group, and finally the logical volumes are created from the volume group.
The LVM commands listed in this article were run on an Ubuntu distribution, but they are the same on other Linux distributions.

To create an LVM volume, we need to run through the following steps:
•Select the physical storage devices for LVM
•Create the volume group from physical volumes
•Create logical volumes from the volume group


Select the Physical Storage Devices for LVM – Use pvcreate, pvscan, pvdisplay Commands
In this step, we choose the physical devices that will be used for LVM. We can create the physical volumes using the pvcreate command as shown below.

First, check the disks attached to the OS using fdisk -l:
[root@myLinuxVM1~]# fdisk -l
Disk /dev/sda: 343.5 GB, 343597383680 bytes
255 heads, 63 sectors/track, 41773 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

   Device Boot      Start         End      Blocks   Id  System
/dev/sda1   *           1          33      265041   83  Linux
/dev/sda2              34        9399    75232395   8e  Linux LVM
/dev/sda3            9400       41773   260044155   83  Linux

Disk /dev/dm-0: 17.1 GB, 17179869184 bytes
255 heads, 63 sectors/track, 2088 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes



[root@myLinuxVM1 ~]#  pvcreate -ff /dev/sda3
  Writing physical volume data to disk "/dev/sda3"
  Physical volume "/dev/sda3" successfully created

As shown above, the physical volume has been created (the -ff flag forces creation even if the device was previously initialized).
To scan for physical volumes, use the pvscan command:
[root@myLinuxVM1 ~]# pvscan
  PV /dev/sda3   VG Vg_LinuxVM1   lvm2 [248.00 GB / 1020.00 MB free]
  PV /dev/sda2   VG vg_root       lvm2 [71.72 GB / 0    free]
  Total: 2 [319.71 GB] / in use: 2 [319.71 GB] / in no VG: 0 [0   ]


To view detailed information about each physical volume, use pvdisplay:
[root@myLinuxVM1 ~]# pvdisplay
  --- Physical volume ---
  PV Name               /dev/sda3
  VG Name               Vg_LinuxVM1
  PV Size               248.00 GB / not usable 1.37 MB
  Allocatable           yes
  PE Size (KByte)       4096
  Total PE              63487
  Free PE               255
  Allocated PE          63232
  PV UUID               nm0AC9-K589-izU1-7LE8-z5nP-e6Mf-NYiOXl

  --- Physical volume ---
  PV Name               /dev/sda2
  VG Name               vg_root
  PV Size               71.75 GB / not usable 29.14 MB
  Allocatable           yes (but full)
  PE Size (KByte)       32768
  Total PE              2295
  Free PE               0
  Allocated PE          2295
  PV UUID               MhrE87-kWMc-s5TV-qbuL-1iX3-c4kq-vHgJKf

[root@myLinuxVM1 ~]#
Create the Volume Group – Use vgcreate, vgdisplay Commands
A volume group is simply a pool of storage that consists of one or more physical volumes. Once you have created the physical volumes, you can create the volume group (VG) from them.

For example, a volume group is created from the physical volume as follows:
[root@myLinuxVM1 ~]# vgcreate Vg_LinuxVM1 /dev/sda3
  Volume group "Vg_LinuxVM1" successfully created
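vgcreate also accepts several PVs at once and lets you tune the physical extent (PE) size; a hypothetical example (device names are assumptions, and -s defaults to 4 MB):

```shell
# Create a VG from two PVs with a 32 MB physical extent size
vgcreate -s 32M my_vg /dev/sdb1 /dev/sdc1
```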

Display information about the VG (volume group):
[root@myLinuxVM1 ~]# vgdisplay Vg_LinuxVM1
  --- Volume group ---
  VG Name               Vg_LinuxVM1
  System ID
  Format                lvm2
  Metadata Areas        1
  Metadata Sequence No  1
  VG Access             read/write
  VG Status             resizable
  MAX LV                0
  Cur LV                0
  Open LV               0
  Max PV                0
  Cur PV                1
  Act PV                1
  VG Size               248.00 GB
  PE Size               4.00 MB
  Total PE              63487
  Alloc PE / Size       0 / 0
  Free  PE / Size       63487 / 248.00 GB
  VG UUID               glXx9a-Vxe3-Rx9E-PxEb-nWTo-q3vK-RxiDFA


Create the Logical Volume – Use lvcreate, lvdisplay Commands

[root@myLinuxVM1 ~]# lvcreate -L 247G -n lv_LinuxVM1 Vg_LinuxVM1
  Logical volume "lv_LinuxVM1" created
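Rather than calculating an absolute size with -L, you can allocate by extents or percentage; a sketch:

```shell
# Use all remaining free space in the volume group
# (-l takes extents or a percentage, -L takes an absolute size)
lvcreate -l 100%FREE -n lv_LinuxVM1 Vg_LinuxVM1
```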


To display information about logical volumes, use lvdisplay:
[root@myLinuxVM1 ~]# lvdisplay
  --- Logical volume ---
  LV Name                /dev/Vg_LinuxVM1/lv_LinuxVM1
  VG Name                Vg_LinuxVM1
  LV UUID                T0v9eL-nPGR-FQqD-dJlj-cj0H-OujY-wTBdmE
  LV Write Access        read/write
  LV Status              available
  # open                 0
  LV Size                247.00 GB
  Current LE             63232
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     256
  Block device           253:6

  --- Logical volume ---
  LV Name                /dev/vg_root/lv_root
  VG Name                vg_root
  LV UUID                aY8ESO-y6d0-TQEZ-ylmb-X8Rf-u5uJ-eNunmT
  LV Write Access        read/write
  LV Status              available
  # open                 1
  LV Size                16.00 GB
  Current LE             512
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     256
  Block device           253:0

  --- Logical volume ---
  LV Name                /dev/vg_root/lv_usr
  VG Name                vg_root
  LV UUID                TeFV6z-0921-fUJr-Aere-1Bcu-tScy-XCS9BU
  LV Write Access        read/write
  LV Status              available
  # open                 1
  LV Size                8.00 GB
  Current LE             256
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     256
  Block device           253:1

  --- Logical volume ---
  LV Name                /dev/vg_root/lv_var
  VG Name                vg_root
  LV UUID                T8jmlq-1TMz-iB4w-xMk7-xXrf-qd6I-gs4GNi
  LV Write Access        read/write
  LV Status              available
  # open                 1
  LV Size                8.00 GB
  Current LE             256
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     256
  Block device           253:2

  --- Logical volume ---
  LV Name                /dev/vg_root/lv_tmp
  VG Name                vg_root
  LV UUID                LYZkc7-7wpq-gy55-a029-1SNk-jLB4-vojzTH
  LV Write Access        read/write
  LV Status              available
  # open                 1
  LV Size                16.00 GB
  Current LE             512
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     256
  Block device           253:3

  --- Logical volume ---
  LV Name                /dev/vg_root/lv_home
  VG Name                vg_root
  LV UUID                o5mDap-3JkA-tGkF-B0WT-3FVn-ZFxn-PClTs0
  LV Write Access        read/write
  LV Status              available
  # open                 1
  LV Size                16.00 GB
  Current LE             512
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     256
  Block device           253:4

  --- Logical volume ---
  LV Name                /dev/vg_root/lv_swap
  VG Name                vg_root
  LV UUID                iXS3rh-96WI-u96A-WFS3-0Z05-27Ud-mZWFb3
  LV Write Access        read/write
  LV Status              available
  # open                 1
  LV Size                7.72 GB
  Current LE             247
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     256
  Block device           253:5


After creating an appropriate file system on the logical volume, it is ready to use for storage.
Format the logical volume with an ext3 file system (mke2fs -j creates an ext2 file system with a journal, i.e. ext3):
[root@myLinuxVM1 ~]# mke2fs -j /dev/Vg_LinuxVM1/lv_LinuxVM1
mke2fs 1.39 (29-May-2006)
Filesystem label=
OS type: Linux
Block size=4096 (log=2)
Fragment size=4096 (log=2)
32374784 inodes, 64749568 blocks
3237478 blocks (5.00%) reserved for the super user
First data block=0
Maximum filesystem blocks=4294967296
1976 block groups
32768 blocks per group, 32768 fragments per group
16384 inodes per group
Superblock backups stored on blocks:
        32768, 98304, 163840, 229376, 294912, 819200, 884736, 1605632, 2654208,
        4096000, 7962624, 11239424, 20480000, 23887872

Writing inode tables: done
Creating journal (32768 blocks): done
Writing superblocks and filesystem accounting information: done

This filesystem will be automatically checked every 28 mounts or
180 days, whichever comes first.  Use tune2fs -c or -i to override.


Mount the volume to any directory.
Now mount the file system as /u01:
[root@myLinuxVM1 ~]# mount /dev/Vg_LinuxVM1/lv_LinuxVM1 /u01
Add an entry to /etc/fstab so the volume is mounted automatically whenever the OS reboots.
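A typical /etc/fstab entry for this volume might look like the following (the mount options shown are a common default, not taken from the system above):

```shell
# /etc/fstab: device  mount-point  fs-type  options  dump  fsck-order
/dev/Vg_LinuxVM1/lv_LinuxVM1  /u01  ext3  defaults  0  2
```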
[root@myLinuxVM1 ~]# df -kh
Filesystem            Size  Used Avail Use% Mounted on
/dev/mapper/vg_root-lv_root
                       16G  913M   14G   7% /
/dev/mapper/vg_root-lv_usr
                      7.8G  3.0G  4.5G  41% /usr
/dev/mapper/vg_root-lv_var
                      7.8G  2.3G  5.2G  31% /var
/dev/mapper/vg_root-lv_tmp
                       16G  176M   15G   2% /tmp
/dev/mapper/vg_root-lv_home
                       16G  3.9G   11G  26% /home
/dev/sda1             251M   37M  202M  16% /boot
tmpfs                 7.9G     0  7.9G   0% /dev/shm
/dev/mapper/Vg_LinuxVM1-lv_LinuxVM1
                      244G  188M  231G   1% /u01 (Newly Mounted filesystem)
[root@myLinuxVM1 ~]#

