RHEL6-20-LOGICAL VOLUME MANAGER (LVM)-7
LVM OPERATIONS:
IMPORT & DEPORT OF VOLUME GROUP (VG):
LVM CONFIGURATION BACKUP & RESTORE:
How to do vgexport in Linux?
How to do vgimport in Linux?
How to move a volume group to another server in Linux?
How to take a backup of a volume group (VG)?
What is LVM metadata?
How to restore a volume group (VG)?
How to restore a logical volume (LV) from a VG backup?
Discuss various scenarios with volume group restore.
IMPORT & DEPORT OF VOLUME GROUP (VG):
Sometimes we need to make a VG inaccessible to a system. vgexport and vgimport are not strictly necessary to move disk drives from one server to another; they are administrative policy tools that prevent access to the volumes during the time it takes to move them.
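As a quick overview before the detailed walk-through, the whole move looks roughly like the sketch below. This is only an outline; the idea that the disks are physically moved or re-zoned between hosts, and the mount point names, are assumptions:

## On the source server (sketch):
umount /mountpoint              # unmount every FS that lives in the VG
vgchange -a n vg01              # deactivate the VG
vgexport vg01                   # mark it exported
# ... physically move / re-zone the disks to the other server ...

## On the destination server (sketch):
pvscan                          # detect the exported PVs
vgimport vg01                   # import the VG
vgchange -a y vg01              # activate it
mount /dev/vg01/lv01Fvg01 /mountpoint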
1st, unmount the filesystems mounted from the VG that needs to move. Here I want to move VG "vg01", which has 2 LVs mounted.
[root@rhel6-client1 ~]# df -kh
Filesystem                      Size  Used Avail Use% Mounted on
/dev/sda6                        15G  6.3G  7.4G  46% /
tmpfs                           937M   76K  937M   1% /dev/shm
/dev/sda1                       485M   37M  423M   8% /boot
/dev/sda3                      1008M   34M  924M   4% /home
/dev/sda2                       2.9G   69M  2.7G   3% /opt
/dev/mapper/vg02-testlv01Fvg02  291M   11M  266M   4% /testlv01Fvg02
/dev/mapper/vg02-testlv02Fvg02  388M   11M  358M   3% /testlv02Fvg02
/dev/mapper/vg01-lv01Fvg01      485M   11M  449M   3% /lv01Fvg01
/dev/mapper/vg01-lv02Fvg01      291M   11M  266M   4% /lv02Fvg01
[root@rhel6-client1 ~]# umount /lv01Fvg01
[root@rhel6-client1 ~]# umount /lv02Fvg01
2nd, deactivate the VG.
[root@rhel6-client1 ~]# vgchange -a n vg01
  0 logical volume(s) in volume group "vg01" now active
[root@rhel6-client1 ~]# vgexport vg01
  Volume group "vg01" successfully exported
This prevents it from being accessed by the system from which we are removing it.
[root@rhel6-client1 ~]# pvs
  PV         VG   Fmt  Attr PSize    PFree
  /dev/sdd2  vg01 lvm2 ax-  1020.00m 220.00m
  /dev/sdd5  vg02 lvm2 a--  2.00g    1.31g
Look at the extra "x" in the Attr column of the output; it means the VG is exported.
[root@rhel6-client1 ~]# vgdisplay vg01
  Volume group vg01 is exported   <<<
  --- Volume group ---
  VG Name               vg01
  System ID
  Format                lvm2
  Metadata Areas        1
  Metadata Sequence No  11
  VG Access             read/write
  VG Status             exported/resizable   <<<
  MAX LV                0
  Cur LV                2
  Open LV               0
  Max PV                0
  Cur PV                1
  Act PV                1
  VG Size               1020.00 MiB
  PE Size               4.00 MiB
  Total PE              255
  Alloc PE / Size       200 / 800.00 MiB
  Free  PE / Size       55 / 220.00 MiB
  VG UUID               OK0vD5-GHIm-OQ6T-VyOX-vAtI-Dovk-asEiQ0
Now we need to import this VG on another system. My LVM lab is based on internal disks and there is no external storage configured, hence I cannot demonstrate the physical move itself. But here are the steps.
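If the moved disks are not yet visible on the destination host, a SCSI rescan may be needed before anything else. A sketch only; hostN must be replaced with the actual adapter entry under /sys/class/scsi_host:

# Ask the kernel to rescan a SCSI host adapter for new disks
echo "- - -" > /sys/class/scsi_host/hostN/scan
# Verify the new block devices appeared
cat /proc/partitions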
3rd, we need to run "pvscan" on the other system.
It will show us something like:
# pvscan
pvscan -- reading all physical volumes (this may take a while...)
pvscan -- inactive PV "/dev/sdb1" is in EXPORTED VG "applv" [2.00 GiB / 1.31 GiB free]
pvscan -- inactive PV "/dev/sdb2" is in EXPORTED VG "applv" [1020.00 MiB / 220.00 MiB free]
pvscan -- Total: 2 [2.99 GiB] / in use: 2 [2.99 GiB] / in no VG: 0 [0 ]
Check that all the exported PVs are visible on the other system.
4th, we need to import the VG on that system.
[root@rhel6-client1 ~]# vgimport vg01
  Volume group "vg01" successfully imported
[root@rhel6-client1 ~]# vgdisplay vg01
  --- Volume group ---
  VG Name               vg01
  System ID
  Format                lvm2
  Metadata Areas        1
  Metadata Sequence No  12
  VG Access             read/write
  VG Status             resizable   <<<
  MAX LV                0
  Cur LV                2
  Open LV               0
  Max PV                0
  Cur PV                1
  Act PV                1
  VG Size               1020.00 MiB
  PE Size               4.00 MiB
  Total PE              255
  Alloc PE / Size       200 / 800.00 MiB
  Free  PE / Size       55 / 220.00 MiB
  VG UUID               OK0vD5-GHIm-OQ6T-VyOX-vAtI-Dovk-asEiQ0
5th, we need to activate the VG.
[root@rhel6-client1 ~]# vgchange -a y vg01
  2 logical volume(s) in volume group "vg01" now active
6th, mount the FS.
[root@rhel6-client1 ~]# mount /dev/vg01/lv01Fvg01 /lv01Fvg01
[root@rhel6-client1 ~]# mount /dev/vg01/lv02Fvg01 /lv02Fvg01
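To make these mounts survive a reboot on the new server, matching /etc/fstab entries would also be needed. A hypothetical sketch, assuming the filesystems are ext4 as created earlier in this series:

# /etc/fstab entries (sketch)
/dev/vg01/lv01Fvg01  /lv01Fvg01  ext4  defaults  1 2
/dev/vg01/lv02Fvg01  /lv02Fvg01  ext4  defaults  1 2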
[root@rhel6-client1 ~]# df -kh
Filesystem                      Size  Used Avail Use% Mounted on
/dev/sda6                        15G  6.3G  7.4G  46% /
tmpfs                           937M   76K  937M   1% /dev/shm
/dev/sda1                       485M   37M  423M   8% /boot
/dev/sda3                      1008M   34M  924M   4% /home
/dev/sda2                       2.9G   69M  2.7G   3% /opt
/dev/mapper/vg02-testlv01Fvg02  291M   11M  266M   4% /testlv01Fvg02
/dev/mapper/vg02-testlv02Fvg02  388M   11M  358M   3% /testlv02Fvg02
/dev/mapper/vg01-lv01Fvg01      485M   11M  449M   3% /lv01Fvg01
/dev/mapper/vg01-lv02Fvg01      291M   11M  266M   4% /lv02Fvg01
LVM CONFIGURATION BACKUP & RESTORE:
What does an LVM config backup mean? Is it going to take a backup of the entire data?
Simply... NO.
The name says it itself: it is a backup of the configuration only.
So what is this config backup?
It is something that keeps track of all PVs/VGs/LVs.
What is it called?
LVM metadata.
The LVM metadata contains the configuration details of the LVM volume groups on your system. By default, an identical copy of the metadata is maintained in every metadata area in every physical volume within the volume group. LVM metadata is small and stored as ASCII.
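Because the metadata is plain ASCII, you can actually peek at it on the raw device. A harmless read-only sketch, assuming /dev/sdd2 is one of the PVs shown earlier and the metadata area sits in its default location near the start of the device:

# Dump the first 256 KB of the PV and pull out readable text;
# the VG name, LV names and extent layout should be visible.
dd if=/dev/sdd2 bs=64k count=4 2>/dev/null | strings | less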
Currently LVM allows us to store 0, 1 or 2 identical copies of its metadata on each physical volume. The default is 1 copy.
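The number of copies is chosen when the PV is created. A sketch with a placeholder device name:

# Keep two copies of the metadata on this PV (one near the start, one at the end)
pvcreate --metadatacopies 2 /dev/sdX
# 0 is also allowed (no metadata area on this PV), sometimes used on a few PVs
# of a very large VG to speed up metadata updates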
The first copy is stored at the start of the device, shortly after the label. If there is a second copy, it is placed at the end of the device. If you accidentally overwrite the area at the beginning of your disk by writing to a different disk than you intend, a second copy of the metadata at the end of the device will allow you to recover the metadata.
By default, the LVM label is placed in the second 512-byte sector. An LVM label provides correct identification and device ordering for a physical device, since devices can come up in any order when the system is booted. An LVM label remains persistent across reboots and throughout a cluster.
The LVM label identifies the device as an LVM physical volume. It contains a random unique identifier (the UUID) for the physical volume. It also stores the size of the block device in bytes, and it records where the LVM metadata will be stored on the device.
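You can verify this layout yourself. A read-only sketch, again assuming /dev/sdd2 is a PV: the second 512-byte sector should contain the label signature "LABELONE", the PV UUID, and the type string "LVM2 001".

# Read sector 1 (the second 512-byte sector) and show its printable strings
dd if=/dev/sdd2 bs=512 skip=1 count=1 2>/dev/null | strings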
| FIRST SECTOR | SECOND SECTOR WITH LVM LABEL | LVM METADATA | USER SPACE |
By default the backup is created under /etc/lvm/backup/
[root@rhel6-client1 ~]# ls -ltr /etc/lvm/
total 52
-rw-r--r--. 1 root root 37554 Jan 23  2013 lvm.conf
drwx------. 2 root root  4096 Jan 23  2013 cache
drwx------. 2 root root  4096 Apr  2 18:59 archive
drwx------. 2 root root  4096 Apr  2 18:59 backup
[root@rhel6-client1 ~]# ls -ltr /etc/lvm/backup/
total 8
-rw-------. 1 root root 1714 Apr  1 15:37 vg02
-rw-------. 1 root root 1698 Apr  2 18:59 vg01
It is also interesting to look into /etc/lvm/archive/:
[root@rhel6-client1 ~]# ls -ltr /etc/lvm/archive/
total 300
-rw-------. 1 root root 1119 Mar 28 16:14 vg01_00000-1910704701.vg
-rw-------. 1 root root  882 Mar 28 18:37 testvg01_00000-1950708648.vg
-rw-------. 1 root root  873 Mar 28 18:37 testvg01_00001-682674067.vg
-rw-------. 1 root root 1367 Mar 28 18:47 myvg01_00000-2043074889.vg
-rw-------. 1 root root 1355 Mar 28 18:47 myvg01_00001-265674543.vg
-rw-------. 1 root root 2921 Mar 29 11:57 vg01_00005-1147184297.vg
-rw-------. 1 root root 2126 Apr  1 13:48 vg02_00004-1487957776.vg
-rw-------. 1 root root 3180 Apr  1 13:48 vg-test01_00000-642616270.vg
-rw-------. 1 root root 3626 Apr  1 14:42 vg-test01_00001-325208930.vg
-rw-------. 1 root root 6115 Apr  1 14:42 myvg01_00005-266411389.vg
-rw-------. 1 root root 3477 Apr  1 14:47 vg02_00008-2097263409.vg
-rw-------. 1 root root 3073 Apr  1 14:47 vg02_00009-202753883.vg
-rw-------. 1 root root 3705 Apr  1 14:48 vg-test01_00002-673949358.vg
-rw-------. 1 root root 3303 Apr  1 14:49 vg-test01_00003-1627573653.vg
-rw-------. 1 root root 6178 Apr  1 14:50 myvg01_00006-296488171.vg
-rw-------. 1 root root 5776 Apr  1 14:50 myvg01_00007-1539032331.vg
-rw-------. 1 root root 4047 Apr  1 14:50 myvg01_00008-2029974350.vg
-rw-------. 1 root root 1832 Apr  1 14:50 myvg01_00009-1009471958.vg
-rw-------. 1 root root 1353 Apr  1 14:53 myvg01_00010-1399856037.vg
-rw-------. 1 root root 1824 Apr  1 14:53 vg-test01_00006-1506723838.vg
-rw-------. 1 root root 1349 Apr  1 14:53 vg02_00013-1516538053.vg
===============O/P MODIFIED==============================
I can see archives of deleted as well as active VGs under the archive directory.
So what does it actually contain?
This is where the automatic archives go after every volume group change.
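Each archive file records which command triggered it. A quick sketch using one of the file names from the listing above:

# The "description" field near the top of the file names the command
# that was about to change the VG when the archive was taken
head /etc/lvm/archive/vg01_00005-1147184297.vg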
How long will the archives be kept?
That depends upon the parameters set in /etc/lvm/lvm.conf:
[root@rhel6-client1 ~]# grep retain_min /etc/lvm/lvm.conf
    retain_min = 10
[root@rhel6-client1 ~]# grep retain_days /etc/lvm/lvm.conf
    retain_days = 30
(retain_min is the minimum number of archive files to keep, retain_days the minimum number of days to keep them.)
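To browse all restorable metadata versions of a VG (the archives plus the current backup) together with their descriptions, vgcfgrestore has a list mode:

# List every archived/backed-up metadata version available for vg01
vgcfgrestore --list vg01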
What if I want to take it manually?
[root@rhel6-client1 ~]# vgcfgbackup
  Volume group "vg02" successfully backed up.
  Volume group "vg01" successfully backed up.
But where...?
It overwrites the files in the default location:
[root@rhel6-client1 ~]# ls -ltr /etc/lvm/backup/
total 8
-rw-------. 1 root root 1702 Apr  2 19:21 vg02
-rw-------. 1 root root 1696 Apr  2 19:21 vg01
What if I want to save it at another place?
[root@rhel6-client1 ~]# vgcfgbackup -f /tmp/vg01 vg01
  Volume group "vg01" successfully backed up.
[root@rhel6-client1 ~]# vgcfgbackup -f /tmp/vg02 vg02
  Volume group "vg02" successfully backed up.
[root@rhel6-client1 ~]# ls -l /tmp/vg*
-rw-------. 1 root root 1686 Apr  2 19:24 /tmp/vg01
-rw-------. 1 root root 1692 Apr  2 19:24 /tmp/vg02
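For routine protection it may be worth copying these backups off the default location automatically. A hypothetical daily cron script (the script name and destination path are assumptions); note that %s in the -f template is expanded by vgcfgbackup to each VG name:

#!/bin/bash
# /etc/cron.daily/lvm-meta-backup  (hypothetical script name/location)
# Save today's LVM metadata for every VG under a dated directory.
DEST=/root/lvm-backups/$(date +%F)   # assumption: a path that itself gets backed up
mkdir -p "$DEST"
vgcfgbackup -f "$DEST/%s.vg"         # %s is replaced by each VG name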
Can we test it…?
Sure…Why not.
Scenario-1 = Destroying only LV.
[root@rhel6-client1 ~]# lvs
  LV            VG   Attr      LSize   Pool Origin Data%  Move Log Cpy%Sync Convert
  lv01Fvg01     vg01 -wi-ao--- 500.00m
  lv02Fvg01     vg01 -wi-ao--- 300.00m
  testlv01Fvg02 vg02 -wi-ao--- 300.00m
  testlv02Fvg02 vg02 -wi-ao--- 400.00m
[root@rhel6-client1 ~]# cd /lv01Fvg01
[root@rhel6-client1 lv01Fvg01]# touch 1 2 3
[root@rhel6-client1 lv01Fvg01]# vi 1
[root@rhel6-client1 lv01Fvg01]# ls -l /lv01Fvg01
total 16
-rw-r--r--. 1 root root    57 Apr  3 15:17 1
-rw-r--r--. 1 root root     0 Apr  3 15:16 2
-rw-r--r--. 1 root root     0 Apr  3 15:16 3
drwx------. 2 root root 12288 Apr  1 14:59 lost+found
[root@rhel6-client1 lv01Fvg01]# cat 1
nfve/wnv/elwbnlewnbldfnmb.m b,cv c, n,nk ngklgnb xnb sn
I created 3 files under /lv01Fvg01. Now I am going to remove the LV behind /lv01Fvg01:
[root@rhel6-client1 lv01Fvg01]# cd
[root@rhel6-client1 ~]# umount /lv01Fvg01
[root@rhel6-client1 ~]# lvremove /dev/vg01/lv01Fvg01
Do you really want to remove active logical volume lv01Fvg01? [y/n]: y
  Logical volume "lv01Fvg01" successfully removed
[root@rhel6-client1 ~]# lvs
  LV            VG   Attr      LSize   Pool Origin Data%  Move Log Cpy%Sync Convert
  lv02Fvg01     vg01 -wi-ao--- 300.00m
  testlv01Fvg02 vg02 -wi-ao--- 300.00m
  testlv02Fvg02 vg02 -wi-ao--- 400.00m
[root@rhel6-client1 ~]# vgcfgrestore -f /tmp/vg01 vg01
  Restored volume group vg01
[root@rhel6-client1 ~]# mount /dev/vg01/lv01Fvg01 /lv01Fvg01
mount: you must specify the filesystem type
The mount fails because a restored LV comes back deactivated, so its device-mapper block device does not exist yet. lvdisplay confirms this:
[root@rhel6-client1 ~]# lvdisplay /dev/vg01/lv01Fvg01
  --- Logical volume ---
  LV Path                /dev/vg01/lv01Fvg01
  LV Name                lv01Fvg01
  VG Name                vg01
  LV UUID                Qp3U0l-p3hB-3vgK-uV11-G9TI-KVz6-UszwZy
  LV Write Access        read/write
  LV Creation host, time rhel6-client1, 2017-04-01 14:56:18 +0530
  LV Status              NOT available   <<<
  LV Size                500.00 MiB
  Current LE             125
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
[root@rhel6-client1 ~]# lvchange -ay /dev/vg01/lv01Fvg01
[root@rhel6-client1 ~]# lvdisplay /dev/vg01/lv01Fvg01
  --- Logical volume ---
  LV Path                /dev/vg01/lv01Fvg01
  LV Name                lv01Fvg01
  VG Name                vg01
  LV UUID                Qp3U0l-p3hB-3vgK-uV11-G9TI-KVz6-UszwZy
  LV Write Access        read/write
  LV Creation host, time rhel6-client1, 2017-04-01 14:56:18 +0530
  LV Status              available   <<<
  # open                 0
  LV Size                500.00 MiB
  Current LE             125
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     256
  Block device           253:0
[root@rhel6-client1 ~]# mount /dev/vg01/lv01Fvg01 /lv01Fvg01
[root@rhel6-client1 ~]# ls -l /lv01Fvg01
total 16
-rw-r--r--. 1 root root    57 Apr  3 15:17 1
-rw-r--r--. 1 root root     0 Apr  3 15:16 2
-rw-r--r--. 1 root root     0 Apr  3 15:16 3
drwx------. 2 root root 12288 Apr  1 14:59 lost+found
[root@rhel6-client1 ~]# cat /lv01Fvg01/1
nfve/wnv/elwbnlewnbldfnmb.m b,cv c, n,nk ngklgnb xnb sn
[root@rhel6-client1 ~]#
[root@rhel6-client1 ~]# vgs -o vg_name,vg_size,devices
  VG   VSize    Devices
  vg01 1020.00m /dev/sdd2(0)
  vg01 1020.00m /dev/sdd2(125)
  vg02 2.00g    /dev/sdd5(0)
  vg02 2.00g    /dev/sdd5(75)
Great... LV restored. Note that lvremove only deleted the metadata, not the data blocks, which is why the files came back intact after the metadata restore.
Let’s consider one more scenario for restoration.
Scenario-2 = Destroying all LVs, the VG & the PV.
[root@rhel6-client1 ~]# umount /lv01Fvg01
[root@rhel6-client1 ~]# umount /lv02Fvg01
[root@rhel6-client1 ~]# vgremove vg01
Do you really want to remove volume group "vg01" containing 2 logical volumes? [y/n]: y
Do you really want to remove active logical volume lv01Fvg01? [y/n]: y
  Logical volume "lv01Fvg01" successfully removed
Do you really want to remove active logical volume lv02Fvg01? [y/n]: y
  Logical volume "lv02Fvg01" successfully removed
  Volume group "vg01" successfully removed
[root@rhel6-client1 ~]# pvremove /dev/sdd2
  Labels on physical volume "/dev/sdd2" successfully wiped
[root@rhel6-client1 ~]# vgcfgrestore -f /tmp/vg01 vg01
  Couldn't find device with uuid VJK3rn-O3ql-V6nG-WFx0-hF2n-VFcw-W6DryY.
  Cannot restore Volume Group vg01 with 1 PVs marked as missing.
  Restore failed.
Everything is gone. In other words, a plain vgcfgrestore needs the PV (and its label) to be intact; once the label has been wiped, LVM can no longer find the device by its UUID.
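There is still a documented way out if only the label and metadata were wiped and the data area is untouched: recreate the PV with its old UUID (taken from the error message above, or from the backup file) and point pvcreate at the saved metadata so the metadata area is laid out identically, then restore. A sketch only; if the data blocks themselves were overwritten, this will not bring the data back:

# Recreate the PV label with the original UUID, using the saved metadata as a template
pvcreate --uuid "VJK3rn-O3ql-V6nG-WFx0-hF2n-VFcw-W6DryY" \
         --restorefile /tmp/vg01 /dev/sdd2
# Now the plain restore works again
vgcfgrestore -f /tmp/vg01 vg01
vgchange -ay vg01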
I created the PV, VG and LVs again:
[root@rhel6-client1 ~]# lvs
  LV            VG   Attr      LSize   Pool Origin Data%  Move Log Cpy%Sync Convert
  lv01Fvg01     vg01 -wi-a---- 300.00m
  lv02Fvg01     vg01 -wi-a---- 400.00m
  testlv01Fvg02 vg02 -wi-ao--- 300.00m
  testlv02Fvg02 vg02 -wi-ao--- 400.00m
[root@rhel6-client1 ~]# mount /dev/vg01/lv01Fvg01 /lv01Fvg01
[root@rhel6-client1 ~]# mount /dev/vg01/lv02Fvg01 /lv02Fvg01
[root@rhel6-client1 ~]# cd /lv01Fvg01
[root@rhel6-client1 lv01Fvg01]# touch aa bb cc dd
[root@rhel6-client1 lv01Fvg01]# vi aa
[root@rhel6-client1 lv01Fvg01]# cat aa
hi this is test test test test
Now consider one more scenario.
Scenario-3 = Destroying the LVs & the VG.
[root@rhel6-client1 ~]# vgcfgbackup -f /tmp/vg01 vg01
  Volume group "vg01" successfully backed up.
[root@rhel6-client1 ~]# umount /lv01Fvg01
[root@rhel6-client1 ~]# umount /lv02Fvg01
[root@rhel6-client1 ~]# lvremove /dev/vg01/lv01Fvg01
Do you really want to remove active logical volume lv01Fvg01? [y/n]: y
  Logical volume "lv01Fvg01" successfully removed
[root@rhel6-client1 ~]# lvremove /dev/vg01/lv02Fvg01
Do you really want to remove active logical volume lv02Fvg01? [y/n]: y
  Logical volume "lv02Fvg01" successfully removed
[root@rhel6-client1 ~]# vgremove vg01
  Volume group "vg01" successfully removed
[root@rhel6-client1 ~]# vgcfgrestore -f /tmp/vg01 vg01
  Restored volume group vg01
[root@rhel6-client1 ~]# vgs
  VG   #PV #LV #SN Attr   VSize    VFree
  vg01   1   2   0 wz--n- 1020.00m 320.00m
  vg02   1   2   0 wz--n- 2.00g    1.31g
[root@rhel6-client1 ~]# lvs
  LV            VG   Attr      LSize   Pool Origin Data%  Move Log Cpy%Sync Convert
  lv01Fvg01     vg01 -wi------ 300.00m   <<< ("a" and "o" flags missing)
  lv02Fvg01     vg01 -wi------ 400.00m   <<< ("a" and "o" flags missing)
  testlv01Fvg02 vg02 -wi-ao--- 300.00m
  testlv02Fvg02 vg02 -wi-ao--- 400.00m
[root@rhel6-client1 ~]# lvchange -ay /dev/vg01/lv01Fvg01
[root@rhel6-client1 ~]# lvchange -ay /dev/vg01/lv02Fvg01
[root@rhel6-client1 ~]# lvs
  LV            VG   Attr      LSize   Pool Origin Data%  Move Log Cpy%Sync Convert
  lv01Fvg01     vg01 -wi-a---- 300.00m   <<< ("o" flag still missing: not mounted yet)
  lv02Fvg01     vg01 -wi-a---- 400.00m   <<< ("o" flag still missing: not mounted yet)
  testlv01Fvg02 vg02 -wi-ao--- 300.00m
  testlv02Fvg02 vg02 -wi-ao--- 400.00m
[root@rhel6-client1 ~]# mount /dev/vg01/lv01Fvg01 /lv01Fvg01
[root@rhel6-client1 ~]# mount /dev/vg01/lv02Fvg01 /lv02Fvg01
[root@rhel6-client1 ~]# lvs
  LV            VG   Attr      LSize   Pool Origin Data%  Move Log Cpy%Sync Convert
  lv01Fvg01     vg01 -wi-ao--- 300.00m
  lv02Fvg01     vg01 -wi-ao--- 400.00m
  testlv01Fvg02 vg02 -wi-ao--- 300.00m
  testlv02Fvg02 vg02 -wi-ao--- 400.00m
[root@rhel6-client1 ~]# ls -l /lv01Fvg01
total 17
-rw-r--r--. 1 root root    31 Apr  3 16:23 aa
-rw-r--r--. 1 root root     0 Apr  3 16:23 bb
-rw-r--r--. 1 root root     0 Apr  3 16:23 cc
-rw-r--r--. 1 root root     0 Apr  3 16:23 dd
drwx------. 2 root root 12288 Apr  3 16:23 lost+found
[root@rhel6-client1 ~]# cat /lv01Fvg01/aa
hi this is test test test test
Great, restored again.
What happens if I create some other LV and then try to restore?
Scenario-4 = Destroying the LVs, creating another LV on that VG, then restoring.
[root@rhel6-client1 ~]# umount /lv01Fvg01
[root@rhel6-client1 ~]# umount /lv02Fvg01
[root@rhel6-client1 ~]# lvremove /dev/vg01/lv01Fvg01
Do you really want to remove active logical volume lv01Fvg01? [y/n]: y
  Logical volume "lv01Fvg01" successfully removed
[root@rhel6-client1 ~]# lvremove /dev/vg01/lv02Fvg01
Do you really want to remove active logical volume lv02Fvg01? [y/n]: y
  Logical volume "lv02Fvg01" successfully removed
[root@rhel6-client1 ~]# lvcreate -L 450M -n testlv01 vg01
  Rounding up size to full physical extent 452.00 MiB
  Logical volume "testlv01" created
[root@rhel6-client1 ~]# lvs
  LV            VG   Attr      LSize   Pool Origin Data%  Move Log Cpy%Sync Convert
  testlv01      vg01 -wi-a---- 452.00m
  testlv01Fvg02 vg02 -wi-ao--- 300.00m
  testlv02Fvg02 vg02 -wi-ao--- 400.00m
[root@rhel6-client1 ~]# mkfs.ext4 /dev/vg01/testlv01
mke2fs 1.41.12 (17-May-2010)
Filesystem label=
OS type: Linux
Block size=1024 (log=0)
Fragment size=1024 (log=0)
Stride=0 blocks, Stripe width=0 blocks
115824 inodes, 462848 blocks
23142 blocks (5.00%) reserved for the super user
First data block=1
Maximum filesystem blocks=67633152
57 block groups
8192 blocks per group, 8192 fragments per group
2032 inodes per group
Superblock backups stored on blocks:
        8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409
Writing inode tables: done
Creating journal (8192 blocks): done
Writing superblocks and filesystem accounting information: done

This filesystem will be automatically checked every 29 mounts or
180 days, whichever comes first.  Use tune2fs -c or -i to override.
[root@rhel6-client1 ~]# mkdir /testlv01
[root@rhel6-client1 ~]# mount /dev/vg01/testlv01 /testlv01
[root@rhel6-client1 ~]# cd /testlv01
[root@rhel6-client1 testlv01]# touch f1 f2 f3
[root@rhel6-client1 testlv01]# ls -l
total 15
-rw-r--r--. 1 root root     0 Apr  3 16:46 f1
-rw-r--r--. 1 root root     0 Apr  3 16:46 f2
-rw-r--r--. 1 root root     0 Apr  3 16:46 f3
drwx------. 2 root root 12288 Apr  3 16:45 lost+found
[root@rhel6-client1 testlv01]# cd
[root@rhel6-client1 ~]# vgs;lvs
  VG   #PV #LV #SN Attr   VSize    VFree
  vg01   1   1   0 wz--n- 1020.00m 568.00m
  vg02   1   2   0 wz--n- 2.00g    1.31g
  LV            VG   Attr      LSize   Pool Origin Data%  Move Log Cpy%Sync Convert
  testlv01      vg01 -wi-ao--- 452.00m
  testlv01Fvg02 vg02 -wi-ao--- 300.00m
  testlv02Fvg02 vg02 -wi-ao--- 400.00m
[root@rhel6-client1 ~]# vgcfgrestore -f /tmp/vg01 vg01
  Restored volume group vg01
[root@rhel6-client1 ~]# vgs;lvs
  VG   #PV #LV #SN Attr   VSize    VFree
  vg01   1   2   0 wz--n- 1020.00m 320.00m
  vg02   1   2   0 wz--n- 2.00g    1.31g
  LV            VG   Attr      LSize   Pool Origin Data%  Move Log Cpy%Sync Convert
  lv01Fvg01     vg01 -wi------ 300.00m
  lv02Fvg01     vg01 -wi------ 400.00m
  testlv01Fvg02 vg02 -wi-ao--- 300.00m
  testlv02Fvg02 vg02 -wi-ao--- 400.00m
[root@rhel6-client1 ~]# lvchange -ay /dev/vg01/lv01Fvg01
[root@rhel6-client1 ~]# lvchange -ay /dev/vg01/lv02Fvg01
[root@rhel6-client1 ~]#
[root@rhel6-client1 ~]# mount /dev/vg01/lv01Fvg01 /lv01Fvg01
[root@rhel6-client1 ~]# mount /dev/vg01/lv02Fvg01 /lv02Fvg01
[root@rhel6-client1 ~]# ls -l /lv01Fvg01
total 17
-rw-r--r--. 1 root root    31 Apr  3 16:23 aa
-rw-r--r--. 1 root root     0 Apr  3 16:23 bb
-rw-r--r--. 1 root root     0 Apr  3 16:23 cc
-rw-r--r--. 1 root root     0 Apr  3 16:23 dd
drwx------. 2 root root 12288 Apr  3 16:23 lost+found
[root@rhel6-client1 ~]# cat /lv01Fvg01/aa
hi this is test test test test
Great, restored again. Notice that the interim testlv01 disappeared after the restore, since the restored metadata predates it. Be careful though: vgcfgrestore brings back the layout, not the contents; if the new filesystem had overwritten extents used by the old LVs, their data would not have survived.