Restoring and Data Recovery From a KVM VPS Image

1. Find a node with enough free disk space to hold the KVM's image, then SSH into it.

2. Copy the latest snapshots and config from SolusVM backup server.
e.g.
# scp -vrpC -P 22222 10.0.29.13:/export/solusvm/server03/solus-node-backup/314-kvm-kvm106-1.bz .
# scp -vrpC -P 22222 10.0.29.13:/export/solusvm/server03/solus-node-backup/kvm106.xml .

3. View kvm106.xml to get the LVM details (disk source path and size), e.g.
...

...
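The XML content is elided above, but on a standard libvirt domain XML the disk details live in the <disk>/<source> element and can be pulled out with grep instead of reading the whole file. A minimal sketch (the sample XML written below is illustrative, not the real kvm106.xml):

```shell
# Stand-in sample of the relevant part of a libvirt domain XML; on the
# node you would inspect the kvm106.xml copied from the backup server.
cat > kvm106.xml <<'EOF'
<domain type='kvm'>
  <devices>
    <disk type='block' device='disk'>
      <source dev='/dev/VolGroup00/kvm106_img'/>
      <target dev='vda' bus='virtio'/>
    </disk>
  </devices>
</domain>
EOF

# Pull out just the disk source line
grep -o "source dev='[^']*'" kvm106.xml
# -> source dev='/dev/VolGroup00/kvm106_img'
```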

4. Create the same LVM with the same size:-
# lvcreate -L320G -n kvm106_img VolGroup00
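The size passed to -L must match the original. If the domain XML records the capacity in bytes (libvirt's usual unit), shell arithmetic converts it to the GiB figure lvcreate expects; a quick sanity check using the 320G from the example:

```shell
# 320 GiB expressed in bytes, the way a libvirt <capacity> element
# typically stores it (value here is illustrative)
bytes=343597383680
echo "$((bytes / 1024 / 1024 / 1024))G"   # -> 320G
```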

5. Make sure the LVM volume was created OK:-
# lvdisplay
...
LV Name /dev/VolGroup00/kvm106_img
...

6. Restore the image into the LVM (this may take a long time depending on the
size of the original LVM):-
# bunzip2 -c 314-kvm-kvm106-1.bz | dd of=/dev/VolGroup00/kvm106_img
(While it's running, monitor with 'top' in another SSH session; bunzip2 or
dd should appear near the top of the activity list.)
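The same bunzip2 | dd pipeline can be dry-run against a throwaway file first to confirm it round-trips data intact. A self-contained sketch (file names are illustrative; nothing here touches the real LVM device):

```shell
# Create a small dummy "image", compress it the way the backups are
# stored, then restore it through the same pipeline and compare
dd if=/dev/zero of=disk.img bs=1M count=4 2>/dev/null
bzip2 -k disk.img                           # leaves disk.img.bz2 alongside
bunzip2 -c disk.img.bz2 | dd of=restored.img 2>/dev/null
cmp disk.img restored.img && echo "round-trip OK"
```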

7. Once done, make sure the partition table inside the LVM was restored OK:-
# fdisk -l /dev/VolGroup00/kvm106_img
...
Device Boot Start End Blocks Id System
/dev/VolGroup00/kvm106_img1 * 1 64 512000 83 Linux
Partition 1 does not end on cylinder boundary.
/dev/VolGroup00/kvm106_img2 64 41774 335031296 8e Linux LVM
...

8. The above example is a typical CentOS default partition layout:-
- kvm106_img1 is a small plain ext3 partition holding /boot
(boot block, kernel files, etc.)
- kvm106_img2 is the much larger LVM partition where all other partitions
and data reside.

9. In order to mount the "LVM within LVM" for data extraction, we need the
kpartx tool to make the inner partitions available:-
- Install kpartx
# yum install kpartx
- Make partitions within LVM available
# kpartx -av /dev/VolGroup00/kvm106_img
- Scan and expose all LVM volume groups within the LVM for mounting
# vgscan
Reading all physical volumes. This may take a while...
Found volume group "vg_monitor" using metadata type lvm2
Found volume group "VolGroup00" using metadata type lvm2
- Make all LVM volumes active and ready for use
# vgchange -ay
3 logical volume(s) in volume group "vg_monitor" now active
13 logical volume(s) in volume group "VolGroup00" now active
- All the inner logical volumes should now appear in /dev/mapper (ls -l /dev/mapper to view)

10. Mount the desired partition(s) to temporary directory(ies), e.g.
# mount /dev/mapper/vg_monitor-lv_root /mnt

11. Perform recovery / adjustment as you see fit, e.g. back up the data:-
# cd /mnt; tar cvpf /root/vg_monitor-lv_root.tar *
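One caveat with `tar ... *`: a bare `*` does not match dotfiles at the top of /mnt, so hidden files there would be skipped. Using -C together with `.` avoids that; a self-contained sketch against a scratch directory standing in for /mnt:

```shell
# scratch/ plays the role of the mounted recovery volume
mkdir -p scratch
echo secret > scratch/.hidden
echo data   > scratch/visible

# Archive the directory contents, dotfiles included
tar cpf backup.tar -C scratch .
tar tf backup.tar | grep hidden
# -> ./.hidden
```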

12. Clean up when done (important! Otherwise the restored image will keep occupying space on the server):-
# umount /mnt
# vgchange -an vg_monitor
0 logical volume(s) in volume group "vg_monitor" now active
# kpartx -d /dev/VolGroup00/kvm106_img
# lvremove VolGroup00/kvm106_img


