VMFS Datastore Full and Inaccessible -- Notes from a Data Recovery Session

Date: 2022-12-09 12:57:38

In my HomeLab environment, I noticed that the server I use to reach overseas learning sites had gone down and was unreachable, as shown below:

[Screenshot: the VM is down and unreachable]

Checking the storage, I found the space completely used up, and the capacity was being reported incorrectly.

[Screenshot: datastore out of space, capacity misreported]

My storage is a Synology NAS, and from its side the volume was completely full. This environment is a shared lab platform, so presumably some careless friend filled it up.

[Screenshot: Synology volume completely full]

There was no way around it: to get the data back, I had to recover it somehow.

SSHing into the ESXi host, the datastore was not accessible at all; the shell reported a read-only file system.

[Screenshot: ESXi shell reporting a read-only file system]
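If the ESXi shell is still responsive, the datastore's mount state and remaining capacity can be checked directly with stock ESXi tooling; a quick sketch (the datastore name is a placeholder):

esxcli storage filesystem list
vmkfstools -P /vmfs/volumes/datastore1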

My thinking: since the file system is now read-only, one way to recover the data is to attach the LUN over iSCSI to an idle Linux server, read the VMFS file system from there, and copy the data out. I used an Ubuntu machine for this.

Reading a VMFS file system requires installing vmfs6-tools; my datastore is VMFS6.

root@ubt:~# apt install vmfs6-tools
Reading package lists... Done
Building dependency tree
Reading state information... Done
The following package was automatically installed and is no longer required:
libfwupdplugin1
Use 'apt autoremove' to remove it.
The following NEW packages will be installed:
vmfs6-tools
0 upgraded, 1 newly installed, 0 to remove and 40 not upgraded.
Need to get 59.5 kB of archives.
After this operation, 314 kB of additional disk space will be used.
Get:1 http://cn.archive.ubuntu.com/ubuntu focal/universe amd64 vmfs6-tools amd64 0.1.0-3 [59.5 kB]
Fetched 59.5 kB in 1s (47.5 kB/s)
Selecting previously unselected package vmfs6-tools.
(Reading database ... 108439 files and directories currently installed.)
Preparing to unpack .../vmfs6-tools_0.1.0-3_amd64.deb ...
Unpacking vmfs6-tools (0.1.0-3) ...
Setting up vmfs6-tools (0.1.0-3) ...
Processing triggers for man-db (2.9.1-1) ...
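
Note that vmfs6-tools only understands VMFS6; if the datastore had been an older VMFS3/VMFS5 volume, the separate vmfs-tools package (which provides vmfs-fuse) would be the one to install instead, something like:

apt install vmfs-tools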

Create a mount point at /mnt/vmfs; the VMFS volume will be mounted there shortly.

root@ubt:~# mkdir /mnt/vmfs

Mounting the iSCSI LUN also requires open-iscsi, which I had already installed:

root@ubt:~# apt install open-iscsi
Reading package lists... Done
Building dependency tree
Reading state information... Done
open-iscsi is already the newest version (2.0.874-7.1ubuntu6.2).
open-iscsi set to manually installed.
The following package was automatically installed and is no longer required:
libfwupdplugin1
Use 'apt autoremove' to remove it.
0 upgraded, 0 newly installed, 0 to remove and 40 not upgraded.
root@ubt:~#

Then discover the iSCSI target. I have CHAP authentication disabled here, which would be insecure in a production environment.

iscsiadm -m discovery -t sendtargets -p 192.168.1.138
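
Running iscsiadm -m node --login with no arguments, as below, logs in to every discovered node on every portal. To limit the login to a single target and portal, something along these lines should work (IQN and portal taken from the discovery above):

iscsiadm -m node -T iqn.2000-01.com.synology:DiskStation.Target-1.2ccfd47dd1 -p 192.168.1.138 --login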

Log in to the storage:

root@ubt:~# iscsiadm -m node --login
Logging in to [iface: default, target: iqn.2000-01.com.synology:DiskStation.Target-2.2ccfd47dd1, portal: fe80::6eb3:11ff:fe1b:3cd6,3260] (multiple)
Logging in to [iface: default, target: iqn.2000-01.com.synology:DiskStation.Target-2.2ccfd47dd1, portal: 192.168.2.2,3260] (multiple)
Logging in to [iface: default, target: iqn.2000-01.com.synology:DiskStation.Target-2.2ccfd47dd1, portal: 192.168.1.138,3260] (multiple)
Logging in to [iface: default, target: iqn.2000-01.com.synology:DiskStation.Target-2.2ccfd47dd1, portal: 192.168.2.1,3260] (multiple)
Logging in to [iface: default, target: iqn.2000-01.com.synology:DiskStation.Target-2.2ccfd47dd1, portal: 10.1.43.1,3260] (multiple)
Logging in to [iface: default, target: iqn.2000-01.com.synology:DiskStation.Target-2.2ccfd47dd1, portal: fe80::211:32ff:fe2c:a785,3260] (multiple)
Logging in to [iface: default, target: iqn.2000-01.com.synology:DiskStation.Target-2.2ccfd47dd1, portal: fe80::202:c9ff:fee2:3188,3260] (multiple)
Logging in to [iface: default, target: iqn.2000-01.com.synology:DiskStation.Target-2.2ccfd47dd1, portal: fe80::6eb3:11ff:fe1b:3cd7,3260] (multiple)
Logging in to [iface: default, target: iqn.2000-01.com.synology:DiskStation.Target-2.2ccfd47dd1, portal: fde8:9f50:9676:0:211:32ff:fe2c:a785,3260] (multiple)
Logging in to [iface: default, target: iqn.2000-01.com.synology:DiskStation.Target-1.2ccfd47dd1, portal: fe80::6eb3:11ff:fe1b:3cd6,3260] (multiple)
Logging in to [iface: default, target: iqn.2000-01.com.synology:DiskStation.Target-1.2ccfd47dd1, portal: 192.168.2.2,3260] (multiple)
Logging in to [iface: default, target: iqn.2000-01.com.synology:DiskStation.Target-1.2ccfd47dd1, portal: 192.168.1.138,3260] (multiple)
Logging in to [iface: default, target: iqn.2000-01.com.synology:DiskStation.Target-1.2ccfd47dd1, portal: 192.168.2.1,3260] (multiple)
Logging in to [iface: default, target: iqn.2000-01.com.synology:DiskStation.Target-1.2ccfd47dd1, portal: 10.1.43.1,3260] (multiple)
Logging in to [iface: default, target: iqn.2000-01.com.synology:DiskStation.Target-1.2ccfd47dd1, portal: fe80::211:32ff:fe2c:a785,3260] (multiple)
Logging in to [iface: default, target: iqn.2000-01.com.synology:DiskStation.Target-1.2ccfd47dd1, portal: fe80::202:c9ff:fee2:3188,3260] (multiple)
Logging in to [iface: default, target: iqn.2000-01.com.synology:DiskStation.Target-1.2ccfd47dd1, portal: fe80::6eb3:11ff:fe1b:3cd7,3260] (multiple)
Logging in to [iface: default, target: iqn.2000-01.com.synology:DiskStation.Target-1.2ccfd47dd1, portal: fde8:9f50:9676:0:211:32ff:fe2c:a785,3260] (multiple)
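
A quick way to confirm that the sessions actually came up is to list them:

iscsiadm -m session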

List the discovered LUNs. Because the target is reachable over multiple paths, each LUN shows up several times:

root@ubt:~# lsblk
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
loop0 7:0 0 91.9M 1 loop /snap/lxd/24061
loop1 7:1 0 55.5M 1 loop /snap/core18/2409
loop2 7:2 0 141.4M 1 loop /snap/docker/2285
loop3 7:3 0 61.9M 1 loop /snap/core20/1518
loop5 7:5 0 95.7M 1 loop /snap/kata-containers/2312
loop6 7:6 0 118.4M 1 loop /snap/docker/1779
loop8 7:8 0 67.4M 1 loop /snap/powershell/208
loop9 7:9 0 55.6M 1 loop /snap/core18/2632
loop12 7:12 0 67.8M 1 loop /snap/lxd/22753
loop13 7:13 0 95.8M 1 loop /snap/kata-containers/2446
loop14 7:14 0 49.6M 1 loop /snap/snapd/17883
loop15 7:15 0 63.2M 1 loop /snap/core20/1738
loop16 7:16 0 70.8M 1 loop /snap/powershell/225
sda 8:0 0 16G 0 disk
├─sda1 8:1 0 1M 0 part
├─sda2 8:2 0 1G 0 part /boot
└─sda3 8:3 0 15G 0 part
└─ubuntu--vg-ubuntu--lv 253:0 0 15G 0 lvm /
sdb 8:16 0 210G 0 disk
├─sdb1 8:17 0 210G 0 part
└─mpatha 253:1 0 210G 0 mpath
└─mpatha-part1 253:2 0 210G 0 part
sdc 8:32 0 4T 0 disk
├─sdc1 8:33 0 4T 0 part
└─mpathc 253:5 0 4T 0 mpath
└─mpathc-part1 253:6 0 4T 0 part
sdd 8:48 0 210G 0 disk
├─sdd1 8:49 0 210G 0 part
└─mpatha 253:1 0 210G 0 mpath
└─mpatha-part1 253:2 0 210G 0 part
sde 8:64 0 1.8T 0 disk
├─sde1 8:65 0 1.8T 0 part
└─mpathb 253:3 0 1.8T 0 mpath
└─mpathb-part1 253:4 0 1.8T 0 part
sdf 8:80 0 4T 0 disk
├─sdf1 8:81 0 4T 0 part
└─mpathc 253:5 0 4T 0 mpath
└─mpathc-part1 253:6 0 4T 0 part
sdg 8:96 0 210G 0 disk
├─sdg1 8:97 0 210G 0 part
└─mpatha 253:1 0 210G 0 mpath
└─mpatha-part1 253:2 0 210G 0 part
sdh 8:112 0 1.8T 0 disk
├─sdh1 8:113 0 1.8T 0 part
└─mpathb 253:3 0 1.8T 0 mpath
└─mpathb-part1 253:4 0 1.8T 0 part
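
In this listing sdb, sdd and sdg are all paths to the same 210G LUN, aggregated by device-mapper as mpatha. To see the path-to-device mapping explicitly, multipath-tools provides:

multipath -ll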

Mount the VMFS6 file system:

root@ubt:~# vmfs6-fuse /dev/sdg1 /mnt/vmfs/
VMFS version: 6
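
Note that /dev/sdg1 is only one of the three paths behind mpatha; pointing vmfs6-fuse at the aggregated device-mapper node should work just as well (not tested here):

vmfs6-fuse /dev/mapper/mpatha-part1 /mnt/vmfs/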

The virtual machine folders are now visible:

[Screenshot: VM folders listed under /mnt/vmfs]

root@ubt:/mnt/vmfs# cd win2012\ vtep2/
root@ubt:/mnt/vmfs/win2012 vtep2#
root@ubt:/mnt/vmfs/win2012 vtep2#
root@ubt:/mnt/vmfs/win2012 vtep2# ls
vmware-683.log vmware-688.log 'win2012 vtep2-flat.vmdk' 'win2012 vtep2.vmx~'
vmware-684.log vmware.log 'win2012 vtep2.nvram' 'win2012 vtep2.vmxf'
vmware-685.log 'vmx-win2012 vtep2-1c9c303770ce285c60031b35678ef91664f15bb1-1.vswp' 'win2012 vtep2.vmdk' 'win2012 vtep2.vmx.lck'
vmware-686.log 'win2012 vtep2-094a08de.hlog' 'win2012 vtep2.vmsd'
vmware-687.log 'win2012 vtep2-e5ea32a0.vswp' 'win2012 vtep2.vmx'
root@ubt:/mnt/vmfs/win2012 vtep2#

Next, copy the VM's folder over to NFS storage.
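This assumes an NFS export is already mounted at /mnt/nfs on the Ubuntu box; a minimal sketch with a placeholder export path (nfs-common provides the NFS mount helper):

apt install nfs-common
mkdir -p /mnt/nfs
mount -t nfs 192.168.1.138:/volume1/recovery /mnt/nfs

With the share mounted, the copy itself: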

root@ubt:/mnt/vmfs# cp -r  win2012\ vtep2/ /mnt/nfs/

[Screenshot: copy in progress]

The copy hit an error partway through, but the .vswp file is only the VM's swap file and is not required; whether it comes along makes no practical difference.

cp: error reading 'win2012 vtep2/win2012 vtep2-e5ea32a0.vswp': Input/output error
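
As an alternative to cp, rsync can exclude the swap file outright and will carry on past per-file read errors; a sketch from the same working directory:

rsync -rv --exclude='*.vswp' 'win2012 vtep2/' '/mnt/nfs/win2012 vtep2/'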

Let's see whether it can boot. First, re-register the virtual machine:

[Screenshot: re-registering the VM in the ESXi UI]

Then power on the virtual machine; ESXi first requires answering a question (whether the VM was moved or copied):

[Screenshots: answering the power-on question]

After answering, the VM powers on and boots into the OS normally.

[Screenshot: the guest OS booted successfully]

At this point the system is fully recovered.

[Screenshot: service reachable again]
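
With the data safe, the temporary plumbing on the Ubuntu side can be torn down: unmount the FUSE volume and log out of the iSCSI sessions.

fusermount -u /mnt/vmfs
iscsiadm -m node --logout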

References:

https://techviewleo.com/configure-iscsi-initiator-on-ubuntu/

https://njit.io/kb/os/linux/ubuntu/mounting-vmfs-volume-read-only-in-ubuntu/

http://woshub.com/how-to-access-vmfs-datastore-from-linux-windows/