CephFS metadata recovery

Normally, it would be advisable to export the journal for safekeeping to minimize data loss; in this case, however, we know it is safe to simply reset it straight away:

cephfs-journal-tool journal reset

The following screenshot is the output for the preceding command:

The next command resets the RADOS state of the filesystem to allow the recovery process to rebuild from a consistent state:

ceph fs reset cephfs --yes-i-really-mean-it

Next, the MDS tables are reset to enable them to be generated from scratch. These tables are stored as objects in the metadata pool. The following commands create new objects:

cephfs-table-tool all reset session
cephfs-table-tool all reset snap
cephfs-table-tool all reset inode

The following screenshot is the output for the preceding commands:

Reset the CephFS journal:

cephfs-journal-tool --rank=0 journal reset

Finally, create the root inodes and prepare for data-object discovery:

cephfs-data-scan init

Now that the state of CephFS has been fully reset, scans of the data pool can be undertaken to rebuild the metadata from the available data objects. This is a three-stage process using the following three commands. The first command scans through the data pool, finds all the extents that make up each file, and stores this as temporary data; information such as creation time and file size is also calculated and stored. The second stage then searches through this temporary data and rebuilds the inodes in the metadata pool. Finally, the third stage links the recovered inodes into the directory hierarchy:

cephfs-data-scan scan_extents cephfs_data
cephfs-data-scan scan_inodes cephfs_data
cephfs-data-scan scan_links

The scan_extents and scan_inodes stages can take an extremely long time to run on large filesystems. The operations can be run in parallel to speed the process up; check out the official Ceph documentation for more information.
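As a sketch of that parallelization, the official documentation describes splitting a scan across workers with the `--worker_n` and `--worker_m` options. The loop below only prints the commands it would run, since executing them requires a live cluster; removing the `echo` would run them for real:

```shell
# Dry-run sketch of running scan_extents across 4 parallel workers using the
# worker_n/worker_m options from the Ceph documentation. The `echo` makes this
# a dry run; remove it to execute against a live cluster.
WORKERS=4
i=0
while [ "$i" -lt "$WORKERS" ]; do
  echo cephfs-data-scan scan_extents --worker_n "$i" --worker_m "$WORKERS" cephfs_data &
  i=$((i + 1))
done
wait   # wait for all workers to finish
```

The same pattern applies to scan_inodes; each worker processes a disjoint slice of the pool, so the workers can safely run concurrently.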

Once the process is complete, check that the CephFS filesystem is now in a healthy state.
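On a live cluster this check would be done with `ceph -s` or `ceph fs status`. A minimal sketch of the check follows, with a sample JSON fragment standing in for real `ceph -s --format json` output, since no cluster is available here:

```shell
# Sketch of a post-recovery health check. `sample` stands in for the output of
# `ceph -s --format json` on a live cluster; the parsing logic is what matters.
sample='{"health":{"status":"HEALTH_OK"}}'
status=$(printf '%s' "$sample" | sed -n 's/.*"status":"\([A-Z_]*\)".*/\1/p')
if [ "$status" = "HEALTH_OK" ]; then
  echo "CephFS cluster reports healthy"
else
  echo "Cluster health is $status - investigate before trusting recovered data"
fi
```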

We should also now be able to browse the filesystem from the mount point where we mounted it at the start of this section.

Note that although the recovery tools have managed to locate the files and rebuild some of their metadata, information such as their names has been lost, and so the files have been placed inside the lost+found directory. By examining the contents of the files, we could identify each one and rename it to its original filename.
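That identification step can be sketched as follows. A temporary directory stands in for the lost+found directory on the mount point, and the file name and contents are illustrative; recovered files are typically named after their inode numbers, so we peek at the first bytes of each to work out what it is:

```shell
# Stand-in for inspecting nameless files under <mountpoint>/lost+found.
# The directory, file name, and contents below are illustrative only.
dir=$(mktemp -d)                                 # stand-in for lost+found
printf 'invoice 2024-01\n' > "$dir/10000000000"  # a "recovered" file
for f in "$dir"/*; do
  printf '%s: %s\n' "$(basename "$f")" "$(head -c 15 "$f")"
done
rm -r "$dir"
```

For binary files, a tool such as `file` gives a quicker type guess than inspecting raw bytes.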

In practice, although we have restored the CephFS filesystem, the fact that we are missing the files' original names and directory locations likely means the recovery is only partially successful. It should also be noted that the recovered filesystem may not be stable, so it is highly recommended that any salvaged files be copied elsewhere before the filesystem is destroyed and rebuilt. This is a disaster-recovery process of last resort that should only be used after ruling out restoring from backups.
