wait for device mounts in zpool and dumpon
Closed, Public

Authored by chuck on Mar 5 2021, 6:38 PM.

Details

Summary

If the root file system is composed of multiple devices, wait for all of
them to be ready before running the zpool and dumpon rc scripts.

An example of this is if the bulk of the root file system exists on a
fast device (e.g. NVMe) but the /var directory comes from a ZFS dataset
on a slower device (e.g. SATA). In this case, it is possible that the
zpool import may run before the slower device has finished being probed,
leaving the system in an intermediate state.

The fix is to add root_hold_wait to the zpool and dumpon (which has a
similar issue) rc scripts, as sketched below.

PR:             242189
Reported by:    osidorkin@gmail.com
Reviewed by:    allanjude
Tested by:      osidorkin@gmail.com
MFC after:      1 week
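
As a rough sketch of the shape of the change (assuming the stock
/etc/rc.d/zpool start method; the cache file paths and the surrounding
loop may differ from the committed diff):

	zpool_start()
	{
		local cachefile

		# Wait until all drivers drop their root mount holds, so
		# every device backing the pool has finished probing before
		# the import runs. root_hold_wait comes from rc.subr and
		# polls the vfs.root_mount_hold sysctl.
		root_hold_wait

		for cachefile in /boot/zfs/zpool.cache /etc/zfs/zpool.cache; do
			if [ -r "${cachefile}" ]; then
				zpool import -c "${cachefile}" -a -N
				break
			fi
		done
	}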
Test Plan

Modified the ATA device in bhyve to wait 10 seconds before returning data from the Identify command, and was able to reproduce the problem.

Diff Detail

Repository
rG FreeBSD src repository

Event Timeline

chuck requested review of this revision. Mar 5 2021, 6:38 PM
chuck edited the summary of this revision.

The other places using root_hold_wait try to avoid it first. I'd expect that strategy to work at least for dumpon, though I could imagine zpool import not failing so gracefully. Did you consider it?

No, I hadn't considered that. Are you thinking of something like the following for zpool:

		if [ -r "${cachefile}" ]; then
			zpool import -c "${cachefile}" -a -N
			if [ $? -ne 0 ]; then
				echo "Import of zpool cache ${cachefile} failed," \
				    "will retry after root mount hold release"
				root_hold_wait
				zpool import -c "${cachefile}" -a -N
			fi
			break
		fi
	done

Yeah, that looks like the dance everyone else is doing.

  • only call root_hold_wait if zpool import fails

I've updated zpool to use root_hold_wait only after the import fails, but I wasn't quite sure how to make a similar change for dumpon. Any suggestions?

For root on ZFS, the kernel automatically calls vfs_mountroot_wait() before doing anything. I think that is the only safe way. An attempt to import a pool before all of its disks are detected may end up importing the pool at an older transaction group, or using stale devices that have since been removed from the pool. So I would insert a root_hold_wait call at the beginning of zpool_start().

dumpon, on the other hand, could follow the same logic as UFS does: wait for the particular device to become available until all the root holds are dropped, and fail after that point.

In D29101#652342, @mav wrote:

For root on ZFS, the kernel automatically calls vfs_mountroot_wait() before doing anything. I think that is the only safe way. An attempt to import a pool before all of its disks are detected may end up importing the pool at an older transaction group, or using stale devices that have since been removed from the pool. So I would insert a root_hold_wait call at the beginning of zpool_start().

So the recommendation for zpool is to use the first version of the patch, which calls root_hold_wait before the zpool import, correct?

dumpon, on the other hand, could follow the same logic as UFS does: wait for the particular device to become available until all the root holds are dropped, and fail after that point.

Apologies, but I'm not understanding this, as I'm not familiar with the code path. Are the changes I made to libexec/rc/rc.d/dumpon incorrect? If so, what change do I need to make?

So the recommendation for zpool is to use the first version of the patch, which calls root_hold_wait before the zpool import, correct?

Yes, that is what I would do.

dumpon, on the other hand, could follow the same logic as UFS does: wait for the particular device to become available until all the root holds are dropped, and fail after that point.

Apologies, but I'm not understanding this, as I'm not familiar with the code path. Are the changes I made to libexec/rc/rc.d/dumpon incorrect? If so, what change do I need to make?

It is correct, but it may be more aggressive than necessary. UFS root, unlike ZFS, waits only for the particular device to be present, allowing other device probes to continue in the background. I am not sure that is absolutely safe in all cases, but that is what is done there now. I see no reason for dumpon to be stricter than the UFS root mount, but I have no objections to your version either.
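
For illustration, here is a minimal sketch of that UFS-style logic as it might look in /etc/rc.d/dumpon (a hypothetical helper, not the committed change). It assumes the configured device path is in ${dumpdev} and polls the vfs.root_mount_hold sysctl, the same one root_hold_wait watches:

	# Wait for the dump device while root mount holds remain; once
	# the holds are gone and the device is still absent, give up.
	wait_for_dumpdev()
	{
		local holders

		while [ ! -e "${dumpdev}" ]; do
			holders="$(sysctl -n vfs.root_mount_hold)"
			if [ -z "${holders}" ]; then
				warn "dump device ${dumpdev} did not appear"
				return 1
			fi
			sleep 1
		done
		return 0
	}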

Reviewed By: allanjude
Approved By: allanjude

This revision is now accepted and ready to land. Mar 26 2021, 12:03 AM
This revision was automatically updated to reflect the committed changes.