Create / convert Btrfs pools as their own subvolumes #9825

Open
tasket opened this issue Mar 4, 2025 · 9 comments
Labels
C: installer (This issue pertains to the Qubes OS installer.)
C: storage (This issue pertains to storage in Qubes OS.)
P: default (Priority: default. Default priority for new issues, to be replaced given sufficient information.)

Comments


tasket commented Mar 4, 2025

The problem you're addressing (if any)

Installing Qubes on Btrfs leaves the system with a monolithic default subvolume encompassing all of dom0 and domU storage. This creates a hurdle for users wishing to exploit Btrfs features beyond reflink copies, such as subvolume snapshots (which can themselves be required in preparation for various procedures, such as backups via btrfs-send or Wyng); isolating domU pools from the rest of the filesystem can be important.

The solution you'd like

Create a separate Btrfs subvolume for the pool at, or just after, system installation on Btrfs. Also, perhaps offer to convert the referenced directory to a subvolume when the user is defining a new pool on Btrfs. (A sketch of the install-time case follows below.)
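
For illustration, a minimal sketch of what the installer or admin tooling might run, assuming the pool lives at /var/lib/qubes (the path is illustrative, not necessarily the installer's actual layout):

```sh
# Installer scenario: run before any data is written to the pool path.
# The pool directory becomes its own subvolume rather than a plain
# directory, so it can later be snapshotted independently of the root fs.
btrfs subvolume create /var/lib/qubes

# Confirm it is a separate subvolume:
btrfs subvolume show /var/lib/qubes
```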

The value to a user and who that user might be

Avoiding after-the-fact moves of existing data:

This allows the user to pursue their Btrfs storage agenda without the uncertainty of moving possibly 100 GB+ of data into a new subvolume and then moving the subvolume into the pool's path. (Not time- or resource-consuming, but it's still a manual operation, roughly the sketch below, that may put some people off.)
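
A rough sketch of that manual operation, assuming the pool directory is /var/lib/qubes and all qubes are shut down (paths illustrative):

```sh
# Run as root in dom0 with all qubes shut down.
cd /var/lib
btrfs subvolume create qubes.new           # the future pool subvolume
cp -a --reflink=always qubes/. qubes.new/  # reflink copy: metadata only, no data duplication
mv qubes qubes.old                         # keep the original until verified
mv qubes.new qubes                         # drop the subvolume into the pool's path
# After verifying the system boots and qubes start:
# rm -rf qubes.old
```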

Related:

Completion criteria checklist

No response

tasket added the "P: default" label Mar 4, 2025
andrewdavidwong added the "C: storage" and "C: installer" labels Mar 5, 2025
@alimirjamali
Copy link

alimirjamali commented Mar 5, 2025

I once suggested that even the private and root pools should be separate pools from each other (and from /), since users might prefer different backup schedules for applications and data:

#9424

And I still believe that the further we go, the more it makes sense (especially for backups and snapshots).

marmarek (Member) commented Mar 5, 2025

Just to clarify: is it about a single subvolume for all VMs (basically /var/lib/qubes), or a separate subvolume for every VM? @tasket

@bi0shacker001

I would advocate for separate subvolumes for each VM. This would also allow for snapshotting and reverting individual qubes by tracking subvolumes, an incredibly useful feature of pretty much every other virt infrastructure, and an in-OS feature of various OSes which run on Btrfs, including openSUSE. The ability to roll back qubes is incredibly useful if you plan to do something potentially destructive.

@marmarek (Member)

> This would also allow for snapshotting and reverting individual qubes by tracking subvolumes, an incredibly useful feature of pretty much every other virt infrastructure

You can do that already, see qvm-volume info and qvm-volume revert.
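
For example (the qube name "work" is illustrative):

```sh
# Show the volume's properties, including its stored revisions:
qvm-volume info work:private

# Revert to the most recent stored revision (qube must be shut down):
qvm-volume revert work:private

# Or revert to a specific revision listed by `info`:
qvm-volume revert work:private <revision>
```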

@bi0shacker001

If I'm not mistaken, this doesn't allow for a version history.

@marmarek (Member)

It does keep the last 2 versions by default. But if you want, you can increase this number (see the revisions_to_keep volume property).
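
For example, keeping 5 revisions of a qube's private volume (the name "work" is illustrative):

```sh
# Raise the number of automatically kept revisions:
qvm-volume config work:private revisions_to_keep 5

# Verify the setting and list the stored revisions:
qvm-volume info work:private
```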


alimirjamali commented Mar 30, 2025

> It does keep the last 2 versions by default. But if you want, you can increase this number (see the revisions_to_keep volume property).

I believe what @bi0shacker001 refers to is a named snapshot. Something like end-2024, after-successful-upgrade, before-major-upgrade, etc. Something which would stay untouched regardless of the number of times a qube is restarted.

p.s.: While the above is easily doable with Btrfs snapshots, it should not be something which "requires" Btrfs. It is possible to implement it via other means.
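
For illustration, a named read-only snapshot sketch, assuming the pool directory is already its own subvolume (which is exactly what this issue proposes; paths and snapshot name illustrative):

```sh
# Works only once /var/lib/qubes is a subvolume rather than a plain directory.
# For a consistent image, take the snapshot while the relevant qubes are shut down.
mkdir -p /var/lib/qubes-snapshots
btrfs subvolume snapshot -r /var/lib/qubes \
    /var/lib/qubes-snapshots/before-major-upgrade

# The snapshot stays untouched no matter how often qubes restart;
# remove it explicitly when no longer needed:
# btrfs subvolume delete /var/lib/qubes-snapshots/before-major-upgrade
```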

@bi0shacker001

>> It does keep the last 2 versions by default. But if you want, you can increase this number (see the revisions_to_keep volume property).
>
> I believe what @bi0shacker001 refers to is a named snapshot. Something like end-2024, after-successful-upgrade, before-major-upgrade, etc. Something which would stay untouched regardless of the number of times a qube is restarted.
>
> p.s.: While the above is easily doable with Btrfs snapshots, it should not be something which "requires" Btrfs. It is possible to implement it via other means.

This is much more in line with my intent, though I wasn't aware that you could keep multiple revisions of the VMs. The problem with that system is that, by the time you realize you need to roll back, it might be too late; sometimes issues take longer to manifest. So unless I'm going to keep 50 revisions of each of my VMs, that doesn't help with my specific concern, though it's an amazing little bit of functionality that should probably be added to the GUI. Particularly for Windows HVMs, the ability to roll back after a broken driver install would have saved me so many times recently.

@marmarek (Member)

FWIW, thanks to CoW, cloning VMs is pretty cheap. You can have a clone called, for example, "windows-known-good" and use it only as a named snapshot (if you want to revert a broken VM to it, remove the broken one and clone "windows-known-good" back). Not ideal, but it does cover some cases.
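
A sketch of that workflow (VM names illustrative):

```sh
# "Named snapshot": clone the known-good state (CoW makes this cheap):
qvm-clone windows windows-known-good

# Later, if windows breaks, roll back by replacing it with the clone:
qvm-remove windows
qvm-clone windows-known-good windows
```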
