I go through a lot of hoops to delete unneeded archives right now. My current scripted workflow:

- On the destination: `du -chs wyng_archive_dir/* | sort -h` to see how much space each voldir occupies
- Then in dom0: `wyng list --debug` to get the mapping of voldir to volname for the biggest consumers
- Then `wyng delete <volname>` in a loop (see the sketch below)
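For context, the final step looks roughly like this (a minimal sketch; `volumes-to-delete.txt` is a hypothetical file holding the volnames of the biggest consumers picked out in the previous steps):

```bash
#!/bin/bash
# Delete each unneeded archive volume listed in volumes-to-delete.txt
# (hypothetical file name; one volname per line). Runs in dom0.
set -euo pipefail
while read -r volname; do
    echo "Deleting archive volume: $volname"
    wyng delete "$volname"
done < volumes-to-delete.txt
```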
I was wondering if there could be a way for `list` to report the disk space each voldir actually occupies on the remote archive (keeping in mind that, because of dedup, it is the voldir that consumes real space, not the volname). That would be a nice improvement when it comes time to get rid of unneeded volnames.
Then offer a wrapper for wyng-qubes-util?
Thoughts?
You can get the voldir mapping with just `wyng list --verbose`, which presents it in a simpler format.
Having `du` scan the whole archive dir is probably the most accurate approach right now, since it takes hardlinks into account. I'm not sure that running it from the `wyng list` command would be all that helpful.
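To illustrate the hardlink point (a rough sketch; the voldir paths are placeholders): GNU `du` counts each hardlinked inode only once per invocation, so deduplicated chunks shared between voldirs are attributed to whichever directory `du` visits first:

```bash
# Placeholders for two voldirs that share deduplicated (hardlinked) chunks.
A=wyng_archive_dir/voldir_A
B=wyng_archive_dir/voldir_B

du -sh "$A" "$B"           # one invocation: shared chunks counted once (credited to $A)
du -sh "$A"; du -sh "$B"   # separate invocations: shared chunks counted in both totals
```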
BTW, using `find` to output the inodes of the chunk files allows you to compute the overlap between deduplicated volumes.
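For example (a minimal sketch; the voldir paths are placeholders, and it assumes GNU `find` for `-printf`):

```bash
# Collect the unique chunk-file inodes of each volume directory.
find wyng_archive_dir/voldir_A -type f -printf '%i\n' | sort -u > /tmp/a.inodes
find wyng_archive_dir/voldir_B -type f -printf '%i\n' | sort -u > /tmp/b.inodes

# Inodes present in both listings are hardlinked chunks, i.e. data
# deduplicated across the two volumes.
comm -12 /tmp/a.inodes /tmp/b.inodes | wc -l
```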
FWIW, existing Wyng metadata could provide details on the uncompressed data in each vol and session, but it would have no clue about the compressed amount.