• 0 Posts
  • 41 Comments
Joined 2 years ago
Cake day: June 22nd, 2023


  • I did (am doing) something very similar. I definitely have issues with my indexing, but I’m just ordering it manually by year/date for now.

    I’m doing a little extra for parity though. I’m using 50-100GB discs for the data, and burning a 25GB disc as a full parity disc via dvdisaster for each data disc. Hopefully that reduces the risk of the parity data also being unreadable, and gives MORE parity data without eating into my actual data discs. It’s hard enough to break up the archives into 100GB chunks as is (rough chunking sketch below).

    Need to look into Bacula, as suggested by another poster.
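    As a rough illustration of the chunking step, here’s a minimal first-fit sketch in Python; the ~100 GB target, the /archive path, and the greedy grouping are all assumptions for illustration, not anything dvdisaster-specific:

    ```python
    # Hypothetical sketch: greedily pack files into ~100 GB groups for burning.
    # The 100 GB target and the /archive source path are illustrative assumptions.
    from pathlib import Path

    TARGET_BYTES = 100 * 10**9  # roughly one 100 GB data disc per group

    def chunk_files(root: str):
        """Yield lists of files whose combined size stays under TARGET_BYTES."""
        current, current_size = [], 0
        # Sort by path so files from the same year/date tend to land on the same disc.
        for f in sorted(Path(root).rglob("*")):
            if not f.is_file():
                continue
            size = f.stat().st_size
            if current and current_size + size > TARGET_BYTES:
                yield current
                current, current_size = [], 0
            current.append(f)
            current_size += size
        if current:
            yield current

    if __name__ == "__main__":
        for i, group in enumerate(chunk_files("/archive"), start=1):
            total_gb = sum(f.stat().st_size for f in group) / 10**9
            print(f"disc {i:03d}: {len(group)} files, {total_gb:.1f} GB")
    ```

    Each group then gets burned to its own data disc, with dvdisaster producing the parity image that goes on the matching 25GB disc.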




  • I think the universal consensus is that outside of one very specific use case (multiple VDI desktops that share the same base image), ZFS dedupe is completely useless at best, and at worst will destroy your dataset by making it unmountable on any system with less RAM than the dedup table needs. In every other use case, the savings are not worth the trouble.

    Even in the VDI use case, unless you have MANY copies of said disk images (like 5+ copies of each), it’s still not worth the increase in system resources needed to run ZFS dedupe (rough RAM math below).

    It’s one of those “oooh shiny” features that everyone wants to use, and nearly everyone ends up regretting it.
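    A rough back-of-the-envelope for that RAM cost, assuming the commonly quoted figure of ~320 bytes of dedup-table (DDT) entry per block; the exact per-entry size varies by pool, so treat the numbers as illustrative:

    ```python
    # Hypothetical estimate of the RAM needed to keep the ZFS dedup table (DDT)
    # in memory. The ~320 bytes/entry figure is a widely quoted rule of thumb,
    # not an exact value; real entry sizes vary by pool layout and features.
    DDT_BYTES_PER_ENTRY = 320

    def ddt_ram_gib(pool_size_tib: float, avg_block_kib: float = 64.0) -> float:
        """Estimate DDT size in GiB for a given amount of unique data."""
        pool_bytes = pool_size_tib * 1024**4
        blocks = pool_bytes / (avg_block_kib * 1024)
        return blocks * DDT_BYTES_PER_ENTRY / 1024**3

    if __name__ == "__main__":
        for tib in (1, 10, 50):
            print(f"{tib:>3} TiB of unique data @ 64 KiB blocks -> ~{ddt_ram_gib(tib):.0f} GiB of DDT")
    ```

    At 64 KiB blocks that works out to roughly 5 GiB of dedup table per TiB of unique data, which is why it outgrows RAM so quickly on ordinary systems.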




  • Sure, but you’ll most likely hit diminishing returns; consumer hardware doesn’t really have the resources to scale that way if all the VMs are running demanding apps simultaneously.

    Even for something like 4 VMs that just do NVENC, there are limits on how many simultaneous encode sessions the GPU will allow. I think there’s another patch that lets you raise that, but at some point you’ll run out of resources quickly. Even powerful consumer gear isn’t really designed to be used by more than one user/app at a time, and it starts to show the more you virtualize and split those resources (quick utilization check below).
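    If you want to see how close you are to that wall, here’s a small sketch that polls the GPU’s encoder utilization via the pynvml NVML bindings; it assumes pynvml is installed and that the encode work is visible to the host GPU (which it won’t be with full passthrough):

    ```python
    # Hypothetical monitoring sketch using the pynvml NVML bindings
    # (pip install nvidia-ml-py or pynvml). Polls overall NVENC utilization
    # on GPU 0 so you can see when the encoder itself becomes the bottleneck.
    import time
    import pynvml

    pynvml.nvmlInit()
    try:
        handle = pynvml.nvmlDeviceGetHandleByIndex(0)
        name = pynvml.nvmlDeviceGetName(handle)
        if isinstance(name, bytes):  # older pynvml versions return bytes
            name = name.decode()
        print(f"watching encoder load on {name}")
        for _ in range(10):
            # Returns (utilization %, sampling period in microseconds).
            util, _period_us = pynvml.nvmlDeviceGetEncoderUtilization(handle)
            print(f"NVENC utilization: {util}%")
            time.sleep(1)
    finally:
        pynvml.nvmlShutdown()
    ```

    If that number pegs long before CPU or VRAM do, it’s NVENC itself that’s the wall rather than the rest of the card.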