Borgbackup in addition to git. Since there’s probably not much data, any cheap VPS could act as storage.
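As a sketch of what that could look like (hostnames and paths here are placeholders, assuming SSH access to the VPS):

```shell
# Initialize an encrypted borg repo on the cheap VPS over SSH
borg init --encryption=repokey-blake2 ssh://user@vps.example.com/./backups/git

# Create a dated archive of the local git data, with compression
borg create --compression zstd \
    ssh://user@vps.example.com/./backups/git::'{hostname}-{now}' \
    /srv/git

# Prune old archives so the repo doesn't grow forever
borg prune --keep-daily 7 --keep-weekly 4 --keep-monthly 6 \
    ssh://user@vps.example.com/./backups/git
```

Stick the create/prune pair in a cron job or systemd timer and you're done.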
A.k.a @oranki@lemmy.world, @oranki@lemmini.fi
Keep at it! The learning curve is not a straight line, just like with any skill. You'll see fast progress, only to be followed by a long plateau of no progress, or even a feeling that you're getting worse. And then you notice a big improvement again. And again.
Don't worry about following sheets/chords initially. If chords are not in your muscle memory, you're basically doing three complex tasks simultaneously: reading, figuring out the chords, and fingering them. I'd try to memorize one or two simple pieces first, to get the chords under your belt. Start simple and stay patient, it'll take time.
Don't forget the rhythm. Play on top of recordings. You can be pretty liberal with the harmony, but if you keep a steady beat it'll probably still sound good.
Portability is the key for me, because I tend to switch things around a lot. Containers generally isolate the persistent data from the runtime really well.
Docker is not the only, or even the best, way IMO to run containers. If I were providing services for customers, I would definitely build most container images daily in some automated way. Well, I already do that for quite a few.
The mess is only a mess if you don’t really understand what you’re doing, same goes for traditional services.
Most likely, a Hetzner storage box is going to be so slow you will regret it. I would just bite the bullet and upgrade the storage on Contabo.
Storage in the cloud is expensive, there’s just no way around it.
There was a good blog post about the real cost of storage, but I can’t find it now.
The gist was that to store 1TB of data somewhat reliably, you probably need at least:

- 2TB for the primary copy (mirrored)
- 2TB for a mirrored local backup
- 2TB for an off-site backup

Which amounts to something like 6TB of disk for 1TB of actual data. In real life you'd probably use some other RAID level, at least for larger amounts, so it's perhaps not as harsh, and compression can reduce the required backup space too.
I have around 130G of data in Nextcloud, and the off-site borg repo for it is about 180G. Then there are local backups on a mirrored HDD; with the ZFS snapshots that haven't been pruned yet, that's maybe 200G of raw disk space. So 130G becomes 510G in my setup.
I wish I knew about Photon before. Just spun up my own instance and loving it!
They could explain things better, you are right. I actually think I remember having almost the exact same confusion a few years back when I started. I still have two keys stored in my pw manager, no idea what the other one is for…
The decryption has gotten much more reliable in the past year or two, I also try out new clients a lot and have had no issues in a long time. Perhaps you could give it a new go, with the info that you use the same key for all sessions.
I have a feeling you are overthinking the Matrix key system.
Basically it’s just another password, just one you probably can’t remember.
Most of the client apps support verifying a new session by scanning a QR code or by comparing emoji. The UX of these could be better (I can never find the emoji option on Element, but it’s there…). So if you have your phone signed in, just verify the sessions with that. And it’s not like most people sign in on new devices all the time.
I’d give Matrix a new look if I were you.
Wireguard runs over UDP, so the port is indistinguishable from closed ports for most common port-scanning bots. Changing the port will obfuscate the traffic a bit. Even if someone manages to guess the port, they'll still need to use the right key; otherwise there's no response at all, just like from a closed port. Your ISP can still see that it's Wireguard traffic if they happen to be looking, but can't decipher the contents.
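Changing the port is a one-line change on the server side (values here are placeholders, and the client's `Endpoint` has to match):

```ini
# /etc/wireguard/wg0.conf on the server
[Interface]
PrivateKey = <server-private-key>
ListenPort = 48213   ; anything but the default 51820

# On each client, point Endpoint at the same port:
# [Peer]
# Endpoint = vpn.example.com:48213
```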
I would drop containers from the equation and just run Wireguard on the host. When issues arise, you’ll have a hard time identifying the problem when container networking is in the mix.
Perhaps I misunderstand the words "overlapping" and "hot-swappable" in this case, I'm not a native English speaker. To my knowledge they're not the same thing.
In my opinion wanting to run an extra service as root to be able to e.g. serve a webapp on an unprivileged port is just strange. But I’ve been using Podman for quite some time. Using Docker after Podman is a real pain, I’ll give you that.
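For reference, here's how the unprivileged-port situation is usually handled with rootless Podman (the image and port numbers are just examples):

```shell
# Option 1: publish the service on a high host port and let a reverse
# proxy (or nothing) sit in front of it
podman run -d -p 8080:80 docker.io/library/nginx:alpine

# Option 2: lower the unprivileged port floor system-wide, so rootless
# containers can bind port 80 directly (this affects all users)
sudo sysctl net.ipv4.ip_unprivileged_port_start=80
```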
> on surface they may look like they are overlapping solutions to the untrained eye.
You’ll need to elaborate on this, since AFAIK Podman is literally meant as a replacement for Docker. My untrained eye can’t see what your trained eye can see under the surface.
In my limited experience, when Podman seems more complicated than Docker, it’s because the Docker daemon runs as root and can by default do stuff Podman can’t without explicitly giving it permission to do so.
99% of the stuff self-hosters run on regular rootful Docker can run with no issues using rootless Podman.
Rootless Docker is an option, but my understanding is most people don’t bother with it. Whereas with Podman it’s the default.
Docker is good, Podman is good. It’s like comparing distros, different tools for roughly the same job.
Pods are a really powerful feature though.
Thank you!
Oh, the times when getting GTA from a friend required 30+ 3½" floppy disks IIRC. That plus making 5 or 6 round trips to a friend's house, because one of the disks almost always got corrupted during the zip process.
And since no one had the disk space or know-how to store the zip archives on an HDD for the inevitable re-copying, we had to redo the whole pack from scratch each time.
Edit: disk->HDD
Remember to check the polarity of the plug too. Some have + in the center pin, others have -
I’d go the SSH + sudo way.
Sudo can be quite finely tuned to only allow specific commands. If you want to lock the SSH session down further, look into rbash.
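A minimal sketch of that kind of sudo restriction (the user name and commands are made up for illustration; always edit with `visudo`):

```text
# /etc/sudoers.d/backup-operator
# Allow the 'backup' user to run exactly these commands as root,
# and nothing else
backup ALL=(root) NOPASSWD: /usr/bin/systemctl restart myapp.service
backup ALL=(root) NOPASSWD: /usr/bin/systemctl status myapp.service
```

Anything not listed is denied, so the blast radius of a compromised key stays small.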
Plain NGINX has served me well.
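For most self-hosted services it boils down to a reverse proxy block like this (hostname, cert paths, and upstream port are placeholders):

```nginx
server {
    listen 443 ssl;
    server_name app.example.com;

    ssl_certificate     /etc/letsencrypt/live/app.example.com/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/app.example.com/privkey.pem;

    location / {
        # Forward to the service listening on a local high port
        proxy_pass http://127.0.0.1:8080;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
    }
}
```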
+1 for rootless Podman. Kubernetes YAMLs to define pods, which are started/controlled by systemd. SELinux for added security.
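The workflow looks roughly like this (file names and the pod name are placeholders; on newer Podman versions, Quadlet `.kube` files replace `podman generate systemd`):

```shell
# 1. Start a pod from a Kubernetes-style YAML definition
podman kube play ~/pods/nextcloud.yaml

# 2. Generate systemd user units so the pod starts at boot
podman generate systemd --new --files --name nextcloud
mkdir -p ~/.config/systemd/user
mv pod-nextcloud.service container-nextcloud-*.service ~/.config/systemd/user/

# 3. Hand control over to systemd
systemctl --user daemon-reload
systemctl --user enable --now pod-nextcloud.service
```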
Also +1 for not using auto updates. Using the `latest` tag has bitten me more times than I can count; now I only use it for testing new stuff. All the important services are pinned to at least the major version tag.
My biased opinion is that most people run Nextcloud on an underpowered platform, and/or they install and enable every possible addon. Many also skip some important configurations.
If you run NC on a bit more powerful machine, like a used USFF PC, with a good link to it, the experience is better than e.g. OneDrive.
Another thing is, people say “Nextcloud does too much”, but a default installation really doesn’t do much more than files. If you add every imaginable app, sure it slows down and gets buggy. Disable everything you don’t need, and the experience gets much better. You can disable even the built-in Photos app if you don’t need it.
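Disabling apps is a couple of `occ` commands (run as the web server user; the paths, user, and app names here are examples, check `app:list` for what's actually installed):

```shell
# See which apps are enabled
sudo -u www-data php /var/www/nextcloud/occ app:list

# Disable the ones you don't use, e.g. the built-in Photos app
sudo -u www-data php /var/www/nextcloud/occ app:disable photos
```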
Not saying NC is a speed demon, but it really is OK. The desktop and mobile clients don't get enough love, that's true.
I’m talking about the “bare metal” installation or the community Apache/FPM container images. AIO seems to be a hot mess, and does just about everything a container shouldn’t be doing, but that’s just my opinion.