• 0 Posts
  • 39 Comments
Joined 2 years ago
Cake day: September 25th, 2023
  • lorentz@feddit.it to Selfhosted@lemmy.world · Testing vs Prod · 3 days ago

    I don’t have a testing environment, but essentially all my services run in Docker, saving their data in directories mounted from the local filesystem. The compose file reads each image’s SHA digest from an env file. I have a shell script (sketched below) which:

    1. Triggers a new btrfs snapshot of the volume containing everything
    2. Pulls the new Docker images and stores their hashes in the env file
    3. Restarts all the containers.
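
    Something along these lines; this is only a sketch, and the paths, subvolume names, and example image are placeholders:

    #!/bin/sh
    # Sketch only: paths and the example image are assumptions.
    set -eu

    DATA=/srv/services          # btrfs subvolume holding all service data
    SNAPS=/srv/.snapshots

    # 1. Read-only snapshot of everything before touching it.
    btrfs subvolume snapshot -r "$DATA" "$SNAPS/services-$(date +%Y%m%d-%H%M%S)"

    # 2. Pull the new image and record its digest in the env file that the
    #    compose file reads (e.g. "image: nginx@${NGINX_SHA}").
    docker pull nginx:latest
    sha=$(docker image inspect nginx:latest --format '{{index .RepoDigests 0}}')
    echo "NGINX_SHA=${sha#nginx@}" > .env

    # 3. Recreate the containers pinned to the new digests.
    docker compose up -d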

    If a new image version is broken, rolling back is as simple as copying the old hash back into the env file and recreating the container. If data gets corrupted I can just copy the last working state from an old snapshot.

    The whole OS is on a btrfs volume which is snapshotted regularly, so ideally if an update fucks it up beyond recovery I can always boot from a rescue image and restore an old snapshot. But I honestly feel this is an extra precaution: in all the years I’ve run Debian on my computers, it has never become unbootable.


  • My Synology has an auto-block feature that, from my understanding, is essentially fail2ban; what I don’t know is whether such a feature works for all my exposed services or only Synology’s

    I’d be surprised if it works for custom services. Fail2ban has to know what’s running and has to have access to its log files to recognise a failed authentication request. The best you can do without log access is to rate-limit new TCP connections. But even then you should know what service is behind the port: 5 new SSH sessions per minute per IP can be reasonable, while with 5 new HTTP/1.0 connections you likely cannot even load a single HTML page.
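
    Without log access, the rough version of that rate limit with iptables looks something like this (port and thresholds are illustrative):

    # Track new connections to port 22 and drop an IP that opens more than
    # 5 of them within 60 seconds; thresholds must match the real service.
    iptables -A INPUT -p tcp --dport 22 -m conntrack --ctstate NEW \
        -m recent --set --name ratelimit
    iptables -A INPUT -p tcp --dport 22 -m conntrack --ctstate NEW \
        -m recent --update --seconds 60 --hitcount 5 --name ratelimit -j DROP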


  • lorentz@feddit.it to Selfhosted@lemmy.world · Encrypting data on local servers? · 9 days ago

    If you want to encrypt only the data partition you can use an approach like https://michael.stapelberg.ch/posts/2023-10-25-my-all-flash-zfs-network-storage-build/#encrypted-zfs to unlock it at boot.

    TL;DR: store half of the decryption key on the computer and the other half online, and write a script that at boot fetches the second half and decrypts the drive. There is a time window where a thief who connects your computer to the network could decrypt your data before you remove the online half, but depending on your threat model that can be acceptable. You can also decrypt the root partition with a similar approach, but you need to store the script in the initramfs and it is not trivial.
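
    A minimal sketch of that unlock script, assuming LUKS; the URL, key path, and device names are placeholders:

    #!/bin/sh
    set -eu

    # Concatenate the local key half with the half fetched from the network
    # and feed the result to cryptsetup on stdin.
    { cat /etc/luks/key.local; curl -fsS https://example.com/key.remote; } |
        cryptsetup open /dev/sdb1 data --key-file=-

    # Mount the now-decrypted volume.
    mount /dev/mapper/data /srv/data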

    Another option I’ve seen suggested is storing the decryption key on a USB pendrive and connecting it to the server with a long extension cord. The assumption is that a thief would unplug all the cables before stealing your server.






  • Nginx for my intranet because configuration is fully manual and I have complete control over it.

    Caddy for the public services on my VPS because it handles cert renewal automatically and most of its configuration is magic which just works.

    It is unbelievable how much shorter the Caddy configuration is, but on my intranet:

    1. I don’t want my reverse proxy dialling out to the internet to try to fetch new SSL certs. I know it can be disabled, but this is the default.
    2. I like to learn how stuff works. Nginx forces you to know more details, but it is full of good documentation, so it is not too painful compared to Caddy.





  • If security is one of your concerns, search for “HTTP client-side certificates”. TL;DR: you can create certificates to authenticate the client and configure the server to allow connections only from trusted devices. It adds extra security because attackers cannot leverage known vulnerabilities in the services you host: they are blocked at the HTTPS level before ever reaching the service.

    It is a little difficult to find good, up-to-date documentation, but I managed to make it work with nginx. The downside is that Firefox mobile doesn’t support them, but desktop Firefox and Chrome have no issues.
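
    For reference, creating the certificates boils down to a few openssl commands; this is a sketch (names and lifetimes are illustrative), and on the nginx side you point ssl_client_certificate at the CA and set ssl_verify_client on:

    # 1. Private CA used only to sign client certificates.
    openssl req -x509 -newkey rsa:4096 -nodes -days 3650 \
        -subj "/CN=my-private-ca" -keyout ca.key -out ca.crt

    # 2. Client key and CSR, then sign the CSR with the CA.
    openssl req -newkey rsa:4096 -nodes \
        -subj "/CN=my-laptop" -keyout client.key -out client.csr
    openssl x509 -req -in client.csr -CA ca.crt -CAkey ca.key \
        -CAcreateserial -days 825 -out client.crt

    # 3. Bundle key and cert for importing into the browser.
    openssl pkcs12 -export -inkey client.key -in client.crt -out client.p12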

    Of course you also want a server-side certificate; the easiest way is to get one from Let’s Encrypt.


  • You can configure Caddy to use 80 and act as a reverse proxy for both services, serving one site or the other depending on the name (you will need a second DNS entry pointing to the same IP). About not exposing 443: I really doubt Caddy can automatically retrieve SSL certificates for you when it is not running on the default port. Check the documentation; if I’m right, you either open an empty website on 443 just for the sake of getting the SSL certs and manually configure the other port to use them, or you get the certificates manually using DNS verification (check the Let’s Encrypt documentation) and configure Caddy to use them.
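
    For the manual route, the DNS verification can be done with certbot; a sketch (the domain is a placeholder):

    # certbot asks you to publish a TXT record under
    # _acme-challenge.example.com, then writes the certificate
    # under /etc/letsencrypt/live/example.com/
    certbot certonly --manual --preferred-challenges dns -d example.com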


  • lorentz@feddit.it to Selfhosted@lemmy.world · Network server/NAS · 3 months ago

    NAS are essentially small computers made for connecting a lot of storage, with a fancy OS that can be configured from a browser.

    So the real question between a NAS and a custom build is how much time you want to spend being a sysadmin. A NAS mostly works out of the box: you can configure it to auto-update and notify you only when something important happens. With a custom build you are completely on your own. Are you already familiar with some Linux distribution? How much do you want to learn?

    Once you answer the previous question, the next is about power. To store files on the network you don’t need a big CPU; on the contrary, you may want something small that doesn’t cost too much in electricity. But you mentioned you want to stream video. If you need transcoding (because you have a Chromecast that accepts only video in a specific format, for example) you need something more powerful. If you stream only to computers there is no need for transcoding, because they can digest any format, so anything will work.

    After this you need to decide how much space you need, and of what type. NVMe drives are faster, but spinning disks were still more reliable (and cheaper per TB) last time I checked. Also, do you want some kind of RAID? RAID1 is the bare minimum to protect you from a disk failure, but you need twice as many disks for the same amount of data. RAID5 is more efficient but needs at least 3 disks (three 4 TB disks give 8 TB usable, for example). That said, remember that RAID is not backup. You still need a backup for the important stuff.

    My honest suggestion is to start experimenting with your Raspberry Pi and see what you need. Likely it will already cover most of your needs: just attach an external HD and configure Samba shares. I don’t do any automated backup, but I know that Syncthing and Syncthing-Fork are very widely used tools. On Linux you can very easily run rsync from a crontab.
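
    For example, a nightly mirror in a crontab (the paths are placeholders):

    # Every night at 03:00: -a preserves attributes, --delete mirrors removals.
    0 3 * * * rsync -a --delete /srv/data/ /mnt/backup/data/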

    If you want an operating system that offers an out-of-the-box experience closer to a commercial NAS, you can check FreeNAS (now TrueNAS). I personally started with a QNAP and was happy with it for years, but after starting to self-host some stuff I wanted more flexibility, so I switched to a TerraMaster where I installed plain Debian. I’m happy with it, but it definitely requires more knowledge and patience to configure and administer.



  • FAT32 doesn’t support Unix file permissions, so when you mount the disk Linux has to assign a default ownership, which usually goes to root. This is the issue you are facing.

    You are confusing the disk permissions with the filesystem permissions. The udev rule you wrote gives you permission to write to the disk device (in other words, you can format it or rewrite its whole content) but doesn’t give you permission on the files stored inside, because they live at a higher abstraction level.

    If you use this computer interactively (in other words, if you usually sit in front of it and plug in the disk on demand), my suggestion is to remove that line from /etc/fstab and let the Ubuntu desktop environment mount the external hard drive for the currently logged-in user.

    If you use this computer as a server with the USB disk always connected (likely, since you mention Jellyfin), you need to modify the fstab line to specify which user should get permission on the files written on the disk.

    You can see the full list of options at https://www.kernel.org/doc/Documentation/filesystems/vfat.txt

    You either want uid=Mongostein (assuming that’s your username on your computer too) to assign yourself ownership of all the files, or umask=000 to give everyone full permissions on the files and directories while ownership remains with root. You should prefer the second option if Jellyfin runs as a different user, while the first one is better if there are other users on your computer who shouldn’t access your external disk.

    To summarize, the line in /etc/fstab should be one of these two:

    LABEL=drivename /mnt/drivename/ auto rw,user,exec,nofail,x-gvfs-show,dev,auto,umask=000 0 0
    
    LABEL=drivename /mnt/drivename/ auto rw,user,exec,nofail,x-gvfs-show,dev,auto,uid=Mongostein 0 0
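
    After editing /etc/fstab you can test the entry without rebooting:

    sudo umount /mnt/drivename
    sudo mount /mnt/drivename    # re-reads the options from /etc/fstab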
    

  • There is no need to add a udev rule to make the device writable by your user. If you have a full Ubuntu setup the external drive should appear in Nautilus as soon as you attach it, and it can be mounted and unmounted from the UI.

    If it doesn’t work you can add a line to /etc/fstab like

    /dev/sdb1 /mnt/mydisk vfat noauto,user,uid=yourname 0 0

    Double-check the man page for the right syntax (I’m going by memory), but what this line says is that any user can mount the device, that it shouldn’t be mounted automatically at boot, and that the files on it are owned by the user “yourname”. The issue with this approach is that the device name changes depending on what else is connected; udev also creates symlinks containing the device ID, which are more stable.
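
    You can list those stable names with:

    # udev maintains stable symlinks for each disk; use one of these in
    # fstab instead of /dev/sdb1.
    ls -l /dev/disk/by-id/ /dev/disk/by-label/ /dev/disk/by-uuid/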


  • I got a TerraMaster NAS and I’m super happy: https://www.terra-master.com/global/f4-5067.html

    The main reason to choose it is that it is just a PC in the form factor of a NAS. You can simply boot it from a pendrive and install your favourite operating system. I had a QNAP before, and while it was great to start with, self-hosting wasn’t the best experience on their OS.

    It is a small form factor, it should have low power consumption (I’ve never measured it to confirm), and it supports both NVMe and SATA drives. Currently I have an NVMe drive for the OS and two SATA drives for storage. The CPU is powerful enough to run Home Assistant, a VPN, Pi-hole, CommaFeed, and a bunch of other Docker images. I just plan to increase the RAM soonish because the stock amount feels a little constrained.


  • I did some experiments in the past. The nicest option I could find was enabling the WebDAV API on the hosting side (it was an option on cPanel if I recall correctly, but there are likely other ways to do it). It lets you use the webserver as a remote read/write filesystem. Then you can use rclone to transfer files; the nice part is that rclone supports client-side encryption, so you don’t have to worry too much about other people accessing the files.
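
    A sketch of that setup with rclone (remote names, URL, and credentials are placeholders; rclone stores the passwords obscured):

    # WebDAV remote pointing at the hosting provider...
    rclone config create mydav webdav \
        url=https://example.com/dav vendor=other user=me pass=secret

    # ...wrapped in a crypt remote so files are encrypted client side.
    rclone config create mycrypt crypt \
        remote=mydav:backup password=my-passphrase

    # Sync a local directory to the encrypted remote.
    rclone sync /srv/data mycrypt: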