Just an explorer in the threadiverse.

  • 2 Posts
  • 205 Comments
Joined 2 years ago
Cake day: June 4th, 2023

  • I asked them elsewhere in the thread and Connect doesn’t have crossposting either, fwiw. I have no idea why they’re posting in this thread, their answer has nothing to do with your question.

    I have both Connect and Jerboa installed, they’re both fine. Connect looks prettier, and the search is definitely better. I end up using Jerboa more out of the two.

    When I want to cross-post from mobile I end up switching over to Lemmy’s mobile web interface, which can be saved to your home screen as a progressive web app. Not a Jerboa-native solution, but I’ve tried a lot of the Android apps and I haven’t seen any of them support a proper cross-post.



  • You connect to Headscale using the Tailscale clients, and configuration is exactly the same irrespective of which control server you use… with the exception of having to configure the custom server URL for Headscale (which requires navigating some hoops and poor docs for mobile/Windows clients).

    But to my knowledge there are no client-side configs related to NAT traversal (which is kind of the goal… to work seamlessly everywhere). The configs themselves on the headscale server aren’t so bad either, but the networking concepts involved are extremely advanced, so debugging if anything goes sideways or validating that your server-side NAT traversal setup is working as expected can be a deep dive. With Tailscale, you know any problems are client-side and can focus your attention accordingly… which simplifies initial debugging quite a lot.
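    For concreteness, the client-side difference is a single flag. A minimal sketch, assuming a hypothetical Headscale instance at headscale.example.com and a Headscale user named alice:

    ```shell
    # On the client: the same tailscale CLI, just pointed at your own control server.
    tailscale up --login-server https://headscale.example.com

    # On the Headscale server: approve the node using the key the client prints.
    headscale nodes register --user alice --key <key-from-client>
    ```

    Everything after registration looks the same from the client's point of view, which is why debugging tends to land on the server side.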




  • I use Headscale, but Tailscale is a great service and what I generally recommend to strangers who want to approximate my setup. The tradeoffs are pretty straightforward:

    • Tailscale is going to have better uptime than any single-machine Headscale setup, though not better uptime than the single-machine services I use it to access… so not a big deal to me either way.
    • Tailscale doesn’t require you to wrestle with certs or the networking setup required to do NAT traversal. And they do it well: you don’t have to wonder whether you’ve screwed something up that’s degrading NAT traversal only in certain conditions. It just works. That said, I’ve been through the wringer already on these topics, so Headscale is not painful for me.
    • Headscale is self-hosted, for better and worse.
    • In the default config (and in any reasonably user-friendly, non-professional config), Tailscale can inject a node into your network. They don’t and won’t. They can’t sniff your traffic without adding a node to your tailnet, but they do have the technical capability to join a node to your tailnet without your consent… their policy not to do that protects you… but their technology doesn’t. This isn’t some surveillance power grab though; it’s a risk that’s essential to the service they provide, which is determining what nodes can join your tailnet. IMO, the Tailscale security architecture is strong, and I’d have no qualms about trusting them with my network.
    • Beyond 3 users, Tailscale costs money… about $6 US in that geography. It’s a pretty reasonable cost for the service, and proportional in the grand scheme of what most self-hosters spend on their setups annually. IMO, it’s good value and I wouldn’t feel bad paying it.

    Tailscale is great, and there’s no compelling reason that should prevent most self-hosters who want it from using it. I use Headscale because I can and I’m comfortable doing so… but they’re both awesome options.




  • My money is also on IO. Outside of CPU and RAM, it’s the most likely resource to get saturated (especially with rotational magnetic disks rather than an SSD; spinning disks will be the performance limiter by a lot for many workloads), and it’s also the one that OP said nothing about, suggesting it’s a blind spot for them.

    In addition to the excellent command-line approaches suggested above, I recommend installing netdata on the box, as it will show you a very comprehensive set of performance metrics without having to learn to collect each one on the CLI. A downside is that it will use RAM proportional to the data retention period, which will be an issue if you’re swapping hard. But even a few hours of data can be very useful, and with 16 GB of RAM I feel like any swapping is likely to be a gross misconfiguration rather than true memory demand… and once that’s sorted, dedicating a gig or two to observability will be a good investment.
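    If you want to spot-check IO pressure from the CLI first, two stock commands go a long way (iostat comes from the sysstat package; the flags are standard, though the exact output columns vary a bit by distro):

    ```shell
    # Per-device IO stats, refreshed each second for 5 samples:
    # %util near 100 and high await mean storage is the bottleneck.
    iostat -xz 1 5

    # Memory and swap at a glance: nonzero si/so columns mean active swapping.
    vmstat 1 5
    ```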


  • Tailscale is out, unfortunately. Because the server also runs Plex and I need to use it with Chromecast on remote access…

    I rather suspect you already understand this, but for anyone following along… Tailscale can be combined with other networking techniques as well. So one could:

    • Access Plex from a Chromecast on your home network using your physical IP, and on your tailnet using the overlay IP.
    • Or one could have some services exposed publicly and others exposed on the tailnet. So Immich could be on the tailnet while Plex is exposed differently.

    It’s not an all or nothing proposition, but of course the more networking components you have the more complicated everything gets. If one can simplify, it’s often well worth doing so.

    Good luck, however you approach it.


  • So for something like Jellyfin that you are sharing to multiple people you would suggest a VPS running a reverse proxy instead of using DDNS and port forwarding to expose your home IP?

    I run my Jellyfin on Tailscale and don’t expose it directly to the internet. This limits remote access to my own devices, or the devices of those I’m willing to help install and configure tailscale on. I don’t really trust Jellyfin on the public internet though. It’s a bit buggy, which doesn’t bode well for its security posture… and a misconfiguration that exposes your content could generate a lot of copyright liability even if it’s all legitimately licensed, since you’re not allowed to redistribute it.

    But if you do want it publicly accessible, there isn’t a huge difference between a VPS proxying and a dynamic DNS setup. I have a VPS and like it, but there’s nothing I do with it that couldn’t be done with Cloudflare tunnel or dyndns.

    What VPS would you recommend? I would prefer to self host, but if that is too large of a security concern I think there is a real argument for a VPS.

    I use linode, or what used to be linode before it was acquired by Akamai. Vultr and DigitalOcean are probably what I’d look to if I got dissatisfied. There are a lot of good options available. I don’t see a VPS proxy as a security improvement over Cloudflare tunnel or dyndns though. Tailscale is the security improvement that matters to me, by removing public internet access to a service entirely while letting me continue to use it from my devices.
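    For anyone weighing the VPS-proxy option, the nginx side is only a few lines. A minimal sketch with placeholder names, assuming Jellyfin’s default port 8096 is reachable from the VPS (e.g. over a tailnet or tunnel), and not hardened for production:

    ```nginx
    # /etc/nginx/conf.d/jellyfin.conf -- placeholder domain and upstream IP
    server {
        listen 443 ssl;
        server_name jellyfin.example.com;

        ssl_certificate     /etc/letsencrypt/live/jellyfin.example.com/fullchain.pem;
        ssl_certificate_key /etc/letsencrypt/live/jellyfin.example.com/privkey.pem;

        location / {
            proxy_pass http://100.64.0.10:8096;  # home box's tailnet IP (placeholder)
            proxy_set_header Host $host;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            # websocket support for the web client
            proxy_http_version 1.1;
            proxy_set_header Upgrade $http_upgrade;
            proxy_set_header Connection "upgrade";
        }
    }
    ```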


  • Do I need to set up NGINX on a VPS (or similar cloud based server) to send the queries to my home box?

    A proxy on a VPS is one way to do this, but not the only way and not necessarily the best one… depending on your goals.

    • You can also use port forwarding and dyndns to just expose the port off your home IP. If your ISP is sucky, this may not work though.
    • You can also use Cloudflare’s free tunneling product, which is basically a hosted proxy that acts like a super port-forward that bypasses sucky ISP restrictions.
    • If you want to access Immich yourself from your own devices but don’t need to make it available to (many) others on devices you don’t control, I like and use tailscale the best. The advantage of tailscale is that Immich remains on a private network, not directly scannable from the internet. If there’s a preauth exploit published and you don’t pay attention to update promptly, scanners WILL exploit your Immich instance with internet-exposed techniques… whereas tailscale allows you to access services that internet scanners cannot connect to, which is a nice safety net.
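    As a sketch of the Cloudflare option: their quick-tunnel mode needs no port forwarding and no domain of your own. Assuming Immich on its default port 2283 (adjust to your setup):

    ```shell
    # cloudflared opens an outbound connection and prints a public
    # trycloudflare.com URL that proxies to the local port.
    cloudflared tunnel --url http://localhost:2283
    ```

    A named tunnel on your own domain works similarly, but persists across restarts.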

    Do I need to purchase a domain (randomblahblah.xyz) to use as the main access route from outside my house?

    Not for tailscale, and I don’t think for Cloudflare tunnel. Yes for a VPS proxy.

    I’ve run a VPS for a long while and use multiple techniques for different services.

    • Some services I run directly on the VPS because it’s simple and I want them to be truly publicly accessible.
    • Other services I run on a bigger server at home and proxy through the VPS because although I want them to be publicly accessible, they require more resources than my VPS has available. When I get around to installing Immich, there’s a decent chance it will go into this category.
    • Still other services, I run wherever and attach them to my tailnet. These I access myself on my own devices (or maybe invite a handful of trusted people into my tailnet), but aren’t visible to the public internet. If I decide not to use immich’s shared gallery features (and so don’t need it publicly accessible) or decide I don’t trust it security-wise… it will go here instead of the proxy-by-vps category.

    1. …create a sidebar with some contents… At least some of these communities have empty sidebars.
    2. Every community needs enough moderators. A single-mod community is not “enough” for a healthy community because things can blow up when you’re asleep or away, even in a community that was previously inactive. If a community member reaches out to offer to join a single-mod team… that contact warrants a response from the existing mod. Not necessarily to immediately accept the offer, but at least to discuss the possibility of extra mod coverage.
    3. It’s just not at all true that if others aren’t posting there’s no moderation work that could be done. Mods of inactive communities can jumpstart them by soliciting feedback on proposed rules, advertising them elsewhere, making scheduled discussion posts, and more. Some of these things can be done by a “regular” community member as well, but if community members try to include mods in discussions about how best to promote the community and the mods ignore them… that’s a sign that the community is abandoned.
    4. If a mod is notified that their community is about to get reassigned and they don’t respond… the community is definitely abandoned.

    All of which is to say, there are lots of ways to detect abandoned communities when post volume is low, and the process I highlighted is the standard way to request a takeover.


  • I use k8s at work and have built a k8s cluster in my homelab… but I did not like it. I tore it down, and I’m currently using podman. I don’t think I would go back to k8s (though I would definitely use docker as an alternative to podman, and would probably even recommend it over podman for beginners even though I’ve settled on podman for myself).

    1. K8s itself is quite resource-consuming, especially on RAM. My homelab is built on old/junk hardware from retired workstations, and I don’t want the kubelet itself sucking up half my RAM. Things like k3s help with this considerably, but then that’s not precisely k8s either. If I’m going to start trimming off the parts of k8s I don’t need, I end up going all the way to single-node podman/docker… not the halfway point that is k3s.
    2. If you don’t use hostNetwork, the k8s model of routing traffic only within the cluster except at egress is pure overhead. It’s totally necessary when you have a thousand engineers slinging services around your cluster, but there’s no benefit to this level of rigor in service management in a homelab. Here again, the networking in podman/docker is more straightforward and maps better to the stuff I want to do in my homelab.
    3. Podman accepts a subset of k8s resource YAML as a docker-compose-like config interface. This lets me use my familiarity with k8s configs in my podman setup.
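    As a sketch of point 3: podman consumes plain Kubernetes Pod YAML directly (the names and image here are placeholders):

    ```yaml
    # pod.yaml -- the k8s subset podman understands
    apiVersion: v1
    kind: Pod
    metadata:
      name: web
    spec:
      containers:
        - name: nginx
          image: docker.io/library/nginx:alpine
          ports:
            - containerPort: 80
              hostPort: 8080
    ```

    Run it with `podman kube play pod.yaml` (older podman releases spell it `podman play kube`).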

    Overall, the simplicity and lightweight resource consumption of podman/docker are what I value at home. The extra layers of abstraction and constraints k8s employs are valuable at work, where we have a lot of machines and a lot of people that must coordinate effectively… but I don’t have those problems at home, and the overhead (compute overhead, conceptual overhead, and config overhead) of k8s’ solutions to them is annoying there.


  • The more normal transfer path is to offer to take over a specific community or communities by:

    1. Reaching out to the existing mod and asking to be added to the mod team.
    2. Documenting their lack of response after a few days or a week.
    3. Documenting the failure to abide by the Lemmy.world moderation guidelines (https://lemmy.world/post/424735) by linking to spam or off-topic posts, to communities that lack rules or useful sidebar content, etc.
    4. Posting this info in !moderators@lemmy.world and offering to take over moderation.

    This is better than mass deletion because it keeps whatever small list of existing subscribers and post content intact across the transition. For moderation, Lemmy.world admins will get notified of reports and can address anything that violates instance rules.





  • I feel like you’re combatively advocating for a specific vision rather than collecting and processing feedback as your OP suggests. At any rate, you don’t seem to understand what I was trying to say at all… but it’s not something I’m going to fight about with someone who is questioning whether I know what a multi-reddit is and dismissing client-side techniques as nonsense, without seeming to understand why they were being discussed in the first place.

    I’ll leave with these thoughts, do with them what you will:

    1. I’m not interested in any multireddit feature that reduces sub privacy. I’d consider it a net loss for lemmy.
    2. On Reddit, multi-reddits are personal in nature. Such a personal multireddit for lemmy doesn’t require interaction with federation or privacy changes.
    3. I realize that a shared super-community feature is frequently requested on Lemmy aimed at addressing duplication of communities across instances. I don’t think that’s more than superficially similar to actual multireddits, and I don’t think it’s a good idea because it creates moderation problems that are far worse than the community duplication problems it purports to address.

  • What you’ve described is one way. It could also be a filtered view based on the subscribed/all feed, which provides a single API call that can return material from multiple communities. I’m not suggesting that a client-side-only solution is a GOOD solution. But from an information-flow perspective, I’m suggesting that multireddits are a “local” function. They are so local that they’re possible without server-side support at all, and certainly local enough not to require representation in the federated feed… which is a more significant change with potential impacts on other federated projects like kbin and mastodon… and shouldn’t require relaxing privacy constraints in any case.