

I got cancelled too and chose Hetzner instead. Will not do business with a company that can’t get their filters working decently.
Oh my Gwyn, this comment section is just amazing.
That’s wonderful to know! Thank you again.
I’ll follow your instructions, this implementation is exactly what I was looking for.
Absolutely stellar write-up. Thank you!
I have a couple of questions.
Imagine I have a powerful consumer GPU to throw at this solution, a 4090 Ti for the sake of example.
- How many containers can share one physical card, assuming the total VRAM is not exceeded?
- What does the virtual GPU look like inside the container? Can I run standard stuff like PyTorch, TensorFlow, and CUDA code in general?
If this is at all true, it would be world-changing news.
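Not the author, but for anyone wondering about the second bullet: inside a container that has been granted access to the card, the GPU shows up as an ordinary CUDA device, so a quick PyTorch check is enough to confirm it. A minimal sketch, assuming PyTorch is installed in the container image and the runtime exposes the card (e.g. via the NVIDIA Container Toolkit):

```python
# Minimal sketch: verify that a containerized PyTorch install can see the shared GPU.
# Assumes the container was started with GPU access; "cuda:0" is the single
# physical card exposed to the container.
import torch

if torch.cuda.is_available():
    device = torch.device("cuda:0")
    props = torch.cuda.get_device_properties(device)
    print(f"GPU visible in container: {props.name}")
    print(f"Total VRAM: {props.total_memory / 1024**3:.1f} GiB (shared across containers)")
    # A tiny allocation to confirm CUDA kernels actually run.
    x = torch.randn(1024, 1024, device=device)
    print("Matrix multiply OK:", (x @ x).shape)
else:
    print("No CUDA device visible; check the container's GPU passthrough settings.")
```

Note the reported VRAM is the whole card's memory; containers sharing one GPU see the same pool, so it is up to the workloads not to exceed it between them.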
Got one more for you: https://gossip.ink/
I use it via a docker/podman container I’ve made for it: https://hub.docker.com/repository/docker/vluz/node-umi-gossip-run/general