

Crossfading and normalization would both independently be dealbreakers for me. I can’t go back.
I would be genuinely surprised if fair use drew the line for format-shifted, legally purchased media at “remote watch-together”, leaving format-shifting and local watch-together intact.
If it were up to the studios’ interpretation of the law, you’d need to purchase a license for each person during local watch-together.
Agree in principle, but in practice:
- parents who live across the state
- Plexamp for music
They are indeed just that keen on our data.
They know they can’t get rid of it for all of their customers, but they do want to make it as hard as possible for random users to do so.
The problem with this is that it doesn’t work for home users who want to pay for their software. Crazy… I know… but those people do exist.
For people with “that one game” there is a middle ground. Mine is Destiny 2, which uses a version of Easy Anti-Cheat that refuses to run on Linux. My solution was to buy a $150 used Dell on eBay and a $180 GPU to drive my 4 high-res displays, and install Debian + Moonlight on it. I moved my gaming PC downstairs, and a combination of wake-on-LAN + Sunshine means that I can game at functionally native performance, streaming from the basement. In my setup, Windows only exists to play games on.
The added bonus here is now I can also stream games to my phone, or other ~thin clients~ in the house, saving me upgrade costs if I want to play something in the living room or upstairs. All you need is the bare minimum for native-framerate, native-res decoding, which you can find in just about anything made in the last 5-10 years.
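For anyone wanting to copy this, the glue is minimal. Here’s a hedged sketch of the wake-and-stream step, assuming the common `wakeonlan` tool and the Moonlight CLI (exact stream syntax varies between Moonlight builds; the MAC, hostname, and app name are placeholders):

```sh
#!/bin/sh
# Wake the gaming PC in the basement via a Wake-on-LAN magic packet
# (MAC address is a placeholder), give it time to boot, then stream.
wakeonlan aa:bb:cc:dd:ee:ff
sleep 30
moonlight stream gaming-pc Desktop
```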
Outlook being on that list is crazy.
The canvas API exposes hardware details that aren’t usually available via other browser APIs. It’s normally hard to get specific capability information about a user’s GPU, for example, but the canvas API needs that information to decide how to draw objects across differently capable hardware, and those extra data points make it that much easier to uniquely identify a user. The more data points you can collect, the more unique each visitor becomes.
Here’s a good utility from the EFF to demonstrate the concept if you or anyone else is curious.
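Conceptually, a tracker just concatenates those data points and hashes them; with enough attributes the result is effectively a unique ID. A toy illustration only, with made-up values:

```sh
# GPU string, resolution, timezone, core count... each attribute narrows the
# anonymity set, and the combined hash is often unique per visitor.
printf '%s' 'ANGLE (NVIDIA GeForce RTX 3080)|1920x1080|UTC-5|8 cores' | sha256sum
```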
Just think, an extra long shirt can cover that hole, and we could embed a flexible display, wifi module, and a camera in the extra space. This could scan the faces of those around you, and display personalized ads! This is an excellent solution to the hole in your pants, and frankly, the only secure one.
You’re correct that nesting namespaces is unlikely to introduce measurable performance degradation. For performance, I was thinking mostly of the nested virtual network stack adding latency. Both Docker and LXC run their own virtual interfaces.
There’s also the issue of running nested AppArmor, SELinux, and/or seccomp checks on processes in the child containers. I know that single instances of those are often enough to kill performance on highly latency-sensitive applications (SAP NetWeaver is the example that comes to mind), so I would imagine two layers of those checks would exacerbate those concerns.
There are security, performance, and capability concerns with that approach, AppArmor on the first-layer LXC probably being the most annoying.
If you want to isolate your Docker sandbox from your main host, you should use a VM, not a container.
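To make the two options concrete, here’s a hedged sketch using LXD, which can launch both (the image and instance names are illustrative). The first line nests Docker inside a container that still shares the host kernel, so the stacked AppArmor/seccomp checks above apply; the second gives you an actual VM boundary:

```sh
# Container with nesting enabled: shares the host kernel, stacked LSM checks.
lxc launch ubuntu:22.04 docker-ct -c security.nesting=true

# Virtual machine: its own kernel, a real isolation boundary from the host.
lxc launch ubuntu:22.04 docker-vm --vm
```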
I’ve always wondered why board partners didn’t just raise prices to scalper levels and take a $2,200 profit per card sold.
And tbh, it’s Nvidia’s fault that the partners don’t have enough dies; I’d much rather a partner take the margin than an unnecessary middleman.
The proper DeepSeek R1 requires about 500 GB of RAM/VRAM to run, which is orders of magnitude more memory than modern phones have. The smaller models labeled “DeepSeek R1” are distilled variants, not the real model everyone is talking about.
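The back-of-the-envelope math, assuming the commonly cited ~671B parameter count for the full model (figures approximate):

```sh
# Weights alone: parameters (in billions) x bytes per parameter ≈ GB.
echo $(( 671 * 1 ))   # FP8 (1 byte/param): ~671 GB
echo $(( 671 / 2 ))   # 4-bit (0.5 byte/param): ~335 GB, before KV cache overhead
```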
When you’re the size of LMG you don’t hire investigative law firms for PR; you do it for liability. The goal is to limit corporate liability by removing individuals likely to get you sued, and most importantly to distance leadership from it with plausible deniability. The firm also has its own reputation to consider, and wouldn’t let a client get away with materially misrepresenting their results.
I don’t think it’s unreasonable to suggest that a positive finding from an investigative firm is evidence supporting their position that they did nothing materially wrong. The fact that no one was fired as a result of that investigation is a good sign externally, since it would open them up to more liability if they knew about wrongdoing and did nothing.
The source for this compat library is in their public sources, last I checked, but because it’s not part of their standard repos it doesn’t technically have to be. I suspect that’s the eventual end-goal.
A lot of industries are semi-forced into it. Let me give you an example I know of first-hand. Modern SAP stacks support three operating systems: Windows Server, RHEL, and SUSE.
You’re probably thinking to yourself: “But RHEL is just regular Linux, surely you can install it on anything if you have the appropriate dependencies. I’ll bet it even just works on RHEL-compatibles like Rocky, Alma, or CentOS Stream!”
And you would be ~sort of~ right, but wrong in the most dystopian way possible. The installer itself does hardcoded checks for “compatible” operating systems, using /etc/os-release and a few other common system files. Spoofing those to RHEL 8.5 or whatever is easy enough, but the one that really gets you is a dependency on compat-glibc-X.Y-ZZZZ.x86_64. This “glibc compatibility library” is conveniently only accessible via a super special Red Hat repository granted by a super special SAP license (which is like ~$2,000/year/CPU). Looking at the Red Hat sources, it is actually just a bog-standard, semi-modern glibc build with nothing special. The only other thing you get with this license, as far as I can tell, is another metapackage that installs dependencies and makes a few kernel tweaks recommended by SAP.
So you can install it on Alma/Rocky by impersonating RHEL in /etc/os-release and then compiling a version of glibc and linking it in a special hardcoded location, but SAP/Red Hat put as many roadblocks in your way as possible. It took me weeks of reverse-engineering the installer to get our farm off of the ~$100k/yr that Red Hat wanted to charge us for essentially:
```
./configure --enable-bootstrap --enable-languages=c,c++,lto --prefix=/usr --mandir=/usr/share/man --infodir=/usr/share/info --with-bugurl=http://bugzilla.redhat.com/bugzilla --enable-shared --enable-threads=posix --enable-checking=release --enable-multilib --with-system-zlib --enable-__cxa_atexit --disable-libunwind-exceptions --enable-gnu-unique-object --enable-linker-build-id --with-gcc-major-version-only --enable-plugin --with-linker-hash-style=gnu --enable-initfini-array --disable-libquadmath --disable-libsanitizer --disable-libvtv --disable-libgomp --disable-libitm --disable-libssp --disable-libatomic --disable-libcilkrts --without-isl --disable-libmpx --enable-gnu-indirect-function --with-tune=generic --with-arch_32=i686 --build=x86_64-redhat-linux
Thread model: posix
gcc version 9.1.1 20190605 (Red Hat 9.1.1-2) (GCC)
```
definitely worth $100,000/yr… much capitalism, many line go up
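For posterity, the workaround itself boils down to something like the following. This is a hedged sketch only: the spoofed version string, the glibc version, and the install prefix are illustrative, and the compat-glibc-X.Y-ZZZZ placeholder is left exactly as the installer names it:

```sh
# 1. Impersonate RHEL so the installer's /etc/os-release check passes
#    (back the file up first; the version string is illustrative).
sudo sed -i -e 's/^ID=.*/ID="rhel"/' -e 's/^VERSION_ID=.*/VERSION_ID="8.5"/' /etc/os-release

# 2. Build a stock glibc and link it into the hardcoded location where the
#    installer expects its compat-glibc-X.Y-ZZZZ dependency to live.
tar xf glibc-2.28.tar.xz
mkdir glibc-build && cd glibc-build
../glibc-2.28/configure --prefix=/opt/compat-glibc
make -j"$(nproc)" && sudo make install
```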
Why?
There’s nothing preventing you from forking a Lemmy client or server to prototype this. Depending on how you implement the ActivityPub backend, you might be able to make it transparent to the user by presenting an algorithm’s output as a stream of cross-posts in a /c/ on a server.
Anything more would likely require forking a client, which might be easier to implement but harder to convince a large userbase to migrate to.
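Before forking anything, the server-side half can be mocked against Lemmy’s existing HTTP API. A hedged sketch (the instance name and the scoring formula are placeholders): pull recent posts, re-rank them with your own “algorithm”, and the winners are what the /c/ would cross-post.

```sh
# Fetch 50 recent posts, re-rank by a toy engagement score, keep the top 10 titles.
curl -s 'https://lemmy.example/api/v3/post/list?type_=All&sort=New&limit=50' \
  | jq '.posts
        | sort_by(-(.counts.score + 2 * .counts.comments))
        | .[:10]
        | map(.post.name)'
```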
Thank you for letting me know what software not to use; good bot