Cryptography nerd
Fediverse accounts:
Natanael@slrpnk.net (main)
Natanael@infosec.pub
Natanael@lemmy.zip
@Natanael_L@mastodon.social
Bluesky: natanael.bsky.social
It’s sliced along the center, rotating the axis of the slice 360 degrees as it goes around the circle, cutting it into two halves which interlock
Your workaround is precisely why I said “more practical”. Any updates to your tooling might break it because it’s not an expected use case
Given the perfect grid pattern and a certain kind of coherence this kind of ML doesn’t usually preserve, it’s much more likely somebody cut and pasted the individual images into an ML-based image generator to repaint them with English text
Stupid? Yes. They could have just taken the text alone into an LLM, or better yet a regular translation program. But since when were the kind of people who blindly rely on ML smart?
You don’t want FIDO2 security tokens for that; use an OpenPGP applet (works with some Yubikeys and with many programmable smartcards). Much more practical for authenticating a server.
BTW we have a lot of cryptography experts in www.reddit.com/r/crypto (yes I know, I’m trying to get the community moved, I’ve been moderating it for a decade and it’s a slow process)
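For anyone curious, what the applet buys you is basically challenge-response signing where the private key never leaves the card. A minimal sketch of that idea in Python, using a software Ed25519 key from the cryptography package purely as a stand-in for the key on the card (illustrative only, not how you’d actually talk to an OpenPGP applet):

```python
# Challenge-response sketch: the server proves it controls a private key by
# signing a random challenge. With an OpenPGP applet the private key stays on
# the card; here a software Ed25519 key stands in for it.
import os
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
from cryptography.exceptions import InvalidSignature

card_key = Ed25519PrivateKey.generate()   # on real hardware this never leaves the applet
pinned_pubkey = card_key.public_key()     # the client pins/stores this ahead of time

challenge = os.urandom(32)                # client sends a fresh random challenge
signature = card_key.sign(challenge)      # the "card" signs it

try:
    pinned_pubkey.verify(signature, challenge)
    print("server authenticated")
except InvalidSignature:
    print("authentication failed")
```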
Could very well be ML repainting, “draw this image with English text”
It’s probably in-place translation using AI for a French book
The Nyquist-Shannon sampling theorem isn’t subjective, it’s physics.
Your example isn’t great because it’s about misconceptions about the eye, not about physical limits. The physical limits for transparency are real and absolute, not subjective. The eye can perceive quick flashes of objects that take less than a thousandth of a second. The reason we rarely go above 120 Hz for monitors (other than cost) is that differences in continuous movement can barely be perceived, so it’s rarely worth it.
We know where the upper limits for perception are. The difference typically lies in the encoder / decoder or physical setup, not the information a good codec is able to embed at that bitrate.
Newer fractional arithmetic encoding can get crazy
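If you’ve never seen it, here’s a toy sketch of the fraction/interval idea in Python (made-up three-symbol alphabet and probabilities; real codecs use fixed-precision range coding or ANS rather than exact fractions):

```python
from fractions import Fraction

# Toy arithmetic coder over a fixed three-symbol alphabet, using exact fractions.
# Real codecs use fixed-precision range coding / ANS, but the interval idea is the same.
PROBS = {"a": Fraction(1, 2), "b": Fraction(1, 4), "c": Fraction(1, 4)}

def cumulative(probs):
    # Map each symbol to (lower bound of its slice of [0,1), its probability).
    low, table = Fraction(0), {}
    for sym, p in probs.items():
        table[sym] = (low, p)
        low += p
    return table

def encode(message, probs=PROBS):
    table = cumulative(probs)
    low, width = Fraction(0), Fraction(1)
    for sym in message:
        sym_low, sym_p = table[sym]
        low += width * sym_low   # narrow the interval onto this symbol's slice
        width *= sym_p
    return low                   # any number in [low, low + width) identifies the message

def decode(code, length, probs=PROBS):
    table = cumulative(probs)
    out = []
    for _ in range(length):
        for sym, (sym_low, sym_p) in table.items():
            if sym_low <= code < sym_low + sym_p:
                out.append(sym)
                code = (code - sym_low) / sym_p   # rescale and keep going
                break
    return "".join(out)

code = encode("abcab")
assert decode(code, 5) == "abcab"   # the whole message round-trips through one fraction
```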
Why use lossless for that when transparent lossy compression already does that with so much less bandwidth?
Opus is indistinguishable from lossless at 192 Kbps. Lossless needs roughly 800 - 1400 Kbps. That’s a savings of between 4x and 7x with the exact same quality.
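Back-of-envelope on what that means for data use (my own rough sketch, using only the bitrates above, nothing here is measured):

```python
# Rough data-per-hour comparison at the bitrates mentioned above.
def mb_per_hour(kbps):
    return kbps * 1000 / 8 * 3600 / 1e6   # kilobits/s -> megabytes/hour

for label, kbps in [("Opus 192 Kbps", 192), ("lossless ~800 Kbps", 800), ("lossless ~1400 Kbps", 1400)]:
    print(f"{label}: ~{mb_per_hour(kbps):.0f} MB per hour")
# -> ~86 MB vs ~360-630 MB per hour, i.e. roughly 4x-7x more data for lossless
```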
Your wireless antenna often draws more energy in proportion to bandwidth use than the decoder chip does, so using high quality lossy even gives you better battery life, on top of also being more tolerant to radio noise (easier to add error correction) and having better latency (less time needed to send each audio packet). And you can even get better range with equivalent radio chips due to needing less bandwidth!
You only need lossless for editing or as a source for transcoding, there’s no need for it when just listening to media
Except Opus. Beats it at most bitrates
You literally can not distinguish 192 Kbps Opus from true lossless. Not even with movie theater grade speakers. You only benefit from lossless if you’re editing / applying multiple effects, etc, which you will not do at the receiving end of a Bluetooth connection.
That’s more than a codec question, that’s a Bluetooth audio profile question. Bluetooth LE Audio should support higher quality (including with Opus)
Nobody needs lossless over Bluetooth
Edit: plenty of downvotes from people who have never listened to ABX tests comparing high quality lossy versus lossless
At high bitrate lossy you literally can’t distinguish it. There’s math to prove it:
https://en.wikipedia.org/wiki/Nyquist–Shannon_sampling_theorem
At 44 kHz / 16 bit, with over 192 Kbps and a good encoder, your ear literally can’t physically discern the difference
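Plugging in the usual ~20 kHz upper limit of human hearing, the bound works out with room to spare:

```latex
% Nyquist-Shannon: a signal with no frequency content above B Hz can be
% reconstructed exactly from samples taken at a rate f_s > 2B.
f_s > 2B \quad\Longrightarrow\quad 44.1\,\text{kHz} > 2 \times 20\,\text{kHz} = 40\,\text{kHz}
```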
Transparency is good enough, it’s intended to be a good fit for streaming, not masters for editing
Transparent or indistinguishable lossy compression are other common terms
There’s a push for Opus now, it’s the perfect codec for Bluetooth because it’s a singular codec that fits the whole spectrum from low bandwidth speech to high quality audio, and it’s fully free
Opus! It’s a merge of a codec designed for speech (from Skype!) with one designed for high quality audio by Xiph (same people who made OGG/Vorbis).
Although it needs some more work on latency: by default it prefers to work on bigger frames than Bluetooth packets like, but I’ve seen there’s work on standardizing a version that fits Bluetooth. Google even has it implemented now on Pixel devices.
Fully free codec!
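Rough payload math behind the frame-size point above (just bitrate × duration; 192 Kbps is the example bitrate from this thread, not any Bluetooth default):

```python
# Opus frame duration vs. payload size at a given bitrate.
# Smaller frames cut latency but add more per-packet overhead.
BITRATE_KBPS = 192                          # example bitrate from the thread

for frame_ms in (2.5, 5, 10, 20):           # some of the frame sizes Opus supports
    payload_bytes = BITRATE_KBPS * 1000 / 8 * frame_ms / 1000
    print(f"{frame_ms:>4} ms frame -> ~{payload_bytes:.0f} bytes of audio payload")
# 20 ms frames come out around 480 bytes; Bluetooth links prefer the shorter ones.
```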
Depends on the specific system, but yes it often does
Reddit keeps opting communities into features that make no sense for them: Recap, talks, community awards, etc, which only fit a tiny number of communities.