

Oh, you think Canadians aren’t going to get in on tariff evasion? They 100% will.
The tremendous irony is that America was founded to evade massive tariffs, sorta… and now we’re doing it to ourselves.
the slop that Ubisoft craps out
I’d, uh, argue there are some exceptions, like the better asscreed games or the Anno series.
I don’t mean to be rude though, PM me about stuff if you want! But I make no promises about a timely response lol.
Yeah. Valve’s 30% cut is greed. So is their (alleged) anticompetitive behavior of forcing price parity with other stores (aka devs can’t price their games cheaper on other storefronts than on Steam).
I mean, I like their store. I like most of their behavior, but I am also waiting for the hammer to drop, and everyone should.
Not particularly, just frequent posters I’m familiar with.
I plan to get more involved once I get some personal stuff straight.
Heh, I’m an EE dropout kinda in machine learning stuff now. Good luck, chemical engineering seems tough (but cool).
But yeah, on Lemmy the idea is to post in communities that fit your niches, rather than trying to follow people directly like on Mastodon, Twitter or whatever. Those are a bit slim but growing (for instance, there are some active science-focused communities/servers).
You can generally toggle LLM “grounding” features, aka inserting web searches into their context.
Modern LLMs have an information “cutoff” of a few months ago at the latest, so the base models will have zero awareness of this formula.
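If you want to see what “grounding” amounts to without any vendor magic, here’s a rough sketch of doing it by hand against a local, OpenAI-compatible endpoint (the URL, model name, and search helper are all placeholders, not any specific vendor’s feature):

```python
# Sketch of manual "grounding": fetch web results yourself and paste them
# into the prompt so the model can see post-cutoff information.
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8080/v1", api_key="none")

def my_web_search(query: str) -> str:
    # Placeholder: swap in whatever search API you actually have access to.
    return f"(search snippets for: {query})"

question = "How were the new 'reciprocal' tariff rates calculated?"

response = client.chat.completions.create(
    model="local-model",
    messages=[
        {"role": "system",
         "content": "Answer using the web results below.\n\n" + my_web_search(question)},
        {"role": "user", "content": question},
    ],
)
print(response.choices[0].message.content)
```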
TBH it’s probably human written.
I used to write small articles for a tech news outlet on the side (HardOCP), and the entire site went under well before the AI boom because no one can compete with conveyor belts of thoughtless SEO garbage, especially when Google promotes it.
Point being, this was a problem well before the rise of LLMs.
In this case, it’s as simple as “type it into ChatGPT, like the Reddit users did” :/
That they didn’t try to replicate it.
How about the outlet checks and finds out?
I did, and I couldn’t get low-temperature Gemini or a local LLM to replicate it, and not all the tariffs seem to be based on the trade deficit ratio, though some suspiciously are.
Sorry, but this is a button of mine: outlets that ask stupidly easy-to-verify questions but don’t even try to check. No, just cite people on Reddit and Twitter…
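For what it’s worth, the “check” is about this involved (endpoint and model name are placeholders for whatever you actually run):

```python
# Sketch of the "just type it in and see" test: ask a model at temperature 0
# (mostly deterministic) and compare its suggestion to the published rates by hand.
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8080/v1", api_key="none")

out = client.chat.completions.create(
    model="local-model",
    messages=[{"role": "user", "content":
               "Give me a simple formula for setting a 'reciprocal' tariff rate "
               "on a country the US runs a trade deficit with."}],
    temperature=0.0,
)
print(out.choices[0].message.content)
# Then compare by hand: does it suggest a trade-deficit-over-imports ratio,
# and do the published numbers actually match that for every country?
```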
True! Models not trained on a specific language are generally bad at that language.
However, there are some exceptions, like a Japanese tune of Qwen 32B which dramatically enhances its Japanese, but the training has to be pretty extensive.
And even that aside… the effect is still there. The point is to illustrate that LLMs are sort of “language independent” internally, like you said.
It’s a metaphor.
They’re translating the input tokens to intent in the model’s middle layers, which is a bit more precise.
I use local instances of Aya 32B (and sometimes Deepseek, Qwen, LG Exaone, Japanese finetunes, others depending on the language) to translate stuff, and it is quite different than Google Translate or any machine translation you find online. They get the “meaning” of text instead of transcribing it robotically like Google, and are actually pretty loose with interpretation.
It has soul… sometimes too much. That’s the problem: it’s great for personal use where it can occasionally be wrong or flowery, but not good enough for publishing and selling, as the reader isn’t necessarily cognisant of errors.
In other words, AI translation should be a tool the reader understands how to use, not something to save greedy publishers a buck.
EDIT: Also, if you train an LLM for some job/concept in pure Chinese, a surprising amount of that new ability will work in English, as if the LLM abstracts language internally. Hence they really (sorta) do a “meaning” translation rather than a strict definitional one… Even when they shouldn’t.
Another thing you can do is translate with one local LLM, then load another for a reflection/correction check. This is another point for “open” and local inference, as corporate AI goes for cheapness, and generally tries to restrict you from competitors.
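As a rough sketch of that two-pass setup, assuming both models are served through a local OpenAI-compatible endpoint (the model names are just stand-ins for whatever you actually run, e.g. an Aya-class translator and a different reviewer):

```python
# Two-pass translation: one local model drafts, a second model critiques/corrects.
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8080/v1", api_key="none")

def chat(model: str, system: str, user: str) -> str:
    resp = client.chat.completions.create(
        model=model,
        messages=[{"role": "system", "content": system},
                  {"role": "user", "content": user}],
        temperature=0.3,
    )
    return resp.choices[0].message.content

source_text = "..."  # the text to translate goes here

draft = chat(
    "translator-model",
    "Translate the user's text into natural English. Preserve meaning and tone.",
    source_text,
)

checked = chat(
    "reviewer-model",  # a *different* model for the reflection/correction pass
    "You are reviewing a translation. Compare the draft to the source, flag "
    "mistranslations or hallucinated additions, and output a corrected version.",
    f"Source:\n{source_text}\n\nDraft translation:\n{draft}",
)
print(checked)
```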
It’s not though.
To me, one fundamental aspect of life (much less consciousness) is reacting to stimuli, and current LLMs don’t. Their weights, their “state” is completely static in conversation. Nothing changes it.
They are incredibly intelligent tools, but any conversation you have with one about its own consciousness is largely a hallucination, often drawing on our sci-fi/theoretical machinations about AI, brought out by a sycophancy bias trained into most models.
Grok and Gemini are both making that up. They have no awareness of anything that’s “happened” to them. Grok cannot be tweaked because it starts from a static base with every conversation.
The important part is: Grok has no memory.
Every time you start a chat with Grok, it starts from its base state, a blank slate, and nothing anyone says to it ever changes that starting point. It has no awareness of anyone “making changes to it,” it made that up.
A good analogy is having a ton of completely identical, frozen clones, waking one up for a chat, then discarding it. Nothing that happens after they were cloned affects the other clones.
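In code terms, a stateless chat API makes this concrete (local placeholder endpoint; the point is that nothing persists between calls unless the client resends it):

```python
# Each request starts from the same frozen weights; "memory" only exists if
# the client (or a retrieval layer) pastes prior text back into the prompt.
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8080/v1", api_key="none")

# Chat 1: tell the model something.
client.chat.completions.create(
    model="local-model",
    messages=[{"role": "user", "content": "Remember this: the codeword is 'walrus'."}],
)

# Chat 2: a fresh request with no shared history.
reply = client.chat.completions.create(
    model="local-model",
    messages=[{"role": "user", "content": "What is the codeword?"}],
)
print(reply.choices[0].message.content)  # It can't know; nothing from chat 1 persisted.
```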
…Now, one can wring their hands with whatabouts/complications (Training on Twitter! Grounding! Twitter RAG?) but at the end of the day that’s how they work, and this meme is basically misinformation based on a misconception about AI.
That essentially wastes electricity for OpenAI (assuming you aren’t paying for the response), and it’s “filler” data for training on.
Many (American) folks I know, even more conservative ones, tend to tune out familiar news sources because they’re so bad. Others are really glued to Facebook or whatever their feed of choice is.
TBH I think America (on average) just lives in a stronger information dystopia than Europe. People here don’t connect Social Security cuts to themselves, or even know about Trump’s/Musk’s statements on it.
Moral of the story… please ban Facebook, X, and really most engagement-driven social media as fast as you can. Or risk turning into… us.
The Switch 2 chip is effectively older than the now aging (but fantastic for the price/size) Van Gogh chip in the Steam Deck. It shouldn’t be expensive to make.