• 6 Posts
  • 564 Comments
Joined 1 year ago
Cake day: March 22nd, 2024

  • Heh, I’m an EE dropout kinda in machine learning stuff now. Good luck, chemical engineering seems tough (but cool).

    But yeah, on Lemmy the idea is to post in communities that fit your niches, rather than following people directly like on Mastodon, Twitter, or whatever. Those are a bit slim but growing (for instance, there are some active science-focused communities/servers).

  • I use local instances of Aya 32B (and sometimes Deepseek, Qwen, LG Exaone, Japanese finetunes, or others depending on the language) to translate stuff, and it is quite different from Google Translate or any machine translation you find online. They get the “meaning” of the text instead of translating it robotically like Google, and are actually pretty loose with interpretation.

    It has soul… sometimes too much. That’s the problem: it’s great for personal use, where it can occasionally be wrong or flowery, but not good enough for publishing and selling, as the reader isn’t necessarily cognisant of errors.

    In other words, AI translation should be a tool the reader understands how to use, not something to save greedy publishers a buck.

    EDIT: Also, if you train an LLM for some job/concept in pure Chinese, a surprising amount of that new ability will work in English, as if the LLM abstracts language internally. Hence they really (sorta) do a “meaning” translation rather than a strict definitional one… Even when they shouldn’t.

    Another thing you can do is translate with one local LLM, then load another for a reflection/correction check. This is another point for “open” and local inference, as corporate AI goes for cheapness, and generally tries to restrict you from competitors.
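
    Here’s a rough sketch of that two-pass setup, assuming an OpenAI-compatible local server (llama.cpp’s llama-server, Ollama, etc.); the model names, port, and prompts are placeholders for whatever you actually run:

    ```python
    # Pass 1: one local model drafts the translation.
    # Pass 2: a different local model reviews and corrects the draft.
    # Endpoint, model names, and prompts are assumptions/placeholders.
    from openai import OpenAI

    client = OpenAI(base_url="http://localhost:8080/v1", api_key="not-needed")

    def ask(model: str, prompt: str) -> str:
        resp = client.chat.completions.create(
            model=model,
            messages=[{"role": "user", "content": prompt}],
            temperature=0.3,
        )
        return resp.choices[0].message.content

    source_text = "..."  # whatever you want translated

    draft = ask("aya-32b", f"Translate this into English, preserving tone:\n\n{source_text}")

    corrected = ask(
        "qwen-32b",
        "You are reviewing a translation. Compare the source and the draft, "
        "note any mistranslations or hallucinated content, then output a "
        f"corrected version.\n\nSource:\n{source_text}\n\nDraft:\n{draft}",
    )
    print(corrected)
    ```

    The reviewer only ever sees text, so any two models you can run locally will do; pairs from different families tend to catch each other’s blind spots.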

  • It’s not though.

    To me, one fundamental aspect of life (much less consciousness) is reacting to stimuli, and current LLMs don’t do that. Their weights, their “state,” are completely static during a conversation. Nothing changes them.

    They are incredibly intelligent tools, but any conversation you have with one about its own consciousness is largely a hallucination, often drawing on our sci-fi/theoretical machinations about AI, brought out by a sycophancy bias trained into most models.

  • The important part is: Grok has no memory.

    Every time you start a chat with Grok, it starts from its base state, a blank slate, and nothing anyone says to it ever changes that starting point. It has no awareness of anyone “making changes to it”; it made that up.

    A good analogy is having a ton of completely identical, frozen clones, waking one up for a chat, then discarding it. Nothing that happens after they were cloned affects the other clones.
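
    To make that concrete, here’s a minimal sketch (assuming an OpenAI-compatible local endpoint; the URL and model name are placeholders) of where chat “memory” actually lives:

    ```python
    # The model's weights never change at inference time; the only "memory"
    # is the transcript the client re-sends with every request.
    from openai import OpenAI

    client = OpenAI(base_url="http://localhost:8080/v1", api_key="not-needed")
    history = []  # the only state, and it lives client-side

    def chat(user_msg: str) -> str:
        history.append({"role": "user", "content": user_msg})
        resp = client.chat.completions.create(model="local-model", messages=history)
        reply = resp.choices[0].message.content
        history.append({"role": "assistant", "content": reply})
        return reply

    chat("My name is Alice.")
    print(chat("What is my name?"))  # recalled only because history was re-sent

    history.clear()                  # "discard the clone"
    print(chat("What is my name?"))  # blank slate again; the weights never moved
    ```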

    …Now, one can wring their hands over whatabouts and complications (Training on Twitter! Grounding! Twitter RAG?), but at the end of the day that’s how they work, and this meme is basically misinformation based on a misconception about AI.

  • Many (American) folks I know, even more conservative ones, tend to tune out familiar news sources because they’re so bad. Others are really glued to Facebook or whatever their feed of choice is.

    TBH I think America (on average) just lives in a stronger information dystopia than Europe. People here don’t connect Social Security cuts to themselves, or even know about Trump’s/Musk’s statements on it.

    Moral of the story… please ban Facebook, X, and really most engagement-driven social media as fast as you can. Or risk turning into… us.