I coalesce the vapors of human experience into a viable and meaningful comprehension.

  • 0 Posts
  • 30 Comments
Joined 2 years ago
Cake day: June 30th, 2023



  • In addition to the point about Western mythologies dominating because of cultural exports, I think there is also the undercurrent of England’s original mythologies having been “lost” and so the English were always fascinated by the mythologies of the Norse (due to being invaded) and by the Greeks and Romans (as previous “great” civilizations they aspired to be).

    Combine that with America’s obvious English influences and the influence of England as a colonizer around the world, and those mythologies gained a huge outsized influence.


  • I probably didn’t explain well enough. Consuming media (books, TV, film, online content, and video games) is predominantly a passive experience. Video games obviously less so, but all in all, they only “adapt” within the guardrails of gameplay. These AI chatbots, however, are different in their very formlessness: they’re programmed only to maintain engagement, and they rely on the LLM training to maintain an illusion of “realness”. And because they were trained on all sorts of human interactions, they’re very good at that.

    Humans are unique in how we continually anthropomorphize not only tons of inert, lifeless things (think of someone alternating between swearing at and pleading with a car that won’t start) but also abstract ideals (even scientists often speak of evolution “choosing” specific traits). Given all of that, I don’t think it’s unreasonable to be worried about a teen with a still-developing prefrontal cortex, who is in the midst of working out social dynamics and peer relationships, imbuing an AI chatbot with far more “humanity” than is warranted. Humans seem to have an anthropomorphic bias in how we relate to the world - we are the primary yardstick we use to measure and relate everything around us, and things like AI chatbots exploit that to maximum effect. Hell, the whole reason the site mentioned in the article exists is that this approach is extraordinarily effective.

    So while I understand that on a cursory look, someone objecting to it comes across as a sad example of yet another moral panic, I truly believe this is different. For one, we’ve never had access to such a lively psychological mirror before, and these are uncharted waters; and two, this isn’t an objection to some imagined slight against a “moral authority” but one based on the scientific understanding of specifically teen brains and their demonstrated fragility in certain areas while still under development.



  • I understand what you mean about the comparison between AI chatbots and video games (or whatever the moral panic du jour is), but I think they’re very much not the same. To a young teen, no matter how “immersive” the game is, it’s still just a game. They may rage against other players, they may become obsessed with playing, but as I said they’re still going to see it as a game.

    An AI chatbot that is a troubled teen’s “best friend” is different, and no matter how many warnings are slapped on the interface, it’s going to feel much more “real” to that kid than any game. They’re going to unload every ounce of angst into that thing, and by defaulting to “keep them engaged”, that chatbot is either going to ignore stuff it shouldn’t or encourage them in ways that it shouldn’t. It’s obvious there are no real guardrails in this instance; if he was talking about being suicidal, some red flags should’ve popped up.

    Yes, the parents shouldn’t have allowed him such unfettered access; yes, they shouldn’t have had a loaded gun he could get to; but a simple “This is all for funsies” warning on the interface isn’t enough to stop this from happening again. Some really troubled adults are using these things as de facto therapists, and that’s bad too. But I’d be happier if lawmakers were much more worried about kids having access to this stuff than about kids accessing “adult sites”.


  • That’s certainly where the term originated, but usage has expanded. I’m actually fine with it, as the original idea was about the pattern recognition we use when looking at faces, and I think there’s similar mechanisms for matching other “known” patterns we see. Probably with some sliding scale of emotional response on how well known the pattern is.


  • geekwithsoul@lemm.ee to Lemmy.world Support@lemmy.world · Ban the MBFC bot · 7 months ago (+5/−12)

    “Currently the bot’s media ratings come from just some guy, who is unaccountable and has an obvious rightwing bias.”

    Wow! Talk about misinformation!!! https://mediabiasfactcheck.com/about/

    Or maybe you think they were bought and paid for by some nefarious source? Nope…

    “Media Bias/Fact Check funding comes from reader donations, third-party advertising, and membership subscriptions. We use third-party advertising to prevent influence and bias, as we do not select the ads you see displayed. Ads are generated based on your search history, cookies, and the current web page content you are viewing. We receive $0 from corporations, foundations, organizations, wealthy investors, or advocacy groups. See details on funding.”

    “…I would suggest making the ratings instead come from an open sourced and crowdsourced system. A system where everyone could give their inputs and have transparency, similar to an upvote/downvote system.”

    “Such a system would take many hours to design and maintain, it is not something I personally am willing to contribute, nor would I ask it of any volunteers.”

    Thank you for at least providing an iota of something constructive. It’s an interesting idea, and there is academic research that shows it might be possible. But the problem is that in a world already filled with state- and corpo-sponsored organized misinformation campaigns, how does any crowdsourced solution avoid capture and infiltration by the very sources of misinformation it should be assessing? Look at the Community Notes feature on Twitter and how often that is abused. Then you’d need a fact checker for your fact checker.
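    To make the capture problem concrete, here is a minimal sketch (not anyone’s actual proposal) of the kind of crowdsourced rating the thread is discussing: votes on a source’s reliability, weighted by an earned reputation score so that a flood of fresh sock-puppet accounts carries little weight. All names and parameters here are hypothetical illustrations.

    ```python
    def aggregate_ratings(votes, reputation, min_weight=1.0):
        """Aggregate crowd votes on a source's reliability.

        votes: list of (user_id, score) pairs, score in {-1, +1}
        reputation: dict mapping user_id -> trust weight in [0, 1],
            earned over time (accounts with no history get 0)
        Returns (weighted_score, total_weight). Results whose total
        weight falls below min_weight are reported as None, i.e.
        "insufficient trusted input", which blunts brigading by
        throwaway accounts rather than letting them set the rating.
        """
        total, weight = 0.0, 0.0
        for user, score in votes:
            w = reputation.get(user, 0.0)  # unknown users carry no weight
            total += w * score
            weight += w
        if weight < min_weight:
            return None, weight  # not enough trusted input to rate
        return total / weight, weight
    ```

    Even this toy version shows where the hard part lives: the reputation table itself becomes the thing worth attacking, which is exactly the “fact checker for your fact checker” regress.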


  • geekwithsoul@lemm.ee to Lemmy.world Support@lemmy.world · Ban the MBFC bot · 7 months ago (+14/−26)

    “universally destructive to understanding”

    So what you’re saying is that no one derives any use from the bot? Wow, with that kind of omniscience, I’d expect we could just ask you to judge every news source. Win-win for everyone I suppose if you’re up for it.

    Now “generally destructive” would probably be better wording for us mere mortals, but that still seems a wildly overgeneralized statement. Or maybe “inadequately precise” would be more realistic, but then that really takes the wind out of the sails of banning it, doesn’t it?


  • geekwithsoul@lemm.ee to Lemmy.world Support@lemmy.world · Ban the MBFC bot · 7 months ago (+11/−24)

    Because I think this is the first thing I’ve seen you post, and blocking everything you disagree with seems sort of stupid?

    I think the bot has issues, but I hardly agree that it’s posting misinformation. Incomplete? Imperfect? You bet. But that’s not “misinformation” in any commonly understood meaning. I think the intent of providing additional context on information sources is laudable.

    As someone with such a distaste for misinformation, how would you suggest fixing it? That’s a much more useful discussion than “BAN THE THING I PERSONALLY AND SUBJECTIVELY THINK IS BAD!!!” You obviously think misinformation is a problem, so why not suggest a solution?