
There is no lie. It’s just tough to admit that it’s fairly accurate.
I keep picking instances that don’t last. I’m formerly known as:
@EpeeGnome@lemm.ee
@EpeeGnome@lemmy.fmhy.net
@EpeeGnome@lemmy.antemeridiem.xyz
@EpeeGnome@lemmy.fmhy.ml
Yes, I just glossed over that detail by saying “similar to”, but that is a more accurate explanation.
Unfortunately, the most probable response to a question is an authoritative answer, so that’s what usually comes out of them. They don’t actually know what they do or don’t know. If they happen to describe themselves accurately, it’s only because a similar description was in the training data, or they were specifically instructed to answer that way.
Dumpster concerns aside, I think these count as feral yeast.
People think the stem is too tough, but it just needs to be cooked right. The trick is to start cooking the stem pieces first, then add the florets once the stems are just starting to soften. Exact timing depends on the cooking method, but done right, all of it will be tender and tasty.
If I recall correctly, the follow-up was the same person complaining about being painfully constipated for several days.
It’s been a while, so I must have exaggerated it in my mind. Three days certainly sounds more plausible.
Thanks for reminding me, I’ve been meaning to do this. I’m doing a random text edit on them all now. I’ll go back later and have it edit my top comments into a nice site-ban-worthy message.
Yeah, this really depends on what you mean by winning here. Are we actually changing the other person’s mind, or do they just admit we’re right while still believing otherwise? Are people who witness the argument included? Do people continue to agree indefinitely? Does it change reality to match?
If all 6 got the same answer multiple times, that means your query very strongly correlated with that reply in the training data used by all of them. Does that mean it’s therefore correct? Well, no. It could mean there were a bunch of incorrect examples of your query that they used to come up with that answer. It could mean that the examples they’re working from seem to follow a pattern your problem fits into, but the correct answer doesn’t actually fit that seemingly obvious pattern. And yes, there’s a decent chance it could actually be correct. The problem is that the only way to rule out those other, still quite likely possibilities is to actually do the problem yourself, at which point asking the LLM accomplished nothing.