Is this another thing that the rest of the world didn’t know the US doesn’t have?
Comfortably-off customers casting aspersions on “minimum wage workers” are the absolute pits.
There is lots to say here but you are too clueless to say any of it. FFS
Every right-wing accusation is a confession.
many years now
This appears to be an escalating fraud, affecting newer models more than old. So I’d guess that’s ^^ the answer.
It’s not just a Reuters investigation, they’ve been fined by a few jurisdictions and they absolutely do have the ability to pay lawyers to defend those charges if they’re false.
They don’t seem to list the instances they trawled (just the top 25 on a random day, with a link to the site they got the ranking from but, as far as I can see, no list of the instances themselves).
We performed a two day time-boxed ingest of the local public timelines of the top 25 accessible Mastodon instances as determined by total user count reported by the Fediverse Observer…
That said, most of this seems to come from the Japanese instances which most instances defederate from precisely because of CSAM? From the report:
Since the release of Stable Diffusion 1.5, there has been a steady increase in the prevalence of Computer-Generated CSAM (CG-CSAM) in online forums, with increasing levels of realism.17 This content is highly prevalent on the Fediverse, primarily on servers within Japanese jurisdiction.18 While CSAM is illegal in Japan, its laws exclude computer-generated content as well as manga and anime. The difference in laws and server policies between Japan and much of the rest of the world means that communities dedicated to CG-CSAM—along with other illustrations of child sexual abuse—flourish on some Japanese servers, fostering an environment that also brings with it other forms of harm to children. These same primarily Japanese servers were the source of most detected known instances of non-computer-generated CSAM. We found that on one of the largest Mastodon instances in the Fediverse (based in Japan), 11 of the top 20 most commonly used hashtags were related to pedophilia (both in English and Japanese).
Some history for those who don’t already know: Mastodon is big in Japan. The reason why is… uncomfortable
I haven’t read the report in full yet but it seems to be a perfectly reasonable set of recommendations to improve the ability of moderators to prevent this stuff being posted (beyond defederating from dodgy instances, which most if not all non-dodgy instances already do).
It doesn’t seem to address the issue of some instances existing largely so that this sort of stuff can be posted.
There are exceptions to the rule, and this is one of them.
The rule works so well because journalists who can make a statement of fact, make a statement of fact. When they can’t stand the idea up, they use a question mark for cover, e.g. “China is in default on a trillion dollars in debt to US bondholders. Will the US force repayment?”
This is an opinion piece which is asking a philosophical question. The rule does not apply.
tbf this is not very much different from how many flesh’n’blood journalists have been finding content for years. The legendary crack squirrels of Brixton was nearly two decades ago now (yikes!). Fox was a little late to the party with U.K. Squirrels Are Nuts About Crack in 2015.
Obviously, I want flesh’n’blood writers getting paid for their plagiarism-lite, not the cheapskates who automate it. But this kind of embarrassing error is a feature of the genre. And it has been gamed on social media for some time now (eg Lib Dem leader Jo Swinson forced to deny shooting stones at squirrels after spoof story goes viral)
I don’t know what it is about squirrels…
This lil robot was trained to know facts and communicate via natural language.
Oh stop it. It does not know what a fact is. It does not understand the question you ask it nor the answer it gives you. It’s a very expensive magic 8ball. It’s worse at maths than a 1980s calculator because it does not know what maths is let alone how to do it, not because it’s somehow emulating how bad the average person is at maths. Get a grip.
It’s OK. Ordinary people will have no trouble at all making sure they use a different vehicle every time they drive their kid to college or collect an elderly relative for the holidays. This will only inconvenience serious criminals.
Marxism works as a critique of laissez-faire capitalism, but as a standalone system it always results in the creation of totalitarian regimes.
Marx never devised any kind of “system” and there has never been a Marxist revolution (if you mean, of the kind Marx predicted would occur). Marx thought revolution would result from the concentration of labour in factories in heavily industrialised countries but so-called Marxist revolutions have only happened in agrarian economies so far.
It turns out that fascism (which is power protecting itself) is the primary beneficiary of crises of capitalism because they happen when labour is at its weakest.
Narrator: you cannot believe Tesla’s numbers.
*accurately
You’re agreeing with me but using more words.
I’m more annoyed than upset. This technology is eating resources which are badly needed elsewhere and all we get in return is absolute junk which will infest the literature for decades to come.
Efficacy of prehospital administration of fibrinogen concentrate in trauma patients bleeding or presumed to bleed (FIinTIC)
Sam Altman is a know-nothing grifter. HTH
They cannot be anything other than stochastic parrots because that is all the technology allows them to be. They are not intelligent, they don’t understand the question you ask or the answer they give you, they don’t know what truth is let alone how to determine it. They’re just good at producing answers that sound like a human might have written them. They’re a parlour trick. Hi-tech magic 8balls.
In context. And that is exactly how they work. It’s just a statistical prediction model with billions of parameters.
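To make “statistical prediction model” concrete, here is a deliberately tiny sketch of the idea, assuming nothing about any real LLM: a bigram model that picks the most likely next word from made-up counts. An actual LLM does the same job (predict the next token given context) with a neural network over billions of parameters, but the counts, words, and function names below are purely illustrative.

```python
# Toy illustration of next-token prediction (NOT a real LLM):
# given the current context, emit the statistically most likely continuation.
from collections import Counter

# Hypothetical bigram counts, as if learned from a corpus
counts = {
    "the": Counter({"cat": 5, "dog": 3, "end": 1}),
    "cat": Counter({"sat": 4, "ran": 2}),
}

def predict_next(token: str) -> str:
    """Return the most probable next token given the current one."""
    following = counts[token]
    total = sum(following.values())
    # Normalise counts into probabilities, then take the argmax
    probs = {word: n / total for word, n in following.items()}
    return max(probs, key=probs.get)

print(predict_next("the"))  # "cat" — the most frequent continuation
```

The point of the sketch is that there is no “knowing” anywhere in the loop: only frequencies in, most-likely-continuation out. Scale changes the fluency, not the mechanism.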
That’s not true! There’s heaps of early-GPT articles pointing out how much bullshit it regurgitates (eg Why does ChatGPT constantly lie?). And no evidence at all that the breathless fanboys have even stopped to check.
It will almost always be detectable if you just read what is written. Especially for academic work. It doesn’t know what a citation is, only what one looks like and where they appear. It can’t summarise a paper accurately. It’s easy to force laughably bad output by just asking the right sort of question.
The simplest approach for setting homework is to give them the LLM output and get them to check it for errors and omissions. LLMs can’t critique their own work and students probably learn more from chasing down errors than filling a blank sheet of paper for the sake of it.
Yes they are. Probably not in the country that calls it transit, mind. And lots of people would like to be able to have more private conversations in public, whether or not they’re travelling at the time.
Plus, I’ve seen a lot of threads over the years from gamers, or the people who have to live with them, looking for something exactly like this.