

Nope, it will just be ‘thing that beeps’ to justify whatever they were planning on doing to immigrants anyway, like police dogs.
Yeah, it isn’t like any one person can really understand everything about everything. There is just too much for anyone to know.
Oooohhhhh, that makes sense.
informed, dedicated opinions.
Hmmmmm
If you sailed by riding the figurehead, sure.
We shouldn’t be satisfied until high level execs and Musk, all of whom profit from Tesla’s sales, are jailed for fraud related to the scam sales in Canada.
Autocorrect and I are enemies.
You can tell by the way she is.
That isn’t Selma.
How so?
It was factual and stated neutrally as far as I can tell.
You can have a job and still do the things you think you would have done as a child. After all, a third of childhood days are taken up by school, not even counting homework.
I had so much more time as a young adult than I had in school. Except for summer break I guess.
Ewww, no. The programmer should have run their unit tests, maybe even told you about them. You should be testing for edge cases not covered by the unit tests at a minimum and replicating the unit tests if they don’t appear to be very thorough.
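A minimal sketch of the split described above, with a hypothetical function: the programmer's unit tests cover the happy path, and the tester replicates those and then probes edge cases the unit tests never touched.

```python
def parse_age(text):
    """Parse a non-negative integer age from a string (toy example)."""
    value = int(text.strip())
    if value < 0:
        raise ValueError("age cannot be negative")
    return value

# Replicating the programmer's (thin) unit tests:
assert parse_age("42") == 42
assert parse_age(" 7 ") == 7

# Edge cases the unit tests didn't cover:
for bad in ["", "abc", "-1", "4.5"]:
    try:
        parse_age(bad)
        raise AssertionError(f"expected failure for {bad!r}")
    except ValueError:
        pass
```

The point isn't the specific function; it's that the tester's job starts where the existing tests stop.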
Choosing not to roll dice avoids needing to follow the outcome. That was a smart decision on the door knocking!
Yeah, people are frequently terrible at understanding context so it shouldn’t be surprising that a computer has difficulty too.
There are actually a lot of specialized applications of neural network based computing being used for science, but they don’t get the flashy headlines because they are a tool. Those projects use it to narrow down what people should look into first for confirmation, like ancient settlement patterns, stars that might have planets, and other things where patterns exist but are hard to see.
Some examples are listed here at a high level. In all cases the AI leads to humans confirming and then working from there; it isn’t the end result on its own. https://medium.com/@jeyadev_needhi/uncovering-the-past-how-ai-is-transforming-archaeology-38ded420896d
It is hard because they chose to make it hard by trying to do far too many things at the same time and sell it as a complete product.
Yes, the tradeoff between constrained randomization and accurately vomiting back the information it was fed is going to be difficult as long as it is designed to be interacted with as if it was a human who can know the difference.
It could be handled by having clearly defined ways of conveying whether the user wants factual or randomized output, but that would shatter the veneer of being intelligent.
This is because AI is not aware of context due to not being intelligent.
What is called creative is really just randomization within the constraints of the design. That reduces accuracy, because of the randomization. If the ‘creativity’ is reduced, it becomes more accurate because it is no longer adding changes.
Using words like creativity, self-sabotage, hallucinations, etc. makes it seem like AI is far more advanced than it actually is.
Design requirements are what it should do, not how it does it.
What, like some kind of design requirements?
Heresy!
Experts are working from their perspective, which involves being employed to know the details of how the AI works and the potential benefits. They are invested in it being successful as well, since they spent the time gaining that expertise. I would guess a number of them work in fields that are not easily visible to the public, and use AI systems in ways the public never will because they are focused on things like pattern recognition on viruses or identifying locations to excavate for archeology that always end with a human verifying the results. They use AI as a tool and see the indirect benefits.
The general public’s experience is being told AI is a magic box that will be smarter than the average person, has made some flashy images and sounds more like a person than previous automated voice things. They see it spit out a bunch of incorrect or incoherent answers, because they are using it the way it was promoted, as actually intelligent. They also see this unreliable tech being jammed into things that worked previously, and the negative outcome of the hype not meeting the promises. They reject it because how it is being pushed onto the public is not meeting their expectations based on advertising.
That is before the public is being told that AI will drive people out of their jobs, which is doubly insulting when it does a shitty job of replacing people. It is a tool, not a replacement.