“We think we’re on the cusp of the next evolution, where AI happens not just in that chatbot and gets naturally integrated into the hundreds of millions of experiences that people use every day,” says Yusuf Mehdi, executive vice president and consumer chief marketing officer at Microsoft, in a briefing with The Verge. “The vision that we have is: let’s rewrite the entire operating system around AI, and build essentially what becomes truly the AI PC.”
…yikes



@sugar_in_your_tea @BarneyPiccolo especially in a language as widely used as English, with regional nuance that an NLP model could never reliably distinguish. When I say “quite”, is it an American “quite” or a British “quite”? Same for “rather”. What does it mean if we’re tabling this item on the agenda: are we discussing it now, or postponing it? If something is happening “momentarily”, is it about to happen, or will it only last a moment? Neither the speaker nor the program will have a clue how these things are being interpreted, and likely will not even realise there are differences.
Even if they solve the regional dialect problem, there’s still the issue of people being imprecise with natural language.
For example, I may ask, “what is the weather like?” and mean any number of things: the conditions right now, the forecast for later, or whether I need to bring a jacket.
An internet search would be “weather <location> <time>”. That’s it. Typing that takes a few seconds, whereas voice control requires processing the message (usually a couple of seconds) and probably an iteration or two to get what you want. Even if you get it right the first time, it takes as long as or longer than just typing the query.
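A rough sketch of that gap (the function and slot names below are made up for illustration, not any real assistant’s API): the typed query already states the location and time, while the vague spoken request leaves the software guessing or asking follow-ups.

```python
# Hypothetical sketch: a typed search query is explicit, while a vague
# spoken request leaves slots empty that a voice pipeline has to fill in.

typed_query = "weather seattle saturday"      # location and time stated outright

spoken_request = "what is the weather like?"  # no location, no time frame


def parse_intent(utterance: str) -> dict:
    """Toy stand-in for a speech/NLU pipeline, not a real library."""
    slots = {"intent": None, "location": None, "timeframe": None}
    if "weather" in utterance.lower():
        slots["intent"] = "weather"
    # location and timeframe stay None: they must be inferred from context
    # or resolved with another round trip to the user.
    return slots


print(parse_intent(spoken_request))
# -> {'intent': 'weather', 'location': None, 'timeframe': None}
```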
Even if voice activation is perfect, I’d still prefer a text interface.
My autistic brain really struggles with natural language and its context-based nuances. Human language just isn’t built for precision; it’s built for conciseness and efficacy. I don’t see how a machine can do better than my brain.