  • Perhaps I should rephrase the argument as Searle did. He didn’t actually discuss “abstract understanding”; instead, he drew a distinction between “syntax” and “semantics”. He claimed that computers as we know them cannot have semantics, whereas humans can (even if we don’t all have the same semantics).

    Now consider a quadratic equation. If you want to solve it, you can insert the coefficients into the quadratic formula. There are other ways to solve it, but this one will always give you the right answer.

    If you remember your algebra class, you will recognize that the quadratic formula isn’t just some random equation to compute. You use it with intention, because the answer is semantically meaningful. It describes things like cars accelerating or apples falling.
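    As an aside, the mechanical nature of the formula is easy to see in code. Here is a short Python sketch (the example coefficients are made up for illustration) that applies x = (−b ± √(b² − 4ac)) / 2a without, of course, any notion of what the roots mean:

```python
import math

def solve_quadratic(a, b, c):
    """Return the real roots of a*x^2 + b*x + c = 0 via the quadratic formula."""
    discriminant = b * b - 4 * a * c
    if discriminant < 0:
        return []  # no real roots
    root = math.sqrt(discriminant)
    return [(-b + root) / (2 * a), (-b - root) / (2 * a)]

# Example: x^2 - 5x + 6 = 0 factors as (x - 2)(x - 3)
print(solve_quadratic(1, -5, 6))  # [3.0, 2.0]
```

    The function manipulates symbols exactly as the toddler would: it never knows whether the roots describe a falling apple or an accelerating car.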

    You can teach a three-year-old to identify the coefficients, and you can show them the symbols that make up the quadratic formula: “-”, second number, “+”, “√”, “(”, etc. You can teach them to copy those symbols into a calculator in order. So a three-year-old could probably solve a quadratic equation. But they almost certainly have no idea why they are doing what they are doing. It’s just a series of symbols they were told to copy into a calculator; their only intention was to copy them in order correctly. There are no semantics behind the equation.

    For that matter, a three-year-old could equally well enter the symbols needed to calculate relativistic time dilation, which is an even shorter equation. But if their parents proudly told you that their toddler can solve problems in special relativity, you might think, “Yes… but not really.”
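    For reference, the time dilation equation alluded to here is t′ = t / √(1 − v²/c²). A minimal Python sketch, just as mechanical as the toddler’s button-pressing (the 0.8c example velocity is illustrative):

```python
import math

C = 299_792_458.0  # speed of light in m/s

def dilated_time(proper_time, velocity):
    """Time elapsed for an outside observer: t' = t / sqrt(1 - v^2/c^2)."""
    return proper_time / math.sqrt(1.0 - (velocity / C) ** 2)

# At 80% of light speed, 1 second of proper time dilates to about 1.667 s
print(dilated_time(1.0, 0.8 * C))
```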

    That three-year-old is every computer program. Sure, an AI can enter symbols into a calculator and report the answer. If you tell it to enter a different series of symbols, it will report a different answer. You can tell the AI that one answer scores 0.1 and another scores 0.8, and to calculate a different equation based partly on those scores. But to the AI, those scores and equations have no semantic meaning. At some point those scores might stop increasing, and you will declare that the AI is “trained”. But at no point does the AI attach any semantic content to those symbols or scores. It is pure syntax.


  • It doesn’t matter if the answer is right. If the AI does not have an abstract understanding of “red” then it is using a different process to get to the answer than humans. And according to Searle, a Turing machine cannot have an abstract understanding of “red”, no matter how complex the question or how complex an internal model is used to determine its answers.

    Going back to the Chinese Room, it is possible that the instructions carried out by the human are based on a complex model. In fact, it is possible that the human is literally calculating the output of a trained neural net by summing the weights of nodes, etc. You could even carry out these calculations yourself, if you could memorize the parameters.
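    To make that concrete, here is a rough Python sketch of what “summing the weights of nodes” involves. The architecture and weights are invented for illustration; a real trained network just has far more of the same multiply-add steps, all of which a patient person could do with pencil and paper:

```python
import math

def forward(x, layers):
    """Compute a feed-forward net's output by hand: weighted sums plus a sigmoid."""
    for weights, biases in layers:
        x = [
            1.0 / (1.0 + math.exp(-(sum(w * xi for w, xi in zip(row, x)) + b)))
            for row, b in zip(weights, biases)
        ]
    return x

# Two inputs -> two hidden units -> one output, with arbitrary example weights
layers = [
    ([[0.5, -0.2], [0.3, 0.8]], [0.1, -0.1]),
    ([[1.0, -1.0]], [0.0]),
]
print(forward([1.0, 2.0], layers))
```

    Nothing in the arithmetic tells the person (or the CPU) what the inputs or the output refer to.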

    Your use of “black box” gets to the heart of it. Memorizing all of the parameters of a trained NN allows you to calculate an answer, but it doesn’t give you any understanding of what the answer means. And if the parameters don’t tell you anything about the meaning, then they don’t tell the CPU doing that calculation anything about meaning either.




  • “The room understands” is a common counterargument, and it was addressed by Searle by proposing that a person memorize the contents of the book.

    And while the room passes the Turing test, that does not mean that “it passes all the tests we can throw at it”. Here is one test it would fail: it contains various components that respond to the word “red”, but no component that responds exclusively to the word “red” across all its uses. This level of abstraction is part of what we mean by understanding. Internal representation matters.







  • Most people don’t get seed money from their parents, but the median American will inherit around $70K.

    Regardless, most people who start a business can take out a loan. And to put things in perspective, even opening a fast food restaurant will require over $200K.

    In other words, the advantage that $28K gave Musk was nowhere near enough to open a small restaurant, much less automatically turn him into a billionaire.


  • Peer review is a general principle that goes beyond the formalities of journal publication.

    Even if you never submit your work to a peer-reviewed journal, your scientific claims will be judged by a community of scientific peers. If your work is not accepted by your scientific peers, then you are not contributing to scientific knowledge.

    For example, most homeopathic claims are never submitted to journals. They are nevertheless judged by the scientific community, and are not persuasive enough to be accepted as scientific knowledge.


  • Science is based on peer review, which means that a scientific opinion will be accepted only if it can convince a sufficient number of other scientists. This is not too different from using an explicit voting system to rank answers.

    All scientists accept the possibility that what they currently believe to be true may one day be considered false. Science does not pretend to describe only eternal truths. So it’s not a problem if the most popular answer today becomes the least popular answer in the future, or vice versa.