My view of AI (3)

  • chuft
    Stepher
    SPECIAL MEMBER
    MODERATOR
    Level 33 - New Superhero
    • Dec 2007
    • 4470

    #1

    My view of AI (3)

    On how weird the people are who are creating AI. A gifted New York Times article.


    This Changes Everything


Since moving to the Bay Area in 2018, I have tried to spend time regularly with the people working on A.I. I don't know that I can convey just how weird that culture is. And I don't mean that dismissively; I mean it descriptively. It is a community that is living with an altered sense of time and consequence. They are creating a power that they do not understand at a pace they often cannot believe.

In a 2022 survey, A.I. experts were asked, "What probability do you put on human inability to control future advanced A.I. systems causing human extinction or similarly permanent and severe disempowerment of the human species?" The median reply was 10 percent.

    I find that hard to fathom, even though I have spoken to many who put that probability even higher. Would you work on a technology you thought had a 10 percent chance of wiping out humanity?

We typically reach for science fiction stories when thinking about A.I. I've come to believe the apt metaphors lurk in fantasy novels and occult texts. As my colleague Ross Douthat wrote, this is an act of summoning. The coders casting these spells have no idea what will stumble through the portal. What is oddest, in my conversations with them, is that they speak of this freely. These are not naifs who believe their call can be heard only by angels. They believe they might summon demons. They are calling anyway.

I often ask them the same question: If you think calamity so possible, why do this at all? Different people have different things to say, but after a few pushes, I find they often answer from something that sounds like the A.I.'s perspective. Many - not all, but enough that I feel comfortable in this characterization - feel that they have a responsibility to usher this new form of intelligence into the world.
    l i t t l e s t e p h e r s
  • boredjedi
    Master
    SPECIAL MEMBER
    MODERATOR
    Level 35 - Rockin' Poster
    • Jun 2007
    • 8122

    #2
    Originally posted by chuft
    On how weird the people are who are creating AI. A gifted New York Times article.


    This Changes Everything



I often ask them the same question: If you think calamity so possible, why do this at all?
    I would have guessed the answer would be "If not us, someone else will".

    http://eighteenlightyearsago.ytmnd.com/


    • chuft
      Stepher
      SPECIAL MEMBER
      MODERATOR
      Level 33 - New Superhero
      • Dec 2007
      • 4470

      #3
      Personally I would not want to be responsible for the annihilation of the human race, so that would never be my answer.
      l i t t l e s t e p h e r s


      • BRBFBI
        The Long Arm of the Law
        SPECIAL MEMBER
        Level 14 - Sportscandy
        • Oct 2023
        • 300

        #4
        I had the same thought.


        • BRBFBI
          The Long Arm of the Law
          SPECIAL MEMBER
          Level 14 - Sportscandy
          • Oct 2023
          • 300

          #5
          Thanks, chuft. I found the NYT Op-Ed insightful.

The U.K. vs. Apple article is crazy. That Great Britain is pushing for more invasive surveillance than China in this particular area is pretty unfathomable.


          • chuft
            Stepher
            SPECIAL MEMBER
            MODERATOR
            Level 33 - New Superhero
            • Dec 2007
            • 4470

            #6
            Apple tells the UK government to take a hike:

            "Two years after Apple introduced an encrypted storage feature for iPhone users, the company is pulling those security protections in Britain rather than comply with a government request that it create a tool to give law enforcement organizations access to customersโ€™ cloud data."


            https://www.nytimes.com/2025/02/21/t...smid=url-share
            l i t t l e s t e p h e r s


            • BRBFBI
              The Long Arm of the Law
              SPECIAL MEMBER
              Level 14 - Sportscandy
              • Oct 2023
              • 300

              #7
              Originally posted by chuft
              Apple tells the UK government to take a hike:
              Thanks for the update. Good to see Apple sticking to their guns.


              • chuft
                Stepher
                SPECIAL MEMBER
                MODERATOR
                Level 33 - New Superhero
                • Dec 2007
                • 4470

                #8
And back to the bad guys:


                Googleโ€™s Sergey Brin Says Engineers Should Work 60-Hour Weeks in Office to Build AI That Could Replace Them



                l i t t l e s t e p h e r s


                • BRBFBI
                  The Long Arm of the Law
                  SPECIAL MEMBER
                  Level 14 - Sportscandy
                  • Oct 2023
                  • 300

                  #9
My mom uses ChatGPT in place of a search engine. She says she only uses it for trivial tasks. She saves emotional and personal topics for Anthropic's Claude.

[Screenshot: IMG_5001_1.jpg]


                  • chuft
                    Stepher
                    SPECIAL MEMBER
                    MODERATOR
                    Level 33 - New Superhero
                    • Dec 2007
                    • 4470

                    #10
                    Linguistics professor Noam Chomsky on large language models (echoes of Wittgenstein):

                    "Indeed, such programs are stuck in a prehuman or nonhuman phase of cognitive evolution. Their deepest flaw is the absence of the most critical capacity of any intelligence: to say not only what is the case, what was the case and what will be the case โ€” thatโ€™s description and prediction โ€” but also what is not the case and what could and could not be the case. Those are the ingredients of explanation, the mark of true intelligence."

                    https://www.nytimes.com/2023/03/08/o...smid=url-share

                    (gifted article so no paywall)


                    l i t t l e s t e p h e r s


                    • BRBFBI
                      The Long Arm of the Law
                      SPECIAL MEMBER
                      Level 14 - Sportscandy
                      • Oct 2023
                      • 300

                      #11
                      Originally posted by chuft
                      Linguistics professor Noam Chomsky on large language models (echoes of Wittgenstein):

                      "Indeed, such programs are stuck in a prehuman or nonhuman phase of cognitive evolution. Their deepest flaw is the absence of the most critical capacity of any intelligence: to say not only what is the case, what was the case and what will be the case โ€” thatโ€™s description and prediction โ€” but also what is not the case and what could and could not be the case. Those are the ingredients of explanation, the mark of true intelligence."

                      https://www.nytimes.com/2023/03/08/o...smid=url-share

                      (gifted article so no paywall)

I can't quite grasp his point. Let me break down one paragraph to explain what I mean, with my commentary in brackets.

"For instance, a young child acquiring a language is developing - unconsciously, automatically and speedily from minuscule data - a grammar, a stupendously sophisticated system of logical principles and parameters." [While linguists have broken languages down into "logical principles and parameters," that's not how a child (or anyone without formal grammar training) understands it. Children get stuff wrong all the time, like saying speeded instead of sped, because the past tense of speed is irregular. Brute repetition is the only way to learn. I also take issue with "from minuscule data." I'm not sure how much data is required to train an AI or how it compares to a human, but the amount of language input children get is hardly minuscule.] "This grammar can be understood as an expression of the innate, genetically installed 'operating system' that endows humans with the capacity to generate complex sentences and long trains of thought." [Sort of a vague metaphor based on his opinion rather than any evidence presented.] "When linguists seek to develop a theory for why a given language works as it does ('Why are these - but not those - sentences considered grammatical?'), they are building consciously and laboriously an explicit version of the grammar that the child builds instinctively and with minimal exposure to information. The child's operating system is completely different from that of a machine learning program." [Unlike a linguist, a child learns a subconscious list of probabilities (again, if you didn't know any better you'd guess the past tense of drink is drinked; only through repetition can you learn it's actually drank). Nothing presented here has convinced me of the conclusion that "the child's operating system is completely different from that of a machine learning program."]

                      I'm not able to rationalize or cross-check the arguments being presented with anything I know, and there's no information in the article to convince me of anything, just Noam Chomsky's opinion. I'm sure he's a very smart guy, but not very talented at making his point to us simpleminded folk (me).


                      • chuft
                        Stepher
                        SPECIAL MEMBER
                        MODERATOR
                        Level 33 - New Superhero
                        • Dec 2007
                        • 4470

                        #12
                        I am no linguist but I have read a bit of Wittgenstein (philosophy of language) - his famous saying is "The world is all that is the case," the first proposition in his book "Tractatus Logico-Philosophicus."


Chomsky is talking about grammar - how thoughts are communicated (or even built - words allow thinking in the abstract) via sentence structure and various other rules like word endings. For example, in English grammar "man bites dog" and "dog bites man" mean very different things because word order matters. In Russian, by contrast, the word endings determine which noun is the subject and which the object, so word order is irrelevant. Well, it might be used for nuance - for example, stressing that a dog bit the man, as compared to something else biting him; or that a man was bitten, rather than a woman or another dog; or that it was a bite, rather than a scratch or some other action - but it is not used to show who bit whom.


Things like speeded vs. sped are vocabulary, not grammar. They are arbitrary exceptions to the rules for particular words and do not follow a structure, which is why you just have to memorize non-standard words and their cases/declensions/tenses etc. There can be systems for these word endings, but inevitably there are exceptions which must be memorized, similar to the way idioms can be ungrammatical and must be memorized - "make believe," "I could care less," "same difference," etc. It took me years to understand the grammar of "You can't have your cake and eat it too," although I knew what it meant. The child has internalized the grammar but just doesn't know all the exceptions.


"This grammar can be understood as an expression of the innate, genetically installed 'operating system' that endows humans with the capacity to generate complex sentences and long trains of thought." [Sort of a vague metaphor based on his opinion rather than any evidence presented.]
                        I think here he is referencing studies of animals as compared to humans, as the claim is often made that animals use language of a sort, but there is no evidence they have the innate brain structure to learn grammar or that they can employ grammar. A crow might figure out it can drop stones into a partially filled pitcher of water to raise the water level high enough to get a drink - an extraordinary feat - but it can't communicate this to another crow short of performing the action in front of it.

I don't know how large language models work internally (in some ways I think nobody does), but they seem to aggregate words based on how the model sees them grouped in its samples: these words often follow those words in response to some other words. It doesn't "understand" the meaning of the words, just that they typically go together. It can substitute words you ask about into these groupings, which often leads to false statements or "hallucinations."
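
To make that concrete, here is a toy sketch of the idea (my own illustration, in Python - real models are neural networks trained on billions of words, not word-pair counts, but the spirit of "predict the next word from observed groupings, with no notion of truth" is similar):

```python
from collections import Counter, defaultdict

# Toy illustration only: count which word follows which in some sample text,
# then generate by always emitting the most common follower. Nothing here
# "knows" whether cats can actually fly - it just reproduces groupings it
# has seen in its sample text.
training_text = "cats fly through the air . cats fly to the moon . cats sleep ."

follows = defaultdict(Counter)
words = training_text.split()
for current, nxt in zip(words, words[1:]):
    follows[current][nxt] += 1

def continue_text(word, length=5):
    out = [word]
    for _ in range(length):
        if word not in follows:
            break
        word = follows[word].most_common(1)[0][0]  # most frequent follower
        out.append(word)
    return " ".join(out)

print(continue_text("cats"))  # prints something like: cats fly through the air .
```

It happily continues "cats fly ..." because that grouping dominates its sample; it has no way to object that the sentence cannot be the case.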

A human understands the words. That is why a human can determine not only what is the case (a true fact), what was the case, and what will be the case (the sun will rise tomorrow), but also what can and cannot be the case (we will not have two suns tomorrow). For example, a human knows that since President Kennedy died in 1963, he cannot be seen driving a car in 1980. It is impossible. A human would never seriously say Kennedy was seen driving a car in 1980 and mean it literally. This does not rely on someone telling him that Kennedy, being dead, cannot drive a car 17 years later. The human can reason to this result using grammar in his mind.

An LLM does not "know" anything. If you ask it why cats fly so slowly, it relies on articles on the web about cats flying through the air to answer. If you ask it why cats fly to the moon, it will produce language based on an article where a cat was used in a space mission. It is just doing this because these are examples of where these words were grouped together; it does not understand that cats cannot fly, let alone travel through space. A human would say "cats cannot fly, and they especially cannot fly through space."

A child can interact with a cat and understand that cats cannot fly. This does not require the enormous training data an LLM needs, which only works if there is data on almost every conceivable thing. An LLM cannot deduce that a cat cannot fly; it has to rely on internet articles about cats happening to cover instances in which cats were taken on vehicles, or on somebody answering a question about cats and their aerodynamics.



[Screenshot: Capture.png]



Note the articles it is relying upon. Without these articles it could not answer at all, despite millions of articles about cats in general. It does not understand that cats flying "cannot be the case," as Wittgenstein would say, despite an enormous amount of data about cats - far more than any human child would have access to.




[Screenshot: Capture2.png]



Aspirin does not cause weight gain. This answer shows the LLM does not really know what it is talking about: it is failing to find a specific grouping ("aspirin and weight gain"), so it falls back on a more general one ("aspirin is a medicine" and "medicines and weight gain") and gives this general response. It should say "aspirin does not cause weight gain," but without finding anything on the internet specifically saying this, it is at a loss. It cannot reason from the fact that weight gain is never listed as a side effect of aspirin use.




[Screenshot: Capture3.png]


[Screenshot: Capture4.png]



                        It didn't take much to make it contradict itself in the first sentence of each response. These word groupings are coming from different sources. ChatGPT doesn't understand that it is contradicting itself because it does not understand the grammar and meaning of the words it is using.

                        A child, shown one flying dinosaur, would understand that "some dinosaurs being able to fly" "was the case" and that would influence all future thinking by the child about dinosaurs and flying - what "could be the case and what could not". ChatGPT cannot say what is possible and not possible, so it will contradict itself or make wrong answers frequently.



As an aside, in science (and weirdly in finance) the term "black swan" is often used as shorthand for what can be the case. If your hypothesis is that all swans are white, it only takes finding one black swan to prove that "all swans are white" is in fact not the case, and that a swan being black can be the case. A "black swan event" in finance is an unforeseen event that radically changes perception of what can be the case. For example, economics textbooks had to be revised in the post-2008 period as some governments started offering (and selling) negative interest rate bonds. A negative interest rate bond means you are paying the government for the privilege of lending it your money. This was widely considered impossible in economics, but then it happened in the real world, and economics itself had to be revised to account for it.

                        https://trerc.tamu.edu/article/negative-interest-rates/
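
To put rough numbers on it (my own illustrative figures, not from that article): if you pay 102 today for a bond that repays exactly 100 in one year, your yield is 100/102 - 1, about negative 1.96 percent.

```python
# Illustrative numbers only: a one-year bond bought for more than it repays
# has a negative yield - you get back less than you put in.
price_paid = 102.0   # paid to the government today (hypothetical figure)
face_value = 100.0   # repaid by the government in one year

annual_yield = face_value / price_paid - 1
print(f"Yield: {annual_yield:.2%}")  # Yield: -1.96%
```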


An LLM cannot make use of a black swan to reason that all swans need not be white. It cannot state what could and could not be the case, because it cannot think, and abstract thinking (thinking about things not actually in your direct experience) uses grammar.


                        l i t t l e s t e p h e r s


                        • chuft
                          Stepher
                          SPECIAL MEMBER
                          MODERATOR
                          Level 33 - New Superhero
                          • Dec 2007
                          • 4470

                          #13
                          It occurs to me I probably should have introduced that article a little better. From Wiki:


Avram Noam Chomsky (born December 7, 1928) is an American professor and public intellectual known for his work in linguistics, political activism, and social criticism. Sometimes called "the father of modern linguistics", Chomsky is also a major figure in analytic philosophy and one of the founders of the field of cognitive science. He is a laureate professor of linguistics at the University of Arizona and an institute professor emeritus at the Massachusetts Institute of Technology (MIT). Among the most cited living authors, Chomsky has written more than 150 books on topics such as linguistics, war, and politics.
                          l i t t l e s t e p h e r s


                          • BRBFBI
                            The Long Arm of the Law
                            SPECIAL MEMBER
                            Level 14 - Sportscandy
                            • Oct 2023
                            • 300

                            #14
Thanks for explaining that in a way I could understand. I find philosophical writing is sometimes really hard for me to follow. I think to academics words have very precise meanings, and if you're not well read on a topic, a lot of the nuance slips by. Your use of examples was really helpful.


                            • chuft
                              Stepher
                              SPECIAL MEMBER
                              MODERATOR
                              Level 33 - New Superhero
                              • Dec 2007
                              • 4470

                              #15
You're welcome. Hopefully I'm not too far off base. I would not be surprised if LLMs had dictionaries so they could classify words as nouns, verbs, adjectives, etc. by comparing them against the dictionary, and perhaps thesauruses too. This would facilitate replacing one word with another in their responses. In that sense they could be using grammar (put a noun here, like Mad Libs) without understanding grammar. As we know, the substitutions often make no sense or are wrong.
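
Something like this is what I imagine - pure speculation on my part, a Mad Libs style slot filler (the word lists and template are made up for illustration):

```python
import random

# Pure speculation / illustration: fill grammatical slots from word lists.
# Each slot gets the right part of speech, so the output always has the
# right form, but nothing checks whether it makes any sense.
nouns = ["cat", "aspirin", "burrito", "dinosaur"]
verbs = ["causes", "gives", "eats", "explains"]
objects = ["weight gain", "diarrhea", "the moon", "a headache"]

template = "The {noun} {verb} {obj}."
print(template.format(
    noun=random.choice(nouns),
    verb=random.choice(verbs),
    obj=random.choice(objects),
))  # e.g. "The aspirin eats the moon." - well-formed, but nonsense
```

Grammatical in form, nonsense in content - which is roughly what the screenshot below shows.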



[Screenshot: Capture5.png]


                              Does Chipotle give Bing diarrhea
                              l i t t l e s t e p h e r s
