Easy Way shares my view of AI

  • chuft
    Stepher
    SPECIAL MEMBER
    MODERATOR
    Level 32 - Secret Agent
    • Dec 2007
    • 3504

    #46
    Apple tells the UK government to take a hike:

    "Two years after Apple introduced an encrypted storage feature for iPhone users, the company is pulling those security protections in Britain rather than comply with a government request that it create a tool to give law enforcement organizations access to customers’ cloud data."


    https://www.nytimes.com/2025/02/21/t...smid=url-share
    l i t t l e s t e p h e r s


    • BRBFBI
      The Long Arm of the Law
      SPECIAL MEMBER
      Level 10 - LazyTowner
      • Oct 2023
      • 138

      #47
      Originally posted by chuft
      Apple tells the UK government to take a hike:
      Thanks for the update. Good to see Apple sticking to their guns.


      • chuft
        Stepher
        SPECIAL MEMBER
        MODERATOR
        Level 32 - Secret Agent
        • Dec 2007
        • 3504

        #48
        And back to the bad guys


        Google’s Sergey Brin Says Engineers Should Work 60-Hour Weeks in Office to Build AI That Could Replace Them



        l i t t l e s t e p h e r s


        • BRBFBI
          The Long Arm of the Law
          SPECIAL MEMBER
          Level 10 - LazyTowner
          • Oct 2023
          • 138

          #49
          My mom uses ChatGPT in place of a search engine. She says she only uses it for trivial tasks. She saves emotional and personal topics for Anthropic's Claude.

          [Attached image: IMG_5001_1.jpg]


          • chuft
            Stepher
            SPECIAL MEMBER
            MODERATOR
            Level 32 - Secret Agent
            • Dec 2007
            • 3504

            #50
            Linguistics professor Noam Chomsky on large language models (echoes of Wittgenstein):

            "Indeed, such programs are stuck in a prehuman or nonhuman phase of cognitive evolution. Their deepest flaw is the absence of the most critical capacity of any intelligence: to say not only what is the case, what was the case and what will be the case — that’s description and prediction — but also what is not the case and what could and could not be the case. Those are the ingredients of explanation, the mark of true intelligence."

            https://www.nytimes.com/2023/03/08/o...smid=url-share

            (gifted article so no paywall)


            l i t t l e s t e p h e r s


            • BRBFBI
              The Long Arm of the Law
              SPECIAL MEMBER
              Level 10 - LazyTowner
              • Oct 2023
              • 138

              #51
              Originally posted by chuft
              Linguistics professor Noam Chomsky on large language models (echoes of Wittgenstein):

              "Indeed, such programs are stuck in a prehuman or nonhuman phase of cognitive evolution. Their deepest flaw is the absence of the most critical capacity of any intelligence: to say not only what is the case, what was the case and what will be the case — that’s description and prediction — but also what is not the case and what could and could not be the case. Those are the ingredients of explanation, the mark of true intelligence."

              https://www.nytimes.com/2023/03/08/o...smid=url-share

              (gifted article so no paywall)

              I can't quite grasp his point. Let me break down one paragraph to explain what I mean, with my commentary in brackets.

              "For instance, a young child acquiring a language is developing — unconsciously, automatically and speedily from minuscule data — a grammar, a stupendously sophisticated system of logical principles and parameters. While linguists have broken languages down into "logical principles and parameters," that's not how a child (or anyone without formal grammar training) understands it. Children get stuff wrong all the time, like saying speeded instead of sped because the past tense of speed is irregular. Brute repetition is the only way to learn. I also take issue with "from minuscule data." I'm not sure how much data is require to train an AI and how it compares to a human, but the amount of language input children get is hardly minuscule. This grammar can be understood as an expression of the innate, genetically installed “operating system” that endows humans with the capacity to generate complex sentences and long trains of thought. Sort of a vague metaphor based on his opinion rather than any evidence presented. When linguists seek to develop a theory for why a given language works as it does (“Why are these — but not those — sentences considered grammatical?”), they are building consciously and laboriously an explicit version of the grammar that the child builds instinctively and with minimal exposure to information. The child’s operating system is completely different from that of a machine learning program." Unlike a linguist, a child learns a subconscious list of probabilities (again, if you didn't know any better you'd guess the past tense of drink is drinked, only through repetition can you learn it's actually drank).Nothing presented here has convinced me of the conclusion that "the child's operating system is completely different from that of a machine learning program."

              I'm not able to rationalize or cross-check the arguments being presented with anything I know, and there's no information in the article to convince me of anything, just Noam Chomsky's opinion. I'm sure he's a very smart guy, but not very talented at making his point to us simpleminded folk (me).


              • chuft
                Stepher
                SPECIAL MEMBER
                MODERATOR
                Level 32 - Secret Agent
                • Dec 2007
                • 3504

                #52
                I am no linguist but I have read a bit of Wittgenstein (philosophy of language) - his famous saying is "The world is all that is the case," the first proposition in his book "Tractatus Logico-Philosophicus."


                Chomsky is talking about grammar - how thoughts are communicated (or even built - words allow thinking in the abstract) via sentence structure and various other rules like word endings. For example, in English grammar "man bites dog" and "dog bites man" mean very different things because word order matters. In Russian, by contrast, the word endings determine which noun is the subject and which the object, so word order is irrelevant. Well, it might be used for nuance - for example, stressing that a dog bit the man as compared to something else biting him, or that a man was bitten rather than a woman or another dog, or that it was a bite rather than a scratch or some other action - but it is not used to show who bit whom.


                Things like speeded vs sped are vocabulary, not grammar. They are arbitrary exceptions to the rules, for particular words, and do not follow a structure, which is why you just have to memorize non-standard words and their cases/declensions/tenses etc. There can be systems for these word endings but inevitably there are exceptions which must be memorized, similar to the way idioms can be ungrammatical and must be memorized - "make believe" "I could care less" "same difference" etc. It took me years to understand the grammar of "You can't have your cake and eat it too" although I knew what it meant. The child has internalized the grammar but just doesn't know all the exceptions.


                Originally posted by BRBFBI
                This grammar can be understood as an expression of the innate, genetically installed “operating system” that endows humans with the capacity to generate complex sentences and long trains of thought. [Sort of a vague metaphor based on his opinion rather than any evidence presented.]
                I think here he is referencing studies of animals as compared to humans, as the claim is often made that animals use language of a sort, but there is no evidence they have the innate brain structure to learn grammar or that they can employ grammar. A crow might figure out it can drop stones into a partially filled pitcher of water to raise the water level high enough to get a drink - an extraordinary feat - but it can't communicate this to another crow short of performing the action in front of it.

                I don't know how large language models work internally (in some ways I think nobody does) but they seem to aggregate words based on how the model sees them grouped in its samples. These words are often followed by those words in response to some other words. It doesn't "understand" the meaning of the words, just that they typically go together. It can substitute words you ask about in these groupings which often leads to these false statements or "hallucinations."
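
                To make that idea concrete, here is a toy sketch I put together of "these words are often followed by those words." It is not how any real LLM is built (real models are vastly bigger and more sophisticated), just an illustration of predicting the next word purely from word groupings seen in some sample text:

import random
from collections import defaultdict, Counter

# Toy next-word model: count which word follows which in the sample text.
# This only illustrates "words that go together" -- the model ends up with
# statistics about word groupings, not an understanding of the world.
sample_text = (
    "the cat sat on the mat . "
    "the cat chased the mouse . "
    "the astronaut flew to the moon . "
    "the cat flew on the plane ."
)

follows = defaultdict(Counter)
words = sample_text.split()
for current, nxt in zip(words, words[1:]):
    follows[current][nxt] += 1

def generate(start, length=8):
    """Produce text by repeatedly picking a likely next word."""
    out = [start]
    for _ in range(length):
        options = follows.get(out[-1])
        if not options:
            break
        nxt = random.choices(list(options), weights=list(options.values()))[0]
        out.append(nxt)
    return " ".join(out)

print(generate("the"))
# Can print "the cat flew to the moon ." -- it reads fine, because
# "cat flew" and "flew to the moon" both occur in the samples, but nothing
# in the model knows that a cat flying to the moon cannot be the case.

                Scale that idea up enormously and you get something far more fluent, but (as far as I understand it) still built on groupings rather than on knowing what can and cannot be the case.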

                A human understands the words. That is why a human can determine not only what is the case (what is a true fact), what was the case, and what will be the case (the sun will rise tomorrow), but also what can and cannot be the case (we will not have two suns tomorrow). For example, a human knows that since President Kennedy died in 1963, he cannot have been seen driving a car in 1980. It is impossible. A human would never seriously say Kennedy was seen driving a car in 1980 and mean it literally. This does not rely on someone telling him that Kennedy, being dead, cannot drive a car 17 years later. The human can reason to this result using grammar in his mind.

                An LLM does not "know" anything. If you ask it why cats fly so slowly, it relies on articles on the web about cats flying through the air to answer. If you ask it why cats fly to the moon, it will produce language based on an article where a cat was used in a space mission. It is just doing this because these are examples of where these words were grouped together; it does not understand that cats cannot fly, nor travel through space. A human would say "cats cannot fly and they especially cannot fly through space."

                A child can interact with a cat and understand that cats cannot fly. This does not require the enormous training data an LLM requires, which only works if there is data on almost every conceivable thing. An LLM cannot deduce that a cat cannot fly; it has to rely on internet articles about cats happening to cover instances in which cats were taken on vehicles, or somebody answering a question about cats and their aerodynamics.



                [Attached screenshot: Capture.png]



                Note the articles it is relying upon. Without these articles it could not answer at all, despite millions of articles about cats in general. It does not understand that cats flying "cannot be the case," as Wittgenstein would say, despite an enormous amount of data about cats - far more than any human child would have access to.




                [Attached screenshot: Capture2.png]



                Aspirin does not cause weight gain. This answer shows the LLM does not really know what it is talking about; it is just failing to find a specific grouping ("aspirin and weight gain"), so it finds a more general one ("aspirin is a medicine" and "medicines and weight gain") and gives this general response. It should say "aspirin does not cause weight gain," but without finding anything on the internet specifically saying this, it is at a loss. It cannot reason from the fact that weight gain is never listed as a side effect of aspirin use.




                [Attached screenshot: Capture3.png]


                [Attached screenshot: Capture4.png]



                It didn't take much to make it contradict itself in the first sentence of each response. These word groupings are coming from different sources. ChatGPT doesn't understand that it is contradicting itself because it does not understand the grammar and meaning of the words it is using.

                A child, shown one flying dinosaur, would understand that "some dinosaurs being able to fly" "was the case," and that would influence all future thinking by the child about dinosaurs and flying - what "could be the case and what could not". ChatGPT cannot say what is possible and not possible, so it will contradict itself or give wrong answers frequently.



                As an aside, in science (and weirdly in finance) the term "black swan" is often used as a shorthand for what can be the case. If your hypothesis is that all swans are white, it only takes finding one black swan to prove that "all swans are white" is in fact not the case, and that a swan being black can be the case. A "black swan event" in finance is an unforeseen event that radically changes perception of what can be the case. For example economics textbooks had to be revised in the post-2008 period as some governments started offering (and selling) negative interest rate bonds. A negative interest rate bond means you are paying the government to borrow your money. This was widely considered impossible in economics, but then it happened in the real world, and economics itself had to be revised to account for this occurrence.

                https://trerc.tamu.edu/article/negative-interest-rates/
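
                To put rough numbers on it (made-up figures, just to illustrate what a negative yield means): if you pay 102 today for a bond that pays back 100 in a year, your return is negative - you are effectively paying the issuer to hold your money.

price, redemption = 102.0, 100.0          # made-up illustrative numbers
yield_1yr = (redemption - price) / price  # simple one-year yield
print(f"{yield_1yr:.2%}")                 # -1.96%: you lose money for lending it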


                An LLM cannot make use of a black swan to reason that all swans need not be white. It cannot state what could and could not be the case, because it cannot think, and abstract thinking (thinking about things not actually in your direct experience) uses grammar.


                l i t t l e s t e p h e r s


                • chuft
                  Stepher
                  SPECIAL MEMBER
                  MODERATOR
                  Level 32 - Secret Agent
                  • Dec 2007
                  • 3504

                  #53
                  It occurs to me I probably should have introduced that article a little better. From Wiki:


                  Avram Noam Chomsky (born December 7, 1928) is an American professor and public intellectual known for his work in linguistics, political activism, and social criticism. Sometimes called "the father of modern linguistics", Chomsky is also a major figure in analytic philosophy and one of the founders of the field of cognitive science. He is a laureate professor of linguistics at the University of Arizona and an institute professor emeritus at the Massachusetts Institute of Technology (MIT). Among the most cited living authors, Chomsky has written more than 150 books on topics such as linguistics, war, and politics.
                  l i t t l e s t e p h e r s


                  • BRBFBI
                    The Long Arm of the Law
                    SPECIAL MEMBER
                    Level 10 - LazyTowner
                    • Oct 2023
                    • 138

                    #54
                    Thanks for explaining that in a way I could understand. I find that philosophical writings are sometimes really hard for me to understand. I think to academics words have very precise meanings, and if you're not well read on a topic then a lot of the nuance slips by. Your use of examples was really helpful.


                    • chuft
                      Stepher
                      SPECIAL MEMBER
                      MODERATOR
                      Level 32 - Secret Agent
                      • Dec 2007
                      • 3504

                      #55
                      You're welcome. Hopefully I'm not too far off base. I would not be surprised if LLMs had dictionaries so they could classify words as nouns, verbs, adjectives, etc. by comparing them to the dictionary. Perhaps thesauruses too. This would facilitate replacing one word with another in its responses. In that sense they could be using grammar (put a noun here, like Mad Libs), but not understanding grammar. As we know, the substitutions often make no sense or are wrong.
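
                      Just to show what I am picturing with the Mad Libs idea (again, a made-up toy, not a claim about how Bing or ChatGPT actually work internally), something like this will happily produce sentences that are grammatical in form but meaningless, because nothing ever checks whether the filled-in words make sense together:

import random

# Hypothetical mini dictionary mapping parts of speech to words.
dictionary = {
    "noun": ["cat", "aspirin", "moon", "burrito"],
    "verb": ["causes", "eats", "orbits"],
    "adjective": ["slow", "encrypted", "spicy"],
}

template = "The {adjective} {noun} {verb} the {noun}."

def fill(template):
    """Replace each {part-of-speech} slot with a random word of that type."""
    result = template
    while "{" in result:
        start = result.index("{")
        end = result.index("}", start)
        part_of_speech = result[start + 1:end]
        word = random.choice(dictionary[part_of_speech])
        result = result[:start] + word + result[end + 1:]
    return result

print(fill(template))
# e.g. "The encrypted aspirin orbits the burrito." -- the slots are filled
# correctly, but the meaning is nonsense: using grammar without understanding it.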



                      [Attached screenshot: Capture5.png]


                      Does Chipotle give Bing diarrhea
                      l i t t l e s t e p h e r s


                      • chuft
                        Stepher
                        SPECIAL MEMBER
                        MODERATOR
                        Level 32 - Secret Agent
                        • Dec 2007
                        • 3504

                        #56
                        Everything you say to your Echo will be sent to Amazon starting on March 28



                        https://arstechnica.com/gadgets/2025...g-on-march-28/
                        l i t t l e s t e p h e r s


                        • possessor
                          I like LazyTown.
                          SPECIAL MEMBER
                          Level 31 - Number 9
                          • Oct 2021
                          • 3039

                          #57
                          Originally posted by chuft
                          Everything you say to your Echo will be sent to Amazon starting on March 28



                          https://arstechnica.com/gadgets/2025...g-on-march-28/
                          Oh, brother this GUY- I mean.. COMPANY STINKS

                          But srsly, is there a genuine reason that this will be useful??
                          As we continue to expand Alexa’s capabilities with generative AI features that rely on the processing power of Amazon’s secure cloud, we have decided to no longer support this feature.
                          Yeah, because AI evolution is much more important than privacy.. smh
