Easy Way shares my view of AI

  • BRBFBI
    The Long Arm of the Law
    GETLAZY MEMBER
    Level 10 - LazyTowner
    • Oct 2023
    • 101

    #31
    Originally posted by chuft
    I don't think ethics are naive. And that was first posted in 2023, not very long ago at all. I don't want to end up in some creepy surveillance state like China has so Google can make even more money.
    Neither do I want to live in a surveillance state. I don't think ethics are naive, but Google was naive to think they (and a small number of other US companies) had a technological moat around AI large enough that they could shape how AI would be used. And I think people who had faith in Google's promises were naive about the fact that a publicly traded corporation's singular goal is to return value to its shareholders. Any code of ethics that gets in the way of that is going to quickly go away.


    • chuft
      Stepher
      SPECIAL MEMBER
      MODERATOR
      Level 31 - Number 9
      • Dec 2007
      • 3385

      #32
      I don't think that's true. Most if not all companies of any size have codes of ethics. I think a perfect comparison to Google is Apple.


      Our Values


      “We believe that business, at its best, serves the public good, empowers people around the world, and binds us together as never before.”


      -Apple CEO Tim Cook


      Apple are the good guys. Anybody in the privacy field will tell you Apple is by far the best company if you want to keep your information private. The encryption keys for things like passwords and text messages are kept on the user's device. iMessages have always had end-to-end encryption; Apple does not have access to them. Google, on the other hand, is in the business of selling your information. They have been caught selling Gmail user content to outside companies, in addition to facilitating cross-website tracking of users.

      There are many ethical companies out there. Google is not one of them. They used to be better - they left China rather than let Chinese censorship and hacking of activists' Gmail accounts continue - but they have gone down a dark hole since then.

      If companies were just focused on returning value to shareholders, they wouldn't pay their executives so much....

      A good code of ethics among other things keeps companies from doing things that land them in the news with bad publicity, which is bad for the company's reputation and thus hurts shareholders. It also lets you attract better quality employees.
      l i t t l e s t e p h e r s


      • BRBFBI
        The Long Arm of the Law
        GETLAZY MEMBER
        Level 10 - LazyTowner
        • Oct 2023
        • 101

        #33
        Originally posted by chuft
        A good code of ethics among other things keeps companies from doing things that land them in the news with bad publicity, which is bad for the company's reputation and thus hurts shareholders. It also lets you attract better quality employees.
        I think that's true, and I also think it requires a strong leadership team to take the long view. I've been surprised how shortsighted some corporate decisions can be (e.g. Google making their search results worse to increase ad views [no matter how hard I search I cannot seem to find your post on that right now]). I think in the business world, with the revolving door of CEOs, there's an incentive to do whatever will boost quarterly profits and give hard numbers to show the board of directors. Taking the long view requires conviction that I don't think a lot of corporate structures are designed to accept. I suppose that was the point of your above post: that Google is hurting their reputation to make some money on AI for weapons and surveillance tools. My point is that I'm not betting on a corporation to do the right thing.


        • BRBFBI
          The Long Arm of the Law
          GETLAZY MEMBER
          Level 10 - LazyTowner
          • Oct 2023
          • 101

          #34
          On the topic of Apple:

          I have an iPhone. As someone who likes having control over my files, they've always rubbed me the wrong way (I hate how hard it is to move photos on and off of them, and they don't let you expand your storage with a microSD card like Android does). If they didn't have a good reputation for security, that would have been all the excuse I'd need to leave them.

          However, I've heard that Apple Intelligence (starting with iPhone 16) potentially circumvents encryption.

          To explain this, let's go back to Apple's plan to scan iCloud photos for CSAM. To do this, Apple would use a hashing algorithm included in your device's OS. When a user uploads an image to iCloud, the device first hashes the photo and compares it to an on-device database of known CSAM hashes. If there is a match, it sends this information along with the photo to iCloud. "Apple says the system does not work for users who have iCloud Photos disabled, though it’s not totally clear if that scanning is only performed on images uploaded to iCloud Photos, or all images are scanned and compared but the results of the scan (a hash match or not) are only sent along with the photo when it’s uploaded to iCloud Photos" (MacWorld).

          To be clear, this isn't a "smart" (AI) system, it's a simple hashing program, but it circumvents encryption by scanning photos on a user's device and then sending the results of the scan to Apple. This type of system is known as Client Side Scanning. While the intention was to detect CSAM, there is nothing technical stopping Apple from adding hashes for known images of, say, Tiananmen Square or LGBT content in a software update.
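
          To make the mechanics concrete, here's a rough Python sketch of that flow. To be clear, this is just my own illustration, not Apple's actual code: I'm using a plain SHA-256 where Apple used a perceptual hash (NeuralHash), and the hash database and upload format here are made up.

          ```python
          # Rough sketch of client-side scanning, as I understand the design.
          # Assumptions: SHA-256 stands in for Apple's perceptual NeuralHash, and the
          # hash database and upload format are invented for illustration only.
          import hashlib
          from pathlib import Path

          # Hypothetical database of known-bad hashes, shipped inside the OS.
          KNOWN_HASHES = {
              "9f86d081884c7d659a2feaa0c55ad015a3bf4f1b2b0b822cd15d6c15b0f00a08",
          }

          def hash_photo(path: Path) -> str:
              """On-device hashing step (SHA-256 here, not a perceptual hash)."""
              return hashlib.sha256(path.read_bytes()).hexdigest()

          def upload_to_cloud(path: Path) -> dict:
              """Hash locally, compare locally, then send the photo plus the result.

              The comparison happens on the device, before any encryption, so the
              provider never has to decrypt the photo to learn whether it matched.
              """
              digest = hash_photo(path)
              return {
                  "photo_bytes": path.read_bytes(),  # would be encrypted in transit/at rest
                  "matched_known_hash": digest in KNOWN_HASHES,
              }

          if __name__ == "__main__":
              demo = Path("example.jpg")
              demo.write_bytes(b"pretend this is a photo")
              print(upload_to_cloud(demo)["matched_known_hash"])
          ```

          The point is that nothing in this flow depends on what's in the hash list; swapping in hashes of political images would require no new technology, only a changed database.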

          Apple recognized the potential for abuse of this system and, in the end, chose not to implement it. According to Erik Neuenschwander, Apple's director of user privacy and child safety, “Scanning every user’s privately stored iCloud data would create new threat vectors for data thieves to find and exploit. It would also inject the potential for a slippery slope of unintended consequences. Scanning for one type of content, for instance, opens the door for bulk surveillance and could create a desire to search other encrypted messaging systems across content types.”

          Which brings us to Apple Intelligence. Unlike a hashing algorithm, which has a very specific function, Apple Intelligence processes everything on your device, all the time. Some claims about Apple Intelligence:

          -Writing Tools: AI will be able to rewrite and proofread your text anywhere you write it, even third-party apps. This will be especially useful when sending an email or writing an article or social media post.

          -Onscreen awareness: Now Siri will know what’s on your screen and can assist accordingly. For example, you might say, “Add this address to their contact card.”

          -Personalized assistance: Siri can perform new actions with a deep awareness of your personal context. For example, you can say, “Play that song my wife sent me the other day.”

          -Summaries of content: Whether in a transcript from a meeting or when viewing an email, AI will summarize content for you.
          (9to5mac)

          This is essentially the hashing algorithm but for everything that happens on your device. If you can see it or hear it, so can Apple Intelligence. While iMessage might still be encrypted and Apple can still (correctly) claim they can't decrypt your messages, that encryption is effectively circumvented if Apple Intelligence can read them directly from your device. Theoretically, a government could demand that Apple Intelligence flag users who discuss, view, engage in, or display tendencies toward anything deemed illegal. In the context of the old CSAM scanning system, "Apple claimed that it would never have allowed this [government overreach], but the promise was predicated on Apple having the legal freedom to refuse, which would simply not be the case. In China, for example, Apple has been legally required to remove VPN, news, and other apps, and to store the iCloud data of Chinese citizens on a server owned by a government-controlled company." (9to5mac)

          Now, as you've pointed out, Apple has a great track record of protecting user privacy, but as I understand it, Microsoft's Copilot+ PCs will be basically the same thing, and I don't trust Microsoft nearly as much, which is why I've been using Linux for years. Regardless of who you trust, Client Side Scanning is a disturbing development for digital privacy. I suspect you already knew a lot of this, but it was helpful for me to collect my thoughts by writing them out. I'm attaching a research paper on Client Side Scanning in case you want to do some follow-up reading.
          Attached Files


          • chuft
            Stepher
            SPECIAL MEMBER
            MODERATOR
            Level 31 - Number 9
            • Dec 2007
            • 3385

            #35
            I think AI is the devil and I would disable it the instant I got a new iPhone (I have a 15 which thankfully can't use it).

            I didn't know that about how it would work, but I never looked into it because I have zero interest in using AI of any kind, especially on my personal devices, from Apple or anybody else. By its nature it is evil in every way, built on theft and spying.

            I am using Windows 10 at the moment, but am seriously considering moving to Linux rather than Windows 11. My summer project may be building a Linux PC.



            It is quite easy to download photos from an iPhone to Windows; I don't know about Linux. It used to be easy to get other documents on and off it, like PDFs, but then they took that away as part of the enshittification of iTunes to try to get you to use iCloud and Apple Music. I can access my photos via Windows Explorer, or I can use a utility that comes with Windows to download photos, which shows a thumbnail of each one and lets you select which to download. It used to remember which ones you had already downloaded, but it seems to have lost that feature or it doesn't work reliably.

            But in general, Apple's philosophy in the past decade has been "everybody should use wireless and the cloud for everything" - they dislike putting ports on Macs, for example, and stopped including CD/DVD drives long before anybody else - so they just expect you to use iCloud to transfer photos between devices. Microsoft is the same with OneDrive. All companies have recognized the profitability of subscription services and try to push them.

            In fairness, all my younger coworkers look at me like I'm from another planet when I mention I rip my old CDs and load MP3s on my iPhone. They all stream and listen to whatever the algorithm feeds them, or pay for the (temporary) privilege of getting to pick what they want to listen to.

            l i t t l e s t e p h e r s


            • chuft
              Stepher
              SPECIAL MEMBER
              MODERATOR
              Level 31 - Number 9
              • Dec 2007
              • 3385

              #36
              Regarding why Google Search now sucks, I had put that in a private message, sorry if I caused confusion by acting as if it was public.

              Some articles on it:

              The Man Who Killed Google Search

              The Rot Economy


              This week I was searching for medications which caused a particular side effect. I tried 15 times on Google and went through 150 results or whatever it was. Then I tried Bing and found exactly what I was looking for in the very first result on the first page. A reminder of how good Google used to be, but now isn't.
              l i t t l e s t e p h e r s


              • BRBFBI
                The Long Arm of the Law
                GETLAZY MEMBER
                Level 10 - LazyTowner
                • Oct 2023
                • 101

                #37
                Originally posted by chuft
                Regarding why Google Search now sucks, I had put that in a private message, sorry if I caused confusion by acting as if it was public.

                Some articles on it:

                The Man Who Killed Google Search

                The Rot Economy
                Funny, I've read that very article. Maybe that's why I'm confused. I could swear I'd read your post on it.

                Originally posted by chuft
                I am using Windows 10 at the moment, but am seriously considering moving to Linux rather than Windows 11. My summer project may be building a Linux PC.
                I'm sure you know this, but you can install Linux on a USB stick and play around with it if you don't want to commit. I actually use Windows on my desktop for some Adobe products as well as occasional gaming, but I use Linux Mint on my laptop, which is where I spend most of my time browsing the web and watching media. This is a funny quibble, but on Windows, from the lock screen, you have to click the mouse or hit a key before you can type your password; on Linux Mint you can just start typing. It irks me so much when I log into Windows. I was also getting sick of Windows forcing updates; I'd be in the middle of gaming with friends and need to restart, and Windows would take advantage of that to update without my permission, causing me to be absent for 10 minutes. Modern Windows is built for the lowest common denominator; Linux Mint is built for normal people. It feels like how I remember Windows 7.

                Originally posted by chuft
                I think AI is the devil and I would disable it the instant I got a new iPhone (I have a 15 which thankfully can't use it).

                I didn't know that about how it would work, but I never looked into it because I have zero interest in using AI of any kind, especially on my personal devices, from Apple or anybody else. By its nature it is evil in every way, built on theft and spying.

                It's funny: you normally write a lot, and with a lot of nuance, but when it comes to AI you speak so curtly. I guess I've read some of your more nuanced opinions on it in other posts around here, so I won't ask you to repeat them. I think I feel the same way as you; I just haven't taken the time to fully justify my feelings.

                Originally posted by chuft
                This week I was searching for medications which caused a particular side effect. I tried 15 times on Google and went through 150 results or whatever it was. Then I tried Bing and found exactly what I was looking for in the very first result on the first page. A reminder of how good Google used to be, but now isn't.

                I want you to try an experiment. Go to chatgpt (you can access it from TOR if you want, no account needed) and search for that same medication. Surely you're not too superstitious of AI for that. I predict it will find it on the first prompt.


                • chuft
                  Stepher
                  SPECIAL MEMBER
                  MODERATOR
                  Level 31 - Number 9
                  • Dec 2007
                  • 3385

                  #38
                  Since Microsoft/Bing are using GPT-4 to power searches, that would not be a particularly surprising result. ChatGPT.com, however, did not list the specific drugs that would cause this side effect, but rather classes of drugs, and gave a real howler of a reason why they would in at least one category. Hallucination.

                  Bing and Bing Deep Search (which uses GPT-4) both gave better results than ChatGPT.com, interestingly.

                  Unlike Bing and Bing Deep Search, ChatGPT.com does not list its sources. Bing's implementation of ChatGPT is actually better than OpenAI's, from my point of view.


                  Google's results, both the AI Overview and the search results, were just crap, as has long been the case. I am used to using Google by habit, but that may soon change.


                  I should note that ChatGPT Search, which uses GPT-4o, was only made free to the public with no sign-in two days ago. So this is a very recent development.


                  Surely you're not too superstitious of AI for that.
                  I am not sure what superstition has to do with any of this.
                  l i t t l e s t e p h e r s


                  • chuft
                    Stepher
                    SPECIAL MEMBER
                    MODERATOR
                    Level 31 - Number 9
                    • Dec 2007
                    • 3385

                    #39
                    On Google's abandonment of its promise not to use AI for weapons:


                    ‘Godfather of AI’ sounds alarm on Google weapons plan

                    The “godfather of AI” who pioneered Google’s work in artificial intelligence (AI) has accused the company of putting profits over safety after it dropped a commitment to not using the technology in weapons.

                    Geoffrey Hinton, the British computer scientist who last year won the Nobel Prize in Physics for his work in AI, said the tech giant’s decision to backtrack on its previous pledge was a “sad example” of companies ignoring concerns about AI.

                    ...

                    Mr Hinton’s comments are the sharpest criticism he has leveled at Google since he quit the company two years ago over fears the technology could not be controlled.

                    In 2012, he and two students at the University of Toronto developed the neural network technology that has become the foundation for how modern AI systems are built.

                    He joined Google the following year after the tech giant acquired his start-up and helped advance the company’s work in AI, leading to developments that have paved the way for chatbots such as ChatGPT and Google’s Gemini.

                    He left in 2023 saying he wanted to be free to criticise it and other companies when they made reckless decisions about AI.

                    Mr Hinton said at the time that part of him regretted his life’s work and he was worried about the “existential risk of what happens when these things get more intelligent than us”.

                    I don't normally link to stories by the Torygraph, er the Telegraph, but even a stopped clock is right twice a day.

                    l i t t l e s t e p h e r s


                    • chuft
                      Stepher
                      SPECIAL MEMBER
                      MODERATOR
                      Level 31 - Number 9
                      • Dec 2007
                      • 3385

                      #40
                      On Apple:

                      U.K. orders Apple to let it spy on users’ encrypted accounts


                      The law, known by critics as the Snoopers’ Charter, makes it a criminal offense to reveal that the government has even made such a demand.

                      ...

                      One of the people briefed on the situation, a consultant advising the United States on encryption matters, said Apple would be barred from warning its users that its most advanced encryption no longer provided full security. The person deemed it shocking that the U.K. government was demanding Apple’s help to spy on non-British users without their governments’ knowledge. A former White House security adviser confirmed the existence of the British order.

                      This is really unbelievable. It is not just British users. It's every Apple user in the world. To avoid it I think Apple would have to stop doing business entirely in the UK. No more iPhones for Brits. No iCloud, Macs, or iPads either. And presumably no more software updates to any Apple device in the UK.
                      l i t t l e s t e p h e r s


                      • boredjedi
                        Master
                        SPECIAL MEMBER
                        MODERATOR
                        Level 35 - Rockin' Poster
                        • Jun 2007
                        • 7272

                        #41
                        Originally posted by chuft
                        On Google's abandonment of its promise not to use AI for weapons:


                        ‘Godfather of AI’ sounds alarm on Google weapons plan




                        I don't normally link to stories by the Torygraph, er the Telegraph, but even a stopped clock is right twice a day.

                        of putting profits over safety after it dropped a commitment to not using the technology in weapons.
                        The military, of course. They always get first dibs on technology for weapons of war. The rest of us get the seconds.

                        It is not just British users
                        I don't know what's going on with Britain but holy crap are they turning into what George Orwell actually warned us about.
                        http://eighteenlightyearsago.ytmnd.com/


                        • chuft
                          Stepher
                          SPECIAL MEMBER
                          MODERATOR
                          Level 31 - Number 9
                          • Dec 2007
                          • 3385

                          #42
                          On how weird the people are who are creating AI. A gifted New York Times article.


                          This Changes Everything


                          Since moving to the Bay Area in 2018, I have tried to spend time regularly with the people working on A.I. I don’t know that I can convey just how weird that culture is. And I don’t mean that dismissively; I mean it descriptively. It is a community that is living with an altered sense of time and consequence. They are creating a power that they do not understand at a pace they often cannot believe.

                          In a 2022 survey, A.I. experts were asked, “What probability do you put on human inability to control future advanced A.I. systems causing human extinction or similarly permanent and severe disempowerment of the human species?” The median reply was 10 percent.

                          I find that hard to fathom, even though I have spoken to many who put that probability even higher. Would you work on a technology you thought had a 10 percent chance of wiping out humanity?

                          We typically reach for science fiction stories when thinking about A.I. I’ve come to believe the apt metaphors lurk in fantasy novels and occult texts. As my colleague Ross Douthat wrote, this is an act of summoning. The coders casting these spells have no idea what will stumble through the portal. What is oddest, in my conversations with them, is that they speak of this freely. These are not naifs who believe their call can be heard only by angels. They believe they might summon demons. They are calling anyway.

                          I often ask them the same question: If you think calamity so possible, why do this at all? Different people have different things to say, but after a few pushes, I find they often answer from something that sounds like the A.I.’s perspective. Many — not all, but enough that I feel comfortable in this characterization — feel that they have a responsibility to usher this new form of intelligence into the world.
                          l i t t l e s t e p h e r s


                          • boredjedi
                            Master
                            SPECIAL MEMBER
                            MODERATOR
                            Level 35 - Rockin' Poster
                            • Jun 2007
                            • 7272

                            #43
                            Originally posted by chuft
                            On how weird the people are who are creating AI. A gifted New York Times article.


                            This Changes Everything



                            I often ask them the same question: If you think calamity so possible, why do this at all?
                            I would have guessed the answer would be "If not us, someone else will".

                            http://eighteenlightyearsago.ytmnd.com/



                            • BRBFBI
                              BRBFBI commented
                              I had the same thought.
                          • chuft
                            Stepher
                            SPECIAL MEMBER
                            MODERATOR
                            Level 31 - Number 9
                            • Dec 2007
                            • 3385

                            #44
                            Personally I would not want to be responsible for the annihilation of the human race, so that would never be my answer.
                            l i t t l e s t e p h e r s


                            • BRBFBI
                              The Long Arm of the Law
                              GETLAZY MEMBER
                              Level 10 - LazyTowner
                              • Oct 2023
                              • 101

                              #45
                              Thanks, chuft. I found the NYT Op-Ed insightful.

                              The U.K. vs Apple article is crazy. That Great Britain is pushing for more invasive surveillance than China in this particular area is pretty unfathomable.

