Easy Way shares my view of AI

  • chuft
    Stepher
    SPECIAL MEMBER
    MODERATOR
    Level 30 - Stepher
    • Dec 2007
    • 2863

    Easy Way shares my view of AI

    Easy Way has become one of my favorites because it shows Steph shares my view of AI.

    "It's sorta tragic
    It does it all for you
    There's nothing left to do
    And soon we'll all be unemployed and destitute"


    l i t t l e s t e p h e r s
  • boredjedi
    Master
    SPECIAL MEMBER
    MODERATOR
    Level 35 - Rockin' Poster
    • Jun 2007
    • 6674

    #2
    Originally posted by chuft
    "It's sorta tragic
    It does it all for you
    There's nothing left to do
    I dunno, sounds good to me. A little help would take the tediousness out of doing certain things in shooping. For example, the Pokémon Go Steph Go image. Putting in those Poke' candies, I had flashbacks of doing the draw-by-numbers ytmnd (By The Numbers).




    http://eighteenlightyearsago.ytmnd.com/


    • chuft
      Stepher
      SPECIAL MEMBER
      MODERATOR
      Level 30 - Stepher
      • Dec 2007
      • 2863

      #3
      I'm not saying AI has no potential benefits. I just think the potential downsides - some not so potential, but virtually guaranteed - are so catastrophic that the technology should be banned.
      l i t t l e s t e p h e r s


      • boredjedi
        Master
        SPECIAL MEMBER
        MODERATOR
        Level 35 - Rockin' Poster
        • Jun 2007
        • 6674

        #4
        Originally posted by chuft
        I'm not saying AI has no potential benefits. I just think the potential downsides - some not so potential, but virtually guaranteed - are so catastrophic that the technology should be banned.
        Yeah, pretty much like most things we invent. Ends up being used in very corrupt and greedy ways.
        http://eighteenlightyearsago.ytmnd.com/


        • chuft
          Stepher
          SPECIAL MEMBER
          MODERATOR
          Level 30 - Stepher
          • Dec 2007
          • 2863

          #5
          Even the people working on it say it has a 10% chance of exterminating humanity. I think they are insane and should have their toys taken away.
          l i t t l e s t e p h e r s


          • possessor
            I like LazyTown.
            SPECIAL MEMBER
            Level 30 - Stepher
            • Oct 2021
            • 2625

            #6
            Originally posted by chuft
            Even the people working on it say it has a 10% chance of exterminating humanity.
            AHHHHHHHHH!!!!!!!!!1 I'M SO SCARED OF THE ROBOT WHO MADE THIS!! https://www.youtube.com/watch?v=tM7n9rgOUII


            • chuft
              Stepher
              SPECIAL MEMBER
              MODERATOR
              Level 30 - Stepher
              • Dec 2007
              • 2863

              #7
              In your attempt to mock me you have pointed out another rapidly approaching problem - not being able to know what is true. When sounds and videos can be generated by AI to convincingly sound or look exactly like the real thing, along obviously with written words, it will be easier than ever to spread disinformation. How will you be able to verify what is true for things not in your immediate physical location? You can't. Any communications you receive from a friend or news reporter could also be faked. It is going to become a very serious problem, even more than disinformation already is.
              l i t t l e s t e p h e r s


              • Buzz
                Der Postmeister
                SPECIAL MEMBER
                Level 33 - New Superhero
                • Jan 2009
                • 4139

                #8
                Originally posted by chuft
                Even the people working on it say it has a 10% chance of exterminating humanity. I think they are insane and should have their toys taken away.
                AI needs electricity....just pull the plug....problem solved
                Gallery



                • possessor
                  possessor commented
                  Never knew Einstein had a GetLazy account.. :XD
              • chuft
                Stepher
                SPECIAL MEMBER
                MODERATOR
                Level 30 - Stepher
                • Dec 2007
                • 2863

                #9
                Richard Blumenthal kicked off a Senate hearing on artificial intelligence with introductory remarks generated by AI. He said the words used in the recording were created by ChatGPT and the audio came from an AI voice-cloning software. Tuesday's hearing by the Judiciary Subcommittee on Privacy, Technology and the Law aims to examine potential rules for the use of artificial intelligence.



                l i t t l e s t e p h e r s


                • chuft
                  Stepher
                  SPECIAL MEMBER
                  MODERATOR
                  Level 30 - Stepher
                  • Dec 2007
                  • 2863

                  #10
                  Leaving aside the mass unemployment, disinformation, enemy nation-states using AI for espionage or hacking, or AI deciding to wipe us all out, at the very least we will see a total disruption of the music industry. Rick Beato:


                  l i t t l e s t e p h e r s


                  • boredjedi
                    Master
                    SPECIAL MEMBER
                    MODERATOR
                    Level 35 - Rockin' Poster
                    • Jun 2007
                    • 6674

                    #11
                    Yeah, saw that video. A few other channels I'm subbed to have been going over the issue, and so have other channels that pop up in my recommended.
                    http://eighteenlightyearsago.ytmnd.com/


                    • StingX
                      ROTTEN MEMBER
                      Level 35 - Rockin' Poster
                      • Mar 2009
                      • 5496

                      #12
                      My expectation is that AI will reshape everything, for better or for worse, either directly or indirectly. Lord knows that lawmakers around the world are already shockingly far behind the technological curve, and it's only accelerating faster and faster as science fiction swiftly becomes reality. My hope in humanity is that in this upcoming era of transformation we can arc the consequences of this technology to generate more good than bad, but my realism makes me doubt that substantially.

                      But til then look at all these stephers:


                      • boredjedi
                        Master
                        SPECIAL MEMBER
                        MODERATOR
                        Level 35 - Rockin' Poster
                        • Jun 2007
                        • 6674

                        #13
                        I could have used that AI when I was dabbling in stuff like this



                        Could never get that hair right with the tools I had back then, so many years ago now.
                        Then again, it is a dream, and things look distorted in dreams.
                        http://eighteenlightyearsago.ytmnd.com/


                        • chuft
                          Stepher
                          SPECIAL MEMBER
                          MODERATOR
                          Level 30 - Stepher
                          • Dec 2007
                          • 2863

                          #14
                          A.I. Poses ‘Risk of Extinction,’ Industry Leaders Warn

                          Leaders from OpenAI, Google DeepMind, Anthropic and other A.I. labs warn that future systems could be as deadly as pandemics and nuclear weapons.


                          A group of industry leaders warned on Tuesday that the artificial intelligence technology they are building may one day pose an existential threat to humanity and should be considered a societal risk on par with pandemics and nuclear wars.

                          “Mitigating the risk of extinction from A.I. should be a global priority alongside other societal-scale risks, such as pandemics and nuclear war,” reads a one-sentence statement released by the Center for AI Safety, a nonprofit organization. The open letter has been signed by more than 350 executives, researchers and engineers working in A.I.

                          The signatories included top executives from three of the leading A.I. companies: Sam Altman, chief executive of OpenAI; Demis Hassabis, chief executive of Google DeepMind; and Dario Amodei, chief executive of Anthropic.

                          Geoffrey Hinton and Yoshua Bengio, two of the three researchers who won a Turing Award for their pioneering work on neural networks and are often considered “godfathers” of the modern A.I. movement, signed the statement, as did other prominent researchers in the field. (The third Turing Award winner, Yann LeCun, who leads Meta’s A.I. research efforts, had not signed as of Tuesday.)

                          The statement comes at a time of growing concern about the potential harms of artificial intelligence. Recent advancements in so-called large language models — the type of A.I. system used by ChatGPT and other chatbots — have raised fears that A.I. could soon be used at scale to spread misinformation and propaganda, or that it could eliminate millions of white-collar jobs.

                          Eventually, some believe, A.I. could become powerful enough that it could create societal-scale disruptions within a few years if nothing is done to slow it down, though researchers sometimes stop short of explaining how that would happen.

                          These fears are shared by numerous industry leaders, putting them in the unusual position of arguing that a technology they are building — and, in many cases, are furiously racing to build faster than their competitors — poses grave risks and should be regulated more tightly.

                          This month, Mr. Altman, Mr. Hassabis and Mr. Amodei met with President Biden and Vice President Kamala Harris to talk about A.I. regulation. In a Senate testimony after the meeting, Mr. Altman warned that the risks of advanced A.I. systems were serious enough to warrant government intervention and called for regulation of A.I. for its potential harms.

                          Dan Hendrycks, the executive director of the Center for AI Safety, said in an interview that the open letter represented a “coming-out” for some industry leaders who had expressed concerns — but only in private — about the risks of the technology they were developing.

                          “There’s a very common misconception, even in the A.I. community, that there only are a handful of doomers,” Mr. Hendrycks said. “But, in fact, many people privately would express concerns about these things.”

                          Some skeptics argue that A.I. technology is still too immature to pose an existential threat. When it comes to today’s A.I. systems, they worry more about short-term problems, such as biased and incorrect responses, than longer-term dangers.

                          But others have argued that A.I. is improving so rapidly that it has already surpassed human-level performance in some areas, and it will soon surpass it in others. They say the technology has shown signs of advanced capabilities and understanding, giving rise to fears that “artificial general intelligence,” or A.G.I., a type of artificial intelligence that can match or exceed human-level performance at a wide variety of tasks, may not be far off.

                          In a blog post last week, Mr. Altman and two other OpenAI executives proposed several ways that powerful A.I. systems could be responsibly managed. They called for cooperation among the leading A.I. makers, more technical research into large language models and the formation of an international A.I. safety organization, similar to the International Atomic Energy Agency, which seeks to control the use of nuclear weapons.

                          Mr. Altman has also expressed support for rules that would require makers of large, cutting-edge A.I. models to register for a government-issued license.

                          In March, more than 1,000 technologists and researchers signed another open letter calling for a six-month pause on the development of the largest A.I. models, citing concerns about “an out-of-control race to develop and deploy ever more powerful digital minds.”
                          That letter, which was organized by another A.I.-focused nonprofit, the Future of Life Institute, was signed by Elon Musk and other well-known tech leaders, but it did not have many signatures from the leading A.I. labs.

                          The brevity of the new statement from the Center for AI Safety — just 22 words in all — was meant to unite A.I. experts who might disagree about the nature of specific risks or steps to prevent those risks from occurring, but who shared general concerns about powerful A.I. systems, Mr. Hendrycks said.
                          “We didn’t want to push for a very large menu of 30 potential interventions,” Mr. Hendrycks said. “When that happens, it dilutes the message.”

                          The statement was initially shared with a few high-profile A.I. experts, including Mr. Hinton, who quit his job at Google this month so that he could speak more freely, he said, about the potential harms of artificial intelligence. From there, it made its way to several of the major A.I. labs, where some employees then signed on.

                          The urgency of A.I. leaders’ warnings has increased as millions of people have turned to A.I. chatbots for entertainment, companionship and increased productivity, and as the underlying technology improves at a rapid clip.

                          “I think if this technology goes wrong, it can go quite wrong,” Mr. Altman told the Senate subcommittee. “We want to work with the government to prevent that from happening.”


                          https://www.nytimes.com/2023/05/30/t...t-warning.html





                          [Attached image: davronella1.png]



                          l i t t l e s t e p h e r s


                          • possessor
                            I like LazyTown.
                            SPECIAL MEMBER
                            Level 30 - Stepher
                            • Oct 2021
                            • 2625

                            #15
                            Originally posted by chuft
                            In your attempt to mock me
                            I'm just saying - kinda stupid how people are scared of these robots that will "take over the world".

                            The only thing they're taking over is restaurants, as waiters.

