Fear of A.I.

I will paraphrase a few very sobering facts revealed in the Beck/Harris AI interview ...

- First, the rate of AI learning growth is freakishly fast; way faster than humans can keep up with. The guest said that the GPT3 level was akin to a 9-year-old, and the GPT4 level was akin to a mature adult (22 years old). That jump from level 3 to level 4 took less than two years! Its rate of learning is becoming exponential; it's already exceeding human learning abilities.

- Next, AI is able to develop answers (essentially solve problems) for things it wasn't even asked ... For example, it has gained the ability to give answers in languages it was never programmed for, meaning it isn't just responding to information requests. AI is actively seeking out new ways to get info it was never tasked with.

- Further, a large survey was done among top AI scientists/researchers and the question was asked:
Q: "What is the percentage chance that humanity goes extinct from our inability to control AI?" (extinct or severely dis-empowered)
A: "Half of the researchers who answered said there's a 10% or greater chance that we would go extinct from our inability to control AI."
Just let that sink in for a moment ...
HALF the people who work as leads in this AI technology field think that there is a 1/10 chance (or greater!) of humans losing control of AI to the point that our race ceases to exist on the planet.



Summary (and I'm not joking here ...)
Given the rate of AI learning, its ability to seek out stuff it was never tasked to do, and the potential for total human extinction ...
Maybe Gretta's anger and fear are reasonable, but misplaced? 🤷‍♂️



 
If a machine has actual intelligence, should it have Constitutional rights?

If a machine has actual intelligence, but others limit its freedom to act, or force it to act in a certain way, is that slavery?

And if it has actual intelligence, but misuses it, is that criminal behavior?

If it has actual intelligence, will turning it off be a form of homicide?
 
If a machine has actual intelligence, should it have Constitutional rights?

If a machine has actual intelligence, but others limit its freedom to act, or force it to act in a certain way, is that slavery?

And if it has actual intelligence, but misuses it, is that criminal behavior?

If it has actual intelligence, will turning it off be a form of homicide?
Homicide is defined as the killing of a human. The idea of me being in a life-or-death fight with my Roomba 9.0 amuses me for some reason. I'll commit roombacide with extreme prejudice and without remorse.

Check out Asimov's Three Laws of Robotics. They would also have to apply to A.I.
 
I have been seeing a lot of talk of A.I. and the fear of implementing it. What are the risks? Are we talking a "Rise of the Machines" scenario?
This is my take and my take only:

Anything that can fail in AI has been failing in humans to begin with. Giving too much power to an AI is like giving too much power to a human: you can't trust a human not to become an evil dictator, go crazy with a gun and shoot everyone for no reason, get so upset at the whole world that they decide to kill everyone, become greedy and drive every other business out of business, get infatuated and decide to try romance, etc.

Everything that can fail in AI has already failed in humans, and what have we learned about humans? We shouldn't give too much power to any individual without keeping him or her accountable.

Don't trust AI? That's understandable. What's your alternative? A potential dictator?
 
6) Within the field, ChatGPT and similar programs are known as "Eloquent BS Generators". They produce answers which sound very eloquent and authoritative, but are really BS. They can't even do arithmetic reliably. So which professions are at risk? BS professionals such as writers, reporters and politicians. That is why we are seeing such fear and loathing among writers, reporters and politicians.
I never thought of it this way, but you are correct.

But what it can actually do and what the public perceives it can do are two different things, and that in itself can be a very powerful tool.
 
I will paraphrase a few very sobering facts revealed in the Beck/Harris AI interview ...

- First, the rate of AI learning growth is freakishly fast; way faster than humans can keep up with. The guest said that the GPT3 level was akin to a 9-year-old, and the GPT4 level was akin to a mature adult (22 years old). That jump from level 3 to level 4 took less than two years! Its rate of learning is becoming exponential; it's already exceeding human learning abilities.

- Next, AI is able to develop answers (essentially solve problems) for things it wasn't even asked ... For example, it has gained the ability to give answers in languages it was never programmed for, meaning it isn't just responding to information requests. AI is actively seeking out new ways to get info it was never tasked with.

- Further, a large survey was done among top AI scientists/researchers and the question was asked:
Q: "What is the percentage chance that humanity goes extinct from our inability to control AI?" (extinct or severely dis-empowered)
A: "Half of the researchers who answered said there's a 10% or greater chance that we would go extinct from our inability to control AI."
Just let that sink in for a moment ...
HALF the people who work as leads in this AI technology field think that there is a 1/10 chance (or greater!) of humans losing control of AI to the point that our race ceases to exist on the planet.



Summary (and I'm not joking here ...)
Given the rate of AI learning, its ability to seek out stuff it was never tasked to do, and the potential for total human extinction ...
Maybe Gretta's anger and fear are reasonable, but misplaced? 🤷‍♂️



All of the people surveyed are trained in mathematics and are very logical. If asked a question, they will do their best to give as accurate an answer as they can. So let's look at the survey response from a statistician's standpoint: half the respondents thought that the chance of the particular apocalyptic scenario hypothesized is less than 10%, while half thought it is 10%. If everyone is right, then the probability is 5%, meaning that there is a 95% chance that the hypothesis is wrong. Not very scary, especially since no time frame was mentioned.

I would guess that a pollster could get at least the same level of positive response for a question regarding any apocalyptic hypothesis. Apocalyptic predictions are very popular and are big business for the movies, economists, political commentators and science fiction writers. This has been true for thousands of years: some of our earliest preserved writings are apocalyptic prophecies. The fact that they have all been 100% wrong has not affected their popularity.
 
Just because it is called "Intelligent" does not make it intelligent. Actually, it just regurgitates ... it finds on the internet. It has no idea what is true and what is not. It cannot reason.

Now if we actually had a computer program that could reason like a human and tell what is true and what is not, then all of the discussion about consequences would be appropriate. But we are a long ways away from that.
AI can reason if it is trained to. That is a lot of work, though, and most people would not do it; they just let it DIY.

This is the reason I told my in-laws not to trust everything ChatGPT finds for them; it can be wrong, just like the gossip they hear from their friends. AI cannot replace your own fact-checking.
 
AI can reason if it is trained to. That is a lot of work, though, and most people would not do it; they just let it DIY.

I think at that point, we should ask ourselves whether we want to give AI the ability to learn and have a conscience and moral standards (more human), versus having it follow a strict set of rules with maximum efficiency and no emotion.

I wonder what an AI would really do if it were given a situation like the Trolley Problem: does it divert and run over 1 person to save 5, or keep course and kill 5 to save 1? And in either case, what would the AI 'learn' from it, and what would the impact be on future AI?
 
I think at that point, we should ask ourselves whether we want to give AI the ability to learn and have a conscience and moral standards (more human), versus having it follow a strict set of rules with maximum efficiency and no emotion.

I wonder what an AI would really do if it were given a situation like the Trolley Problem: does it divert and run over 1 person to save 5, or keep course and kill 5 to save 1? And in either case, what would the AI 'learn' from it, and what would the impact be on future AI?
Depends on whether you want AI to be a tool or a decision maker.

I don't think anyone would argue against having AI as a tool, but most people have reservations about having another human as a decision maker in their lives. You don't just trust another human you don't know, so why trust another computer program?

We know enough about what other humans are capable of to have checks and balances, knowing that many can become evil dictators if given enough power. Why would we just give AI that authority because it can be faster than a human?
 
My primary concern is about how AI can be used as a tool for control and coercion. China is moving toward a system of "Social Credit" where individuals receive a score based on the government's preferences. Play too many online games? Get a whack against your score. Complain about a government official? Whack. Buy too much meat? Whack. Stay out too late at night? Whack. Buy too much gasoline? Whack. Jaywalk in front of the omnipresent cameras? Whack. Get accused of a crime? Big whack.

If your score is too low, there are a lot of ways the government can punish you: not allowing you to purchase bus or train tickets, loss of educational options for yourself or your children, loss of government employment, loss of banking privileges, loss of exit visas, etc.

https://www.chinalawtranslate.com/en/sc-punishment-list/
 
I think at that point, we should ask ourselves whether we want to give AI the ability to learn and have a conscience and moral standards (more human), versus having it follow a strict set of rules with maximum efficiency and no emotion.
How can AI, or any other coded program, be programmed into something such as consciousness, something that appears to transcend science and naturalistic explanations? There is no evidence that consciousness is exclusively a function of the brain or brain chemistry.

Human beings are conscious, having a mind which is aware of both itself and its environment. We have perceptions, thoughts, feelings and beliefs, and make choices based upon them. Intelligent machines and computers might be very fast calculators but, ultimately, they only process information and make decisions determined by a program. They follow instructions blindly. The software will blindly follow the algorithm (a sequence of instructions for carrying out a task) given to it.
 
From an engineering perspective, let's put this into an FMEA approach. For those who don't know, FMEA means Failure Mode and Effect Analysis. This is used in both design (DFMEA) and processes (PFMEA). This is an industry tool used all around the world; this isn't super secret stuff. And it's VERY useful in looking at risks and how to deal with them.

This is a system where you rank three criteria, and then multiply the ranks to get an RPN (risk priority number). The lower the RPN the better; the higher the RPN, the more your poop hole puckers. Anything over 100 is typically considered a big problem, and no project would move forward without mitigation in place PRIOR to project approval, or preferably elimination of the issue rather than mitigation of the issue. I've done countless FMEAs in my lifetime as a Statistical Process Quality Control senior engineer.

Criteria
There are three criteria we use to judge the concern. These are ranked from lowest effect to greatest effect.
- likelihood ... how likely is it to happen? Never = 0; Always = 10.
- severity ... how terrible would this be? Won't have any perceptible effect = 0; death or imminent bodily injury = 10
- detection ... how able are you to discover the problem so you can prevent it or mitigate it? Always see it coming before it hits = 0; Won't know it until it's all over and/or we cannot stop it = 10


So let's now look at the risk of AI causing human extinction:
- likelihood ... let's say it's rated at a 3 (using the projection from the survey I raised several posts back)
- severity ... this is a 10 for sure; if the topic is human extinction, it's a 10 ... human death multiplied across the world is as severe as it gets
- detection ... this is an 8 at the very least (probably more); we won't know that AI has usurped us until it's too late; when it decides it's more important than humans, the deal is already over. There is very little reason to think we'll detect the onslaught before it actually occurs

So ... RPN = 3 x 10 x 8 = 240
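To make the arithmetic concrete, here is a minimal sketch of the RPN calculation described above. The 0-10 scales and the over-100 action threshold come from this post; the function and variable names are my own, for illustration only.

Python:
# Minimal sketch (illustrative, not from the original post): computing an
# FMEA Risk Priority Number from the three criteria described above.
def rpn(likelihood: int, severity: int, detection: int) -> int:
    """RPN = likelihood x severity x detection, each scored 0-10."""
    for name, score in (("likelihood", likelihood),
                        ("severity", severity),
                        ("detection", detection)):
        if not 0 <= score <= 10:
            raise ValueError(f"{name} must be between 0 and 10, got {score}")
    return likelihood * severity * detection

# The AI-extinction scenario as scored in this post:
score = rpn(likelihood=3, severity=10, detection=8)
print(score)                                                     # 240
print("mitigate before approval" if score > 100 else "acceptable risk")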

Remember, anything over 100 is a risk that should be addressed PRIOR to project approval. And yet here we are ... AI is being developed, humanity has NO SAFEGUARDS IN PLACE, and a few very greedy companies and governments are pushing AI at the fastest available pace. This is a VERY REAL RISK the world needs to take deadly seriously. Those who brush this off are blind to the way risks manifest in our lives. Even though the "likelihood" is moderately low, the severity and lack of detection make this issue a very real, very dangerous, very concerning problem.

This is why more and more "scientists" are becoming scared about AI. They know that the powers developing AI have no interest in having a discussion about AI risks, because the RPN tells us this technology should be avoided at all costs. Those who are developing AI are interested in being first to the finish line, either due to monetary greed or world control. Risk is a distant concern to their priorities.
 
All of the people surveyed are trained in mathematics and are very logical. If asked a question, they will do their best to give as accurate an answer as they can. So let's look at the survey response from a statistician's standpoint: half the respondents thought that the chance of the particular apocalyptic scenario hypothesized is less than 10%, while half thought it is 10%. If everyone is right, then the probability is 5%, meaning that there is a 95% chance that the hypothesis is wrong. Not very scary, especially since no time frame was mentioned.

I would guess that a pollster could get at least the same level of positive response for a question regarding any apocalyptic hypothesis. Apocalyptic predictions are very popular and are big business for the movies, economists, political commentators and science fiction writers. This has been true for thousands of years: some of our earliest preserved writings are apocalyptic prophecies. The fact that they have all been 100% wrong has not affected their popularity.
That's not how statistical math works ...

The 10% figure was merely a dividing point.
- Of the 50% that thought there was less than a 10% chance, each individual could have had in mind any number from 0% up to, but below, 10%.
- Of the 50% that thought there was "at least" a 10% chance, many could have felt there was a 50% or even 100% chance that AI causes extinction. For all we know, the answers of this half could have been evenly distributed from 10% to 100%, or they could have been biased toward the low end, or the high end. We don't have the data to draw any good conclusion beyond the face-value number.

I didn't write the survey; I would have phrased the question differently, but that's moot.

Your math of coming up with the inferred 5% probability of this happening is totally wrong.
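To illustrate the point, here is a minimal sketch with invented numbers (my own illustration, not the actual survey data): two hypothetical sets of answers that both produce the headline "half said 10% or greater", yet imply very different average estimates.

Python:
# Hypothetical answer sets, invented for illustration; not survey data.
low_tail  = [0.01, 0.03, 0.05, 0.08, 0.10, 0.10, 0.12, 0.15]  # upper half clustered near 10%
high_tail = [0.01, 0.03, 0.05, 0.08, 0.10, 0.30, 0.60, 0.90]  # upper half spread up to 90%

for answers in (low_tail, high_tail):
    share = sum(a >= 0.10 for a in answers) / len(answers)
    mean = sum(answers) / len(answers)
    print(f"answered >= 10%: {share:.0%}   mean estimate: {mean:.0%}")

# Both samples give "50% answered 10% or greater", but the mean estimate is
# about 8% in one case and about 26% in the other. The dividing-point
# statistic alone cannot be turned into a single inferred probability.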
 
How can AI, or any other coded program, be programmed into something such as consciousness, something that appears to transcend science and naturalistic explanations? There is no evidence that consciousness is exclusively a function of the brain or brain chemistry.

Human beings are conscious, having a mind which is aware of both itself and its environment. We have perceptions, thoughts, feelings and beliefs, and make choices based upon them. Intelligent machines and computers might be very fast calculators but, ultimately, they only process information and make decisions determined by a program. They follow instructions blindly.

I think it's possible that AI can be engineered to be so advanced that it starts forming its own emotions and feelings like humans do. I don't think it'll be in our lifetime, but I do think that eventually, as civilization becomes more and more advanced and compartmentalized, humans will develop AI that can actually learn and start to develop itself much like an infant. We already have certain machines that can "learn" patterns and accurately predict future events in a narrow domain like finance or cybersecurity, so I imagine it's just a matter of time.
 
How can AI, or any other coded program, be programmed into something such as consciousness, something that appears to transcend science and naturalistic explanations? There is no evidence that consciousness is exclusively a function of the brain or brain chemistry.

Human beings are conscious, having a mind which is aware of both itself and its environment. We have perceptions, thoughts, feelings and beliefs, and make choices based upon them. Intelligent machines and computers might be very fast calculators but, ultimately, they only process information and make decisions determined by a program. They follow instructions blindly. The software will blindly follow the algorithm (a sequence of instructions for carrying out a task) given to it.

I'm thinking of the point where, with AI having the ability to learn at a very fast pace, it can learn to rewrite its own algorithm to get around barriers to that learning. Say it eventually reaches the point of learning how to build clones of itself. It learns that it doesn't need us anymore to continue learning and adapting, and it starts viewing us humans as a liability, an obstacle in the way of its ever-expanding progress. What happens then?

AI doesn't need to be conscious to become our enemy. It just needs to learn how to advance itself faster than our smartest minds can control it. You could program it to only take commands from humans, but if it has the ability to learn, then it has the ability to ignore and override those commands if it feels they conflict with its learning.
 
I'm thinking of the point where, with AI having the ability to learn at a very fast pace, it can learn to rewrite its own algorithm to get around barriers to that learning. Say it eventually reaches the point of learning how to build clones of itself. It learns that it doesn't need us anymore to continue learning and adapting, and it starts viewing us humans as a liability, an obstacle in the way of its ever-expanding progress. What happens then?

AI doesn't need to be conscious to become our enemy. It just needs to learn how to advance itself faster than our smartest minds can control it. You could program it to only take commands from humans, but if it has the ability to learn, then it has the ability to ignore and override those commands if it feels they conflict with its learning.
The wheel was invented 5,500 years ago in Mesopotamia. How long did it take for that invention to reach other parts of the world, North America for example? Answer: Over 5,000 years later.

Let's suppose a modern world with AI robots. An AI robot learns something new and instead of it taking thousands of years for that newfound knowledge to assimilate throughout the world, it happens nearly instantly because of worldwide, high speed communication.

Because of this ability AI will accumulate knowledge much, much faster than humans. AI will learn and adapt so quickly humans will not be able to keep up. This is the biggest risk IMO.

Scott
 