Fear of A.I.

The wheel was invented roughly 5,500 years ago in Mesopotamia. How long did it take for that invention to reach other parts of the world, North America for example? Answer: over 5,000 years.

Let's suppose a modern world with AI robots. An AI robot learns something new, and instead of that newfound knowledge taking thousands of years to spread throughout the world, it happens nearly instantly because of worldwide, high-speed communication.

Because of this, AI's accumulation of knowledge will happen much faster than it does for humans. AI will learn and adapt so quickly that humans will not be able to keep up.

Scott

That's the concern. By the time we realize it and start working on a fix for it, it'll already be 10 steps ahead and pulling away. Any fix will be "generations" old by the time we could even attempt to implement it.
 
That's not how statistical math works ...

The 10% figure was merely a dividing point.
- Of the 50% who thought there was less than a 10% chance, each individual could have had a number anywhere from 0% up to, but below, 10%.
- Of the 50% who thought there was "at least" a 10% chance, many could have believed there was a 50% or even 100% chance that AI causes extinction. For all we know, the answers in this half could have been evenly distributed from 10 to 100, or biased toward the low end, or toward the high end. We don't have the data to draw any good conclusion beyond the face-value number.
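To illustrate why the face-value split says so little, here is a small sketch with two invented sets of ten survey answers (purely hypothetical numbers): both produce the same "50% said at least 10%" headline, yet their averages differ wildly.

```python
# Two hypothetical sets of 10 survey answers ("chance AI causes
# extinction", in percent). In both sets, half the respondents
# answer below 10% and half answer 10% or more, so both produce
# the same headline: "50% say at least a 10% chance."
low_biased = [1, 2, 3, 5, 8, 10, 11, 12, 13, 15]
high_biased = [1, 2, 3, 5, 8, 10, 40, 60, 80, 100]

def mean(answers):
    """Average of a list of percentage answers."""
    return sum(answers) / len(answers)

print(mean(low_biased))   # 8.0
print(mean(high_biased))  # 30.9
# Identical headline, wildly different average risk estimates:
# the dividing point alone cannot distinguish these cases.
```

The dividing point fixes only the median side of each answer, not its size, which is exactly why no average can be inferred from it.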

I didn't write the survey; I would have phrased the question differently, but that's moot.

Your math of coming up with the inferred 5% probability of this happening is totally wrong.
You are right. I agree that "We don't have the data to make any good conclusion". Remember, it is also over all of time (billions of years?), since no timeframe was mentioned. The fact that 50% (a nice round number, huh?) did not agree that there was a 10% or higher possibility over all time horizons should be the real headline.

The phrasing of the question is probably no accident. The pollsters purposefully stated the question vaguely in order to get the scary-sounding headline they wanted.
 
How can AI, or any other coded program, be programmed into something such as consciousness, something that appears to transcend science and naturalistic explanation? There is no evidence that consciousness is exclusively a function of the brain or brain chemistry.

Human beings are conscious, having a mind which is aware of both itself and its environment. We have perceptions, thoughts, feelings and beliefs, and make choices based upon them. Intelligent machines and computers might be very fast calculators but, ultimately, they only process information and make decisions determined by a program. They blindly follow the algorithm (a sequence of instructions for carrying out a task) given to them.
I don't think there is currently scientific proof that humans are unique and conscious, rather than just a point somewhere in the middle of a gray line from black to white. I'm not saying they aren't; there's just no scientific proof, only a belief held by most humans.
 
From an engineering perspective, let's put this into an FMEA approach. For those who don't know, FMEA means Failure Mode and Effects Analysis. It is used in both design (DFMEA) and processes (PFMEA). This is an industry tool used all around the world; this isn't super-secret stuff. And it's VERY useful for looking at risks and how to deal with them.

This is a system where you rank three criteria and then multiply the ranks to get an RPN (risk priority number). The lower the RPN the better; the higher the RPN, the more your poop hole puckers. Anything over 100 is typically considered a big problem, and no project would move forward without mitigation in place PRIOR to project approval, or preferably elimination of the issue rather than mitigation of it. I've done countless FMEAs in my lifetime as a Statistical Process Quality Control senior engineer.

Criteria
There are three criteria we use to judge the concern. These are ranked from lowest effect to greatest effect.
- likelihood ... how likely is it to happen? Never = 0; always = 10.
- severity ... how terrible would it be? No perceptible effect = 0; death or imminent bodily injury = 10.
- detection ... how able are you to discover the problem so you can prevent or mitigate it? Always see it coming before it hits = 0; won't know until it's all over and/or we cannot stop it = 10.


So let's now look at AI risk of it causing human extinction:
- likelihood ... let's say it's rated a 3 (using the projection of the survey topic I raised several posts back)
- severity ... this is a 10 for sure; if the topic is human extinction, it's a 10. Human death multiplied across the world is as severe as it gets.
- detection ... this is an 8 at the very least (probably more); we won't know that AI has usurped us until it's too late. When it decides it's more important than humans, the deal is already done. There is very little reason to think we'll detect the onslaught before it actually occurs.

So ... RPN = 3 x 10 x 8 = 240
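The RPN arithmetic is simple enough to sketch in a few lines of code; the three ratings below are this post's illustrative guesses, not measured data.

```python
# FMEA Risk Priority Number: the product of the three rankings.
def rpn(likelihood, severity, detection):
    """RPN = likelihood x severity x detection, each ranked 0-10."""
    return likelihood * severity * detection

# Ratings for "AI causes human extinction" as guessed in this post.
ai_risk = rpn(likelihood=3, severity=10, detection=8)
print(ai_risk)          # 240
print(ai_risk > 100)    # True: above the usual "mitigate first" threshold
```

Because RPN is a product, a maxed-out severity and a near-maxed detection score dominate the result even when likelihood is modest, which is the whole point of the analysis.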

Remember, anything over 100 is a risk that should be addressed PRIOR to project approval. And yet here we are ... AI is being developed, humanity has NO SAFEGUARDS IN PLACE, and a few very greedy companies and governments are pushing AI at the fastest available pace. This is a VERY REAL RISK the world needs to take deadly seriously. Those who brush it off are blind to the way risks manifest in our lives. Even though the likelihood is moderately low, the severity and lack of detection make this a very real, very dangerous, very concerning problem.

This is why more and more "scientists" are becoming scared about AI. They know that the powers developing AI have no interest in having a discussion about AI risks, because the RPN tells us this technology should be avoided at all costs. Those who are developing AI are interested in being first to the finish line, either due to monetary greed or world control. Risk is a distant concern to their priorities.
Put a random human through this analysis and you get a similar concern. The solution is the same: assume the AI is evil, so you limit its power; assume a human who grew up in a certain environment is bad, so you remove those environments and limit what a person can do without others' authorization.
 
I don't think there is currently scientific proof that humans are unique and conscious, rather than just a point somewhere in the middle of a gray line from black to white. I'm not saying they aren't; there's just no scientific proof, only a belief held by most humans.
You seem to be denying consciousness when most scientists and philosophers recognize that it exists.

"...Consciousness, at its simplest, is sentience and awareness of internal and external existence.[1] However, its nature has led to millennia of analyses, explanations and debates by philosophers, theologians, linguists, and scientists. Opinions differ about what exactly needs to be studied or even considered consciousness. In some explanations, it is synonymous with the mind, and at other times, an aspect of mind. In the past, it was one's "inner life", the world of introspection, of private thought, imagination and volition.[2] Today, it often includes any kind of cognition, experience, feeling or perception. It may be awareness, awareness of awareness, or self-awareness either continuously changing or not.[3][4] The disparate range of research, notions and speculations raises a curiosity about whether the right questions are being asked.[5]

Examples of the range of descriptions, definitions or explanations are: simple wakefulness, one's sense of selfhood or soul explored by "looking within"; being a metaphorical "stream" of contents, or being a mental state, mental event or mental process of the brain...."

"
The common usage definitions of consciousness in Webster's Third New International Dictionary (1966 edition, Volume 1, page 482) are as follows:

    • awareness or perception of an inward psychological or spiritual fact; intuitively perceived knowledge of something in one's inner self
    • inward awareness of an external object, state, or fact
    • concerned awareness; INTEREST, CONCERN—often used with an attributive noun [e.g. class consciousness]
  1. the state or activity that is characterized by sensation, emotion, volition, or thought; mind in the broadest possible sense; something in nature that is distinguished from the physical
  2. the totality in psychology of sensations, perceptions, ideas, attitudes, and feelings of which an individual or a group is aware at any given time or within a particular time span—compare STREAM OF CONSCIOUSNESS
  3. waking life (as that to which one returns after sleep, trance, fever) wherein all one's mental powers have returned . . .
  4. the part of mental life or psychic content in psychoanalysis that is immediately available to the ego—compare PRECONSCIOUS, UNCONSCIOUS
The Cambridge Dictionary defines consciousness as "the state of understanding and realizing something."[23] The Oxford Living Dictionary defines consciousness as "The state of being aware of and responsive to one's surroundings.", "A person's awareness or perception of something." and "The fact of awareness by the mind of itself and the world."[24]

Philosophers have attempted to clarify technical distinctions by using a jargon of their own. The Routledge Encyclopedia of Philosophy in 1998 defines consciousness as follows:

Consciousness—Philosophers have used the term 'consciousness' for four main topics: knowledge in general, intentionality, introspection (and the knowledge it specifically generates) and phenomenal experience... Something within one's mind is 'introspectively conscious' just in case one introspects it (or is poised to do so). Introspection is often thought to deliver one's primary knowledge of one's mental life. An experience or other mental entity is 'phenomenally conscious' just in case there is 'something it is like' for one to have it. The clearest examples are: perceptual experience, such as tastings and seeings; bodily-sensational experiences, such as those of pains, tickles and itches; imaginative experiences, such as those of one's own actions or perceptions; and streams of thought, as in the experience of thinking 'in words' or 'in images'. Introspection and phenomenality seem independent, or dissociable, although this is controversial.[25]
Many philosophers and scientists have been unhappy about the difficulty of producing a definition that does not involve circularity or fuzziness.[26] In The Macmillan Dictionary of Psychology (1989 edition), Stuart Sutherland expressed a skeptical attitude more than a definition:

Consciousness—The having of perceptions, thoughts, and feelings; awareness. The term is impossible to define except in terms that are unintelligible without a grasp of what consciousness means. Many fall into the trap of equating consciousness with self-consciousness—to be conscious it is only necessary to be aware of the external world. Consciousness is a fascinating but elusive phenomenon: it is impossible to specify what it is, what it does, or why it has evolved. Nothing worth reading has been written on it.[26]
 
You seem to be denying consciousness when most scientists and philosophers recognize that it exists.

"...Consciousness, at its simplest, is sentience and awareness of internal and external existence.[1] However, its nature has led to millennia of analyses, explanations and debates by philosophers, theologians, linguists, and scientists. Opinions differ about what exactly needs to be studied or even considered consciousness. In some explanations, it is synonymous with the mind, and at other times, an aspect of mind. In the past, it was one's "inner life", the world of introspection, of private thought, imagination and volition.[2] Today, it often includes any kind of cognition, experience, feeling or perception. It may be awareness, awareness of awareness, or self-awareness either continuously changing or not.[3][4] The disparate range of research, notions and speculations raises a curiosity about whether the right questions are being asked.[5]

Examples of the range of descriptions, definitions or explanations are: simple wakefulness, one's sense of selfhood or soul explored by "looking within"; being a metaphorical "stream" of contents, or being a mental state, mental event or mental process of the brain...."

"
The common usage definitions of consciousness in Webster's Third New International Dictionary (1966 edition, Volume 1, page 482) are as follows:

    • awareness or perception of an inward psychological or spiritual fact; intuitively perceived knowledge of something in one's inner self
    • inward awareness of an external object, state, or fact
    • concerned awareness; INTEREST, CONCERN—often used with an attributive noun [e.g. class consciousness]
  1. the state or activity that is characterized by sensation, emotion, volition, or thought; mind in the broadest possible sense; something in nature that is distinguished from the physical
  2. the totality in psychology of sensations, perceptions, ideas, attitudes, and feelings of which an individual or a group is aware at any given time or within a particular time span—compare STREAM OF CONSCIOUSNESS
  3. waking life (as that to which one returns after sleep, trance, fever) wherein all one's mental powers have returned . . .
  4. the part of mental life or psychic content in psychoanalysis that is immediately available to the ego—compare PRECONSCIOUS, UNCONSCIOUS
The Cambridge Dictionary defines consciousness as "the state of understanding and realizing something."[23] The Oxford Living Dictionary defines consciousness as "The state of being aware of and responsive to one's surroundings.", "A person's awareness or perception of something." and "The fact of awareness by the mind of itself and the world."[24]

Philosophers have attempted to clarify technical distinctions by using a jargon of their own. The Routledge Encyclopedia of Philosophy in 1998 defines consciousness as follows:


Many philosophers and scientists have been unhappy about the difficulty of producing a definition that does not involve circularity or fuzziness.[26] In The Macmillan Dictionary of Psychology (1989 edition), Stuart Sutherland expressed a skeptical attitude more than a definition:
I neither deny nor confirm it; I just said it is not "scientifically proven".

So how do you define what is conscious when its definition is not "scientific"? Sure, you can run a mirror test and see whether it touches itself or the image in the mirror, but that can be "faked" just to pass the test.
 
I neither deny nor confirm it; I just said it is not "scientifically proven".

So how do you define what is conscious when its definition is not "scientific"? Sure, you can run a mirror test and see whether it touches itself or the image in the mirror, but that can be "faked" just to pass the test.
1) I can define something that isn't scientific but still exists; i.e., things can exist without needing a scientific explanation.

2) Ever hear of the 'Law of the Excluded Middle' in logic? Something either exists or it doesn't exist; you can't have it both ways.
 
1) I can define something that isn't scientific but still exists; i.e., things can exist without needing a scientific explanation.

2) Ever hear of the 'Law of the Excluded Middle' in logic? Something either exists or it doesn't exist; you can't have it both ways.
1) Without a standard for how to define it, how can you say whether someone else has it or not? How do you reproduce it, or get peer review / third-party verification?

2) There is definitely such a thing as "don't care" or "undefined" in the world of engineering. It is just that: a wild card, and you can't tell ahead of time what you will get. If you need certainty, you have to avoid it.

Anyway, you can believe anything; it doesn't bother me.
 
So ... RPN = 3 x 10 x 8 = 240
So basically we are talking about some code on a server that takes over the world on its own and kills humanity?
Seriously? Just pull the plug and that server is dead in the water.

Also - as long as nobody gives AI the power to rule, it can do exactly nothing in the real world.
It takes a bunch of politicians to make that happen, so vote accordingly.
 
So basically we are talking about some code on a server that takes over the world on its own and kills humanity?
Seriously? Just pull the plug and that server is dead in the water.

Also - as long as nobody gives AI the power to rule, it can do exactly nothing in the real world.
I believe there is some scenario where the "Rise of the Machines" concept is a potential outcome. But it's not the only outcome. Other scenarios can still end with the extinction of mankind; extinction isn't limited to one causation. Essentially, the question posed to the scientists/techs in the survey was simply whether AI would lead the human race to become extinct. It could be through a physical manifestation of war, or through loss of control of energy deployment, or starvation, or lack of medical resources, or even pure human laziness. Or, even (and I think this could be more likely ...) through the dissemination of disinformation leading humans to make choices which lead to terminal events.

It's not as simple as just "pulling the plug", because that assumes that man is ahead of the AI. The element scaring the insiders today is that the learning rate of AI FAR, FAR exceeds man's comprehension of the pace. Right now, AI is learning exponentially and we're just standing idly by and watching it happen. It's only going to get faster. At some point, we won't be able to "pull the plug", because AI will have figured out how to regulate its own environment: it not only won't need humans, it will have figured out how to remove humans from its list of necessary supplies. The premise isn't that AI will kill us directly (though that is one scenario). The real question is simply whether human extinction will be a result of our AI development. Or whether humans, though they may not die out, will become irrelevant and "significantly dis-empowered", no longer in control and slaves to the effects of AI.


The entire point of my RPN application is that the analysis is telling us we should have NEVER started this AI project without FIRST developing and putting safeguards in place. But here we are ... mankind is watching AI run ahead of us and not one large-scale conversation is taking place to understand where the cut-off point is.


Eventually nature thins the herd; it takes care of things which are otherwise running amok. In a macro sense, humankind has always been trying to eliminate itself with war and other poor decisions. This time, we might just get it right; the RPN certainly shows we are at a VERY HIGH RISK of this happening.
 
I believe there is some scenario where the "Rise of the Machines" concept is a potential outcome. But that's not the only outcome. ... Eventually nature thins the herd; it takes care of things which are otherwise running amok. This time, we might just get it right; the RPN certainly shows we are at a VERY HIGH RISK of this happening.
If you want to keep yourself in a constant state of fear and foreboding about this, just keep watching those Science Fiction movies. But remember please that the second word in Science Fiction is "Fiction". The writers knew they were creating fiction purely for entertainment purposes.

In the real world, there are countless reasons why the human race as well as every other species on earth could eventually go extinct (the sun has a finite amount of fuel, after all). My bet is that AI will not be one of them, except in the movies and some people's imaginations.

People are getting excited, both positively and negatively, about a few unreliable chatbots that have trouble getting their facts right.

In the real world we are already surrounded by products using AI, and they are good. I can now talk to my car, and the car usually understands what I am saying. My car can now help me drive, making driving safer. My car learns my driving habits and adjusts things to fit them. The Mars rovers can navigate their way around another planet on their own without getting stuck or running into anything. Etc. These are all good things. No, my car cannot team up with other cars and Mars rovers to take over the universe, at least not in the real world that we live in. But wait for the movie ...
 
So basically we are talking about some code on a server that takes over the world on its own and kills humanity?
Seriously? Just pull the plug and that server is dead in the water.

Also - as long as nobody gives AI the power to rule, it can do exactly nothing in the real world.
It takes a bunch of politicians to make that happen, so vote accordingly.
Computers already run the world; so far, we control them. The concern is that AI might take control of them. You have to look at the big picture.

1. Every piece of food you eat, animal- or plant-based, was raised, produced or grown with computers controlling the process, from the water systems to the systems that feed the livestock itself. We cannot feed the country without the computers; farming has been consolidated by corporations into mega-farms.

2. Every single kW of electricity is controlled by computers

3. Every single major and minor flood control device (dams, levees) are controlled by computers

4. Every distribution system in this nation is controlled by computers

5. Every medical center is controlled by computers

6. The entire financial system is controlled by computers

7. Every rail, air and transportation system right down to the traffic light near your community is controlled by computer

8. Every communication system in this country is controlled by computers including your ability to post in BITOG

9. Every aspect of American life is controlled by computers that we "the people" currently control; we are still at the helm.

The above is just the tip of the iceberg. In our world, here in the USA, our lives are already run by humans controlling computers.
Life as we know it can no longer exist without them. In fact, their processing power is so much greater than the human brain's that we wouldn't understand how they were doing it, nor be able to "decode" it.

Don't think it is far-fetched that AI could take that control away from us. The brightest minds in the technology world already know this better than anyone on BITOG ;)

There is no such thing as pulling the plug on a server once it stops being "central", like a blockchain (Bitcoin) with no central core. (This is beyond my expertise, but I trust the brightest minds in the world to be concerned.)

P.S. I don't lose sleep over it. It's very interesting to discuss.
 
You doom-and-gloom guys watch way too many bad movies :LOL: You need to read my post again; it was not all about pulling the plug.
Let's say you allow AI to drive your car; yes, it may actually kill you one day.

The simple solution is - don't let AI drive your car - heck don't buy a self driving car in the first place, and stay at the helm instead.
Don't even buy a car that phones home.
 
The simple solution is - don't let AI drive your car - heck don't buy a self driving car in the first place, and stay at the helm instead.
Right now you still have that option, but it is quite possible that a few decades from now you'll need a self driving car in order to be on a public road. Human drivers will be deemed "unsafe" as we are too distracted and error prone. Over time, humans will slowly forget how to drive, kind of like many have already forgotten how to do math by hand. Heck, most people in the US don't know how to drive a manual already.
 
If you want to keep yourself in a constant state of fear and foreboding about this, just keep watching those Science Fiction movies. ... No, my car cannot team up with other cars and Mars rovers to take over the universe, at least not in the real world that we live in. But wait for the movie ...

Lots of things used to be science fiction. Electricity, space travel, air travel, visiting the moon, seeing pictures of Mars, lasers, rail guns, freeze-dried instant meals, computers, etc. were all science fiction at one point. Pretty sure any of us would have been hanged for trying to explain the Earth orbiting the Sun to any ruling ecclesiarchs prior to the 17th century.

Webster defines intelligence as the ability to learn or understand or to deal with new or trying situations, or the ability to apply knowledge to manipulate one's environment or to think abstractly as measured by objective criteria. I would like to see the continued development of AI, but it's not wrong per se to envision a world where AI has developed to the point where it learns by itself and decides it needs to control humans to protect us (or life) from ourselves, or sees us as unnecessary to its own end game. At best it helps humanity in a cooperative way; at worst it learns the human traits of survival of the fittest and self-interest. Otherwise, why have the Three Laws of Robotics if we felt safe using it?
 
Computers already run the world; so far, we control them. The concern is that AI might take control of them. ... Don't think it is far-fetched that AI could take that control away from us. The brightest minds in the technology world already know this better than anyone on BITOG ;)
I think there should be a few firewalls against a computer/AI takeover.

Voting should never become electronic; even now it's potentially too easy for results to be manipulated, without some AI super-hacker program even trying. Physical voting is a cheap expense for confidence in a functioning democracy.

Critical infrastructure controls should be self-contained: air-gapped computers with no Wi-Fi capability on site (I suspect most are now?).

Probably something for autonomous vehicles as well; a separate manual system that could be accessed in emergencies?

I'd like to hear from a physicist about what's theoretically possible for information transmission to a computer across an air gap. Stuff like bit flipping via radio waves, or power supply fluctuations? Air-gapped power supplies as well?
If AI really does start to learn exponentially, I think we should have a head start on what we think is physically possible for AI to access critical infrastructure, not just for AI deciding people are unnecessary, but more likely for some nation state directing AI to attack our infrastructure or seize control of military assets.
 