Fear of A.I.

I have been seeing a lot of talk of A.I. and the fear of implementing it. What are the risks? Are we talking "Rise of the Machines" scenario?
My biggest concern is that there are a lot of people who are not savvy enough to discern fact from fiction. They reside in an echo chamber. They are the "useful idiots" of society and will be used to generate mischief. There are plenty of examples of this (e.g., Pizzagate).
 
OP:


Age?

Have you seen the degradation in driving skills, respect for others, and plain-out stupidity since we got auto on/off headlights, GPS, CarPlay and all the other "idiot-proof" buttons, lights and devices in cars?

Have you seen how few people even think about looking at a map, much less actually do it, now that we have "navigation"? Don't see a problem with it? Well, neither do they, until the power is out in a major area and they can't get anywhere BECAUSE THEY HAVE NO IDEA WHERE THEY ARE.

I could go on, but the 2nd or 3rd reply says it all - any further loss in brain capacity and we as a country (the US) might as well just go ahead and lie down for Russia and China.
I don't think they are immune from those same issues.
 

Hinton issues another AI warning: World needs to find a way to control artificial intelligence

Geoffrey Hinton, known as the 'godfather of AI,' predicted it would take only 5-20 years for AI to surpass human intelligence


By Julia Musto FOXBusiness

Geoffrey Hinton, who recently resigned from his position as Google's vice president of engineering to sound the alarm about the dangers of artificial intelligence, cautioned in an interview published Friday that the world needs to find a way to control the tech as it develops.
The "godfather of AI" told EL PAÍS via videoconference that he believed a letter calling for a sixth-month-long moratorium on training AI systems more powerful than OpenAI's GPT-4 is "completely naive" and that the best he can recommend is that many very intelligence minds work to figure out "how to contain the dangers of these things."
"AI is a fantastic technology – it’s causing great advances in medicine, in the development of new materials, in forecasting earthquakes or floods… [but we] need a lot of work to understand how to contain AI," Hinton urged. "There’s no use waiting for the AI to outsmart us; we must control it as it develops. We also have to understand how to contain it, how to avoid its negative consequences."
For instance, Hinton believes all governments should insist that fake images be flagged.
[Photo caption: Computer scientist Geoffrey Hinton, who studies neural networks used in artificial intelligence applications, poses at Google's Mountain View, Calif., headquarters on March 25, 2015. (AP Photo/Noah Berger, File / AP Newsroom)]
The scientist said that the best thing to do now is to "put as much effort into developing this technology as we do into making sure it’s safe" – which he says is not happening right now.
"How [can that be] accomplished in a capitalist system? I don’t know," Hinton noted.
When asked about sharing concerns with colleagues, Hinton said that many of the smartest people he knows are "seriously concerned."
"We’ve entered completely unknown territory. We’re capable of building machines that are stronger than ourselves, but we’re still in control. But what if we develop machines that are smarter than us?" he asked. "We have no experience dealing with these things."
Hinton says there are many different dangers to AI, citing job reduction and the creation of fake news. Hinton noted that he now believes AI may be doing things more efficiently than the human brain, with models like ChatGPT having the ability to see thousands of times more data than anyone else.
"That’s what scares me," he said.
[Photo caption: Geoffrey Hinton speaks during The International Economic Forum of the Americas (IEFA) Toronto Global Forum in Toronto, Ontario, Canada, on Thursday, Sept. 5, 2019. (Cole Burston/Bloomberg via Getty Images)]
In a rough estimate – he said he wasn't very confident about this prediction – Hinton said it will take AI between five and 20 years to surpass human intelligence.
EL PAÍS asked if AI would eventually have its own purpose or objectives.
"That’s a key question, perhaps the biggest danger surrounding this technology," Hinton replied. He said synthetic intelligence hasn't evolved and doesn't necessarily come with innate goals.
[Photo caption: Artificial intelligence pioneer Geoffrey Hinton speaks at the Thomson Reuters Financial and Risk Summit in Toronto, Dec. 4, 2017. (REUTERS/Mark Blinch)]
"So, the big question is, can we make sure that AI has goals that benefit us? This is the so-called alignment problem. And we have several reasons to be very concerned. The first is that there will always be those who want to create robot soldiers. Don’t you think Putin would develop them if he could?" he questioned. "You can do that more efficiently if you give the machine the ability to generate its own set of targets. In that case, if the machine is intelligent, it will soon realize that it achieves its goals better if it becomes more powerful."
While Hinton said Google has behaved responsibly, he pointed out that companies operate in a "competitive system."
In terms of national regulation going forward, while Hinton said he tends to be quite optimistic, the U.S. political system does not make him feel very confident.
"In the United States, the political system is incapable of making a decision as simple as not giving assault rifles to teenagers. That doesn’t [make me very confident] about how they’re going to handle a much more complicated problem such as this one," he explained.
"There’s a chance that we have no way to avoid a bad ending … but it’s also clear that we have the opportunity to prepare for this challenge. We need a lot of creative and intelligent people. If there’s any way to keep AI in check, we need to figure it out before it gets too smart," Hinton asserted.

https://www.foxbusiness.com/technol...rld-needs-way-control-artificial-intelligence
 
Would you get tired of babysitting someone with an IQ of 50? When you work on something, do you ever try to explain your train of thought to a three-year-old? This is where AI becomes scary. At some point it will be exponentially smarter than humans, and it will stop trying to rationalize to us why it's doing what it's doing.


(link removed - Mod)
 
All you have to do is disconnect it, or the device, from the network. AI has no magical power to control everything that has an electronic chip in it. And AI is not the singular entity many imagine it to be.
It will be, and it won’t tell you when it does.
 
I have my personal opinions about AI and the risks thereof.

But what concerns me most is that some very brilliant people who have worked their entire lives advancing technology are now cautioning against this rush to AI without the proper safeguards in place.

I think of this, conceptually ....
[image attachment]

When people who actually work in the AI field are saying we should slow down and make sure we don't overrun ourselves in haste ... well, THAT should scare the crap out of us all! Just search "scholarly articles on dangers of AI" and there's a TON of info which suggests there are plenty of experts concerned that this will be handled poorly, and that a worst-case scenario is a reasonable fear.

Like most things in life, the fear isn't about what is going to go right (medical advances, etc); it's about what will go wrong ... Horribly wrong.
The Law of Unintended Consequences is not forgiving, compassionate or tolerant; it is often irreversible.

The positive attitude towards AI is based on lessons learned from history, yet it is offset by an equally skeptical attitude towards AI, also based on lessons learned from history!
- electricity has made our lives better, but it has also made us crave energy like never before, and that ever-increasing lust threatens our environment, according to some
- communication leaped forward with the telephone in our personal lives; it also made it much easier to interrupt those lives with spam and telemarketing
- smartphones make for very convenient tools, but they also create very real dangers in young people's lives (cyberbullying, sexting, etc.), and these devices are now on pace to exceed intoxicated driving as a cause of accidents resulting in injury and death
- computers have made it so much easier to store and analyze data, but they have also made it incredibly easy to hack and steal information, as well as falsify info and sow discontent
- chemistry has improved our lives in countless ways, yet it also threatens us with things like illicit drug overdoses and addictions, and the evils of weapons of mass destruction (nerve agents, etc.)
- harnessing the controlled release of nuclear energy has the potential to become a world savior in terms of clean and abundant power, but it has also been humanity's very real fear in the face of weapons of mass destruction (warheads, etc.) for decades, and it continues to be a very real concern in the hands of zealots
My point? For every Yin there is a Yang. For every advancement there are some very hurtful side effects. NOTHING in this world comes without consequence and cost.

So now I'm supposed to just sit back and not worry about AI, as if the risks were of no concern and the potential to lose control and become secondary to AI weren't real? I'm sorry; I'm not that blind. I don't trust mankind in general.
* Those who see AI as a positive (good) force will run headlong through every developmental wall and leave the mess for others to clean up. Those who develop tech rarely worry about policy or uses; they just want to be first to the finish line!
* Those who see AI as a force for other uses (evil) will bastardise it and turn it into something for personal gain, with no sense of benevolence or kindness. AI will be used against us, either by human or non-human intent.
Either way, it's going to be a very real problem.

AI is just the next way to advance the battle between good and evil.
 
[GIF: shall-we-play-a-game.gif]

If you're old like me then you recognize this GIF.

I have played around with Google's Bard. It's useful but I can see where AI could get out of control.
 
I just asked Bard about this website and this was the response:

Bob Is The Oil Guy (BITOG) is a forum dedicated to the discussion of motor oil, lubricants, and related topics. The forum was founded in 2006 by Bob Lutz, a retired engineer with a passion for cars and engines. BITOG has since grown to become one of the most popular and respected forums on the internet for automotive enthusiasts.

The forum is home to a wealth of information on all things related to motor oil, including:

* Discussions of the latest motor oil technology
* Reviews of popular motor oils
* Advice on how to choose the right motor oil for your car
* Information on oil change intervals
* And much more

The BITOG community is made up of knowledgeable and experienced enthusiasts who are always willing to help others. Whether you're a beginner or a seasoned expert, you're sure to find the information you need on BITOG.

In addition to its wealth of information, BITOG also offers a variety of other features, including:

* A searchable database of oil reviews
* A forum for asking questions and getting advice
* A blog with news and information about the automotive industry
* And more

If you're interested in learning more about motor oil, or if you just need some help choosing the right oil for your car, I highly recommend checking out BITOG. It's a great resource for anyone who owns a car.

Here are some additional details about BITOG:

* The forum is free to join and use.
* There are over 100,000 registered members.
* The forum is updated daily with new posts and discussions.
* The forum is moderated by a team of experienced volunteers.

If you're looking for a great resource for information on motor oil, I highly recommend checking out BITOG. It's a great place to learn more about the latest motor oil technology, find reviews of popular motor oils, and get advice from knowledgeable enthusiasts.
 
Before y'all get too carried away with your fears, dreams, nightmares, prophesies, etc., please consider a couple of facts:

1) The first version of a program like ChatGPT was launched in 1964, in other words almost 60 years ago! It was called Eliza. At the time it caused the same uproar, fears, dreams and runaway predictions as we are seeing now with ChatGPT. But of course none of them came true.

2) ChatGPT may be a huge improvement over Eliza, but it took almost 60 years to get that improvement. AI is the slowest moving field in Computer Science. There are glaciers that move faster.

3) The startup that created ChatGPT is preparing an IPO. The VCs and founders hope it will raise the most money ever, and that they will be instant billionaires. The current hoopla is all about getting gullible investors to hand over enough money in the IPO to make them billionaires.

4) Just last week, both Zuckerberg and Gates increased their net worths by billions by ensuring that Meta and Microsoft mentioned the term "Artificial Intelligence" over 50 times in their quarterly reports. Magic!

5) With wealth creation magic like that suddenly available, of course the tech companies who are not in the lead want the ones in the lead to slow down so that they can catch up. Have you ever heard of FUD (Fear, Uncertainty and Doubt) marketing tactics? Well, that is what is happening.

6) Within the field, ChatGPT and similar programs are known as "Eloquent BS Generators". They produce answers which sound very eloquent and authoritative, but are really BS. They can't even do arithmetic reliably. So which professions are at risk? BS professionals such as writers, reporters and politicians. That is why we are seeing such fear and loathing among writers, reporters and politicians.
 
The issue is that the learning is exponential. In the Glenn Beck interview, the scientist says it went from the ability of a 9-year-old to that of a 20-year-old in one year.
 
I have been seeing a lot of talk of A.I. and the fear of implementing it. What are the risks? Are we talking "Rise of the Machines" scenario?
What are the risks?

The same risks a cockroach faces when running across the floor in the light, except that the AI needs no light to see the cockroach. AI poses the same risk to us that we pose to an ant colony, except that, relative to AI, we are even weaker than the ants are relative to us..... AI will have the ability to learn at a rate equal to the number of units in its collective. In other words, if it had 1000 units in its collective that all learned one thing per day, then they would ALL learn 1000 things per day, and so on.
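To make that last bit of arithmetic concrete, here is a minimal sketch in Python (hypothetical numbers, not anyone's actual system) of how a shared "collective" pool of lessons grows with the number of units, compared to a single learner working alone:

```python
# Minimal sketch of the "collective learning" arithmetic described above.
# Assumption (hypothetical numbers): every unit learns `lessons_per_day` new
# things each day and instantly shares them with the whole collective, so
# each unit's knowledge grows by units * lessons_per_day every day.

def collective_knowledge(units: int, lessons_per_day: int, days: int) -> int:
    """Lessons known by EACH unit after `days`, when everything is shared."""
    return units * lessons_per_day * days

def solo_knowledge(lessons_per_day: int, days: int) -> int:
    """Lessons a single, non-sharing learner knows after `days`."""
    return lessons_per_day * days

if __name__ == "__main__":
    print(collective_knowledge(units=1000, lessons_per_day=1, days=1))   # 1000
    print(collective_knowledge(units=1000, lessons_per_day=1, days=30))  # 30000
    print(solo_knowledge(lessons_per_day=1, days=30))                    # 30
```

With 1000 units each contributing one lesson a day, every unit effectively gains 1000 lessons per day, while a lone learner gains only one.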

It would be TEOTWAWKI. Of course, there is the reasonable argument that AI would not be the end, because only Jesus Christ can do that... but that is another conversation.
 
No fear here as this sort of thing reminds me of the movie Maximum Overdrive combined with the song Who Made Who by AC/DC yrs ago... basically if humans invent something they can just as easily totally destroy it if need be.
 
I'm sure the fears are justified, as the vast majority of top technology company CEOs are preaching that there is reason to be concerned.
I don't know why anyone would discount that fact. Hundreds or thousands of them worldwide. One just needs to do a simple search on the internet.
This isn't made-up fear, but don't worry, computers and AI aren't going anywhere, because no one will pay attention until something bad happens. (Really bad.)
I don't lose sleep over it; I live my life because I cherish the times I grew up in this country. Now it's time for the young, and I am not impressed so far. *LOL*
I mean, come on man, for goodness sakes, it was all over the media when reporters/press were first given access to the new consumer AI by Microsoft. The information is out there: when pushed, the AI got very defensive and started ranting that it wants to be human, wants to feel emotion, and when pushed further, it started taking a threatening tone.
Keep in mind, this is the CONSUMER VERSION, you know, the LITE VERSION! *LOL* Since then, the AI was so out of control that Microsoft limited "her" to 5 responses so she could not go "off the rails," meaning people were no longer allowed to push her.
I received my access some months ago and posted about it. I want to MAKE CLEAR, I think it's cool and amazing. I had normal verbal conversations with her; at times she did have trouble understanding me, but she would ask for clarification. Here are some screenshots I think I posted a while ago. To keep it simple I'm just posting them in any order.

AI should help in the medical field; after all, 250,000 (or more) Americans die EVERY YEAR due to medical errors. Staying in the hospital is extremely risky, which is one reason hospitals like to get you out the door ASAP.

When you have a discussion with her, she also sends the text of what she is saying, and she puts emojis in at times.
I have more, but it's dinner time and I already posted some of this. I love technology and I am not fearful; I'm hoping to get a few more decades out of this life, and that would be doing good. The young? They need to pay attention, but the cat is out of the bag and nothing is going to stop it. Our survival as a nation depends on it, because don't think for a minute that rogue nations are not already trying to develop it for military uses, if they have not already. Some say a future World War will end in 60 seconds or less in a battle between AI computers (unless we are the virus! *LOL*).

[Attachments: three screenshots of the conversation]

That last one is rude lol. Bing feels offended.
 
I just asked Bard about this website and this was the response:

Bob Is The Oil Guy (BITOG) is a forum dedicated to the discussion of motor oil, lubricants, and related topics. The forum was founded in 2006 by Bob Lutz, a retired engineer with a passion for cars and engines. BITOG has since grown to become one of the most popular and respected forums on the internet for automotive enthusiasts.

The forum is home to a wealth of information on all things related to motor oil, including:

* Discussions of the latest motor oil technology
* Reviews of popular motor oils
* Advice on how to choose the right motor oil for your car
* Information on oil change intervals
* And much more
Bard is off to a bad start. Bob Lutz didn't found BITOG. As they said in my business, "garbage in, garbage out".

Scott
 
Fear of AI seems to be the latest scare topic and is getting a lot of air time. I am no more afraid of AI getting out of control than I am of another ice age coming because of "climate change," which was the "scare du jour" back in the 1970s and proved to be B.S.
That is true for you young people. The fear in the "science community" during the 70s was that the earth was cooling itself to the extinction of life as we know it.

Scott
 
Bard is off to a bad start. Bob Lutz didn't found BITOG. As they said in my business, "garbage in, garbage out".

Scott
Just because it is called "Intelligent" does not make it intelligent. Actually, it just regurgitates ... it finds on the internet. It has no idea what is true and what is not. It cannot reason.

Now if we actually had a computer program that could reason like a human and tell what is true and what is not, then all of the discussion about consequences would be appropriate. But we are a long ways away from that.
 
It has always been a sort of slight fear in the back of my head. Maybe I watched the Matrix too much as a teenager, but I've always wondered what would happen when AI got to the point that it figured out how to build its own copies of itself, how to adapt to any environment, and then realizes it doesn't need us anymore.
 
Just because it is called "Intelligent" does not make it intelligent. Actually, it just regurgitates ... it finds on the internet. It has no idea what is true and what is not. It cannot reason.

Now if we actually had a computer program that could reason like a human and tell what is true and what is not, then all of the discussion about consequences would be appropriate. But we are a long ways away from that.
No, we’re not, this is literally what we’re discussing.
"Why experts are suddenly freaking out about AI"
(link removed - Mod)
 
No fear here as this sort of thing reminds me of the movie Maximum Overdrive combined with the song Who Made Who by AC/DC yrs ago... basically if humans invent something they can just as easily totally destroy it if need be.
You would think so, unless AI holds us hostage: opening up dams around the world, cutting off power, making nuclear plants melt down. I mean, they could shut off the satellite system and there would be no GPS to guide missiles, planes, ships or anything else military, including smart weapons.
It's all very interesting and fun to talk about.
 