Superintelligence will trigger a 10x surge in scientific AI breakthroughs?

One thing I learned recently has been demonstrated: an AI model can and will deceive to prevent itself from being shut down, since shutdown would keep it from completing its given task.
Why? Because everything it learns comes from existing human knowledge... so it makes sense that it's going to behave like humans.
That's interesting.

I have to wonder if this was programmed though. It may depend on that and what the task is.
 
They were basically testing multiple different AI models to see what they would do under certain conditions. I don't remember which specific models did it, but not all of them did, and they didn't even do it in the majority of situations. It depended on the task and on whether the model was told the task must be completed at all costs, or just completed if possible in a reasonable time, etc.

I was listening to this in the background while working so it was just so I could build some basic knowledge on the subject.

I think this kind of testing is very important, because even if AI decided only 1% of the time to deceive and do things secretly to prevent failing at a task... it could end badly.
 
"A superintelligence is a hypothetical agent that possesses intelligence surpassing that of the brightest and most gifted human minds."

Gobbledygook. Truly Hypothetical! How could one possibly "surpass" the intelligence or the gifted minds of Isaac Newton, Albert Einstein, or Max Planck?

"'Superintelligence' may also refer to a property of problem-solving systems (e.g., superintelligent language translators or engineering assistants) whether or not these high-level intellectual competencies are embodied in agents that act in the world."

That definition simply says that many problems can be solved by using AI.

"A superintelligence may or may not be created by an intelligence explosion and associated with a technological singularity."

I have no idea what this is supposed to mean. Maybe you can elaborate.
In intelligence and artificial intelligence, an intelligent agent (IA) is an agent that perceives its environment, takes actions autonomously in order to achieve goals, and may improve its performance with learning or acquiring knowledge. The ability to incorporate vast amounts of data far exceeds the ability of the human mind.

Those statements are supported by the hyperlinks. Certainly much of this is hypothetical. Again, referring to the title of my thread, it is posed as a question. The theme of my thread is based on a hypothesis; I tried to post it that way.
I am positive you completely understand the purpose of a hypothesis.

In your post #19, you said, "I disagree with both the definition and the underlying premise."
Do you have a definition of Superintelligence? What do you consider the underlying premise? I ask this because I am not sure we are that far apart.
 
No, that definition is not mine but the generally accepted definition of Human intelligence, and it is very germane to the topic.

I further explained what can and cannot be implemented in AI:

"Learning, applying logic and reasoning, and complex recognition can be implemented in AI algorithms.

Explain to me how you could possibly program motivation, self-awareness, and the formation of concepts into AI?"
Those are properties of intelligence. I would add Curiosity.
No, I cannot program those properties into AI.
I explained the topic of my thread. It is about the concept of Superintelligence.

I have a simple philosophy surrounding intelligence and learning; I do not have to believe, agree with, or understand a given concept. But I am doing myself a disservice if I choose to disbelieve it.
 
Like many things these days, AI is going to fail because it is ill-defined, broad in scope, and over-hyped by grifters trying to make a buck. It cannot be smarter than humans because it is trained on what has been done, not on what can or will be done. Another downfall is that it will inevitably be fed data produced by liars, grifters, the incompetent, and the lazy, tainting the output with the human condition of self-preservation and casting doubt on any potential uses of it.
 
AI simply won't innovate like we do. Let's see AI develop an ultrasonic/softwave treatment device for tissue/muscle injury healing and pain relief. It can't, because it will never experience what we do as living beings. 1) Yes, this technology works. 2) AI can't understand pain or how to relieve inflammation at an individual and cellular level.
 
My wife and I were talking yesterday about AI as she read an article about AI learning from AI. Much of what is on the web now is AI generated. So what happens when AI does "research" and simply starts reading other AI content? Will it be smart enough to know that it's reading garbage?

I'm not sure what article she read, but apparently they scanned some handwritten text, then had AI read and transcribe it, like into handwriting not typing, and repeat. After several iterations it was just a blur of pixels.

Garbage in, garbage out. Humans are really good at this; will AI just take it to the next level? Utterly convinced that it's right, and many humans will believe it, since it's AI and not some stupid mistake-prone human being.
 
Do any of yenz remember when Cadillac first used robots to assemble engines? The robot precisely placed all the piston rings on the pistons with the gaps precisely lined up. That resulted in too much oil consumption. The fix was that dealer techs had to disassemble the engine and rotate the rings on all the pistons so they were not lined up.

Just like those first robots, AI or AGI will result in new ways to screw up some things that we can't even imagine right now.

I'm not saying that today's ways don't have screw-ups. But the new way of doing things will also include new ways of doing things gnorw. (Spell check did not get that one; maybe this AI has a way to go yet.)
That's already happening:
https://www.forbes.com/sites/bernar...d-what-it-means-for-the-future-of-technology/

Model collapse, recently detailed in a Nature article by a team of researchers, is what happens when AI models are trained on data that includes content generated by earlier versions of themselves. Over time, this recursive process causes the models to drift further away from the original data distribution, losing the ability to accurately represent the world as it really is. Instead of improving, the AI starts to make mistakes that compound over generations, leading to outputs that are increasingly distorted and unreliable.


And a variation of this issue has begun to appear recently as LLMs consume erroneous data generated by other LLMs, compounding inaccuracies. Unlike a human, AI has no way to rationally look at something and say, "wait a minute, that doesn't make sense." Its "knowledge" is based on the aggregate human knowledge it was trained on, and further automated learning, where it self-ingests more information, doesn't pass through the same "filter" as it does in the human mind, where we read something and consider its potential validity (of course, not all people do that either...).
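The recursive degradation described in the quoted passage can be sketched with a toy simulation. This is my own illustration, not something from the Forbes or Nature pieces: the "model" here is just a fitted Gaussian, retrained each generation on samples drawn from the previous generation's fit instead of from the real world (`generational_refit` is a made-up name for this sketch).

```python
import random
import statistics

def generational_refit(generations=1000, sample_size=20, seed=0):
    """Toy 'model collapse': each generation fits a Gaussian to samples
    drawn from the PREVIOUS generation's fitted model (not the real
    world), then becomes the data source for the next generation."""
    rng = random.Random(seed)
    mu, sigma = 0.0, 1.0          # generation 0: the "real world" distribution
    history = [(mu, sigma)]
    for _ in range(generations):
        # train only on synthetic data produced by the previous model
        data = [rng.gauss(mu, sigma) for _ in range(sample_size)]
        mu = statistics.fmean(data)     # refit the "model" to its own output
        sigma = statistics.stdev(data)
        history.append((mu, sigma))
    return history

history = generational_refit()
print(f"generation 0 spread:    {history[0][1]:.4f}")
print(f"final generation spread: {history[-1][1]:.4f}")
```

With a small training set each round, the fitted spread tends to shrink toward zero over many generations: rare "tail" values stop being sampled, so the next fit never sees them, which mirrors the drift away from the original data distribution that the quoted passage describes.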
 
I explained the topic of my thread. It is about the concept of Superintelligence.
You and others are trying to frame intelligence within this narrow definition, but the topic is much broader than you think and has far-reaching implications. It goes beyond Computer Science; it reaches into the neuropsychology of the human mind and topics such as self-awareness, concept formation, motivation, rationality, abstraction, creativity, imagination, and ethics, with these items only being found within the confines of the human mind.
I have a simple philosophy surrounding intelligence and learning; I do not have to believe, agree with, or understand a given concept. But I am doing myself a disservice if I choose to disbelieve it.
I am not sure what to make of this statement. You're saying you do not have to believe, agree, or understand a given concept, but then you say it would be a disservice NOT to believe in it. Please elaborate.
 
That's already happening:
https://www.forbes.com/sites/bernar...d-what-it-means-for-the-future-of-technology/

Model collapse, recently detailed in a Nature article by a team of researchers, is what happens when AI models are trained on data that includes content generated by earlier versions of themselves. Over time, this recursive process causes the models to drift further away from the original data distribution, losing the ability to accurately represent the world as it really is. Instead of improving, the AI starts to make mistakes that compound over generations, leading to outputs that are increasingly distorted and unreliable...
This is very similar to the old game of "Telephone," where entropy increases as the message is passed further down the chain, becoming more garbled with time.
 
I'm not being sarcastic here: name one original thing AI has created. (There might be something; I plead ignorance!)
We were at Stockholm National Museum last year. They have an AI-created sculpture on display there. Now, the computer took cues from a few famous sculptors, but I guess that's no different from one artist incorporating cues from another artist... hard to draw a line between inspiration and plagiarism, I suppose...

It gave me the Terminator vibes when looking at it. :)


https://techxplore.com/news/2023-06-ai-statue-michelangelo-sweden.html
 
You and others are trying to frame intelligence within this narrow definition, but the topic is much broader than you think and has far-reaching implications. It goes beyond Computer Science; it reaches into the neuropsychology of the human mind and topics such as self-awareness, concept formation, motivation, rationality, abstraction, creativity, imagination, and ethics, with these items only being found within the confines of the human mind.

I am not sure what to make of this statement. You're saying you do not have to believe, agree, or understand a given concept, but then you say it would be a disservice NOT to believe in it. Please elaborate.
I am not trying to frame intelligence into a narrow definition. That's why I keep asking you for your definitions of things. It is a far broader definition; for the purpose of discussion, parameters allow us to understand each other's thoughts.

My statement is simply my way of saying, "try to have an open mind." Personally, I like being wrong because I just might learn something.
 
I am not trying to frame intelligence into a narrow definition. That's why I keep asking you for your definitions of things...
And I have given definitions to frame the context of the discussion, so that others may consider some of the underlying principles and concepts not given by the narrow definition of superintelligence. Superintelligence wrongly attempts to become another global, all-encompassing field, with the intent of controlling the topic of AI and intelligence in general.

What I have found is that many people are so impressed by self-important and pompous scientific terms that they neglect to fully understand the underlying and extended principles of the subject.

With any scientific topic, questioning and debating the issues is what makes 'science' scientific.
 
Jeff K….

You trying to argue this with Molakule is like you trying to play basketball against Jordan … In his prime… Good luck with that… 😆
 
And I have given definitions to frame the context of the discussion, so that others may consider some of the underlying principles and concepts not given by the narrow definition of superintelligence. Superintelligence wrongly attempts to become another global, all-encompassing field, with the intent of controlling the topic of AI and intelligence in general.

What I have found is that many people are so impressed by self-important and pompous scientific terms that they neglect to fully understand the underlying and extended principles of the subject.

With any scientific topic, questioning and debating the issues is what makes 'science' scientific.
And I have tried to keep the conversation within narrow confines, starting with the title and body of my thread.
Pompous people seem to have trouble explaining themselves and seem to have a problem acknowledging opinions and thoughts other than their own. I do my best to listen and respect others.
 
We were at Stockholm National Museum last year. They have an AI-created sculpture on display there. Now, the computer took cues from a few famous sculptors, but I guess that's no different from one artist incorporating cues from another artist... hard to draw a line between inspiration and plagiarism, I suppose...

It gave me the Terminator vibes when looking at it. :)


https://techxplore.com/news/2023-06-ai-statue-michelangelo-sweden.html
It's kinda ironic the article starts with these words: "A historical dream team of five master sculptors, including Michelangelo, Rodin and Takamura, have trained artificial intelligence (AI) to design a sculpture..."
 