Superintelligence will trigger a 10x surge in scientific AI breakthroughs?

Jeff K….

You trying to argue this with MolaKule is like trying to play basketball against Jordan… in his prime… Good luck with that… 😆
Trust me; I have had conversations with some of Silicon Valley's best. I love a good discussion. I am not afraid to be wrong, as long as I am being objective and truthful. I realize I only know a little.
I stand by my thread and have tried to clarify where asked.

I am not arguing with @MolaKule; rather, I am trying to narrow definitions so that we can communicate. I don't think we are that far apart, but neither do I agree with everything he has posted.

I respectfully disagree: AI may well be the biggest game changer of our lifetimes. It is not hoopla; I see it everywhere. AI is an emerging science and a resulting technology.

Let me give you an example. My career was in semiconductor manufacturing equipment: the tools that process wafers into chips. Deposition, etch, metrology and more, at sub-5-nanometer nodes... Some of these machines now place cameras inside the process chambers to gather data that AI models use to predict potential failures.
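To make that concrete, here is a minimal sketch of the idea, with invented sensor numbers and thresholds. Production systems use far richer models (vision, multivariate statistics), but the principle of watching a process signal for drift against a known-good baseline is the same:

```python
# Minimal sketch (hypothetical numbers): flag drift in a chamber sensor
# stream before it becomes a failure, by scoring each reading against a
# known-good calibration baseline.
import numpy as np

rng = np.random.default_rng(0)

# Simulated sensor (e.g., chamber pressure): stable, then slowly drifting.
stable = rng.normal(100.0, 0.5, 800)
drift = 100.0 + np.linspace(0, 5, 200) + rng.normal(0, 0.5, 200)
readings = np.concatenate([stable, drift])

baseline = readings[:500]              # known-good calibration period
mu, sd = baseline.mean(), baseline.std()
THRESHOLD = 4.0                        # z-score alarm level

for t in range(500, len(readings)):
    z = (readings[t] - mu) / sd
    if abs(z) > THRESHOLD:
        print(f"ALERT at sample {t}: z = {z:+.1f}, schedule maintenance")
        break
```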

“Without observability, you’re flying blind” is an AI-era maxim. Today, AI copilots can handle repetitive tasks that have been simplified and automated, but AI is not yet up to reasoning, planning, and highly sophisticated work.
 
Well, other emerging technologies are Quantum Computing, Biometrics, Carbon Nanotube FETs, Quantum Cryptography, 3D Integrated Circuits, Regenerative Medicine, and a whole list of others.

I am more impressed with Regenerative Medicine and T-cell substitution than AI, but to each his own.
Before Boeing, I helped design and characterize CMOS-on-Sapphire ICs for MIL-STD-1553 processors and NSA satellites. My main job was to study the physics of defects and report findings to the processing group in order to improve the process. I also irradiated ICs with various isotopes to determine radiation damage thresholds.
 
I was responsible for keeping and generating the revenue numbers a $2B semiconductor equipment company reported to the SEC, under SAB 101 and then SAB 104.
The CEO's and CFO's liberty was at stake. You better believe I had to explain (worldwide) inputs, calculations, deferred and recognized revenue, and the short-term forecast under our guidance rules. I was an insider in the company; let's just say they took care of me. The reason it worked was that I challenged everything and everyone. If I didn't understand the flow of revenue from payments through deferred revenue by BU, by product, by region, how the heck could I tell a computer to do it? The best part? My numbers proved the highly paid finance analysts wrong, again and again. A similar company had to restate revenue for this very reason, and a restatement is one of the biggest black eyes a major company can suffer in financial reporting.

I also had to prove DB security and system failover. Lotta earthquakes in Silicon Valley. I built a SQL Server failover with replication in Tualatin, Oregon. Intel, TSMC, Samsung and the others held us to a 15-minute failover time for critical systems. SAP failed miserably and my system passed with flying colors. My budget was small and my team was small: me.
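Just for flavor, here is a minimal sketch of how you might time such a switchover. The hostnames, port, and polling logic are hypothetical stand-ins, not the actual system described above:

```python
# Minimal sketch (hypothetical endpoints): watch the primary, switch to the
# replica when the primary stops answering, and time the switchover against
# a 15-minute recovery-time objective.
import socket
import time

PRIMARY = ("db-primary.example.com", 1433)   # made-up SQL Server endpoints
REPLICA = ("db-replica.example.com", 1433)
RTO_SECONDS = 15 * 60

def reachable(endpoint, timeout=3):
    """True if a TCP connection to (host, port) succeeds."""
    try:
        with socket.create_connection(endpoint, timeout=timeout):
            return True
    except OSError:
        return False

outage_start = None
while True:
    if reachable(PRIMARY):
        outage_start = None                  # healthy, reset the clock
    elif outage_start is None:
        outage_start = time.monotonic()      # outage begins
    elif reachable(REPLICA):
        elapsed = time.monotonic() - outage_start
        verdict = "within" if elapsed <= RTO_SECONDS else "OVER"
        print(f"Failed over to replica in {elapsed:.0f}s ({verdict} the RTO)")
        break
    time.sleep(5)
```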
 
That's already happening:
https://www.forbes.com/sites/bernar...d-what-it-means-for-the-future-of-technology/

Model collapse, recently detailed in a Nature article by a team of researchers, is what happens when AI models are trained on data that includes content generated by earlier versions of themselves. Over time, this recursive process causes the models to drift further away from the original data distribution, losing the ability to accurately represent the world as it really is. Instead of improving, the AI starts to make mistakes that compound over generations, leading to outputs that are increasingly distorted and unreliable.
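To see how quickly that can bite, here is a minimal sketch where the "model" is just a fitted Gaussian, retrained each generation only on samples from the previous generation's fit. The numbers are invented, but the compounding sampling error is the same recursive effect the article describes:

```python
# Minimal sketch of model collapse: each generation refits a Gaussian using
# only the previous generation's synthetic output. Sampling error compounds,
# so the fitted distribution drifts away from the original data.
import numpy as np

rng = np.random.default_rng(7)
N = 50                                   # small training set per generation
data = rng.normal(0.0, 1.0, N)           # generation 0: real data

for gen in range(16):
    mu, sigma = data.mean(), data.std()
    if gen % 5 == 0:
        print(f"gen {gen:2d}: mean = {mu:+.2f}, std = {sigma:.2f}")
    data = rng.normal(mu, sigma, N)       # next generation: model output only
```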
Back to the topic.

That paper brings out some scary details about AI.

Let's use an Economics example in a hypothetical situation.

Let's say historical data was fed into an AI economics (investing) model, going all the way back to before the Wall Street crash of the late twenties, through the thirties, and up to today's trading activity.

The objective stated to the AI model is this: "Determine the economic sectors and companies in which to invest for maximum return."

Hopefully, no Wall Street investment firm uses anything like this, but do you really think the resulting suggestions could be used as a valid guideline for investing?
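One reason for skepticism, in a minimal sketch with made-up sector names and returns: a champion picked from noisy history is chosen with hindsight, and that tells you little about the future:

```python
# Minimal sketch (invented data): when history is noise, the best past
# performer is a hindsight artifact, not a guide to future returns.
import numpy as np

rng = np.random.default_rng(1)
sectors = ["rails", "steel", "autos", "tech"]

# Hypothetical yearly returns for ~100 years: pure noise by construction.
history = {s: rng.normal(0.05, 0.20, 100) for s in sectors}

# The stated objective: invest in whatever maximized historical return.
best = max(sectors, key=lambda s: np.prod(1 + history[s]))
print("model picks:", best)

# Out of sample, the pick is no better than any other noise sector.
future = {s: rng.normal(0.05, 0.20, 20) for s in sectors}
for s in sectors:
    total = np.prod(1 + future[s]) - 1
    tag = "  <= model's pick" if s == best else ""
    print(f"{s:6s} next-20y return: {total:+.0%}{tag}")
```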
 
I suggest that depends on the model and its use. Investing is an important part of economics, and many investment houses already use AI to improve their operations and investment decisions.

Does it work? Well, if it really did, everyone would be doing it, just like any strategy. I would say that today, AI is a tool in the investment arsenal. Predicting the future is tricky... People have been trying to game the market since day 1.

My guess is, there is a LOT of AI development going on behind the scenes in the investment world.
Here's Deloitte's take.
 
This is a good article. I gave up on perfect data a long time ago. Cleansing data, verifying data, and deciding which data is appropriate are a huge part of predictive analytics and of systems in general. It never ends.

And of course, the world is in a state of flux. Time changes everything. Data and algorithms are no exception.
In one model I built, the company was growing so fast that using data more than a year old was inappropriate. I had to apply smoothing logic to certain regions and products to make their data meaningful, or disregard it entirely because it swayed the numbers too much. Outliers...
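As a minimal sketch of that kind of cleanup, with invented booking numbers: clip the outliers first, then smooth so recent data dominates:

```python
# Minimal sketch (invented numbers): winsorize an outlier that would sway
# the totals, then exponentially smooth so recent periods dominate.
import numpy as np

bookings = np.array([10, 12, 11, 95, 13, 14, 16, 15, 18, 21], dtype=float)

# 1) Winsorize: pull extreme points back to the 5th/95th percentile band.
lo, hi = np.percentile(bookings, [5, 95])
cleaned = np.clip(bookings, lo, hi)

# 2) Exponential smoothing: alpha near 1 trusts recent data more.
ALPHA = 0.5
level = cleaned[0]
for x in cleaned[1:]:
    level = ALPHA * x + (1 - ALPHA) * level

print(f"raw mean {bookings.mean():.1f} -> smoothed level {level:.1f}")
```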

I used to tell the big shots, "It's right until it's wrong." That means even if it is 100% correct in what we have agreed on now, tomorrow is another story.
 
I slept at a Holiday Inn.
 
I think that hypothetical investing model sounds like a disaster waiting to happen.
 
I think the important thing is to keep AI as a tool for humans to use... but not let AI make humans the tool.
Everything needs to be checked and verified.
 
As @MolaKule teaches us, there is so much misinformation surrounding AI. Is it a Deity or a Devil? Sheesh...
It is patterned after the human mind, so at its best (and worst) it will make mistakes and get things wrong.

My take is, this science and the resulting technology are in their infancy. I look forward to the future.
 
"...AI can provide excellent analysis but human insight and oversight is essential to make the right investment decisions..."

For sure, AI and other tools can provide analysis but human oversight is crucial.

There are many other software tools that provide analysis and risk management:

https://www.capterra.com/investment...rk=o&msclkid=f5299e6f8b2b172e6780f9008521e068
 
"...AI can provide excellent analysis but human insight and oversight is essential to make the right investment decisions..."

For sure, AI and other tools can provide analysis but human oversight is crucial.

There are many other software tools that provide analysis and risk management:

https://www.capterra.com/investment...rk=o&msclkid=f5299e6f8b2b172e6780f9008521e068
I have been with Schwab Wealth Advisory for many years. They know far more and understand far more than I ever will; it's what they do. As you say, models are but one of their tools. I would never trust myself to use a model; I know I do not have the requisite background. Plus, I wouldn't trust it: if it worked, everyone would use it. Of course, humans can offer pretty bad advice as well. That's why I fired Fidelity.

In my work building forecast systems (actuals + forecast blend) and predictive analytics, you HAVE TO step back and look at the results. Human overrides, adjustments, etc. make a model sing. Kinda like predicting the weather: before you deliver your report, stick your head outside to see if it is raining.
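A minimal sketch of that override layer, with hypothetical regions and numbers:

```python
# Minimal sketch (made-up figures): blend closed-quarter actuals with the
# model's forecast for the rest of the year, after a human override layer.
actuals  = {"Americas": 120.0, "EMEA": 80.0, "APAC": 95.0}   # closed quarters
forecast = {"Americas": 130.0, "EMEA": 90.0, "APAC": 140.0}  # model output

# Human judgment: the APAC forecast looks inflated, so the analyst
# overrides it before the blend goes to management.
overrides = {"APAC": 115.0}

blended = {
    region: actuals[region] + overrides.get(region, forecast[region])
    for region in actuals
}
print(blended)
```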
 

I respect you a lot for trying to have this discussion.

I don't disagree with you on certain aspects of this topic. None other than Glenn Beck has been talking about this emerging tech for over 6 years now.

Still… you are nowhere near his level.

And without any question or doubt neither am I…
 
Love Sabine! She is spot on about programmers overly trusting their code, declaring it ready only to have it fall flat on its face. I've seen so much of this in my career; programmers have more than earned their bad reputations. Overconfidence is certainly not unique to any one platform. This is why I am a stickler about terms and definitions; you have to be. One would imagine trusting natural-language programming would require far more scrutiny.

I'm not so sure about her "more data" comment. I would be more concerned with appropriate, quality data. You have to objectively challenge your data. Facebook is a perfect example: there is a plethora of data constantly being added, which is hugely valuable. "If you aren't paying for the product, you are the product."
But there's a catch... On FB I can be 6'1", 180 lbs and bench 400 lbs. Or not... We know a certain amount of the data is bogus; you cannot control every source. In my own work developing the corporate forecast, using data from the various geographic regions, we knew some executives would claim big numbers while others would sandbag until hard orders came in. You have to qualify the data, in many cases by weighting the sources. Even bad data is valuable in that it shows us what is not true; that matters in decision making.
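A minimal sketch of that source-weighting idea, with invented regions and figures:

```python
# Minimal sketch (invented figures): qualify each region's submitted number
# by a reliability weight derived from how its past claims compared with
# actuals. Chronic over-promisers get discounted, sandbaggers marked up.
submissions = {"West": 200.0, "East": 150.0, "Central": 120.0}  # claimed, $M

# Historical ratio of actuals to claims; below 1.0 means over-promising.
reliability = {"West": 0.70, "East": 1.00, "Central": 1.15}

qualified = {r: submissions[r] * reliability[r] for r in submissions}
print(qualified)
print(f"raw total {sum(submissions.values()):.0f} -> "
      f"qualified total {sum(qualified.values()):.0f}")
```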

The good news is, advancements like 3D NAND will allow even more capacity for AI workloads such as vector databases. I remember Samsung, SK and others developing the architecture and the tungsten interconnects.
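For anyone curious what a vector database does at its core, here is a minimal sketch of the basic operation, nearest-neighbor search over embeddings (random vectors stand in for real embeddings):

```python
# Minimal sketch: a vector database stores embeddings and answers
# nearest-neighbor queries; here, by brute-force cosine similarity.
import numpy as np

rng = np.random.default_rng(3)
vectors = rng.normal(size=(1000, 128))                # stored "embeddings"
vectors /= np.linalg.norm(vectors, axis=1, keepdims=True)

query = rng.normal(size=128)
query /= np.linalg.norm(query)

scores = vectors @ query                              # cosine similarities
top5 = np.argsort(scores)[-5:][::-1]                  # five closest items
print("closest items:", top5, "scores:", np.round(scores[top5], 3))
```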

At the 4:50 mark of the first video, Sabine shares her thoughts on the future of AI. Time will tell. But progress certainly does not occur in a straight line; extraordinary is not easy.

Kinda funny: Sabine's comments sound like many IT projects. Big promises to get started, blown timelines, setbacks, more money, overemphasizing small progress, not to mention lies... finally delivering something barely resembling the original promises, or outright project failure.

But the awesome power of today's systems cannot be denied. NVIDIA has announced that nine new supercomputers worldwide are using NVIDIA Grace Hopper Superchips to speed scientific research and discovery. Combined, the systems deliver 200 exaflops, or 200 quintillion calculations per second, of energy-efficient AI processing power.

It is important to remember, as @MolaKule teaches us, humans will not and cannot be out of the loop.

Good conversation... Interesting times ahead.
 
Jensen Huang recently boasted that ChatGPT in its current form could read and produce a summary of Moby-Dick in under 60 seconds. He was positively gleeful. However, he's leaving something out, something important. Moby-Dick is a work of literary art; reading it is an aesthetic experience. To say that a summary of Moby-Dick is the equivalent of reading Moby-Dick is like saying that reading a recipe for dinner is the same as eating dinner. I think not, gentlemen.

AI looks like a new, more powerful way for the billionaire tech class to enslave humanity. I respectfully suggest that you check your technology narcosis. It may be that what you love is not so good for you.
 
I love tech. I look forward to the advancements in medicine in particular. It's a tool; use the tool correctly and you drive the nail. Use it incorrectly and you smash your thumb.

Heck, could we even be on this forum without technology?
It's coming whether we like it or not. By the way, technology has been very good to me. Silicon Valley is a magical place. Brutal, but magical just the same.
 
A tool of unprecedented and destabilizing power in the hands of a very few. It's magical. Like Harry Potter. God help democracy.
 
You never ask the right questions!! Your overlords know the answers, but you never ask. They love that.
 