What happened to ChatGPT?

If only it would do that instead of sometimes providing a false answer. Completely negates its strengths when it fabricates answers.
You're speaking about public AI. There are many private proprietary AI models being built. They'll be tested and validated.
 
We'll see. Private AI has fewer checks and balances.

And so far, that has been one of the biggest problems: keeping accuracy and corporate $$$ interests in check.
 
In some fields, it's going to take an extremely long time, if ever, for AI to be totally trustworthy.

Auto maintenance is just one example (and, by the way, a problem for AI auto maintenance will be that vehicle designs change over time). Medical is another good example.

I've said it before: one of the best EEs I ever worked with once told me that one of the most useful tools in engineering is the signal-to-noise ratio. That was many years ago, and it's something very valuable to know. As I have said in other threads, over the years I have found that it applies to many fields besides EE.

When one of my nephews was graduating high school, I told him about the importance of a good signal-to-noise ratio, and I also told him: now that you're going to be entering the real world, here's something you need to know. In school you are given a test and you have to provide answers even if you don't know them; you guess, because that has a better chance of getting you a good grade than not answering. In real life, if a wise person doesn't know the answer, they admit it, go find out what the answer is, and come back with the right answer. That nephew has told me of occasions where that advice served him well.

Writers of AI should be doing rigorous signal-to-noise testing, and admitting that the AI does not have a valid answer should be an option.
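The "admit it doesn't know" option above can be sketched in a few lines. This is a minimal, illustrative example, assuming a model exposes per-answer probabilities; the function name and threshold are made up for illustration, not taken from any real system.

```python
# Sketch: give a model an explicit "I don't know" option instead of
# forcing a guess. `probs` maps candidate answers to confidence scores.

def answer_or_abstain(probs, threshold=0.8):
    """Return the top answer only if its confidence clears the threshold;
    otherwise admit uncertainty rather than guessing."""
    label, p = max(probs.items(), key=lambda kv: kv[1])
    if p < threshold:
        return "I don't know"
    return label

print(answer_or_abstain({"cat": 0.95, "dog": 0.05}))  # confident: answers
print(answer_or_abstain({"cat": 0.55, "dog": 0.45}))  # uncertain: abstains
```

The point is exactly the nephew's lesson: below some confidence level, abstaining beats guessing.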

---------------

AI gets a lot of its data from the internet, and anyone who can use the internet can put anything they want out there. So there's a decent amount of garbage waiting to be included when AI gathers information. Even something produced as a joke, which any knowledgeable human in the relevant field would recognize as invalid data, AI might treat as valid. GIGO (garbage in, garbage out) applies to data mining too.
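To make the GIGO point concrete, here is a toy sketch of the kind of quality filter a data pipeline might apply to scraped text. The heuristics (minimum length, exact-duplicate removal) are deliberately naive and illustrative; real training pipelines are far more involved.

```python
# Sketch: naive quality filter for scraped text. Drops very short
# fragments and exact duplicates; everything else is kept as-is.

def filter_garbage(docs, min_words=5):
    seen = set()
    kept = []
    for d in docs:
        text = d.strip()
        if len(text.split()) < min_words:  # too short to be informative
            continue
        if text.lower() in seen:           # exact duplicate
            continue
        seen.add(text.lower())
        kept.append(text)
    return kept

docs = [
    "Motor oil should be changed periodically based on the manual.",
    "lol",
    "Motor oil should be changed periodically based on the manual.",
]
print(filter_garbage(docs))  # keeps only the one substantive sentence
```

Note what such a filter cannot do: a well-written joke sails straight through, which is the whole problem.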
 
There are lots of LLMs. There's a big future in aspirational LLMs that can motivate people and help them work more efficiently toward specific goals.
 
Luckily my 4 adult kids have careers that AI can’t replace.

Still need a live human person to do these jobs.
 
I think you are discounting that AI will be using verified, certified sources of information, not doing a search on the internet as we know it.
The benefits to the human race in medicine will be profound, with medical networks (hospitals, doctors, personnel) sharing information on procedures, treatments, equipment, and drugs around the entire globe, and any doctor able to access this information for his patient and their specific illness. That doctor will have a host of information for problem cases in a matter of minutes or even seconds.

Same with auto maintenance. AI is not going to comb the forums of BITOG. Say you have a BMW issue: it will search worldwide for solutions from other verified BMW sources with certified access.
 
Yep; bad data is nothing new. It is part of analysis and resulting algorithms. In fact, it can be useful in analytics. For example, you can use it to root out systematic problems and causes to improve data quality.
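The idea of using bad data to root out systematic problems can be sketched simply: flag records that sit far from the rest, then trace the flagged ones back to their source. The field values and threshold below are illustrative only.

```python
# Sketch: flag suspect records so systematic data-quality problems can
# be traced to their source. Uses a simple z-score style cutoff.
import statistics

def flag_outliers(values, z=1.5):
    """Return indices of values more than z standard deviations from the mean."""
    mean = statistics.mean(values)
    sd = statistics.stdev(values)
    return [i for i, v in enumerate(values) if abs(v - mean) > z * sd]

readings = [21.0, 21.5, 20.8, 21.2, 98.6, 21.1]  # one bogus entry
print(flag_outliers(readings))  # -> [4]
```

If the flagged records cluster around one sensor, one form, or one upstream feed, you've found a systematic cause rather than random noise.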
 
OK, you think it’s so hot? Ask it the best oil to run in a ‘16 F150. If it’s anything close to human it’ll start a 3 page forum thread, where that question won’t really be answered but it’ll discuss the weather, what it had a few nights ago for supper, and a couple of kids getting mauled by a black bear.
 
Will it start a thick vs. thin debate with itself? It would get stuck in an endless loop.
 
On a present-day, practical level, ChatGPT and similar tools help me at work by writing PowerShell scripts for complex tasks.

Sometimes they're basically ready, and other times they need more editing. Sometimes they'll suggest an old or deprecated set of commands and I have to point them in the right direction.

Either way, I’d never run a script without reading through it and understanding what it will do. Starting from something existing is much quicker than doing it from scratch and I learn just as well from reviewing it.
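That read-before-you-run habit can even be partly automated. Here is a crude pre-flight check that scans a generated script for commands worth a closer look before executing anything; the pattern list is illustrative, not exhaustive, and no string scan replaces actually reading the script.

```python
# Sketch: flag risky-looking commands in a generated script before
# running it. A hit means "read this part carefully", not "reject".
RISKY = ["Remove-Item", "Format-Volume", "rm -rf", "Invoke-Expression"]

def review_flags(script_text):
    """Return the risky patterns found in the script (case-insensitive)."""
    return [p for p in RISKY if p.lower() in script_text.lower()]

script = "Get-ChildItem C:\\Logs | Remove-Item -Recurse"
print(review_flags(script))  # -> ['Remove-Item']
```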
 
But you want other, peripheral information to contribute to the results. The beauty of AI is the ability to recognize patterns far beyond human capability. This is the basis for learning.
Yes. Ask it how many of a given letter (an "a", for example) appear in a word that has several of the same letter, and it's wrong on the first try… it guesses, and people blindly take the result as true on more complex data.
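The letter-counting question above has a trivially checkable answer, which is what makes it a good test: a deterministic one-liner settles what the model guesses at.

```python
# Deterministic letter counting, the ground truth an LLM's guess
# should be checked against.
word = "banana"
count = word.count("a")
print(count)  # -> 3
```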
 