In some fields it's going to take an extremely long time for AI to become totally trustworthy, if it ever does.
Auto maintenance is just one example (and BTW, one problem for AI in auto maintenance will be that vehicle designs change over time). Medicine is another good example.
I've said it before: one of the best EEs I ever worked with once told me that one of the most useful tools in engineering is the signal-to-noise ratio. That was many years ago, and it was something very valuable to know. As I have said in other threads, over the years I have found that it applies to many other fields besides EE.

When one of my nephews was graduating from high school, I told him about the importance of a good signal-to-noise ratio. I also told him: now that you're going to be entering the real world, here's something you need to know. In school you are given a test and you have to provide answers even if you don't know them; you guess, because that has a better chance of getting you a good grade than not answering. In real life, if a wise person does not know the answer, they admit they don't know, go find out what the answer is, and come back with the right answer. That nephew has told me of occasions where that advice served him well.
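For anyone who hasn't run into the term: SNR is just signal power over noise power, usually quoted in decibels. A minimal sketch, with made-up example numbers (1 mW of signal over 10 uW of noise):

```
import math

def snr_db(signal_power, noise_power):
    """Signal-to-noise ratio in decibels: 10*log10(Ps/Pn)."""
    return 10 * math.log10(signal_power / noise_power)

# Made-up illustration: 1 mW of signal over 10 uW of noise.
print(snr_db(1e-3, 1e-5))  # 20.0 dB
```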
Writers of AI should be doing severe signal-to-noise testing, and admitting that the AI does not have a valid answer should be an option.
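In ML terms that idea is sometimes called selective prediction, or abstention: the model only answers when its confidence clears a threshold, and otherwise says "I don't know." A minimal sketch of the idea, assuming a model that returns a probability for each candidate answer (the labels and the 0.9 threshold are made-up for illustration):

```
def answer_or_abstain(probabilities, labels, threshold=0.9):
    """Return the top answer only if the model is confident enough;
    otherwise admit it doesn't know (abstain)."""
    best_prob = max(probabilities)
    if best_prob < threshold:
        return "I don't know"
    return labels[probabilities.index(best_prob)]

# Made-up examples: a confident distribution vs. an unsure one.
labels = ["replace alternator", "replace battery", "loose ground wire"]
print(answer_or_abstain([0.95, 0.03, 0.02], labels))  # replace alternator
print(answer_or_abstain([0.40, 0.35, 0.25], labels))  # I don't know
```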
---------------
AI gets a lot of its data from the internet, and anyone who can use the internet can put anything they want out there. So there's a decent amount of garbage out there to be included when AI gathers information. Even something produced as a joke, which any intelligent human in the field would recognize as invalid, AI might treat as valid data. GIGO (garbage in, garbage out) applies to data mining too.
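One partial mitigation is filtering scraped data before it ever reaches training. A toy sketch of the idea only; the heuristics here are made-up placeholders, and real pipelines use far more sophisticated trained quality classifiers:

```
def looks_like_garbage(record):
    """Toy quality checks on a scraped record (dict with 'text' and 'tags')."""
    text = record.get("text", "")
    if len(text) < 40:                      # too short to carry real information
        return True
    if "satire" in record.get("tags", []):  # flagged as a joke at the source
        return True
    letters = sum(c.isalpha() for c in text)
    return letters / max(len(text), 1) < 0.6  # mostly symbols/markup debris

scraped = [
    {"text": "lol", "tags": []},
    {"text": "The alternator produces AC, which is rectified to DC to charge the battery.", "tags": []},
    {"text": "Scientists confirm the moon is made of cheese.", "tags": ["satire"]},
]
clean = [r for r in scraped if not looks_like_garbage(r)]
print(len(clean))  # 1 -- only the substantive record survives
```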