I keep seeing posts where people denigrate buying oil that meets a spec higher than the minimum standard in the owner's manual. Has anyone seen studies that support this position? To me it seems dubious that the higher standards for wear, deposits, corrosion and lubricant stability between, say, API SJ and D1G3 are effectively worthless in the real world. Any input on this?
I wouldn't say they are worthless at all. The "bare minimum" for many engines has proven wholly inadequate at the manufacturer's dictated intervals, and then this gets blamed on the OCI being too long. So the OEM is wrong on the OCI duration but right on the lube choice?
Design choices made during engine development play a huge role: sump size, oil temperature, VCT system, power density, etc. Toyota and GM (Saturn) both built engines with pistons poorly designed with respect to oil control and drainback, and both suffered massive oil consumption and ring sticking. Would a more robust oil with a greater focus on keeping the ring pack area clean have been beneficial in these applications? One could reasonably conclude yes. Same with the Toyota sludge monsters: the ones run on "overkill" lubricants didn't suffer the same issues.
I've recently brought up the Honda VCM V6 in a couple of similar discussions because of both @The Critic's and @Trav's experience with it. It is ridiculously hard on oil, and if you run the factory interval with the factory-spec lube, the one head is going to turn into varnish and sludge city. Trav had excellent experience with a Euro 0W-40 in these engines. Those oils are clearly designed for A40, MB 229.5 and other long-drain, extremely demanding specifications, and, not surprisingly, they held up.
What we are really looking at here is an oil's ability to cope with the conditions present inside a given application, and for what duration. How much testing did the OEM perform, and how did they go about determining the minimum requirement? Is it based on inference from the API test data, or do they actually run extensive in-house engine testing? With certain engine designs, clearly, the in-house testing was insufficient, and I suspect a heavy reliance on computer modelling to fast-track the engine to market has a lot to do with it. IIRC, that was the cause of the Toyota sludge situation; that development depended heavily on modelling.
There are two ways to get around an oil/OCI combo being inadequate: run a better oil, or run a shorter interval. If the actual wear performance of the lubricant is insufficient, the latter isn't going to help. However, if it's just the additive package, the detergents and dispersants being unable to deal with the contaminants and breakdown in that application, then short-changing the cheap lube should be sufficient. But then you have to determine where the cut-off point is, and how do you go about doing that? And even with a "better" oil, there's no guarantee that you'll be able to run much longer drains; that's also application specific.
And then of course there's the fact that the API/ILSAC approvals represent the minimum required performance for the product. There's nothing preventing a blender from significantly exceeding those requirements. This is why Supertech and Mobil 1 EP aren't going to provide identical performance in all applications that call for SP/GF-6 just because they have the same approvals. Unfortunately, due to the low bar set by these approvals, we don't really have a higher benchmark like we do with many of the Euro ones, where it's generally accepted that two oils with the same approval will perform the same in service (A40, for example).