dnewton3
Staff member
I would agree, Zee.
My point is that most of today's filters are fairly decently rated, and all similar. They are NOT the same; I'd admit that. But it's easy, today, to get a filter that is anywhere from 95-99% at 20um. And that difference between 95% and 99% isn't manifesting into great wear-data differences. The typical "normal" wear variation is FAR greater than what little disparity may show up contrasting one filter to the other. Now if you want to contrast a 99% filter and a 50% filter, I'd completely agree with you; there will be some disparity (although I cannot state how much; no study data I'm aware of). Referring back to my study data:
https://bobistheoilguy.com/used-oil-analysis-how-to-decide-what-is-normal/
see the micro-analysis example.
After a steady diet of Mobil 1 and a Pure-One filter, there was a shift to MC5K and a Puro white can. However the OCIs never varied; 5k miles. The wear data showed absolutely no statistical difference overall, despite the change in filter and lube. The input variables made no difference in output results. Hence, filtration (and lube base stock) were NOT the controlling factors of wear. That leaves the OCI duration and (as debated) TCB film. If the Pure-One filter had the ability to make a difference, it should have been able to do so with nearly 150k miles of exposure. But it did not. So the "better" filtration never manifested into "better" wear protection in real life. The OCI was flushing out contamination before enough particulate could accumulate to a level where it would make a difference. And the add-pack was keeping particulate separated and controlled. And the TCB was in play. But the change in filter NEVER made any difference.
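The "no statistical difference" claim above is a two-sample comparison at heart. A minimal sketch of that test, using made-up Fe ppm numbers purely for illustration (the real figures are in the linked article), could look like this:

```python
# Hypothetical illustration only: these iron ppm readings are invented stand-ins
# for UOA results from the two filter/oil regimes described in the post.
from statistics import mean, stdev
from math import sqrt

m1_pureone = [14, 16, 15, 13, 17, 15]   # hypothetical Fe ppm, Mobil 1 + Pure-One
mc5k_white = [15, 14, 16, 16, 13, 15]   # hypothetical Fe ppm, MC5K + Puro white can

def welch_t(a, b):
    """Welch's t statistic for two independent samples (unequal variances)."""
    va, vb = stdev(a) ** 2, stdev(b) ** 2
    return (mean(a) - mean(b)) / sqrt(va / len(a) + vb / len(b))

t = welch_t(m1_pureone, mc5k_white)
print(round(t, 2))  # |t| far below ~2: any difference is lost in normal variation
```

The point of the sketch: when the between-regime difference (numerator) is small relative to the normal run-to-run scatter (denominator), the statistic stays near zero and no effect can be claimed, which is exactly the situation the post describes.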
The GM filter study is a ruse; it's so far away from daily life that the information just does not translate into something useful. It proves that tighter filtration can make a difference ONLY under the specific lab conditions. Those include:
- contamination loading equivalent to one OCI of 570k miles
- PSID allowed to hit 20psi; a condition that no normal filter would ever see because the internal relief would be fully open WAY sooner than that
- Add-pack horribly compromised from that one ABSURDLY LONG oil change interval
- a starting baseline of 98% eff @ 40um; about 2x more porous than any typical filter off the shelf today
Simply put, GM had to grossly ignore "normal" maintenance routines and create an insane amount of sump contamination to make the disparity in filter performance great enough to be measurable.
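To put the contamination loading in perspective, a rough scale check, assuming the 5k-mile OCI used in the post's own data as the "normal" baseline:

```python
# Rough scale check. The 570k-mile figure comes from the post's summary of the
# GM study; the 5k-mile OCI is the interval used in the post's own UOA data.
gm_equivalent_miles = 570_000
typical_oci_miles = 5_000

scale_factor = gm_equivalent_miles / typical_oci_miles
print(scale_factor)  # 114.0 -> roughly 114 normal OCIs' worth of debris in one interval
```

In other words, the lab condition concentrated on the order of a hundred normal oil changes' worth of contamination into a single interval before the filter disparity became measurable.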
quoting GM:
"Used oil analysis from engines in the field will not typically show such a clear correlation since wear metals generated between oil changes will be at much lower concentrations."
I am not saying filtration isn't important; it most certainly is. But there is a law of diminishing returns that one cannot ignore. Once filtration is "good enough" (reaches a level that can sustain decent wear rates), then "more" filtration (ever tighter) does not pay any dividend in a normal OCI. We can see that in many UOAs here where someone is running a BP filter system, while others do not. And yet the wear data is essentially the "same as". We can also see that where some folks use a FU, and others a Wix, the data is always within "normal" variation. If the variable (here the topic is filtration efficiency) cannot produce repeatable results that distinguish a disparity in performance, then the variable is NOT a controlling factor. You cannot have causation without correlation; it is impossible. GM admitted there will be no correlation between filter wear reduction and normal OCIs.
This is the inherent problem of ALTs; many times they prove things that don't matter in the normal world. And the resulting danger is that the uninformed latch onto "facts" that have no significance in our daily lives.