2nd Fram Ultra in a row with holes in the crimp

Status
Not open for further replies.
All I asked for was recent empirical test data. You linked to a 25-page thread from years ago. Buried within it is one Ascent graph with test data showing Royal Purple more efficient at certain sizes than the Boss around 2.5 yr ago. You could have posted it here, but you didn’t. Instead, you repeatedly ignored what I was actually saying/asking and posted irrelevant arguments as if I was arguing that the Boss was efficient, which I never once contended.
Empirical data seekers generally conduct their own research and due diligence to an extent, rather than relying on others.
 
All I asked for was recent empirical test data. You linked to a 25-page thread from years ago. Buried within it is one Ascent graph with test data showing Royal Purple more efficient at certain sizes than the Boss around 2.5 yr ago. You could have posted it here, but you didn’t. Instead, you repeatedly ignored what I was actually saying/asking and posted irrelevant arguments as if I was arguing that the Boss was efficient, which I never once contended.
If you want constant "up to date real world test data", then you're going to have to go looking for it yourself at this point. Or spend many thousands of dollars for Ascent to test all the current filters you are interested in.

And don't be so quick to discount the Ascent data from back then, because all of the filters in that test still claim the same efficiency they did at the time. Do you think a filter manufacturer is going to greatly increase the efficiency of its filters and not update its efficiency claims?
 
Why do you think no legitimate filter company reports empirical “real world test data”?

It’s a bad road being an “empiricist”. People ascribe all sorts of useless things to “real world” tests, nearly all of which are unwarranted.
Mostly if they believe test data that may be considered "empirical" because a test of some kind was done, but the test wasn't tied to any officially accepted test procedure, which includes specially calibrated equipment.

Doing tests to "rank" similar products might have some value for ordering them from "bad to good", but you can't really take the resulting values at face value: things like the actual viscosity used (without actually measuring it) or the flow rate through a filter (without proper instrumentation).
 
Soooo... If I snag anything Fram but an Endurance off of the shelf at my local Walmart, is there any reason to be concerned? I lost track of the main conversation. Something about Brazil-made bad but limited, maybe?
 
Soooo... If I snag anything Fram but an Endurance off of the shelf at my local Walmart, is there any reason to be concerned? I lost track of the main conversation. Something about Brazil-made bad but limited, maybe?
From the sounds of it, it seems some of the cartridge filters have the "press bonded" seam discussed. Not sure about spin-on filters at this point. I'd have to think that what wwillson saw was a manufacturing issue, not a design issue. If it were a design issue, I think we would have seen far more reports of a seam failure like the one wwillson is showing here.
 
I know that the YT tester is using a crude flow meter, which I believe is sold calibrated for water at room temperature, not calibrated for different oil viscosities at different temperatures.
FYI, I just looked up the flow meter they're using. The standard models are rated ±2% accuracy up to 110 cSt (though you wouldn't be able to eyeball the reading that accurately), and calibrated at 6.7 cSt and 0.873 sg. Correction factors are required for different densities, but not viscosity. It should be pretty accurate for the tests with the thinner fluid, at least.
 
FYI, I just looked up the flow meter they're using. The standard models are rated ±2% accuracy up to 110 cSt (though you wouldn't be able to eyeball the reading that accurately), and calibrated at 6.7 cSt and 0.873 sg. Correction factors are required for different densities, but not viscosity. It should be pretty accurate for the tests with the thinner fluid, at least.
I think the first test rig had the flow meter calibrated for water.
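Since density corrections came up: a meter that needs a density correction but no viscosity correction behaves like a variable-area (rotameter-style) meter. A minimal sketch of that correction, assuming the simplified heavy-float rotameter relationship and an illustrative oil specific gravity of 0.850 (the 0.873 calibration sg is from the post above; nothing else here is from the meter's datasheet):

```python
import math

def density_corrected_flow(indicated_lpm: float,
                           cal_sg: float = 0.873,
                           fluid_sg: float = 0.850) -> float:
    """Correct a variable-area flow meter reading for fluid density.

    Simplified rotameter correction (float density assumed much larger
    than fluid density): true flow = indicated * sqrt(cal_sg / fluid_sg).
    The 0.850 fluid sg is an assumed example value, not a measurement.
    """
    return indicated_lpm * math.sqrt(cal_sg / fluid_sg)

# A 10.0 L/min reading on oil slightly lighter than the calibration
# fluid corrects upward by a bit over 1%.
print(round(density_corrected_flow(10.0), 3))
```

With matched densities the correction factor is exactly 1, which is why the vendor only publishes density (not viscosity) correction factors for these models.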
 
Mostly if they believe test data that may be considered "empirical" because a test of some kind was done, but not tied to any actual official accepted test procedure, which includes special calibrated equipment.

Doing tests to "rank" similar products might have some value just for a ranking purpose of "bad to good", but can't really take the resulting values at face value, like the actual viscosity used (without actually measuring it) or the flow rate through a filter without proper instrumentation.
The real value of standardized tests is the ability to properly compare results within the published reproducibility of the test. ASTM or ISO has performed a proper statistical analysis of the results and gives reporting tolerances. That is always missing from ad hoc tests, but it is critical to proper reporting of results.
 
You don’t know all the variables until you test them.
That's why test standards such as ISO 4548 exist ... it was adopted internationally in 1999 and is used by the filter industry around the world. If it weren't any good at providing a controlled, repeatable test standard for comparing filter performance, it wouldn't have survived as long as it has.
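For readers unfamiliar with the standard being argued over: the efficiency figures quoted throughout this thread (e.g. "99% @ 20u") come down to comparing particle counts upstream and downstream of the filter. A minimal sketch of that relationship, assuming the general multi-pass formulation (an actual ISO 4548-12 run averages counts over timed intervals at several particle sizes on calibrated counters):

```python
def filtration_efficiency(upstream_count: float, downstream_count: float) -> float:
    """Efficiency at a given particle size, in percent, from particle
    counts taken upstream and downstream of the filter:
    (1 - downstream/upstream) * 100."""
    return (1.0 - downstream_count / upstream_count) * 100.0

# 10,000 particles upstream vs 100 downstream at a given size -> 99%.
print(filtration_efficiency(10_000, 100))  # 99.0
```

The controlled part of the standard is everything around this formula: calibrated counters, specified test dust, flow rate, and contaminant injection, which is what makes results comparable between labs.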
 
The point is that it is impossible to know all variables for any complex system, in general, until tests approach real world conditions.

Empirical testing with unknown, uncontrolled variables has merit because it tells you if your hypothetical model accounts for all relevant variables.

Case in point: the questionable testing results prompted more digging, which revealed contradictory reporting of efficiency results from the factory.

[Attached image: retail listing with an efficiency table for p/n PBL10241]

The manufacturer has discrepancies in its efficiency reporting and/or between SKUs and/or over time. The validity of ISO 4548 is irrelevant if the results are misreported. The above is for p/n PBL10241, used in the questioned empirical test, showing 99% at 20 micron. Could it be that this particular SKU has better efficiency than others?

The merit of such an empirical test was therefore in prompting further investigation to explain unexpected results. Your explanation was test error… certainly a plausible explanation, but not proven, and based on the assumption that all Boss filter SKUs have had the same efficiency since 2021.

Other potential explanations include:
  • A change in the product.
  • A misreported test result from the manufacturer.
  • Differences between SKUs within the same product line.
  • Differential decreases in efficiency between filters as the test progressed.
My opinion is that other recent empirical testing would be ideal, as it would reduce the effect of misreporting or product changes, which is why I was asking if it was available.
 
The above is for p/n PBL10241, used in the questioned empirical test, showing 99% at 20 micron. Could it be that this particular sku has better efficiency than others?
I highly doubt that Boss filter model is what that claim shows. You can email Purolator and ask for the Spec Sheet for that specific Boss filter like the Spec Sheets shown in the thread linked below, and prove it to yourself.

The Boss isn't as efficient as the PureONE, regardless of what filter number it is.

Thread '99% at 17 microns! Purolator ONE PL14006' https://bobistheoilguy.com/forums/threads/99-at-17-microns-purolator-one-pl14006.375905/
 
You’re making the claim, so that onus is on you. Prove your assumptions.
You're the one claiming that the filter they tested, and the clipped table supposedly showing the efficiency for that "unicorn" Boss, is most likely why it tested like it did. Prove that efficiency table is true; verify and back it up with the official Spec Sheet from Purolator/M+H. You brought up that efficiency table.

[Attached image: the clipped efficiency table in question]


Here's a recent Spec Sheet from Purolator/M+H for the huge Boss PBL30001, which shows the ISO 4548-12 efficiency as 99% @ >46u and 50% @ 22u. So critical thinking says that a "unicorn" Boss at 99% @ 20u would be a stretch.

[Attached image: Purolator/M+H Spec Sheet for Boss PBL30001]
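A side note on why the two spec-sheet numbers (99% @ >46u, 50% @ 22u) are self-consistent: efficiency percentages are just a restatement of the beta ratio, the upstream-to-downstream particle count ratio. A short sketch of the standard conversion:

```python
def beta_ratio(efficiency_pct: float) -> float:
    """Beta ratio at a given particle size: the ratio of upstream to
    downstream particle counts, i.e. 1 / (1 - efficiency)."""
    return 1.0 / (1.0 - efficiency_pct / 100.0)

# 99% efficiency means only 1 in 100 particles gets through (beta = 100);
# 50% efficiency means 1 in 2 gets through (beta = 2).
print(round(beta_ratio(99.0)))  # 100
print(round(beta_ratio(50.0)))  # 2
```

Seen this way, the gap between 50% @ 22u on the spec sheet and a claimed 99% @ 20u is not a small discrepancy; it is a roughly 50x difference in particles passed at essentially the same size.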
 
You're the one claiming that filter they tested and the clipped table supposedly claiming the efficiency for that "unicorn" Boss is most likely why it tested like it did.
I absolutely did not make that claim.

I stated their testing showed a thing. The manufacturer table was simply to demonstrate that there are discrepancies in reporting and to question your assumptions on your claim of testing invalidity on the basis of accurately reported values, no changes over time, and consistency between skus. That’s it.

Even the thread you linked confirms there are discrepancies.

I have claimed nothing this entire time.
 
I absolutely did not make that claim.

I stated their testing showed a thing. That’s it.

The above is for p/n PBL10241, used in the questioned empirical test, showing 99% at 20 micron. Could it be that this particular sku has better efficiency than others?
You seem to believe that the specific Boss they tested on YouTube could really be 99% @ 20u, and that's why it tested better ("a thing") than what Ascent showed for a Boss vs Royal Purple in an official ISO 4548-12 test. Yet not one source of information aligns with that table, not even the official Spec Sheet for their efficiency-reference Boss shown in post 142. That table claiming 99% @ 20u (which is most likely inaccurate) is the only thing out of line with everything else on the efficiency of the Boss line. Until the table you posted can be backed up with a Spec Sheet from Purolator/M+H on any Boss, I'm going to say that no Boss has an efficiency of 99% @ 20u.
 
It is one hypothetical way to explain the result. I do not claim it is the reason.

You however did make a claim about the reason.
I don't think that's the reason it tested as well in the YT test as other known high-efficiency filters (99% @ 20u), and I will maintain that no Boss is 99% @ 20u, based on all the sources I see, until someone can prove otherwise. And not by some test done on YouTube.
 
lol. It’s this attitude that leads to bridges collapsing, rocket boosters blowing up, nuclear power plants built on faults, 737 Max crashes, etc. Nope. Empiricism dominates whether you like it or not.

You don’t know all the variables until you test them. Tests performed in complex systems with multiple uncontrolled variables (as opposed to tightly controlled stochastic testing) still give results that can have meaning. Unexpected results either reveal unexpected truths or errors in testing methodology; both merit follow-up and contribute to knowledge. Heck, without this philosophy we’d have far fewer therapeutic medicines, as this is the basis for drug testing.
Yes, a popular sentiment from people who have never done actual testing. Sounds nice and reasonable, doesn't it? But it's not how it's done here with automotive filter efficiency testing. You're just muddying the water and trying to look as though you understand it when you do not. This post and this notion add nothing; they only obscure the comparison of actual facts.
 
Well, this has turned into an epic bickering match between two members. Posts that are off topic, or bickering, or personal have been deleted.

I think the point of this thread is that a particular construction method has resulted in three defective filters purchased by one person over the span of several months.

That is a problem. Anecdotal, perhaps, but three in a row is enough to be concerned about a trend.

It devolved into bickering over efficiency, which wasn’t the point, and went downhill from there.
 