Brand Ranks particle count and efficiency "normalized" to ISO 4548-12

A while back, I made an estimate of what the Brand Ranks testing would look like in efficiency terms. At the time, there was one factor that was unresolved because I didn't have the data and couldn't get my head around how it would affect the calculation. That factor was the ratio of test dust to test fluid. Back then, I limited what I shared to the high efficiency filters because those would be the ones least impacted by calculation differences.

Since then, I've been able to source the missing information and take a deeper read of ISO 4548-12 to better normalize the Brand Ranks numbers. Below you will find adjusted particle counts and efficiency numbers for all the filters Brand Ranks tested.

Since these numbers have been adjusted, it is important to share what I have done and why:

1) Brand Ranks actually followed a lot of the ISO testing, and many of their deviations would not be material. The one fatal flaw in what they did was using only 2 L of fluid, whereas I estimate Andrew used 12.5 L. This means the particle count results that Brand Ranks presented are overstated by a factor of 6.25.

2) Andrew added test dust at a rate of 2.1 g every 5 minutes, which is 0.42 g per minute. Brand Ranks added test dust at a rate of 0.55 g per minute. This difference has been accounted for in the efficiency calculation, i.e. the estimate of Brand Ranks' upstream particle count has been adjusted for this factor.

3) Regardless of calculations and normalization, there is no way to explain the Purolator Boss performance versus what we know from elsewhere. However, almost all the other results in the table below make sense, both in particle counts and efficiency:
  • The impact of an Extra Guard leaking but still reaching capacity makes much more sense now that the particle count has been reduced by the 6.25 factor. The numbers Brand Ranks provided were contradictory in terms of the implied leak rate versus the capacity being reached.
  • The WIX XP and Napa Platinum test similarly to where Andrew had the WIX XP. The fact that they were actually better by 10% suggests they were non-leakers, unlike Andrew's.
  • The Napa Gold had a sticky bypass valve and lost 10% efficiency versus the WIX, which is otherwise identical.
  • Toyota & Baldwin are shown to be rock catchers, as expected.
  • The Bosch high-efficiency filter comes in at 98.9%.
4) These efficiency numbers may still be off by several percentage points, since nothing substitutes for following the ISO procedure to the letter. But adjusting the particle counts for a single, valid difference in test methodology (2 L of fluid instead of 50% of the flow rate, with a 6 L minimum) brings them down to believable numbers. A rough sketch of the arithmetic follows below.
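For anyone who wants to check the arithmetic behind points 1 and 2, here is a rough sketch of the adjustment. The fluid volumes and dust rates are the figures discussed above; the particle counts fed into it are placeholders, and the helper names are mine, so treat it as an illustration of the method rather than the exact spreadsheet I used.

```python
# Rough sketch of the normalization described in points 1 and 2 above.
# Fluid volumes and dust rates are the figures from this post; the counts
# used at the bottom are placeholders for illustration only.

BR_FLUID_L = 2.0          # Brand Ranks test fluid volume (litres)
ISO_FLUID_L = 12.5        # estimated fluid volume in Andrew's ISO-style test (litres)
BR_DUST_G_MIN = 0.55      # Brand Ranks dust feed rate (g/min)
ISO_DUST_G_MIN = 2.1 / 5  # Andrew's dust feed rate, 2.1 g every 5 minutes (g/min)

# Point 1: the same dust in 6.25x less fluid overstates counts per mL by 6.25x.
DILUTION_FACTOR = ISO_FLUID_L / BR_FLUID_L  # = 6.25

def normalized_count(raw_br_count: float) -> float:
    """Scale a raw Brand Ranks particle count down to the ISO-style fluid volume."""
    return raw_br_count / DILUTION_FACTOR

# Point 2: estimate Brand Ranks' upstream count from an ISO-style upstream count,
# scaled for their faster dust addition (~31% more dust per minute).
def estimated_upstream(iso_upstream_count: float) -> float:
    return iso_upstream_count * (BR_DUST_G_MIN / ISO_DUST_G_MIN)

def efficiency(upstream: float, downstream: float) -> float:
    return 1.0 - downstream / upstream

# Placeholder example: a raw downstream count of 1,500/mL against an assumed
# ISO-style upstream count of 2,000/mL.
down = normalized_count(1500.0)
up = estimated_upstream(2000.0)
print(f"normalized downstream: {down:.1f}/mL, efficiency: {efficiency(up, down):.1%}")
```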



Filter          | Capacity (g) | Particles 21-38u | Particles 38-70u | Particles 70u+ | Efficiency % 21-38u
Extra Guard     | 4.5  | 250.1 | 32.3 | 16.2 | 70.0%
Tough Guard     | 5.8  | 91.2  | 0.3  | 0.1  | 89.1%
Fram Ultra      | 6.6  | 5.4   | 0.4  | 0.1  | 99.3%
Fram Endurance  | 4.4  | 2.9   | 0.3  | 0.0  | 99.7%
WIX XP          | 8.6  | 129.1 | 1.6  | 0.2  | 84.5%
Royal Purple    | 4.6  | 9.2   | 0.0  | 0.0  | 98.9%
Napa Platinum   | 8.8  | 156.6 | 0.4  | 0.0  | 81.2%
Purolator Boss  | 9.0  | 5.1   | 0.0  | 0.0  | 99.4%
Amsoil          | 4.4  | 1.1   | 0.0  | 0.0  | 99.9%
K&N             | 4.5  | 42.3  | 0.1  | 0.0  | 94.9%
Mann            | 6.6  | 24.6  | 0.1  | 0.0  | 97.1%
Mobil 1         | 5.6  | 40.5  | 3.6  | 0.2  | 95.1%
WIX             | 7.3  | 98.8  | 0.4  | 0.0  | 88.1%
Napa Gold       | 7.9  | 176.6 | 14.2 | 0.8  | 78.8%
STP XL          | 4.4  | 64.2  | 0.8  | 0.1  | 92.3%
Baldwin         | 13.2 | 413.3 | 14.8 | 4.4  | 50.4%
Motorcraft      | 7.0  | 72.1  | 4.8  | 2.4  | 91.3%
ACDelco         | 4.6  | 21.5  | 1.9  | 0.5  | 97.4%
Toyota          | 13.4 | 443.8 | 58.7 | 25.4 | 46.8%
Bosch Premium   | 4.4  | 9.0   | 0.8  | 0.2  | 98.9%
 
I don't recall, but did BR take a PC sample on the test fluid every time they "cleaned it up" with the high efficiency clean-up filter? If it wasn't cleaned up to the same level every time that could skew the end of test PC. Maybe that's the reason they didn't show the three lower ranges (4u, 6u and 14u) of ISO 4406 data.
 
Even if we knew the initial concentration of dust in the BR test, it's impossible to calculate the filtration efficiency with any accuracy, because we don't know how many passes the oil makes through the filter. The ISO 4548-12 standard is called a multipass test, but it measures the efficiency of the filter for a single pass of the oil. It does this by continuously measuring particle counts both upstream and downstream of the filter and comparing them.

The BR test only takes one particle count measurement downstream of the filter, after the oil has circulated through the filter an indeterminate number of times. With each subsequent pass through the filter, the particle count can be reduced by up to a factor of 100. We don't know how many passes are made, and it's not clear if BR even tries to control for this factor. The BOSS may have had a good result just because the test was allowed to run a bit longer.

With the ISO test method, it doesn't really matter how many passes the oil makes through the filter, what the exact concentration of dust is, or if the particle size distribution of the dust doesn't perfectly match ISO A2 dust, since it compares pre-filter and post-filter PCs. With the BR method, these things can introduce a huge amount of error.
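To put a number on how sensitive a single end-of-test count is to the number of passes, here is a toy recirculation model. The single-pass efficiencies and pass counts are made up, and it ignores ongoing dust injection and filter loading, so it only shows the shape of the problem, not anyone's actual rig:

```python
# Toy model: particles left after repeatedly recirculating the same fluid through
# a filter with a fixed single-pass capture fraction. Numbers are illustrative only.

def count_after_passes(initial_count: float, single_pass_eff: float, passes: int) -> float:
    return initial_count * (1.0 - single_pass_eff) ** passes

initial = 100_000  # arbitrary starting count
for eff in (0.50, 0.80, 0.99):
    for passes in (1, 5, 10):
        left = count_after_passes(initial, eff, passes)
        print(f"single-pass eff {eff:.0%}, {passes:2d} passes -> {left:10,.0f} left")

# A 50% filter left to run for 10 passes ends up with fewer particles than a 99%
# filter after one pass, which is why one final count can't recover single-pass efficiency.
```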
 
The ISO 4548-12 standard is called a multipass test, but it measures the efficiency of the filter for a single pass of the oil. It does this by continuously measuring particle counts both upstream and downstream of the filter and comparing them.
Exactly ... most people don't realize that. With that setup, you can see how a filter's efficiency decreases as the dP increases. Some of that might be caused by internal leaks, along with media debris sloughing. A known non-leaker and a known leaker would both have to be ISO 4548-12 tested to break down what causes the efficiency decrease. The OG Ultra in Ascent's test held its efficiency very well, and we know that OG Ultra had the fiber ring seal on the end cap.
 
I don't recall, but did BR take a PC sample on the test fluid every time they "cleaned it up" with the high efficiency clean-up filter? If it wasn't cleaned up to the same level every time that could skew the end of test PC. Maybe that's the reason they didn't show the three lower ranges (4u, 6u and 14u) of ISO 4406 data.

No I don't think they did a PC on the test fluid.

They said it ran for 30 minutes though.

The filter they used is a Lenz CP-752-10. 14 grams capacity. 550sqin media. 50% at 9 microns, 95% at 22 microns, 98.7% at 24 microns.

I initially thought it might be responsible for skewing results, but seeing that it ran for 30 minutes with no new contaminant being added, it should have cleaned things up unless they didn't change it often enough.
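As a rough sanity check on that, even a 50%-at-9-micron clean-up filter removes essentially everything above its cut-off given enough passes. The flow rate and sump volume below are my assumptions (borrowing the 25 L/min and 2 L figures discussed elsewhere in this thread, which Brand Ranks never confirmed for the clean-up loop), so it is only an order-of-magnitude illustration:

```python
# Order-of-magnitude check on a 30-minute clean-up run.
# Assumed flow and sump volume are NOT confirmed by Brand Ranks.
flow_lpm, sump_l, minutes = 25.0, 2.0, 30
passes = flow_lpm / sump_l * minutes        # ~375 passes through the clean-up filter
remaining = (1 - 0.50) ** passes            # 50% capture per pass at 9 microns
print(f"~{passes:.0f} passes, fraction remaining above 9 microns: {remaining:.1e}")
```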
 
Maybe that's the reason they didn't show the three lower ranges (4u, 6u and 14u) of ISO 4406 data.
I'm thinking that the particle counts for smaller particles might just be too high for a lab to measure. Some of their PCs come in at ISO 18 or 19 for 21u particles, which might equate to ISO 30+ for 4u particles. The highest standard ISO code is 24, and I'd assume the test equipment would start to be a limitation much above that.
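For reference, the ISO 4406 scale numbers roughly double in count per millilitre with each step, which is why a code 18 at 21u can plausibly blow well past code 24 at 4u. A quick approximation is sketched below; the real standard uses a fixed table of range limits, so this log2 shortcut is close but not exact:

```python
import math

# Approximate ISO 4406 scale number for a count in particles per mL.
# The standard defines fixed range limits (code 18 tops out around 2,500/mL,
# code 24 around 160,000/mL); since the limits roughly double per step,
# a log2 shortcut gets close.
def approx_iso4406_code(count_per_ml: float) -> int:
    return math.ceil(math.log2(count_per_ml / 2.5)) + 8 if count_per_ml > 0 else 0

print(approx_iso4406_code(2_500))       # ~18
print(approx_iso4406_code(5_000))       # ~19
print(approx_iso4406_code(160_000))     # ~24, the top of the standard table
print(approx_iso4406_code(10_000_000))  # ~30, well past what a lab normally reports
```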
 
Even if we knew the initial concentration of dust in the BR test, it's impossible to calculate the filtration efficiency with any accuracy, because we don't know how many passes the oil makes through the filter. The ISO 4548-12 standard is called a multipass test, but it measures the efficiency of the filter for a single pass of the oil. It does this by continuously measuring particle counts both upstream and downstream of the filter and comparing them.

The BR test only takes one particle count measurement downstream of the filter, after the oil has circulated through the filter an indeterminate number of times. With each subsequent pass through the filter, the particle count can be reduced by up to a factor of 100. We don't know how many passes are made, and it's not clear if BR even tries to control for this factor. The BOSS may have had a good result just because the test was allowed to run a bit longer.

With the ISO test method, it doesn't really matter how many passes the oil makes through the filter, what the exact concentration of dust is, or if the particle size distribution of the dust doesn't perfectly match ISO A2 dust, since it compares pre-filter and post-filter PCs. With the BR method, these things can introduce a huge amount of error.

Yes this is one of the major differences in how they tested.

I watched all their videos to learn what they did. They put 1.1 grams of the test dust into 500 ml of test fluid and emptied it at 250 ml a minute. They did 4 batches, so they ran 4.4 grams in 8 minutes.

What that means is that they did inject the contaminant continuously, and they should have stopped when the last 250 ml emptied.

The amount of dust they introduced in the same timeframe was approximately 31% more than Andrew used, in 6.25 times less fluid, but they added it continuously like Andrew would have done. Since they based so much on the ISO test, I would assume they ran the flow at 25 litres per minute.
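Spelling that arithmetic out (the figures are the ones quoted above; the 12.5 L for Andrew's rig is still my estimate):

```python
# Where the "approx 31% more dust" and "6.25 times less fluid" figures come from.
br_dust_g, br_minutes = 4.4, 8      # four 1.1 g batches of 500 mL, emptied at 250 mL/min
andrew_rate = 2.1 / 5               # Andrew's dust rate: 2.1 g every 5 minutes (g/min)

br_rate = br_dust_g / br_minutes    # 0.55 g/min
print(f"BR dust rate: {br_rate:.2f} g/min, {br_rate / andrew_rate - 1:.0%} more than Andrew")
print(f"fluid ratio: {12.5 / 2.0:.2f}x less fluid in the BR rig (12.5 L vs 2 L)")
```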

Yes, they then only have a single PC reading at the end, and you are right that if there was a time delay and the fluid was allowed to keep circulating, it would have gotten cleaner. So we have to assume that, given they got most of the procedures right, they knew to stop the test and take the downstream sample when the last 250 ml was emptied.

There is also one other consideration. Stopping at 4.4 g means they got the particle count during a relatively efficient part of the filter's life, so the results may be overstated. Previously I made an adjustment for that, but not in the above.

We cannot get ISO like accuracy on the Brand Ranks testing but we can now see some particle count numbers that are more sensible.
 
Thanks for the calculations!! IMO he skipped too many controls, but I'm certainly not an expert. The TG, PH, Boss, and Napa Gold seem to be the outliers like you mentioned: the Boss being better than ISO results; the TG, PH, and Gold with problems.
 
Whip City found light spots showing in the media under a microscope, quite a lot of them, in Toyota, Baldwin, and Denso, if I remember correctly. I know those particular companies are not stupid. I ordered a Pentius ufxl from Amazon and the sticker was peeling up. They put a sticker on a black can that looks very similar to other black cans from Asia. Is it really made in Korea? The sticker says so.
In summary, doesn’t anyone get oil filter burn out here?
 
We cannot get ISO like accuracy on the Brand Ranks testing but we can now see some particle count numbers that are more sensible.

Agreed. I think BR's testing is excellent for comparing filters since the test is more or less standardized. Sure, it may not be ISO-level testing, but it's also free. He put a lot of effort into updating his testing rig based on everyone's whinging, and he has even humorously toned down his presentation style (which was very annoying, but that's the YouTube game).

Thanks for compiling this data!
 
Whip City found light spots showing in the media under a microscope, quite a lot of them, in Toyota, Baldwin, and Denso.

I've been compiling screenshots of WCW's 100x magnification photos and comparing them to BR's results. The Baldwin filter media didn't seem that bad, almost similar to Fram EG. Denso FTF seems worse than Toyota. Not sure how representative the sampled images are of the entire filter media though.

(photo) Baldwin B1402

(photo) Denso

(photo) Toyota
 
I've been compiling screenshots of WCW's 100x magnification photos and comparing them to BR's results. The Baldwin filter media didn't seem that bad, almost similar to Fram EG. Denso FTF seems worse than Toyota. Not sure how representative the sampled images are of the entire filter media though.

Baldwin B1402


Denso


Toyota
There was another Baldwin with a lot of light spots, I believe, and a German filter.
Found it: model B37.
 
Exactly ... most people don't realize that. With that setup, you can see how a filter's efficiency decreases as the dP increases. Some of that might be caused by internal leaks, along with media debris sloughing. A known non-leaker and a known leaker would both have to be ISO 4548-12 tested to break down what causes the efficiency decrease. The OG Ultra in Ascent's test held its efficiency very well, and we know that OG Ultra had the fiber ring seal on the end cap.

Could this explain why the Fram Ultra with the leaky Ruffles bypass spring is still within 1% of the Endurance with a smooth spring?
 
Could this explain why the Fram Ultra with the leaky Ruffles bypass spring is still within 1% of the Endurance with a smooth spring?
I was only pointing out in post 4 that in an official ISO 4548-12 test, the particle counters are constantly measuring debris upstream and downstream of the filter, so you can see in real time during the test how the efficiency of the filter changes as it loads up with debris. It doesn't really have any connection to what BR saw, or how they ranked their efficiency testing, because they only used a final particle count of the test fluid after they terminated each test.

Did BR show the condition of each leaf spring stamping and check for leak gaps on all the filters they tested? I don't think they did ... they only focused on that one Fram PH, if I recall correctly, which had an obvious flaw that could have been a big leak path. We don't really have any good direct test data correlating leak paths with their effect on efficiency. As mentioned in another thread, an ISO test lab took a low-efficiency filter, tested it to verify it came in low, then opened it up to seal all possible leak paths except the media and re-tested it. The efficiency went up substantially, so the leak path(s) must have a pretty big impact on the ISO efficiency.
 
No I don't think they did a PC on the test fluid.

They said it ran for 30 minutes though.

The filter they used is a Lenz CP-752-10. 14 grams capacity. 550sqin media. 50% at 9 microns, 95% at 22 microns, 98.7% at 24 microns.
I thought the "clean-up" filter was better than 98.7% at 24u. That efficiency rating is worse than some of the filters they tested, lol. If it only filtered down to that level, then I would have thought they would have at least PC tested the cleaned up oil a couple of times to determine how clean it got from the clean-up procedure they were using.
 
Looks like the Endurance and all its "clones" (Amsoil, RP, and the Ultra to a great degree) still fare well in your analysis.
What are the odds of ALL in that group being non-leakers? 🙄
 