Not exactly. Run the oil and a filter, any of the ones dubbed "rock catchers", for 1-5000 miles, then take a sample and get the data.
So you know the sump particulate load at mile "zero" as the starting point for the upcoming test. This is the "baseline" contamination load.
Fair enough.
Leaving the oil in, install N1, the highest-efficiency-rated oil filter. Run 100 miles (losing a small amount of the used oil due to the filter change). Sample and get data.
So now N1 has run 100 miles on that "baseline" load, and you sample and get PC data.
No N2 or N3; start another test from the beginning.
This is where it goes awry. Once you start over with a new sump load, the results are no longer comparable. You'll run another Xxxx miles, but there's no assurance you have the same contamination. You can have the sump tested again, but it's highly unlikely you'd have the exact same particulate loading in it. What happens when the PC comes back and the numbers are different? Your "baseline" isn't a baseline anymore.
Although I never thought of N2, etc. That starts to shrink the oil pool.
Sampling and testing would be as consistent as possible, but everything has errors.
Your method is the origin of the errors. But yes, everything has errors. The goal should be to eliminate as many as possible, and mitigate those you can't eliminate.
Using a rock catcher versus the consensus best could be one comparison.
As wcw says in his oil filter cut-opens, they're gonna duke it out and it could get ugly, folks.
So the question remains, why not?
There's no reason not to do so, if the fun of experimentation is the only goal. But there's a good reason not to trust the results.
***********************
Let’s start with some well-established facts. These are generalities which have been discussed here many times and accepted as true, backed up with study data from places like the SAE, ISO, and other credible entities:
- New filters are typically most efficient from the onset of their life, and as they mature through the OCI, they lose efficiency
- The rate of contamination for any one engine is assumed to be constant in your model, but must be proven so with constant sampling. The reality is that contamination rates are not always linear or steady.
- There is some manner of error in all measurement, typically referred to as Gauge R&R; this has to be at an acceptable level before the results of any measurements can be deemed trustworthy. You've said nothing about establishing any R&R (see the repeatability sketch just after this list).
- Statistical analysis dictates that a minimum of roughly 30 samples be taken for each variable measured before its normal variation can be characterized with confidence. This is a BIG problem for your testing accuracy.
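As a side note on the R&R point, here's what I mean boiled down to a bare-bones repeatability check. A real Gauge R&R study involves multiple operators, parts, and trials; this is only a sketch, and every number in it is invented for illustration:

```python
# Simplified repeatability check (NOT a full Gauge R&R study), hypothetical data:
# run the SAME well-mixed sump sample through the PC lab several times and see
# how much the counts wander before crediting any "improvement" to a filter.
import statistics

# Invented particle counts (e.g. >4 um per mL) from one split sample:
repeat_counts = [10450, 9980, 10820, 10210, 9760]

mean_count = statistics.mean(repeat_counts)
repeatability_sd = statistics.stdev(repeat_counts)

# Rough rule of thumb: a filter-induced change that isn't comfortably larger
# than the lab's own scatter can't be distinguished from measurement noise.
print(f"mean = {mean_count:.0f}, repeatability stdev = {repeatability_sd:.0f}")
print(f"changes smaller than ~{3 * repeatability_sd:.0f} counts are lost in the noise")
```

With those made-up numbers the lab scatter alone is several hundred counts wide, so a similarly sized "improvement" after installing N1 would tell you nothing.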
That first point, the fact that new filters are the most efficient they will typically ever be, is one reason why your methodology isn't going to be trustworthy. Even if you ran the test with N1 being another "rock catcher", it would still come out as more efficient simply because it's new. ANY new filter installed is going to improve the sump load. The question is, by how much? If you install N1 as a high-efficiency filter, you don't know how much of the improvement is due to the actual efficiency gain versus it just being "new". You have two characteristics in play and no ability to separate their effects from the whole.
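To put that confounding problem in the simplest terms, here's a toy illustration (all numbers invented); a single before/after PC delta cannot tell these two very different stories apart:

```python
# Hypothetical illustration: one observed drop in particle counts, two unknowns.
def observed_drop(newness_effect, efficiency_effect):
    """Total PC reduction credited to swapping in filter N1."""
    return newness_effect + efficiency_effect

case_a = observed_drop(newness_effect=35, efficiency_effect=5)   # almost all "new filter"
case_b = observed_drop(newness_effect=20, efficiency_effect=20)  # half genuine efficiency

print(case_a, case_b)  # both print 40; one measurement cannot separate the two effects
```

Which leads to the real underlying problem with your plan ...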
The real fly in your ointment is the sample size:
As I already mentioned, sample sizes of 1 are nothing but anecdotal mathematical debauchery. Any self-respecting analyst knows that small sample-sets are woefully lacking in reliable data. It takes a bare minimum of 30 samples for each variable to get a good understanding of the normal variation of the subsets. This has to do with reducing the error in the standard deviation calculation; small sample-sets produce wildly inaccurate stdev estimates. Hence, you cannot have good faith in your expectation of normality. Any first-semester statistician or SixSigma-trained analyst knows this.
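If you want to see that stdev problem for yourself, here's a quick simulation using nothing but the Python standard library; the "true" particle-count distribution is invented purely for illustration:

```python
# How scattered are sample-stdev estimates when you only take a few samples?
import random
import statistics

random.seed(42)
TRUE_MEAN, TRUE_SD = 10000, 1500   # hypothetical "true" sump PC distribution

def stdev_range(n, trials=1000):
    """Min/max of the sample stdev across many repeated experiments of size n."""
    estimates = [
        statistics.stdev(random.gauss(TRUE_MEAN, TRUE_SD) for _ in range(n))
        for _ in range(trials)
    ]
    return min(estimates), max(estimates)

for n in (3, 5, 30):
    lo, hi = stdev_range(n)
    print(f"n={n:2d}: sample stdev ranged from {lo:5.0f} to {hi:5.0f} (true value {TRUE_SD})")
```

With only 3 samples the stdev estimate bounces all over the place; at 30 it settles down near the true value, which is exactly why the 30-sample rule of thumb exists.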
And let’s not forget that this forum is littered with examples from all brands of filters which have manufacturing defects in them. Yet another reason you’d want to run 30 of each would be to statistically normalize the data from production variation. Tears in media; gaps in component mating surfaces; etc. These manufacturing abnormalities would all contribute to PC results being skewed if only 1 or 2 samples were taken, and so the only way to normalize the data is to use large sample-sets. What happens if your N1 ends up having a hole in the media? What about a leaf-spring with ripples that leaks past the inner seal? These are but a few reasons why small sample-sets are completely untrustworthy.
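Here's a toy simulation of that defect problem; the 5% defect rate and all count figures are invented, purely to show the mechanism:

```python
# Hypothetical sketch: a small rate of manufacturing defects poisons tiny sample-sets.
import random
import statistics

random.seed(1)

def one_filter_pc():
    """PC result behind one filter; assume 5% of units have a media tear (invented)."""
    base = random.gauss(10000, 1500)
    return base * 3 if random.random() < 0.05 else base

def average_over(n):
    return statistics.mean(one_filter_pc() for _ in range(n))

# n=1: any run that happens to land on a defective unit drives the whole conclusion.
print([round(average_over(1)) for _ in range(5)])
# n=30: the occasional defect gets averaged into the estimate instead of dominating it.
print([round(average_over(30)) for _ in range(5)])
```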
To be fair, you’re not the first person to wander into the land of useless testing. Many have gone before you, and many will come after you. There's nothing "wrong" with doing this for the fun of playing around in the garage. But you have not established any credibility for deciding "which filter filters best", not by a long shot.
Perhaps go read this, so that you understand the basis of expectations for "normality". Though it's about UOAs, the same concepts apply to any topic; PCs could just as easily be the subject of statistical processing and the establishment of averages, trends, standard deviation, etc.
https://bobistheoilguy.com/used-oil-analysis-how-to-decide-what-is-normal/
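For a flavor of what that article means by establishing "normal", here's a bare-bones sketch applied to hypothetical PC results instead of wear metals; the data and the +/- 3 sigma convention are just illustrative:

```python
# Establish a "normal" band from a history of particle counts (invented data),
# then judge whether a new result is actually abnormal or just normal variation.
import statistics

pc_history = [10100, 9850, 10400, 9700, 10900, 10250, 9950, 10600,
              10050, 9800, 10300, 10150]

mean = statistics.mean(pc_history)
sd = statistics.stdev(pc_history)
lower, upper = mean - 3 * sd, mean + 3 * sd

print(f"normal band: {lower:.0f} to {upper:.0f} (mean {mean:.0f}, stdev {sd:.0f})")

new_result = 13200
print("abnormal" if not (lower <= new_result <= upper) else "within normal variation")
```

Until you can bracket "normal" like that for your own engine and your own lab, a single PC number after a filter swap is just a number.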