That graph shows efficiency vs time/dirt loading. While it does look to be pretty linear, if you were to plot efficiency vs dP instead, it would look very different, since dP increases exponentially with time/dirt loading.
Maybe you missed this link because I added it later to my last post - linked again below. You would somehow have to look at how the efficiency decreased with the dP held constant at different dP levels to see that impact on the efficiency loss. The incoming debris concentration rate is constant in the ISO test, so it's possible that as the total amount of loading in the media increases, the shedding rate also increases pretty linearly, mainly because the capture rate for any given efficiency filter would also basically be linear. The dP difference between 6 PSI and 14 PSI may not be enough to have any noticeable effect on the shed rate, and it might take a much bigger dP difference to make the efficiency loss curve non-linear. Bottom line is that debris shedding does increase with the amount of loading, and so does the dP.
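To illustrate the point about the x-axis choice, here's a quick sketch (all numbers are made up, not Ascent's actual data) of an efficiency that drops linearly with time while dP rises exponentially over the same run. When you replot that same linear curve against dP instead of time, equal time steps turn into ever-wider dP steps, which is why the shape would look very different:

```python
import math

# Hypothetical illustration: efficiency falling linearly with time/loading,
# while delta-P grows exponentially over the same run (6 -> ~14 PSI here).
times = list(range(0, 11))                       # test time, arbitrary units
eff = [99.0 - 0.5 * t for t in times]            # linear efficiency loss, %
dP = [6.0 * math.exp(0.085 * t) for t in times]  # exponential dP rise, PSI

# Equal time steps map to increasingly wide dP steps, so the same linear
# efficiency-vs-time curve gets stretched at the high-dP end when
# plotted as efficiency-vs-dP.
dP_steps = [b - a for a, b in zip(dP, dP[1:])]
print(f"first dP step: {dP_steps[0]:.2f} PSI, last dP step: {dP_steps[-1]:.2f} PSI")
```

The exact growth constant is arbitrary; the takeaway is just that a curve that looks linear vs time won't look linear vs dP if dP climbs exponentially with loading.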
Using the raw PC data, one could plot the efficiency loss for the specific particle sizes measured, and have a curve similar to the referenced Purolator/M+H curve shown in the Ascent ISO testing thread.
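For anyone wanting to try that with their own numbers, the per-size efficiency calc from raw PC data is just single-pass removal at each measured size. This is a minimal sketch with invented placeholder counts (not actual test data):

```python
# Hypothetical sketch: per-size efficiency from raw upstream/downstream
# particle counts, ISO 4548-12 style. Counts below are made-up placeholders.
upstream   = {15: 120_000, 20: 60_000, 30: 15_000}  # counts/mL at >= size (um)
downstream = {15:   2_400, 20:    600, 30:      30}

def efficiency_pct(up, down):
    """Single-pass removal efficiency: (up - down) / up * 100."""
    return (up - down) / up * 100.0

for size in sorted(upstream):
    print(f">= {size} um: {efficiency_pct(upstream[size], downstream[size]):.1f}%")
```

Doing that at a few points through the test run would give you the efficiency-loss curve per particle size, like the Purolator/M+H curve referenced above.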
If an already loaded filter was set up to monitor the shedding particles on the downstream side as clean oil was flowing through it, I would suspect there would be some correlation between the dP across the media and the level of shedding. I'd think that more flow and dP through the media should break more debris loose from the media.
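If someone actually ran that experiment and logged dP alongside downstream counts, checking for the suspected correlation is straightforward. A minimal sketch, with entirely invented numbers just to show the calculation:

```python
# Hypothetical data: dP across a loaded filter on clean oil vs the
# downstream shed-particle counts logged at each dP. All values invented.
dP_psi = [4.0, 6.0, 8.0, 10.0, 12.0, 14.0]
shed_counts = [150, 210, 300, 420, 520, 700]  # downstream counts per mL

def pearson(xs, ys):
    """Pearson correlation coefficient between two equal-length series."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

r = pearson(dP_psi, shed_counts)
print(f"correlation between dP and shedding: r = {r:.3f}")
```

An r near 1 would back up the idea that higher dP breaks more debris loose; a flat r near 0 would say dP isn't the driver.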
Here are the two graphs that show that the Ultra hardly lost any efficiency right up to where Andrew stopped the test - the point during the test run where he decided the delta-p and capacity hit the limit he set. That's why some filters could go longer in the test than others while being hit with the same level of test dust loading. Some filters have more holding capacity, so it takes longer for them to reach the defined end of test parameters.
I posted the efficiency of the AC Delco dropping off with time, essentially right before a significant change in delta-P.
Here's another...
The way I see it, all filters are shedders, it's just that the sizes of particles that are prone to shedding will be larger for a lower efficiency filter. For particles similar in size to the filter's absolute micron rating it won't shed much, but around the nominal micron rating size, it will be prone to shedding a lot.
That basically just reflects what the ISO 4548-12 data shows: more efficient filters don't shed as much as lower efficiency filters, regardless of the exact mechanism causing the shedding. Those filters simply couldn't rate that high in ISO efficiency if they were big shedders. As seen in Ascent's ISO testing, all the more efficient filters had much less debris shedding as they loaded up. What's your definition of "nominal", as some think it means different things - did you mean absolute micron rating?
So if you're only looking at particles of a specific size, say 20 micron, a filter like the BOSS will look like a "shedder" at that size, since 20 micron is a small particle for that filter. A more efficient filter should behave similarly, but maybe for particles that are around 10 micron.
When you look at the top 3 filters in Ascent's test, all with pretty high efficiency, they all hold high efficiency down to 15u, so they aren't shedding much of anything down to 15u. The Wix XP and Boss lost efficiency fast, and they shed debris like crazy compared to the 3 efficient filters. It would have been nice to see the curves go down to 5u, and Andrew did say his particle counters could go down to 4u in that thread, but he elected to only measure down to 15u.