Consumer Reports' 5 worst car brands.

After Consumer Reports was caught altering their long-established handling test procedure for the Isuzu Trooper and Suzuki Samurai, to achieve the bad result they really wanted to publish, I lost any respect for their 'test' results. They proved to be biased hacks with an agenda, with no valid opinions or results.
When I was first getting into cars, I remember reading a report from them on a Camaro Z28. They marked it down for having a loud exhaust that could be heard inside the car. I thought to myself "isn't that the point of a Camaro with a V8" and immediately tossed the magazine and swore they didn't know what they were talking about.
 
I've never looked at Consumer Reports before purchasing something. In general, people are horrible at self-reporting and compartmentalizing issues. How do you weed out the guy who thinks his favorite brand can do no wrong, or the Karen who just likes to complain for the sake of complaining?

Go look at a review of a product on Walmart or Amazon and you'll find people who give it one star because the shipping company put it on their neighbor's doorstep and the neighbor took it. What the heck does that have to do with the quality of the toaster you bought? And then you'll also find the five-star review guy who says he's been using XYZ product for 50 years and has their logo tattooed on his backside.

Neither of those folks should be taken seriously IMO.
I just took their annual survey a few minutes ago... first time in years (it's rather long).
Note that the survey is primarily used for three purposes (from my observation of the questions and being a subscriber):
1) General satisfaction with the product
2) Reliability experienced
3) How you use the product... for example, did you use certain features (or not), and did you buy an extended warranty

Not sure how many members fill out the survey, but they claim 6 million members. I would suspect it has to be in the hundreds of thousands.
Note that their reviews are performed by their staff, not by members. The membership survey is used separately from their reviews to give you a general idea of expected reliability and user satisfaction.

Like others, I find their reviews to sometimes be a mixed bag. I have been disappointed with the reliability of, or my satisfaction with, some of their highly recommended products, but most times I am satisfied, with my experience mirroring their review.

For most products there are dozens of manufacturers and models, and I do find their reviews helpful for narrowing my search down to a handful, based on the features they point out as being useful, their review of the product's performance, and the reliability users report.

Let's face it: take a car battery, for example. We can't test every brand and model, but CR tests most. They point out the good ones and the turds. Even highly respected battery manufacturers have poor-performing versions in some sizes... CR testing shows that. And they test several samples purchased from different locations around the USA. Really, no one else tests like they do, with large sample sizes... not even Project Farm. :)
 
That sounds nice.

But what they actually did, if you read about it, is that the vehicles performed just fine in the standard test they had used from the beginning. But they wanted to achieve a result. So they altered the standard testing procedure they had historically used, for those two specific vehicles, so they could achieve the 'result' they wanted to 'report'.

You can call it what you want, but that describes disingenuous, biased, hack journalism to me.

For the record, I've never owned an Isuzu Trooper or a Suzuki Samurai, nor did I have a vested interest in either brand.
Your post also sounds nice.
Only problem is that CR had no reason to single out these two vehicles for any special testing, so I can only surmise that they merely explored a problem that they had found.
They also had no vested interest in either brand and have no advertisers to encourage bias.
 
When I was first getting into cars, I remember reading a report from them on a Camaro Z28. They marked it down for having a loud exhaust that could be heard inside the car. I thought to myself "isn't that the point of a Camaro with a V8" and immediately tossed the magazine and swore they didn't know what they were talking about.
That must have been decades ago. That has changed... now they test performance cars occasionally, and they highly rated a Porsche a few years ago.

I just looked, and the 2025 Porsche Boxster, Taycan, and Chevrolet Corvette are all on their "recommended" list. So are the Toyota Supra, Mustang, and BMW Z4.
 
Your post also sounds nice.
Only problem is that CR had no reason to single out these two vehicles for any special testing, so I can only surmise that they merely explored a problem that they had found.
They also had no vested interest in either brand and have no advertisers to encourage bias.

They had no reason to single those two out, except they did. IMO, that exposed their own bias and fervent desire to create a result. Like I said, the vehicles passed their long-established tests just fine. That didn't jibe with what they clearly wanted to report, so they altered the tests for those two until they could achieve their goal.

I have some sports cars which are known for their handling. But I could crash those too if I tried really hard with extreme inputs.

It exposed CR for the biased hacks they are, and 30 years later some haven't forgotten.

Clearly I'm not going to convince you otherwise, and you won't convince me they had unbiased intentions. So to continue is a waste of time.
 
That must have been decades ago. That has changed... now they test performance cars occasionally, and they highly rated a Porsche a few years ago.

I just looked, and the 2025 Porsche Boxster, Taycan, and Chevrolet Corvette are all on their "recommended" list. So are the Toyota Supra, Mustang, and BMW Z4.
Years ago the automotive testers were extremely nerdy. Two examples:
1. When the Dodge Omni Charger 2.2 was tested, it was pointed out that the "loud" exhaust made some of their test drivers nervous.
2. In their test of a 1984 Mustang GT the writer questioned whether any car needed 175 horsepower.
 
Are you being serious or poking fun?
I can tell you that anyone who understands proper statistical analysis will tell you CR is anything but statistically rigorous.

Start with sample bias, which occurs when the survey participants are not representative of the entire population being studied. Here, the survey population is CR subscribers!
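To make the sample-bias point concrete, here's a hypothetical simulation (all numbers are invented for illustration): if more-satisfied owners are also more likely to answer the survey, the survey average drifts above the true population average, regardless of how many responses come in.

```python
import random

random.seed(0)

# Hypothetical population of 100,000 owners whose true satisfaction
# averages 3.0 on a 1-5 scale.
population = [random.gauss(3.0, 1.0) for _ in range(100_000)]

def responds(score):
    # Assumed response behavior: the happier the owner, the more likely
    # they are to fill out the survey.
    return random.random() < (0.1 + 0.1 * max(0.0, score - 3.0))

sample = [s for s in population if responds(s)]

true_mean = sum(population) / len(population)
sample_mean = sum(sample) / len(sample)
print(f"true mean:   {true_mean:.2f}")
print(f"survey mean: {sample_mean:.2f}")  # drifts above the true mean
```

Note that collecting more responses doesn't close the gap here; only changing *who* responds would.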

My pet peeve is question bias: the way questions are phrased can influence responses, potentially swaying answers toward a specific outcome or an overly general conclusion.

There is so much more... CR's methodology and their interpretation/reporting of the data are bunk.

Take it as a data point.
I've had uni level stats classes, and I suspect that many here have as well, so that dog doesn't really hunt.
The thing that I always point out in replying to the posts you make about the validity of CR's rankings is that they are the best tool we have.
Absent access to a large fleet database or a manufacturer's or third-party service plan provider's data, we have no better source.
Partial sight beats total blindness.
 
I've had uni level stats classes, and I suspect that many here have as well, so that dog doesn't really hunt.
The thing that I always point out in replying to the posts you make about the validity of CR's rankings is that they are the best tool we have.
Absent access to a large fleet database or a manufacturer's or third-party service plan provider's data, we have no better source.
Partial sight beats total blindness.
I said CR is a data point. Their surveys do not follow proper statistical procedures. The best part of CR's rankings, etc., is that they don't take advertising like most others do. I pointed out obvious flaws.

I developed predictive analytics models over the years for a multi-billion-dollar Silicon Valley company. The devil is in the details. Generalities can mean anything you want. Meaningful conclusions require rigorous deep dives into the data. Question everything and hone the model.

Here's how CR defines reliability. Do you agree with this methodology? Why or why not?
All good; I respect your opinion.
 
I said CR is a data point. Their surveys do not follow proper statistical procedures. The best part of CR's rankings, etc., is that they don't take advertising like most others do. I pointed out obvious flaws.

I developed predictive analytics models over the years for a multi-billion-dollar Silicon Valley company. The devil is in the details. Generalities can mean anything you want. Meaningful conclusions require rigorous deep dives into the data. Question everything and hone the model.

Here's how CR defines reliability. Do you agree with this methodology? Why or why not?
All good; I respect your opinion.
When CR is the best data you have, then you use it.
Yeah, it is a data point, but probably of more value than the anecdotes we exchange here or see elsewhere.
With the models you worked with, the devil is never in the details. It would have been in the assumptions that all models require.
 
With the models you worked with, the devil is never in the details. It would have been in the assumptions that all models require.
Wrong. You constantly challenge the model, including the data, objectively and hone it as you learn more. A statistical model is a living, evolving object. It is never done. And is never correct because everything is in a state of flux. You glean information and continue to challenge it.

I can give you a key example of "devil in the details": outliers.
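As a hypothetical illustration of that outlier point (the numbers are invented): a single extreme value can drag a mean far from the typical case, which is exactly the kind of detail a modeler has to decide how to handle.

```python
# Hypothetical annual repair costs for ten owners; one lemon is the outlier.
costs = [120, 90, 150, 110, 80, 130, 100, 140, 95, 4800]

mean = sum(costs) / len(costs)
# For an even count this takes the upper of the two middle values,
# which is adequate for illustration.
median = sorted(costs)[len(costs) // 2]

print(f"mean:   ${mean:.1f}")  # dominated by the single outlier
print(f"median: ${median}")    # closer to the typical owner's experience
```

Whether to trim, cap, or keep that outlier is a judgment call, and each choice changes the conclusion you report.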
 
Wrong. You constantly challenge the model, including the data, objectively and hone it as you learn more. A statistical model is a living, evolving object. It is never done. And is never correct because everything is in a state of flux. You glean information and continue to challenge it.
Any model relies upon assumptions.
If you worked with them you must know this.
 
Any model relies upon assumptions.
If you worked with them you must know this.
My career was in predictive analytics.
I developed models to be used by the C-Level staff to drive the company. You constantly challenge assumptions. If you don't you are talking about a static model, which is, at best, a starting point. It is wrong.

If you do not objectively challenge assumptions, you are inherently biased.

I have supported all my points. You?
 
My career was in predictive analytics.
I developed models to be used by the C-Level staff to drive the company. You constantly challenge assumptions. If you don't you are talking about a static model, which is, at best, a starting point. It is wrong.
IOW, all models require that assumptions are made.
You would typically show outcomes with varying assumptions.
Models are driven by assumed states, just as I wrote.
Not sure what we're arguing about here.
 
IOW, all models require that assumptions are made.
You would typically show outcomes with varying assumptions.
Models are driven by assumed states, just as I wrote.
Not sure what we're arguing about here.
I simply pointed out the fallacies in CR's reporting.
And I did not know we were arguing; we were discussing.

Regarding assumptions, you have to challenge them. See where the model leads you. And then make it better.
In my experience, models take years to develop and have to change to be of real value.
My only assumption was that everything is wrong.
 
Yes, it is not a random sampling for the survey group. But it is a very, very large group, and this tends to correct for the fact that it is made up only of CR subscribers, and only those CR members that voluntarily complete their surveys.

CR is pretty good about not including data where they don't feel the sample size is large enough to be statistically sound.
So if someone has been a CR subscriber for decades and completes the surveys religiously (like me), the survey results will be totally valid for my purchases. That's good.

I do have confidence in CR survey results. At least they're valid for "my population".

I have to admit they're not very good on toasters. Their top rated toasters are expensive and work beautifully when they're new but they crap out after a few years. Cheap toasters don't do a very good job but they go on forever. You pays your money and you takes your chances.
 