Lion Air crash

I voted for Todd Insler, and I had not heard him say that.

I am, well, appalled by the way that sounds.

I think his points were:

1. We should've known this was an issue.
2. We should be trained in the proper response.

I think his language in trying to make that second point was a bit of hyperbole, which he probably regrets now.

I'm certain that lots of pilots agree with you and me on the way that sounds.
 
It's just so unfortunate all the way around, but at least it doesn't appear to be on the pilots, really. Yes, maybe in a perfect world they should've saved it. But never having been told that regime exists... Boeing had better batten down.
 
Originally Posted by AdmdeVilleneuve
Here is an article written by The Seattle Times' ... https://www.seattletimes.com/busine...t-boeings-737-max-flight-control-system/
Great article.

Sure, the crew could have simply turned off the automatic stabilizer pitch trim by flipping the trim cutout switches and trimming manually after that. Asking pilots to do this when things go wacky in the cockpit, in real time, has always been considered acceptable in the aviation world, and that's why Boeing's control algorithm designers allowed a single funky, bad AoA vane to activate nose-down trim.

It is tradition to throw it back into the pilots' lap... I don't like that tradition, given the capability a modern airplane has to monitor those twin AoA vanes using the simple, basic equation Alpha = Theta - Gamma, derived from the dual pitot-static tubes and triplex inertial sensors. {Alpha is angle of attack, Theta is pitch angle, and Gamma is flight path angle; sorry, we engineers love Greek letters.}
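As a toy illustration of that tie-breaker (every name, number, and tolerance below is invented; this is a sketch of the idea, not any certified monitor):

```python
def synthetic_aoa(theta_deg, gamma_deg):
    """Alpha = Theta - Gamma: an AoA estimate independent of the vanes,
    built from inertial pitch and pitot-static flight path angle."""
    return theta_deg - gamma_deg

def select_aoa(vane_left, vane_right, theta_deg, gamma_deg, tol_deg=2.0):
    """If the two vanes agree, average them. If they disagree, trust
    whichever one is closer to the synthetic estimate."""
    if abs(vane_left - vane_right) <= tol_deg:
        return 0.5 * (vane_left + vane_right)
    alpha_est = synthetic_aoa(theta_deg, gamma_deg)
    return min((vane_left, vane_right), key=lambda v: abs(v - alpha_est))

# Left vane stuck high, right vane healthy, airplane climbing gently:
print(select_aoa(21.0, 4.8, theta_deg=8.0, gamma_deg=3.0))  # -> 4.8
```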

In addition to that, a fourth level of redundancy for monitoring {monitoring here means finding the bad sensor of the two AoA outputs} can be put in the software: a reasonableness test similar to what a pilot would do, saying "Hey, I can't have a high AoA now, because I'm not pulling G's while flying steady at a good airspeed, so I know which AoA sensor is lying to me." You can call it Artificial Intelligence to sound hip, yet it is really just clever software that can easily do that. Remember alpha rate = pitch rate - (accelerometer G's / speed), and you can even drop the accelerometer part of the equation in a complementary or Kalman filter for trend monitoring. Lots of tricks up our digital-signal-processing sleeves in our toolbox, to mix metaphors in a wanton manner.
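For the curious, here's roughly what that trend monitor could look like (a minimal sketch under simplifying assumptions: near-level flight, small angles, invented gains and thresholds):

```python
import math

def alpha_filter_step(alpha_est, vane_alpha, pitch_rate, nz, tas, dt, tau=2.0):
    """One step of a complementary filter on AoA, using the relation above:
    alpha_rate = pitch_rate - gamma_rate, with gamma_rate ~ g*(nz - 1)/V.
    Angles in radians, rates in rad/s, TAS in m/s."""
    g = 9.81
    gamma_rate = g * (nz - 1.0) / tas        # flight-path-angle rate estimate
    alpha_rate = pitch_rate - gamma_rate     # inertial AoA rate (no vane needed)
    predicted = alpha_est + alpha_rate * dt  # high-frequency path: integrate rate
    k = dt / (tau + dt)                      # low-pass blend toward the vane
    residual = vane_alpha - predicted        # large residual => suspect that vane
    return predicted + k * residual, residual

# Steady flight (nz = 1, zero pitch rate): a vane suddenly reading 20 degrees
# while the inertial path predicts ~5 degrees produces a glaring residual.
alpha, res = alpha_filter_step(math.radians(5.0), math.radians(20.0),
                               pitch_rate=0.0, nz=1.0, tas=120.0, dt=0.02)
print(round(math.degrees(res), 1))  # -> ~15.0 degrees of disagreement
```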
 
Even though I think Boeing never should have let a single AoA vane muck up the nose trim like that, I see the "fault distribution" % blame in this crash as:

40% Lion Air maintenance crews/management: They failed to diagnose and repair a bad AoA vane and/or related bus or wiring on the flight prior to the crash. In fact, it kinda looks like they replaced the wrong AoA vane on the previous flight. Don't know that, of course. Or maybe they said they replaced it and really did not(???). We'll find out later.

39% Lion Air crew: Why didn't they just flip the trim cutout switches? That turns it all off, and they could have flown manually after that.

20% Boeing training folks: They never put the MCAS description in the training manual for the new 737 MAX type. They know some operators like Lion Air have a spotty training-and-safety record, so they needed to really push training on MCAS with the Asian carriers who have demonstrated bad airmanship over many years now. It's obvious to all of us, so it must be obvious to them, right? Chickens, they are.

1% Boeing Systems Engineers: They didn't do a proper Human Factors study to see how crews might react to the MCAS stall protection. Pilot workload was increased by the dangerous nose-down trim induced by MCAS. Tradition doesn't cut it as a reason, in my opinion.
 
This type of accident is really sad.
As with AF 447, if only the crews had been trained to recognize what was going on and how to deal with it, hundreds of people would be walking around today.
Nothing critical to the operation of any aircraft should be vulnerable to a single point failure and that is embedded in the FARs, yet that appears often to be the case.
Maybe we've come full circle, and we need more sim time for ATPs in recognizing and dealing with failures in the various implementations of envelope protection?
The crew may no longer be able to crash the airplane, but the software can do that for them as happened here and with AF 447.
 
I was reading the pilot report on one of the new Embraer bizjets today, and they've got four AOA/Air Data sensors on the nose. I don't know how that's mechanized, but clearly two are not enough (how does the FMS *know* which one is right?). The reason a triple-redundant IRS *is* tripled: the units vote. We once designed an inertial system with six IRUs in the same box that would accept, I think, three RLG failures and still be fail-op. The goal was an inertial system that, once installed, would never be removed from the aircraft for 30+ years (the sensors were fine; the gating reliability item turned out to be the power supplies).
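For illustration, the voting idea in miniature (invented numbers and tolerance; real IRS voting logic also has to handle rates, staleness, and re-admitting a recovered channel):

```python
import statistics

def mid_value_select(a, b, c):
    """The classic triplex voter: take the middle value, so one wild
    channel cannot drag the output anywhere."""
    return sorted([a, b, c])[1]

def vote(channels, tol=1.5):
    """Generalization for more channels (like the six-IRU box): drop any
    channel too far from the median, then average the survivors."""
    med = statistics.median(channels)
    survivors = [c for c in channels if abs(c - med) <= tol]
    return sum(survivors) / len(survivors)

print(mid_value_select(5.1, 5.0, 47.3))         # -> 5.1, failed channel outvoted
print(vote([5.1, 5.0, 47.3, 4.9, 5.2, -12.0]))  # -> ~5.05, despite two failures
```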

But back to Lion Air .... I think the crew thought they had "solved" the problem each time they were given back control from this secret safety system. We humans are optimists. This was an intermittent problem to them, caused by a system they didn't know was there. Maybe we'll find out later that they did try to disengage the trim, who knows. I'll tell you one thing though, a flight engineer with access to a whole breaker panel would have known what to do.
 
Originally Posted by DeepFriar
I was reading the pilot report on one of the new Embraer bizjets today and they've got four AOA/Air Data sensors on the nose.
I wouldn't put four on the nose, due to a long history of common-mode failures on closely clustered sensors out there, such as freezing vanes/tubes or debris clogging the pitot ports... There are other, better ways to create reliability and sensor fault detection, as I posted above. Another way to add redundancy to AoA readings is to use the B-2 bomber style of AoA sensing (pressure sensors only), which has been around for 30 years now, in case someone (from Boeing!) accuses me of being too new-fangled.
Originally Posted by DeepFriar
I don't know how that's mechanized but clearly two are not enough (how does the FMS *know* which one is right?).
In my post above, I outline some basics of using sensors from other systems to cross-check the AoA vanes independently and break a tie between the two vanes... You mentioned "FMS". Isn't it the FCC in most airplanes that handles flight control, or have some bizjets combined navigation, optimization, and critical flight control software into one "FMS" box these days?

Originally Posted by DeepFriar
But back to Lion Air .... a flight engineer with access to a whole breaker panel would have known what to do.
Or easily written software would cover it.

Bottom line: Boeing's traditional, conservative, old-fashioned view of Human Factors tells them it's OK to throw a little confusion into the cockpit at tired and stressed-out pilots when one, and only one, sensor fails. Let the pilots sort it out if the failure rate is 1 in 10,000 per hour, they say... That part of the industry, and intelligent modern thought, is changing, and they are old and slow to change. Until they are forced to change by aircraft diving into the ocean like this.
 
I guess I'm not sure where the flight control software resides. I was just grouping "flight control" into one "thing" to be able to discuss it.

The sensors on the Embraer were not all of the same type, but the report wasn't very specific about it. It wasn't just four blade sensors, for instance. The air data sensors may have been the dual-sensing sort doing double duty but, again, I can't really tell.

I still think there should be triple redundancy, for the reasons stated, even if there were no voting and they were just backups for backups to facilitate dispatch if you're in Lagos or some other garden spot. But voting is the right answer to be safe, IMHO.
 
How does something like this even get into a plane? A safety system that crashes the plane and kills everybody if a sensor malfunctions? Seems very arbitrary.

What if I made a smoke detector that burns your house down if it malfunctions? Or an alarm system that calls armed robbers instead of the police when it detects a problem?

Why does this system exist anyway? A pilot is entering a stall. The stick starts shaking. OK, so we don't trust the pilot to put the nose down. So, hypothetically, we have a pilot we don't trust with this duty. Then we install a system that we have to trust the pilot to disable in order to avoid a crash, but we don't trust him to put the nose down when the stall warning starts?

What am I missing here? What was wrong with the plane that doesn't try to crash itself?

I'm no pilot or aviation engineer, but this sounds like some sort of sick trap made up by some sociopath. How did nobody say, "This system just commits murder/suicide when it reads wrong? How about we don't do this?".
 
I have only one question: Is the final investigation report completed and reviewed?
 
Originally Posted by fdcg27
This type of accident is really sad.
As with AF 447, if only the crews had been trained to recognize what was going on and how to deal with it, hundreds of people would be walking around today.
Nothing critical to the operation of any aircraft should be vulnerable to a single point failure and that is embedded in the FARs, yet that appears often to be the case.
Maybe we've come full circle, and we need more sim time for ATPs in recognizing and dealing with failures in the various implementations of envelope protection?
The crew may no longer be able to crash the airplane, but the software can do that for them as happened here and with AF 447.


That's really the issue isn't it?

That "the crew may no longer be able to crash the airplane" means that we've given enough authority to the systems that now the systems can crash it.

In this case, the systems had enough authority to crash the airplane by overpowering the crew.

The Air France case was more subtle and complex, but the Airbus logic, starting with the A-320, has been "the flight control system has full authority to keep the pilots from being stupid".

And yet, airplanes with systems designed using that logic have still crashed when the system itself failed.
 
Originally Posted by DoubleWasp
How does something like this even get into a plane? A safety system that crashes the plane and kills everybody if a sensor malfunctions? Seems very arbitrary.

What if I made a smoke detector that burns your house down if it malfunctions? Or an alarm system that calls armed robbers instead of the police when it detects a problem?

Why does this system exist anyway? A pilot is entering a stall. The stick starts shaking. OK, so we don't trust the pilot to put the nose down. So, hypothetically, we have a pilot we don't trust with this duty. Then we install a system that we have to trust the pilot to disable in order to avoid a crash, but we don't trust him to put the nose down when the stall warning starts?

What am I missing here? What was wrong with the plane that doesn't try to crash itself?

I'm no pilot or aviation engineer, but this sounds like some sort of sick trap made up by some sociopath. How did nobody say, "This system just commits murder/suicide when it reads wrong? How about we don't do this?".


Most crashes are blamed on pilot error.

So, engineers are trying to save us from pilots.

This is the result.
 
Originally Posted by oil_film_movies
Another way to add redundancy to AoA readings is to use the B-2 bomber style of AoA sensing (pressure sensors only), which has been around for 30 years now in case someone (from Boeing!) accuses me of being too new-fangled.


You mean the system that, when subjected to tropical moisture, caused the most expensive airplane crash in history?

Not sure I can get behind that idea...

https://en.wikipedia.org/wiki/2008_Andersen_Air_Force_Base_B-2_accident
 
Originally Posted by Astro14


Most crashes are blamed on pilot error.

So, engineers are trying to save us from pilots.

This is the result.


Nice.

Are there currently any examples on record of these types of systems saving a plane where a pilot would have crashed it? Anything well-documented?

Would it actually be a given that any such incident would become a matter of record, or is there a way for the crew to "keep it to themselves"?

I'm just trying to get an idea if these systems have provided a net gain, or if they are basically hundreds of human lives "in the red" as far as being an asset or liability.
 
Originally Posted by DoubleWasp
Originally Posted by Astro14


Most crashes are blamed on pilot error.

So, engineers are trying to save us from pilots.

This is the result.


Nice.

Are there currently any examples on record of these types of systems saving a plane where a pilot would have crashed it? Anything well-documented?

Would it actually be a given that any such incident would become a matter of record, or is there a way for the crew to "keep it to themselves"?

I'm just trying to get an idea if these systems have provided a net gain, or if they are basically hundreds of human lives "in the red" as far as being an asset or liability.


Fascinating question. I don't have the answer.

When you look at some crashes, like Colgan Air 3407 ( https://en.wikipedia.org/wiki/Colgan_Air_Flight_3407 ) the warning systems were ignored by the pilots. Now, this captain had a history of poor performance in the simulator, and responded poorly in real life, too, resulting in the death of everyone on board. So, it's my contention that engineers look at this, and by corporate mandate are told to improve the system.

The result?

Well, if a pilot ignores the stall warning system, as the captain of Colgan Air did, then you have to build a system that will respond to that warning without pilot input.

So, you get MCAS, where a stall warning is acted upon by the airplane, which forces the nose down, in case those pilots don't do the right thing.

And that's what happened here with Lion Air: a newly designed, more forceful system to protect against pilot error. The system responded to a stall warning, the pilots tried to keep the nose up, and because the system was looking at a false warning, it over-rode the pilots and forced the nose down. The pilots were seeing the real picture. Now, they should have shut off the trim, but...

So, you ask an excellent question, and I can think of two ways that a system "save" would get back to agencies (or manufacturers): pilots report it, or a data-collection system captures it.

The first is, frankly, unlikely, for a couple of reasons. The pilots would have to know that the system saved them, and the situational awareness (SA) to know that was the case was probably lacking in the first place, or the system wouldn't have been needed. Further, many pilots operate in an environment where any error can cause a loss of their job and/or a fine. So, there really isn't an incentive to report, and in fact, fear of repercussions really suppresses reporting.

The second is something that we've had at United for a long time - data recording. And that is protected information at United, for a lot of good reasons. But the data recording to which we (ALPA/United) agreed can only be used for analysis, not repercussions. This is not the case around the world. In China, for example, data-logging is used to punish pilots. Exceed 30 degrees angle of bank by one degree? $200 fine comes out of your paycheck. 5 degrees? $500. You could lose your whole paycheck in one flight! No wonder some airline pilots are slaves to the autopilot...

The data from the data-logging is captured by the airline. So, perhaps they share, and perhaps they don't. We share. It's called FOQA, and we've identified threats and trends before they become crashes, and identifying those areas allows us to focus, in training and in flying, on managing those common threats. There are huge potential safety benefits to this forward-looking approach, which identifies threats and errors before they become crashes, compared with conventional accident analysis, which kicks through the wreckage...
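To make the exceedance-detection idea concrete, here is a minimal sketch (only the 30-degree bank limit comes from the post above; the function name, sample rate, and data format are all made up):

```python
def bank_exceedances(bank_deg, limit=30.0, hz=8):
    """Scan a logged bank-angle trace and return (start_time_s, peak_deg)
    for each continuous run of samples beyond the limit."""
    events, start, peak = [], None, 0.0
    for i, b in enumerate(bank_deg):
        if abs(b) > limit:
            if start is None:
                start, peak = i, abs(b)
            else:
                peak = max(peak, abs(b))
        elif start is not None:
            events.append((start / hz, peak))
            start = None
    if start is not None:                  # exceedance ran to end of trace
        events.append((start / hz, peak))
    return events

trace = [10, 25, 31, 34, 33, 28, 15]       # degrees of bank, one brief event
print(bank_exceedances(trace))             # -> [(0.25, 34)]
```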

In fact, to incentivize reporting that the data logging would've missed, the FAA and UAL entered into an agreement under which an incident on which pilots file a report cannot be further used by the FAA or the airline for certificate (license) action*, so there is a growing volume of pilot reporting under this program (known as FSAP, and adopted by other airlines). But the data is protected. I've filed several FSAP reports, each identifying a safety concern that can be added to the database.

Want a really fascinating read into the intersection of pilot performance and aircraft systems engineering? Read about the A-320 crash at Toulouse**. An Airbus test pilot, who knew the flight control logic intimately, was conducting a demonstration of the fail-safe nature of the Airbus flight control logic with respect to stall during a sales flight. So complete was his faith in the system that he was puzzled as the airplane crashed into the trees, killing everyone on board.

He had not remembered that the system was disabled below 50 feet, or you wouldn't be able to land the airplane, and so, the demo didn't go as planned. In this case, the system was given full authority, the airplane was stall-proof...and the very experienced, fully aware test pilot managed to stall and crash anyway.


*Certificate action is administrative in nature, like a photo-radar ticket. Unlike photo radar, the pilot is presumed guilty of the infraction, and is typically fined $10,000 and out of work, because their license is suspended until they can prove their innocence.

** https://en.wikipedia.org/wiki/Airbus_Industrie_Flight_129 I would read the original report, if you have time.
 
Originally Posted by DoubleWasp
I'll read through the whole thing tonight.

It sounds like we are getting close to having that robot-wheel-autopilot from the Disney movie Wall-E.


I think that we are...

That, or close to the old joke about the pilot and the dog in the cockpit....
 