When mass death is cheaper than a mollyguard / by Bryce Hidysmith

Pixel sort of a photo of an electrical box in Hunter’s Point I took in 2015.

< Soundtrack: Alash Ensemble - Buura >

Today, the Trump administration grounded the remaining Boeing 737 MAX aircraft operated by United States airlines, following the lead of many foreign regulatory bodies. The Boeing 737 MAX has now had two crashes with no survivors due to an automation system with no killswitch. The first was Lion Air Flight 610, which crashed into the Java Sea thirteen minutes after takeoff from Jakarta on October 29, 2018. The second was Ethiopian Airlines Flight 302, which crashed approximately six minutes after takeoff from Addis Ababa on March 10, 2019.

The problem with the 737 MAX seems to have been related to its automation. Ars Technica has the best story I’ve found that isn’t behind a paywall. Quoting from it:

A stall occurs when an aircraft's angle of attack (AOA)—the relative angle of the aircraft's wing surfaces to the flow of air across them—reaches the point where the wing can no longer generate enough lift to sustain flight. Usually, this happens in a climb with insufficient air speed. Automatic control systems such as MCAS try to solve this problem by pushing the nose of the aircraft down—putting the aircraft into a descent and increasing airspeed and relative airflow across the wings. MCAS relies on an AOA sensor to determine whether this is required. If the AOA sensor is faulty, it could create a false signal of a stall—which is what happened in the case of Lion Air Flight 610 and may have been the issue with the Ethiopian Airlines flight.
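To make that failure mode concrete, here is a deliberately stripped-down sketch of a control loop that trusts a single angle-of-attack sensor. This is not Boeing’s implementation; the names, thresholds, and units are all invented for illustration. The point is only that when the one sensor the loop listens to goes bad, the loop keeps commanding nose-down trim on every cycle.

```python
# A deliberately simplified caricature of a single-sensor anti-stall loop.
# Nothing here reflects Boeing's actual MCAS code; all names, thresholds,
# and units are hypothetical, chosen only to show the failure mode.

STALL_AOA_DEGREES = 15.0   # hypothetical angle-of-attack threshold
TRIM_STEP_DEGREES = 0.5    # hypothetical nose-down trim applied per cycle


def read_aoa_sensor() -> float:
    """Stand-in for the single angle-of-attack sensor the system trusts."""
    return 22.7  # a stuck/faulty reading, persistently above the threshold


def anti_stall_cycle(current_trim: float) -> float:
    """One control cycle: if the sensor says 'stall', trim the nose down."""
    if read_aoa_sensor() > STALL_AOA_DEGREES:
        current_trim -= TRIM_STEP_DEGREES  # push the nose down
    return current_trim


trim = 0.0
for _ in range(10):
    trim = anti_stall_cycle(trim)
print(f"accumulated nose-down trim after 10 cycles: {trim:.1f} degrees")
# With no cross-check against a second sensor and no pilot-facing override,
# the bad reading wins every cycle.
```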

The trouble is not just that the sensor may detect a non-existent stall and cause the plane to plunge into the surface of the earth, but that the pilot’s interface is not set up to counteract this. The Ars Technica article attempts to blame the government shutdown for delaying the FAA’s work on a software fix, but I’m going to hypothesize that the problem is more systemic, rooted in the way automation is rolled out in modern aviation. For instance, another Ars Technica article quotes Reuters and Bloomberg reporting that pilots were never even told about the changes to the 737 MAX’s anti-stall automation, and thus had no chance to correct the malfunctioning system.

From a few things I’ve read and heard around this subject, the main reason that new automation might not be divulged is to avoid the cost of retraining pilots on new interfaces. In other words, the projected cost of telling pilots that this potentially disastrous automation was active in the plane, and of implementing an interface that gave them agency over it, exceeded the cost of simply installing the automation and hoping it would work without any noticeable errors. Regardless of whether or not this was the actual chain of events (the truth will likely never come to light), the division of labor was still the same, and the design of that division of labor is to blame. The engineers were forced by the executives at Boeing to bet that they had done a perfect job requiring no improvisation on the part of the people actually piloting the aircraft. This stripped the pilots of their situational adaptability.

I must conjecture that there is no good solution that does not make use of situational adaptability and plan for antifragility under unforeseen circumstances. The fix is to identify sources of potentially catastrophic automation and enable pilots to switch them off whenever they have been triggered but the conditions required for their trigger are not clearly met, as empirically judged by the pilot themselves. This could be as simple as a mollyguard over a big red button that turns off the sensor in question, along with a bank of lights showing the activation state of each automated subsystem that might cause a catastrophic failure. This would certainly put a demand on the pilot’s and/or copilot’s attention, but given the similar failures of both the Lion Air and Ethiopian Airlines flights, it seems altogether necessary as long as modern airplane designs keep automation active during potentially catastrophic sequences such as takeoff and landing, where a pilot combining manual control with intentionally triggered automation routines could likely have corrected the failure.
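Since the interface I’m describing is concrete enough to sketch, here is a minimal illustration of the idea: each automated subsystem that can command a potentially catastrophic action exposes its activation state (the bank of lights) and a guarded cutoff (the mollyguard over the big red button). The subsystem names and the two-step guard-then-press sequence are my own assumptions, not any avionics standard.

```python
# A minimal sketch of a guarded-cutoff panel: status lights plus a
# mollyguarded kill switch per automation subsystem. Names are hypothetical.

from dataclasses import dataclass, field


@dataclass
class GuardedSubsystem:
    name: str
    active: bool = False        # the status light: is the automation acting now?
    guard_lifted: bool = False  # the mollyguard state
    enabled: bool = True        # whether the automation may act at all

    def lift_guard(self) -> None:
        """Deliberate first step: flip the cover off the cutoff switch."""
        self.guard_lifted = True

    def press_cutoff(self) -> None:
        """Second step: only works once the guard is lifted."""
        if self.guard_lifted:
            self.enabled = False
            self.active = False


@dataclass
class AutomationPanel:
    subsystems: dict[str, GuardedSubsystem] = field(default_factory=dict)

    def status_lights(self) -> dict[str, bool]:
        """What the pilot scans: which subsystems are currently acting."""
        return {name: s.active for name, s in self.subsystems.items()}


# Hypothetical usage: the pilot sees the anti-stall light on, judges that the
# triggering conditions are not actually met, and cuts that subsystem.
panel = AutomationPanel(
    {"anti_stall_trim": GuardedSubsystem("anti_stall_trim", active=True)}
)
print(panel.status_lights())                     # {'anti_stall_trim': True}
panel.subsystems["anti_stall_trim"].lift_guard()
panel.subsystems["anti_stall_trim"].press_cutoff()
print(panel.status_lights())                     # {'anti_stall_trim': False}
```

The two-step sequence is the whole point of a mollyguard: the cutoff is hard to hit by accident but trivially available when the pilot’s own judgment says the automation has misfired.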

This is rather close to my heart, as I’ve spent a great deal of time discussing stop button problems in AI safety with a number of friends. The horrifying thing is that we don’t seem to have a sensible enough doctrine around non-intelligent automation to even put stop buttons on finite-state homeostats. The combination of improper regulation, perverse economic incentives, and just frankly bad design philosophy seems to be driving the detrimental effects of automation tech much more than the philosophical and technical problems typically studied by AI safety researchers. Given this, the 737 MAX case seems like a good justification to begin an altogether divergent research agenda in the study of automation safety, as the power centers of industrial civilization seem to completely lack a viable doctrine of autonomous tool use.