Researchers emphasize there are very few circumstances in which you can do two things at once without cost (relative to doing each on its own). Yet some drivers sneak a look at their phone while on the road, and some students have the television playing while they complete an assignment.
Why? One possibility is that they don’t understand the cost of multi-tasking very well. A new study (Finley, Benjamin, and McCarley, 2014) investigated that possibility.
Subjects initially practiced a tracking task: a small target moved erratically on a computer screen, and subjects tried to keep a mouse cursor on top of it.
Interleaved with practice on the tracking task, subjects practiced a standard auditory N-back task: they heard a series of digits (one every 2.4 seconds) and were asked to say whether each digit matched the one spoken 2 digits earlier (or, in other versions of the task, 1 or 3 digits earlier).
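The matching rule behind the N-back task is simple to state in code. This is just an illustrative sketch of the task's logic (not the researchers' experimental software), with the function name and example digits made up for the example:

```python
def n_back_matches(digits, n=2):
    """For each digit, decide whether it matches the digit presented n steps earlier.

    The first n digits have nothing to compare against, so they yield None.
    """
    responses = []
    for i, d in enumerate(digits):
        if i < n:
            responses.append(None)           # no digit n back yet
        else:
            responses.append(d == digits[i - n])
    return responses

# In a 2-back task, the third digit (7) matches the first (7),
# and the fourth digit (3) matches the second (3):
print(n_back_matches([7, 3, 7, 3, 5], n=2))
# → [None, None, True, True, False]
```

The point of the sketch is that the subject must hold the last n digits in working memory and update them continuously, which is exactly what competes with the tracking task.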
After a total of 3 phases of practice for each task, subjects were told that they would try to do both tasks at the same time. They were told to prioritize the tracking task: just as a driver must keep the car in the lane, they should keep the cursor near the target, but they should also perform as well as they could on the N-back task.
Then subjects got feedback on their performance on the three phases of the tracking task (expressed as the percentage of time they had kept the cursor on the target) and were asked to predict their performance on the tracking task when simultaneously doing the N-back task.
The results showed a significant drop in tracking performance when subjects had to do the N-back task at the same time. What did subjects predict?
Subjects did predict a decrement. What they could not do was predict the size.
Read more at http://www.danielwillingham.com/1/post/2014/03/what-people-know-about-the-cost-of-multitasking.html
As someone who thinks that the maritime industry is still an error-inducing, blame-attributing system, I recommend this link:
In 1994, the American scholar Charles Perrow wrote an article titled “Accidents in High-Risk Systems”, in which he reviews his theory of “normal accidents”.
On pages 14 and 15, he writes:
“Another interesting systemic factor that influences the number of accidents and their prevention is the matter of close proximity of elites to operating systems. (…) Thus, the nature of the victims in contact with the system should have some effect upon the safety of that system.”
This can be useful to understand why airplane hijackings are usually treated so differently from ship hijackings and why the aerospace industry is error-avoiding, while the maritime industry is error-prone, for example. The elite may be involved with shipping, but is committed to flying.
This article by James Kwak was published on 2013.05.09 in The Atlantic. Although it does not deal with the maritime industry, much of it seems to be applicable in our case. Consider the Deepwater Horizon…
When you get on a plane, you would prefer that it not catch fire in mid-air, right? You would feel better knowing that someone had checked out the plane’s designs to make sure that it wouldn’t spontaneously combust, yes? And you would hope that the person doing the checking was not working for the plane’s manufacturer, yeah?
Well, that’s not what happened with the Boeing 787 “Dreamliner.” Instead, as reported in The Wall Street Journal:
“Only about two dozen FAA officials were assigned to oversee certification of the 787. FAA manager Steve Boyd told the NTSB last month that the team started with scant knowledge of the plane’s advanced battery technology. Then it allowed FAA-designated industry experts from Boeing and its suppliers to run all tests and conduct final safety reviews ‘with confidence that they [would] make the right call,’ he said.”
There’s a reason for this. Modern airplanes are unbelievably complicated, and the FAA has nowhere near enough staff with the necessary expertise in the latest technologies. That leaves the FAA with little choice but to depend on Boeing employees and contractors to review and test the systems they designed and built themselves.
This should sound familiar. There is perhaps no area in which increasing complexity outruns regulatory capacity more than in the financial services industry. Prior to the financial crisis, regulatory changes allowed large, supposedly sophisticated banks to calculate their own capital requirements using their own risk management systems–the thinking being that they understood the risks they faced much better than any poor federal bureaucrat could hope to. Even today, after those banks blew up the financial system, it is conventional wisdom in many circles that the only people who can understand derivatives — and who therefore should be allowed to weigh in on derivatives regulation — are Wall Street traders (and their lawyers).
What is the solution? For some, it is more of the same:
“Clay Jones, who is retiring as chief executive of aircraft-parts-maker Rockwell Collins Inc., recently said industry should receive a bigger role in vetting new planes because the gap between the technical expertise of regulators and manufacturers has widened over the past decade.”
But as an air traveler, is that really what you want? And as a human being, do you really want JPMorgan or Bank of America deciding whether or not they pose a risk to the financial system?
Lack of regulatory capacity is a real problem. We want the FAA to be able to evaluate whether a new battery technology is safe, just like we want the SEC and the CFTC to understand the derivatives markets that they oversee. If they can’t, that’s bad.
But the problem of incentives in the private sector is at least as bad. We know that bankers were willing to structure transactions that would later blow up their own banks because of the fees they would generate in the short term. Outsourcing risk evaluation to private third parties doesn’t work, either (can you say “credit rating agency”?).
There’s no way to fix the problem of private incentives. Executives at private firms will want to minimize or ignore risks, whether because they are trying to beat competitors to market, they are under pressure from shareholders to deliver in the short term, they are unwilling to admit that their pet project was a bad idea, or they suffer (like all of us) from optimism bias.
The only good way to make sure that our planes–and our financial system–are safe is to make sure we have enough regulators with enough skills and enough motivation to do the job. That means that regulatory agencies have to be able to hire more people than they do now, at near-private-sector salaries, and keep them for long periods of time (with a rule preventing them from working at firms they regulated for several years). Only then can we have product safety reviews that aren’t tainted by obvious conflicts of interest.
Regulatory capacity is expensive. There are two ways to pay for it, either of which is fine with me. One way is through general government revenues, meaning taxes. Since that means individual income taxes in practice,* this would be somewhat progressive.
The other way is by levying sharply higher fees on regulated firms. These higher fees would, of course, be passed on to customers in the form of higher prices. But that’s only appropriate, since proper safety reviews are a cost that the market should take into account when deciding whether or not a new airplane (or a new derivative) increases social welfare.
Either way, we have to pay for it. But as a taxpayer or as a consumer, I would be happy to pay more if it means that the airplane I’m getting on was reviewed and tested by someone qualified who wasn’t being paid by its manufacturer.
* Payroll taxes are dedicated to specific programs; corporate income taxes and other taxes are only a small part of the overall tax system.