US Air Force Training

MATCHING HUMANS TO MACHINES

“There’s no such thing as a natural-born pilot.” - Chuck Yeager


Good airplane pilots are made, not born. The United States’ armed services have been making good pilots ever since 1909, when the Army Signal Corps opened the first military training base for pilots in College Park, Maryland. As aircraft have become faster, more maneuverable, and more powerful, so the military’s techniques for selecting and training pilots have become more rigorous and scientific. These techniques - as methods of matching human skills to the operations of machines - helped to make the airplane into the premier example of human-machine interaction, and the ultimate symbol of American technological prowess, in the years before the atomic bomb and the digital computer. But while these procedures for selection and training were developed throughout the military and even in commercial aviation, the story of how and why these methods were created is closely linked to the history of the U.S. Air Force. After decades in the shadow of the Army and Navy, the Air Force emerged as an elite military service during World War II, becoming an independent service branch in 1947. For the Air Force, “matching men to machines” has proved essential to avoiding accidents, achieving its missions, and shaping its distinctive military culture.


THE INVENTION OF THE AIRCRAFT PILOT

In 1903, the Wright brothers, Wilbur and Orville, became the first Americans to pilot an aircraft for a controlled, sustained, and engine-powered flight. They spent the next decade improving their airplane designs, fighting to protect their patents from rivals, and trying to sell their inventions to governments and businesses around the world. When the U.S. Army Signal Corps agreed to purchase their airplanes, the Wright brothers offered to teach military men how to fly. In 1909, the Army Signal Corps leased an airstrip in College Park, Maryland, as a location for Wilbur Wright to train the first two Army lieutenants to become pilots, Frederic Humphreys and Frank Lahm. The Wright brothers went on to train over 100 pilots at their aviation school in Dayton, Ohio, including the man who would later become the head of the U.S. Army’s Air Forces in the Second World War, Henry “Hap” Arnold.[1]


Throughout these early years of military aviation, and indeed for years after, the armed services weathered tragedy as many pilots crashed their airplanes, often with lethal consequences. These accidents could be caused by human error, mechanical faults, weather conditions, or an unfortunate combination of these factors. One strategy for reducing accidents, which the Army Signal Corps quickly adopted, was to select officers for pilot training who had already distinguished themselves by their mental and physical abilities. Volunteers for pilot training were vetted for their courage, athleticism, intellect and ability to estimate distances by sight - usually on the basis of their superiors’ personal judgements.[2] The heroic characteristics of early pilots only added to the sense of tragedy when a spate of accidents in 1912-1913 killed seven promising Army pilots. The mounting casualty rate inspired the Signal Corps to take up another strategy: to encourage aircraft designers to make new planes that were easier to control and safer to fly.[3]


WORLD WAR I: Pilots Prove their Heroism

Prior to World War I, the U.S. Army Signal Corps had backed the Wright brothers’ new experimental technology largely because it believed that aviation would be a tool of reconnaissance. Army officers, who were used to thinking about war as a conflict on the ground, saw airplanes as a way to fly over enemy forces, observe their formations and resources, and then report this information to Army commanders (a similar perspective held sway in the Navy). However, in World War I, the Signal Corps’ pilots showed that they could fight battles in the air by mounting guns on their aircraft and engaging in aerial “dogfights” with enemy aircraft. Airplanes were also used to drop small bombs on enemy ground forces and ships, though this was a strategy used more by the British, French, and Germans than by American flyers. The Army’s growing recognition that aviation could be very useful in warfare led to more pilots being trained during World War I. In April 1917, the Aviation Section of the Signal Corps had fewer than 100 pilots; by the end of the war, it had almost 1,400.[4] During the war, the Army sent its would-be pilots to university campuses around the country for the first phase of their training. There, cadets learned military discipline, took classes on the principles of flight, and were taught skills necessary for flying, such as how to navigate by “dead reckoning” and how to operate a radio for communication. Those who graduated from this “ground training” would be sent to flight schools, either in the U.S. or in Europe, to learn how to fly aircraft.[5]



However, only about 25% of the volunteers for the Aviation Section made it to flight school.[6] During World War I, the Army instituted strict controls on who entered ground training and who graduated on to flight school. This was because accidents in training simply cost too much in money, time, and soldiers’ lives. Thus, the Army officers who oversaw ground training were keen to “wash out” men who lacked good judgement or did not behave in the disciplined manner of a pilot. One officer later explained, “We were constantly enjoined to remember that the flying officer was not to be an ‘aerial chauffeur,’ but a ‘twentieth century cavalry officer mounted on Pegasus.’”[7] In 1917, medical examinations were established as a way to screen recruits for their physical fitness. These examinations included tests of vision, hearing, motor control, and general health. They also involved a test of “nervous stability” that gauged a person’s reaction to the sound of a gun being fired, and a test of equilibrium that involved being strapped into a chair that spun in three dimensions - what is known as a Barany chair, as it is named after its inventor, the Hungarian physiologist Róbert Bárány.


These physiological examinations fascinated other scientists, who saw in them the possibility of scientifically matching human skill to the needs of machine operation. In the summer of 1917, two experimental psychologists undertook a study of whether these examinations were actually successful in predicting who became successful pilots. By the end of 1918, their research was supported by the National Research Council (a part of the National Academy of Sciences) and they had the assistance of several eminent psychologists. More importantly, they had identified the techniques that seemed to be most predictive of piloting skill - among them, the tests of nervous stability and equilibrium. However, before their experiments could achieve conclusive results, the war ended and the funding for their research ran out. The question of which tests worked best for selecting pilots would go unanswered until the Second World War.[8]


THE INTERWAR YEARS


World War I had revealed the utility of airplanes in war, but high-ranking military commanders in the Army and Navy still saw aviation as secondary to their forces on the ground and at sea. Within the Army’s newly constituted Air Service, leaders like Hap Arnold and General William “Billy” Mitchell bridled at the short-sightedness of Army commanders who preferred to give status and resources to ground troops rather than to airmen. General Mitchell would be court-martialed in 1925 for publicly criticizing Army and Navy leaders, but his prediction that air power was necessary for strategic warfare and defense would prove true.[9]

The interwar years saw new innovations in cockpit instrumentation, enabling pilots to more easily keep track of their plane’s altitude, attitude, speed and bearing. A skilled pilot could use these new instruments to fly “blind”, that is, without being able to see outside his cockpit, and still safely reach his destination. This was a remarkable development, as previously it was only safe to fly when there was maximum visibility: during the day and in clear weather. However, the skills of blind flying - also known as “instrument flying” - were challenging to learn, and were dangerous for a novice to practice in the air. The solution, it was quickly realized, was to familiarize pilots with the new cockpit instruments while they were still on the ground.[10]


In 1929, Edwin Albert Link Jr., an amateur pilot, invented the “Link Trainer,” using a small airplane fuselage and motors and bellows from his father’s piano and organ company in Binghamton, NY. He initially designed his machine to teach students basic control maneuvers, but he soon added knobs and dials to simulate the new instruments. The Link Trainer was used widely throughout the 1930s and into the 1950s - even by the Japanese, Germans, and Russians - to teach instrument flying. As the first mechanical flight simulator, it is an example of a machine designed specifically for new pilots to prepare them to operate more sophisticated machines in dangerous environments.[11]


The Army’s emphasis on medical examinations during the First World War had led to the creation of a new profession within the Army, the “flight surgeon.” Flight surgeons were specially trained doctors who could assess potential pilots for mental and physiological defects that would affect their flying, and who saw to the medical needs of aviators. After the war, the Army’s General Theodore C. Lyster established an institution to train flight surgeons, the School of Aviation Medicine, at Mineola, Long Island (the school later moved to Brooks Field near San Antonio, Texas and then to Randolph Field). The School of Aviation Medicine was also initially a site for medical research, and under Lyster its flight surgeons performed experiments on the effects of high altitude using a low-pressure chamber. By the time the school had moved to Texas, it had ceased its research and focused only on training. The low-pressure chamber was used to introduce cadets to the performance-diminishing effects of being at high altitude without a breathing apparatus and oxygen tank.[12]


Medical research for Army aviation was not revived until 1935, when Capt. Harry G. Armstrong became the director of a new Aeromedical Research Laboratory at Wright Field in Dayton, Ohio. Armstrong and his staff conducted experiments on the effects of cold, lack of oxygen, acceleration, and cabin pressurization on the human body, and also developed new equipment to help aviators perform better at high altitudes. Under Armstrong’s direction, by the end of World War II, both the Aeromedical Research Lab and the School of Aviation Medicine would be world-leading scientific institutions for the study of human physiology and psychology in flight.[13]

In the wake of World War I, military strategists like Billy Mitchell and Giulio Douhet predicted that in the future airplanes would be used for the strategic bombing of military targets.[14] Indeed, after the bitter and prolonged trench warfare of World War I, the idea of strategically bombing supply lines, weapons factories and military headquarters seemed like an almost “humane” way to end a war without extensive casualties. This seemingly ideal form of war was mainly theoretical until the Spanish Civil War in the 1930s, when Hitler’s Luftwaffe and the Italian Royal Air Force, working with Spanish Nationalist forces, bombed Spanish cities, killing thousands of civilians in their homes. People around the world, and many Americans, were shocked at the barbarity of aerial bombardment.[15]



But air strategists argued that strategic bombing would be civilized, and still be effective, if bomber planes did not target civilians. The invention of sophisticated “bombsights” - precision calculators for timing the release of bombs - would enable American air forces to surgically strike at enemy forces from the air, without attacking civilian areas. Each bombsight was essentially a telescope combined with a mechanical computer that could calculate the precise moment to drop bombs on a target given the plane’s altitude, azimuth, ground speed, and the cross winds that would affect the course of the bomb in the air. This emerging doctrine for air power, known as “High Altitude Daylight Precision Bombing,” would become the stated mission of the Army’s Air Forces during World War II.[16]
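The core of the geometry the bombsight mechanized can be illustrated with a deliberately simplified sketch: in a vacuum (ignoring the air resistance and crosswind corrections that real bombsights handled), a bomb released at altitude h falls for t = √(2h/g) seconds while carrying the plane’s forward speed, so it must be dropped v·t ahead of the target. The function and figures below are illustrative assumptions, not drawn from any actual bombsight’s mechanism.

```python
import math

G = 9.81  # gravitational acceleration, m/s^2

def release_range(altitude_m: float, ground_speed_ms: float) -> float:
    """Horizontal distance (m) a bomb travels before impact, ignoring drag.

    Fall time t = sqrt(2h/g); the bomb keeps the plane's forward speed,
    so it lands ground_speed * t ahead of the release point.
    """
    fall_time = math.sqrt(2 * altitude_m / G)
    return ground_speed_ms * fall_time

# Example: a bomber at 6,000 m flying 100 m/s (~360 km/h) must release
# roughly 3.5 km short of the target.
print(round(release_range(6000, 100)))  # → 3497
```

Even this toy version shows why accurate altitude and ground-speed inputs mattered so much; the real mechanical computers additionally corrected for drag and wind drift, which is where most of their complexity lay.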


WORLD WAR II: Selecting aviators becomes scientific

As conflict brewed in Europe and Asia in the late 1930s, the American military services debated what their strategy should be when war broke out. By then, the Army’s aviators had gained some autonomy from the Army ground forces, but they were still dependent on Army leadership for funding and matériel. Yet the idea that aviation could be an independent force within the American military was taking hold. In the summer of 1941, the War Department officially approved the strategic doctrine of precision bombing, and a plan to expand the Army’s air arm to include 30,000 new pilots.[17] Reflecting this new status, the Army’s aviation section was renamed the Army Air Forces (AAF).