What Role Should Edtech and AI Play in Military Training?

By Henry Kronk
September 25, 2018

Last week at the Air Force Association’s Air, Space & Cyber Conference, the head of Air Force training, Lt. Gen. Steven Kwast, spoke at length about new ways cadets are being trained for the cockpits of U.S. aircraft. A large part of that training will likely occur at the hands of AI-powered simulators. As the commander of Air Education and Training Command explained, a pilot program testing the use of adaptive learning algorithms in military flight training has shown promising results. He believes it will be able to push the mental limits of pilots before they ever set foot in an aircraft.

“The data is very promising that we can accentuate the adult human brain to learn faster, better and, I’ll say, more sticky, meaning when you learn something longer and better,” Kwast said.

Edtech in the Military

As Kwast explained, an algorithm can track future pilots as they go through the fundamentals of flight.

“Let’s take a loop: A pilot has to do a loop, and the artificial intelligence is watching you do that loop,” Kwast said. “And, as you pull back on the stick, it can tell what you are doing and says, ‘Hey, you are pulling back too much. … Hey, your nose is starting to drift to the right a little bit. Keep your eye on the horizon,'” Kwast said. “So the artificial intelligence is watching you. It’s learning from you, and it’s learning how you are learning, and it’s giving you advice that’s helping you real-time adjust your learning, so you aren’t making mistakes and not even knowing you are making a mistake.”
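The kind of real-time coaching Kwast describes can be imagined as a simple feedback loop: compare each control input against a target profile for the maneuver and emit corrective advice when the pilot deviates. The sketch below is purely illustrative; the Air Force has not disclosed how its system works, and every name, telemetry field, and threshold here is an assumption invented for the example.

```python
# Hypothetical sketch of an adaptive coaching loop for a flight maneuver.
# All field names and thresholds are invented for illustration; they do
# not reflect the actual system Lt. Gen. Kwast described.

from dataclasses import dataclass
from typing import List


@dataclass
class TelemetrySample:
    stick_pitch: float    # how hard the pilot is pulling back, 0.0 to 1.0
    heading_drift: float  # degrees the nose has drifted off the loop's plane


def coach(sample: TelemetrySample,
          max_pitch: float = 0.8,
          max_drift_deg: float = 3.0) -> List[str]:
    """Return corrective advice for one telemetry sample during a loop."""
    advice = []
    if sample.stick_pitch > max_pitch:
        advice.append("You are pulling back too much.")
    if abs(sample.heading_drift) > max_drift_deg:
        side = "right" if sample.heading_drift > 0 else "left"
        advice.append(f"Your nose is drifting to the {side}; "
                      "keep your eye on the horizon.")
    return advice
```

In a real trainer, `coach` would run against a live telemetry stream, and an adaptive layer would tune the thresholds to each student, tightening them as the pilot improves; a clean sample (e.g. `TelemetrySample(0.5, 0.0)`) produces no advice, while an over-aggressive pull with drift produces both corrections.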

Tech involvement in military use has become an increasingly controversial topic. This spring, Gizmodo revealed Google’s involvement in Project Maven, a Department of Defense initiative that harnesses adaptive algorithms to sift through the huge volume of footage and images taken by U.S. drones in order to identify potential targets.

Many Google employees responded in anger, voicing opposition to the role their company played in U.S. military efforts.

As former Google Executive Chairman Eric Schmidt said during a 2017 speech, “There’s a general concern in the tech community of somehow the military-industrial complex using their stuff to kill people incorrectly.”

Google has since pulled out of Project Maven. The company also pledged not to develop AI weaponry and created a guiding document of ethical principles regarding AI use.

U.S. Armed Forces Rely on Third Parties for Emerging Tech

While less directly linked to military actions overseas, any edtech company contracting with the U.S. military will need to undergo a similar reckoning.

Lt. Gen. Kwast did not say which group developed the AI aid to flight simulation, but it likely came from a third party. According to Greg Allen, co-author of a report on military use of AI compiled on behalf of the U.S. Intelligence Advanced Research Projects Activity, there’s not much in-house development of AI in the U.S. military. As he wrote for the Bulletin of the Atomic Scientists, “For years, the Defense Department’s most senior leadership has lamented the fact that US military and spy agencies, where artificial intelligence (AI) technology is concerned, lag far behind state-of-the-art commercial technology. Though US companies and universities lead the world in advanced AI research and commercialization, the US military still performs many activities in a style that would be familiar to the military of World War II.”

And, “Before Maven, nobody in the department had a clue how to properly buy, field, and implement AI.”

Military use of edtech may not be as directly related to U.S. actions overseas, but it certainly plays a role. In a sector where companies compete fiercely for VC investment, military contracts likely appear enticing, to say the least. But as with any business, how your product is used must always be a matter of concern.

Featured Image: Richard R Schünemann, Unsplash.