U.S. adversaries are ramping up investments in artificially intelligent combat systems. These emerging tools operate at a speed and scale that surpass conventional weapons, but left unchecked by human intervention, they can wreak havoc through unintended collateral damage.
The U.S. Department of Defense is analyzing how best to implement new AI technologies in combat, but to date there is no prescribed path forward. A lengthy delay could cause our nation to fall behind in the new AI arms race.
In a recent C4ISRNET opinion piece entitled “What war elephants can teach us about the future of AI in combat,” ASRC Federal’s Eric Velte and Aaron Dant introduced an original concept for harnessing the speed and scale of artificially intelligent weapon systems within the parameters of our national moral principles and international guidelines for ethical warfare.
Based on the corresponding white paper penned by Dant and ASRC Federal’s Dr. Phil Feldman, along with highly regarded Marine Corps ethicist Dr. Harry Draney, experts at ASRC Federal believe this emerging technology requires a new “AI operator” role within the military ranks.
Much as cyber warriors answered the call to defend the U.S. government against cyberattacks, the AI operator role would require certain military personnel to become highly skilled masters of AI systems, trained to teach the tools and to determine whether the AI is working properly at any point in its deployment.
Taking inspiration from the historical relationship between humans and war elephants, ASRC Federal experts suggest that we can develop a similar partnership between military personnel and AI. We can embed the innate human advantages of judgment and context into the governance and behavior of intelligent combat systems. ASRC Federal believes this complementary approach to AI that emphasizes the strengths of both humans and machines can accomplish speed and accuracy at scale for critical government missions.
According to Aaron Dant, expert data analyst at ASRC Federal, “By nurturing the synergy between human operators and AI systems, we can transform our commitment to ethical values from a perceived limitation into a strategic advantage.”
AI combat systems will require diverse operators and diverse models to be effective. For example, if an AI operator sees that one “war elephant” is not functioning properly, or recognizes that it has a unique vulnerability the enemy can exploit, she can swap one surveillance and detection program for another in real-time combat situations.
Eric Velte, chief technology officer at ASRC Federal, posits that in the U.S., we already have the framework for this: “Our diverse culture gives us a distinct advantage. At ASRC Federal, we believe that diversity in thought and backgrounds is a strength in any domain, and this is an example of our values in action.”
ASRC Federal’s involvement with AI/ML goes beyond the defense environment. ASRC Federal teams are operating original AI tools with NASA, NOAA and USGS to solve complex problems and accurately predict outcomes.
Velte continues, “As government contractors, we have a responsibility to develop and steward federal AI solutions with care and discipline. We are developing an organic corporate AI ethics policy to govern our corporate work with AI, and we will provide AI ethics training for all employees who may interact with our innovative AI tools or existing customer AI systems.”
Learn more about ASRC Federal’s AI/ML capability and experts.