SecDef Austin Commits US To 'Responsible AI' – Breaking Defense


SecDef Austin speaks on artificial intelligence

WASHINGTON: In a clear sign of the fundamental importance of ethics and human control to the coming age of artificial intelligence in the US military, Defense Secretary Lloyd Austin declared his department will “do it the right way,” even as competitors like China use AI to better monitor and suppress their citizens.

“In the AI realm, as in many others, we understand that China is our pacing challenge. We’re going to compete to win, but we’re going to do it the right way,” Austin told a day-long conference of the National Security Commission on Artificial Intelligence (NSCAI). “So our use of AI must reinforce our democratic values, protect our rights, ensure our safety, and defend our privacy. Of course, we understand the pressures and the tensions. And we know that evaluations of the legal and ethical implications of novel tech can take time.”

One of the most difficult challenges experts have identified with AI-directed weapons is the temptation to accept their immense speed of response as an advantage, regardless of the consequences. “AI is going to change many things about military operations, but nothing is going to change America’s commitment to the laws of war and the principles of our democracy,” Austin said.

As Breaking Defense readers know, AI is central to the new American way of war, All Domain Operations. Austin offered a nod to the concepts guiding US development of these systems: “Used right, AI capabilities can play a critical role in all four areas of the Joint Warfighting Concept that I approved this spring, including joint fires, Joint All Domain Command and Control, contested logistics, and information advantage.”

In addition to underpinning the use of AI with ethical policies, the civilian leader of the US military pointed to the need to be able to trust that AI systems will behave as their human creators expect.

“So we have established core principles for responsible AI. Our development, deployment, and use of AI must always be responsible, equitable, traceable, reliable, and governable. We’re going to use AI for clearly defined purposes. We’re not going to put up with unintended bias from AI,” he said. “We’re going to watch out for unintended consequences. And we’re going to immediately adjust, improve, or even disable any AI system that isn’t behaving the way that we intend.”

Since the United States always goes to war alongside allies and partners, ensuring that all are grounded in the same ethics and standards is also critical, a point Austin addressed today.

“We’re working together with our like-minded friends to advance global norms grounded in our shared values. The department and 15 of our allied and partner countries are meeting several times a year in the AI Partnership for Defense,” Austin noted.

Finally, Austin addressed a fundamental problem the military faces in software, cyber and AI: finding and keeping highly educated and accomplished talent. “And that means creating new career paths and new incentives. It means including tech skills as a part of basic-training programs. And it means a significant shift in the way this institution thinks about tech,” he said.

“Some of our troops leave homes that are decked out in state-of-the-art personal tech and then spend their workday on virtually obsolete laptops. And we still see college graduates and newly minted PhDs who would never think about a career in the department. So we have to do better,” he said.

But the best line of the conference came from Bob Work, former Deputy Defense Secretary and vice chairman of the National Security Commission on Artificial Intelligence, about the recently cancelled $10 billion cloud contract: “It looks like you’ve finally fixed JEDI. Man, I never thought it would end.” Austin said nothing. His smile spoke for itself.




