Artificial Intelligence in Defence and Security Industry

Artificial intelligence has gained serious traction within the technology community as a solution for complex problems. In January 2015, Bill Gates observed that robotics and artificial intelligence are entering a period of rapid advances. Google, Apple, and Microsoft, for example, are competing to transform vehicle transport with self-driving cars, while in manufacturing, autonomous production lets companies adapt products to diverse consumer markets. The technology has also had a significant impact on warfare since the war in Afghanistan began in 2001. Unmanned drones provide sustained surveillance and swift strikes on targets, and small robots are used to disarm improvised explosive devices. The military is currently funding research to produce more autonomous and self-aware robots, reducing the need for human soldiers to risk their lives.

In the last few decades, one of the largest sources of funding for AI research has been the Defense Advanced Research Projects Agency (DARPA), the agency of the United States Department of Defense responsible for developing new technologies for military use.

For example, the DARPA Visual Media Reasoning (VMR) system aids intelligence analysts in searching, filtering, and exploring visual media through the use of advanced computer vision and reasoning techniques. “The goal of DARPA’s VMR program is to extract mission-relevant information, such as the who, what, where and when, from visual media captured from our adversaries and to turn unstructured, ad hoc photos and video into true visual intelligence,” Dr. Jeff Hansberger said.

“When we look at what’s happening with artificial intelligence, we see something that is very, very powerful, very valuable for military applications, but we also see a technology that is still quite fundamentally limited,” DARPA Director Arati Prabhakar said at the Atlantic Council on May 2.

For Prabhakar, deploying AI will have to be in the right place at the right time. “We have to be clear about where we’re going to use the technology and where it’s not ready for primetime, where it’s not really ready for us to trust it,” she told GCN after the event.

Artificial intelligence, for example, can be useful if it immediately provides a jamming profile to military pilots who encounter a new radar signal, she explained. However, a self-driving car making AI-based determinations might be “imperfect in some dangerous ways.”

“I think it’s just important to be clear eyed about what machine learning can and can’t do,” she said.

At the same time, the future of AI in the defense industry is not universally welcomed. In January 2015, Stephen Hawking, Elon Musk, and dozens of artificial intelligence experts signed an open letter on artificial intelligence calling for research on the societal impacts of AI. The letter warns that artificial intelligence could potentially become more dangerous than nuclear weapons. Musk called the prospect of artificial intelligence “our greatest existential threat” in a 2014 interview with MIT students at the AeroAstro Centennial Symposium. “I’m increasingly inclined to think that there should be some regulatory oversight, maybe at the national and international level, just to make sure that we don’t do something very foolish.”

Cyber Security

The cyber security industry offers more promising prospects for artificial intelligence. Cyber security is a major challenge in today’s world: government agencies, corporations, and individuals increasingly fall victim to cyber attacks, and attackers find new ways to threaten the Internet faster than defenders can keep up. With global cyber security spending expected to reach $170 billion by 2020, eyes are on the industry to see who can offer the best solutions.

A group of researchers at MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL) is working with machine-learning startup PatternEx to develop a line of defense against such cyber threats. The team has already developed an artificial intelligence system that can detect 85 percent of attacks by reviewing data from more than 3.6 billion lines of log files each day and flagging anything suspicious. The system relies not only on artificial intelligence (AI) but also on human input, which the researchers call Analyst Intuition (AI); this is why it has been named Artificial Intelligence Squared, or AI2.
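The AI2 loop can be sketched in a few lines. The following is a minimal illustration with invented data and a toy scoring rule, not the actual MIT/PatternEx system: an unsupervised scorer surfaces the most unusual log events, a simulated analyst labels the top candidate each round, and a simple supervised threshold is then refit on the growing set of labels.

```python
from statistics import mean, stdev

# Hypothetical log feature: login events per minute for one account.
events = [4, 5, 3, 6, 4, 5, 250, 4, 6, 5, 310, 4]

def outlier_scores(xs):
    """Unsupervised step: distance from the mean in std-dev units."""
    m, s = mean(xs), stdev(xs)
    return [abs(x - m) / s for x in xs]

def analyst_label(x):
    """Stand-in for Analyst Intuition: flags extreme rates as attacks."""
    return x > 100

labels = {}                                 # index -> True/False
for _ in range(3):                          # three analyst feedback rounds
    scores = outlier_scores(events)
    unlabeled = [i for i in range(len(events)) if i not in labels]
    top = max(unlabeled, key=lambda i: scores[i])
    labels[top] = analyst_label(events[top])    # human-in-the-loop label

# Supervised step: a threshold midway between the labeled classes.
attacks = [events[i] for i, y in labels.items() if y]
benign = [events[i] for i, y in labels.items() if not y]
threshold = (min(attacks) + max(benign)) / 2
detected = [i for i, e in enumerate(events) if e > threshold]
print(detected)
```

With the invented numbers above, the two traffic spikes the analyst labeled end up above the learned threshold, so later events like them would be flagged automatically.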

In May 2016, IBM announced Watson for Cyber Security, a new cloud-based version of the company’s cognitive technology trained on the language of security as part of a year-long research project. Watson will learn the nuances of security research findings and discover patterns and evidence of hidden cyber attacks and threats that could otherwise be missed. To further scale the system, the company plans to collaborate with eight universities to greatly expand the collection of security data IBM trains the cognitive system with.

While the rest of the industry gets up to speed, a number of cyber security startups are using AI to tackle this threat. We have picked five of them.


Darktrace

Using machine learning techniques inspired by the self-learning intelligence of the human immune system, UK-based startup Darktrace tackles the challenge of detecting previously unidentifiable cyber threats in real time, allowing them to be eradicated more quickly than traditional approaches.
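As an illustration of the immune-system analogy (with invented traffic numbers and thresholds, not Darktrace’s actual algorithm), the sketch below learns a per-device baseline of “self” from normal traffic and raises an alert when an observation deviates far from that learned baseline:

```python
ALPHA = 0.3        # baseline learning rate
TOLERANCE = 5.0    # alert when traffic exceeds 5x the learned baseline

baselines = {}     # device -> learned normal traffic level

def observe(device, bytes_sent):
    """Update the device's baseline and return True if anomalous."""
    base = baselines.get(device)
    if base is None:               # first sight: everything is "self"
        baselines[device] = float(bytes_sent)
        return False
    anomalous = bytes_sent > TOLERANCE * base
    if not anomalous:              # only normal traffic updates "self"
        baselines[device] = (1 - ALPHA) * base + ALPHA * bytes_sent
    return anomalous

stream = [("laptop-1", 100), ("laptop-1", 120), ("laptop-1", 90),
          ("laptop-1", 5000),     # sudden exfiltration-like spike
          ("printer-7", 10), ("printer-7", 12)]
alerts = [(d, b) for d, b in stream if observe(d, b)]
print(alerts)
```

Because each device learns its own notion of normal, the same 5,000-byte burst that is anomalous for the laptop here would pass unnoticed on a busier host.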


JASK

JASK, a San Francisco-based startup, is building what it calls “the world’s first, predictive security operations center” for enterprise-level cyber security. The system aims to help enterprises of all sizes keep ahead of sophisticated cyber attackers by moving past the limitations of existing solutions with proactive AI security measures.

Deep Instinct

Launched in November 2015, this Tel Aviv-based startup uses sophisticated deep learning algorithms to improve cyber security in the banking, financial, and government sectors in the U.S. and Israel. The Deep Instinct engine is modeled on the human brain’s ability to learn: once a brain learns to identify an object, it can identify it again in the future instinctively.
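To make the “learn once, identify instinctively” idea concrete, here is a single-neuron perceptron trained on made-up file features. Deep Instinct’s real engine uses deep neural networks on raw file data, so this is only a toy analogue: after training, classifying a new sample is a single weighted pass with no further analysis.

```python
# Toy features per file: (entropy, fraction_packed, num_suspicious_apis)
train = [((7.9, 0.9, 12), 1), ((7.5, 0.8, 9), 1),   # malware
         ((4.2, 0.1, 1), 0), ((3.8, 0.0, 0), 0)]    # benign

w, b = [0.0, 0.0, 0.0], 0.0
for _ in range(20):                        # perceptron training epochs
    for x, y in train:
        pred = 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0
        err = y - pred                     # -1, 0, or +1
        w = [wi + 0.1 * err * xi for wi, xi in zip(w, x)]
        b += 0.1 * err

def classify(x):
    """The 'instinctive' step: one weighted sum, one decision."""
    return 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0

print(classify((7.7, 0.85, 10)), classify((4.0, 0.05, 0)))
```

The expensive part, training, happens once; afterwards every new file costs only a handful of multiplications, which is what makes this style of detection fast enough to run at the endpoint.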

The fourth startup on our list applies analytics that replicate the processes of top security researchers: searching for changes in the behavior of users, key business systems, and applications caused by targeted cyber attacks. It has successfully applied AI-based algorithms to learn the business value of critical documents across an organization, with an industry-first ability to detect and stop data breaches from targeted attacks and insider threats before data is stolen.


PatternEx

Startup PatternEx tasked itself with securing enterprise data using a different approach: mimicking the intuition of human security analysts in real time and at scale with a new-generation artificial intelligence platform. PatternEx can be deployed on premises, in the cloud, or in a private cloud. It includes a big data platform designed for large data volumes and real-time response; an ensemble of algorithms designed to detect rare behaviors with the goal of identifying new attacks; an active learning feedback loop that continuously improves detection rates over time; and a repository of threat intelligence that can be shared among enterprises.
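The ensemble idea can be sketched as follows, with hypothetical login events and simple z-score detectors standing in for PatternEx’s proprietary algorithms: each weak detector votes on whether one feature of an event looks rare, and only events flagged by a majority of detectors are surfaced.

```python
from statistics import mean, stdev

# Hypothetical login events: (hour_of_day, mb_downloaded, failed_logins)
events = [(9, 12, 0), (10, 8, 1), (11, 15, 0), (14, 10, 0),
          (3, 900, 25)]          # a 3 a.m. bulk download with many failures

def zscore_detector(idx, threshold=1.5):
    """Vote True for values far from the mean of one feature column."""
    col = [e[idx] for e in events]
    m, s = mean(col), stdev(col)
    return [abs(v - m) / s > threshold for v in col]

# One weak detector per feature; the ensemble takes a majority vote.
votes = [zscore_detector(i) for i in range(3)]
rare = [i for i, v in enumerate(zip(*votes)) if sum(v) >= 2]
print(rare)
```

Requiring agreement between detectors is what keeps the false-positive rate manageable: an unusual hour alone, or a large download alone, does not trip the ensemble.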

Governments want to spend billions preparing for the next stage of warfare, which they believe will be defined by advances in artificial intelligence and autonomy. What does the military future of AI look like? Motherships releasing swarms of small drones from the air and the sea; infantrymen and women sporting exoskeletons and wearable electronics loaded with combat apps; and lone mission commanders directing swarms of unmanned vessels to carry out operations. The future of cyber security looks part human and part machine, according to MIT’s Computer Science and Artificial Intelligence Laboratory.

Author: AI.Business

If you like our articles, please subscribe to our monthly newsletter: