1. AI Cybersecurity
2. Striving Toward an Intelligent World
With the accumulation of big data, dramatic improvements in computing power
and continuous innovation in Machine Learning (ML) methods, Artificial
Intelligence (AI) technologies such as image recognition, voice recognition, and
natural language processing have become ubiquitous.
More and more companies are increasing their investment in AI research and
deploying AI in their products. According to Huawei’s Global Industry Vision (GIV),
by 2025, 100 billion connections will be achieved globally, covering 77 percent
of the population; 85 percent of enterprise applications will be deployed on the
cloud; and smart home robots will enter 12 percent of households, forming a
billion-dollar market.
3. Five Challenges to AI Security
• AI has great potential to build a better, smarter world, but at the same time it
faces severe security risks. Because security was not considered in the early
development of AI algorithms, attackers are able to manipulate inference
results in ways that lead to misjudgment. In critical domains such as healthcare,
transportation, and surveillance, security risks can be devastating. Successful
attacks on AI systems can result in property loss or endanger personal safety.
• AI security risks exist not only in theoretical analyses but also in real AI
deployments. For instance, attackers can craft files to bypass AI-based
detection tools, or add noise to smart home voice control commands to invoke
malicious applications. Attackers can also tamper with data returned by a
terminal, or deliberately engage in malicious dialogs with a chat robot to cause a
prediction error in the backend AI system. It is even possible to apply small
stickers to traffic signs or vehicles that cause false inferences by autonomous
vehicles.
4. Five Challenges to AI Security (Cont.):
• To mitigate these AI security risks, AI system design must
overcome five security challenges:
1. Software and hardware security:
2. Data integrity:
6. Black-box Testing in AI Cybersecurity
• Black-box security testing is a method of software security testing in
which the security controls, defences, and design of an application are
tested from the outside in, with little or no prior knowledge of the
application’s internal workings.
• Organizations invest in many security-related exercises to ensure that
their technical infrastructure is secure and protected. One such exercise is
black-box testing, wherein the testers investigate a system just as an
attacker would, with minimal or no knowledge of its internal
architecture or configuration. The testers use many tools to detect
possible attack surfaces and build an idea of the system. In this way,
information is gathered about the system to carefully plan and launch
an attack.
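The outside-in probing described above can be sketched in a few lines. This is a minimal illustration, not a real tool: `victim_model` is a hypothetical stand-in for any deployed classifier, and the tester only ever calls it as an opaque oracle, mirroring the attacker's lack of internal knowledge.

```python
import random

def victim_model(x):
    # Hypothetical deployed classifier; the tester treats it as a black box.
    return 1 if sum(x) > 0 else 0

def probe(oracle, n_queries=100, dim=4):
    """Send random inputs and record (input, label) pairs to map behavior."""
    observations = []
    for _ in range(n_queries):
        x = [random.uniform(-1, 1) for _ in range(dim)]
        observations.append((x, oracle(x)))
    return observations

obs = probe(victim_model)
labels = {label for _, label in obs}
print(f"collected {len(obs)} query results, observed labels: {labels}")
```

In practice the recorded query/response pairs are what the tester uses to hypothesize attack surfaces before planning an actual attack.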
12. Adversarial training
• The Fast Gradient Sign Method (FGSM) is one of the main approaches to
producing adversarial examples across various learning tasks and threat
perturbation constraints. In the black-box setting, prior works employ finite
differences, a zeroth-order optimization technique, to estimate the gradient
and then use it to mount a gradient-based attack. While this approach
successfully generates adversarial examples, it is expensive in the number
of times the model is queried.
• The work our group researched contrasts with the general approach of the
works above in two ways:
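The finite-difference pipeline described above can be sketched as follows. This is a toy illustration under a strong assumption: `loss` here is a simple analytic function standing in for the model's loss, which a real black-box attacker could only query for values, never differentiate. Note the query cost: two loss queries per input coordinate, which is exactly why this approach is expensive on high-dimensional inputs.

```python
def loss(x):
    # Toy loss the attacker can only query, not differentiate (assumption).
    return (x[0] - 1.0) ** 2 + (x[1] + 2.0) ** 2

def estimate_gradient(f, x, eps=1e-4):
    """Zeroth-order finite differences: 2 queries per coordinate, 2*d total."""
    grad = []
    for i in range(len(x)):
        xp = list(x); xp[i] += eps
        xm = list(x); xm[i] -= eps
        grad.append((f(xp) - f(xm)) / (2 * eps))
    return grad

def fgsm_step(x, grad, alpha=0.1):
    """FGSM-style step: perturb each coordinate by alpha in the gradient's sign."""
    sign = lambda g: (g > 0) - (g < 0)
    return [xi + alpha * sign(gi) for xi, gi in zip(x, grad)]

x = [0.0, 0.0]
g = estimate_gradient(loss, x)
x_adv = fgsm_step(x, g)  # moves x toward higher loss under an L-infinity budget
print(x_adv)
```

For an image model, `x` would be a flattened image with thousands of coordinates, so the `2*d` query budget of this estimator is the bottleneck the sign-based approach below is designed to avoid.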
13. The Gradient Estimation Problem
The gradient estimation problem is central to the process of generating
adversarial examples. The majority of approaches rely on estimating both the
magnitude and the sign of the gradient. This paper suggests a weaker version of
the gradient estimation problem (estimating only its sign) and proposes an
algorithm that adaptively constructs queries to recover the sign. The proposed
approach outperforms many state-of-the-art black-box attack methods in terms
of query complexity.
We exploit two concepts:
14. The Gradient Estimation Problem (Cont.)
• The work our group researched presents a novel black-box adversarial
attack algorithm. It exploits a sign-based, rather than magnitude-based,
gradient estimation approach that shifts gradient estimation from
continuous to binary black-box optimization. It adaptively constructs
queries to estimate the gradient, each query building on the previous one,
rather than re-estimating the gradient at each step with randomly
constructed queries.
• Further, its theoretical performance is guaranteed and it can characterize
adversarial subspaces better than white-box gradient-aligned subspaces. On
two public black-box attack challenges and a model robustly trained against
transfer attacks, the algorithm’s evasion rates surpass all submitted attacks.
• For a suite of published models, the algorithm is 3.8× less failure-prone
while spending 2.5× fewer queries than the best combination of
state-of-the-art algorithms. For example, it evades a standard MNIST model
using just 12 queries on average. Similar performance is observed on a
standard IMAGENET model, with an average of 579 queries.
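The sign-only, adaptive estimation idea above can be sketched as a simplified divide-and-conquer search over sign vectors (names and structure here are illustrative, not the paper's actual algorithm). Instead of estimating gradient magnitudes, we flip the signs of a block of coordinates, keep the flip if a single directional loss query improves, and recurse on smaller blocks, so each query builds on the previous one.

```python
def directional_gain(f, x, s, delta=0.1):
    """One loss query: how much does stepping along sign vector s raise the loss?"""
    x_step = [xi + delta * si for xi, si in zip(x, s)]
    return f(x_step) - f(x)

def estimate_sign(f, x, dim, delta=0.1):
    s = [1] * dim                          # start with an all-positive sign vector
    best = directional_gain(f, x, s, delta)
    block = dim
    while block >= 1:                      # halve the flipped block each round
        for start in range(0, dim, block):
            flipped = list(s)
            for i in range(start, min(start + block, dim)):
                flipped[i] = -flipped[i]
            gain = directional_gain(f, x, flipped, delta)
            if gain > best:                # keep the flip only if loss increases
                s, best = flipped, gain
        block //= 2
    return s

# Toy loss whose true gradient signs at the origin are [-1, +1, -1, +1].
def loss(x):
    return -x[0] + x[1] - x[2] + x[3]

print(estimate_sign(loss, [0.0] * 4, 4))
```

Because each round needs only one loss query per flipped block rather than two per coordinate, this style of search is what makes the query budgets quoted above (on the order of tens of queries for MNIST) plausible compared to full finite-difference estimation.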