HOW TO CONTROL
RISKS IN
Artificial
intelligence
Prof. Hernan Huwyler MBA CPA
Executive Education
Governance, Compliance, and Cyber Security
RISK PROCESS for AI
• Perform an initial risk
assessment and
prioritize AI assets for
further review
• Identify opportunities
for targeted
governance to reduce
risks, optimize costs,
and enhance the value
derived from AI
resources
Management
• Identify all AI assets
and libraries including
shadow AI
implementations
• Establish a searchable
database of AI assets
• Foster a user
community to
promote collaboration
and the adoption of AI
best practices
Discovery
• Develop policies and procedures with AI-specific controls
• Incorporate AI
applications into the IT,
risk and compliance
policies
• Train staff on compliance with AI-related controls
• Test the AI-specific
controls
Compliance
AI Model Risk Management
• Assess data,
algorithmic,
performance,
computational
feasibility, and vendor
risks
• Enhance controls for
change management,
targeted model
reviews, and ongoing
monitoring
Implement
• Embed into decision-
making: innovation
plan, product
approvals, third-party
sourcing, and end-user
computing of AI-based
software
• Improve risk criteria to
cover the materiality,
customer and societal
impacts and
complexity
Design
• Keep senior
management updated
on AI model
development, review,
and usage
• Report the testing and
outcomes analysis for AI
models
• Hold developers and model owners accountable for safe deployment
Communicate
OBJECTIVES AT RISK
• Deployment systems
• Core systems
• Infrastructure
• Edge systems
Security
• Training data
• Use data
• Feedback data
• Model code
DATA
• Third-parties
• Dependencies
• User experience
PERFORMANCE
• Fairness
• Infrastructure
• Incorrect feedback
Explainability
• AI and privacy
regulations
• Contracts
• Insurance
Compliance
• Budgeted resources
• Time objectives
COSTS
Risk and COMPLIANCE
• Implement risk management system
to proactively mitigate liabilities
• Implement granular scenario analysis
to address AI-specific threats
• Evaluate the potential human rights
impacts when AI interacts with users
as intended by the model (bias,
privacy, explainability, robustness)
EU Artificial Intelligence Act
• Demonstrate controls on privacy, human
rights, data, model and cyber security
risks
• Model threats, attacks and attack surface to assess software risks (NIST SP 800-218 PW.1.1)
• Analyze vulnerabilities to gather risk data and plan responses (NIST SP 800-218 RV.2.1)
• Have a qualified, independent person review that the design addresses the identified risks (NIST SP 800-218 PW.2.1)
• Analyze the risk of applicable technology
stacks
US Executive Orders 13859 on AI and 14028 on Cybersecurity
New SKILLS for Risk Managers
Understand
• AI specific threat
vectors
• Performance versus
interpretability trade-
off
• Degradation and
flagging
SCENARIOS
Understand
• AI scope reviews
• Bias testing
• Explainability reports
• Data features and
testing
• Statistical data checks
• Model output reviews
• Bayesian hypothesis
testing
Practices
Understand
• Roles of data modelers and analysts
• Model and use
documentation
• Escalation mechanisms
and workflows
• AI use cases
Structure
AI = computing power + probabilities + data
A probabilistic model for predictions, beyond machine learning
Curated
Classified
Complete
Clean
Consistent
DATA-
Related
Risks
Inadequate Data Representativeness
Uneven training data from skewed
sampling may lead to erroneous AI model
predictions
Risk factors
• Inadequate data preprocessing
• Biased data collection methods
• Unrepresentative training
datasets
• Selection biases in sampling
• Data augmentation mishandling
• Lack of diversity in input sources
• Data drift over time
Controls
• Verify data representativeness
• Conduct sensitivity analysis
• Implement advanced modeling
techniques to mitigate selection
bias
• Integrate fairness-aware AI
algorithms to identify and rectify
bias
Low data quality
Incomplete, inconsistent, or incorrect
data without detection processes may
lead to inaccurate AI model predictions
Risk factors
• Lack of data quality standards
• Absence of standardized data
improvement processes
• Data privacy and ownership
issues
• Weak data fitness for the use
case
• High data customization
• Reuse of prior model-derived
data
Controls
• Establish data quality rules
• Enhance remediation processes
• Centralize data management
• Test the data formatting,
metadata completeness, and
indexing
• Address root causes of errors
Data scarcity
Inadequate selection, size and relevance
of data sets may lead to inaccurate AI
model predictions
Risk factors
• Privacy, data source, and
logistical constraints
• Limited available data due to the
unique nature of the domain
• Lack of resources to process
large volume of data
• Need for long-term historical
data
• Use of third-party data
Controls
• Define standard identifiers and
consistent definitions
• Integrate physics-informed
machine learning
• Review training, synthetic and
augmented data for accuracy and
relevance
Insufficient External Data Quality
A data vendor's inadequate lineage
information, components, and processes
may compromise data quality and model
performance
Risk factors
• Lack of vendor due diligence and
requirements
• Insufficient quality control
measures
• Data volume exceeds
infrastructure capacity
• Inadequate response time
optimization
Controls
• Enforce standards of model risk
management
• Set service level agreements with
provenance data
• Monitor aggregation
• Validate inputs and reliability
• Track response times
Model-
Related
Risks
Misjudged AI Risk Ratings
Inadequate risk assessments on model
pattern recognition, probability theory,
and engineering may lead to
vulnerabilities and implementation issues
Risk factors
• Lack of traditional statistical
foundation for AI modeling
• Complex AI model patterns
• Probability-based decision-
making
• Model engineering nuances
Controls
• Integrate standards into modeling
processes, approvals, third-party
sourcing, and IT
• Maintain an AI model inventory
with metadata
• Adapt design reviews to model-
specific risks
Missing Business Requirements
Inadequate review of the use case, operationalization, and consumption during AI model development may overlook business requirements
Risk factors
• Incomplete use case analysis
• Lack of operationalization clarity
• Insufficient consumption
strategy
Controls
• Assess AI's unique behavior
potential beyond documented
requirements
• Identify and address capability gaps
across conceptualization, pilot, and
operationalization phases before
project initiation
Model Usage MISUNDERSTANDING
Inadequate understanding of the intended uses of the model may hinder informed decision-making, leading to potential human rights harm and compliance losses
Risk factors
• Lack of context understanding
• Unrecognized impact on
different groups
• Absence of continuous
assessment plan
• Failure to consider the broader
system impact
Controls
• Perform impact assessments
considering the context to address
vulnerabilities and avoid unfair
outputs
• Implement a continuous
assessment plan throughout the
model's lifecycle
• Consider the broader system
impact during model integration
UNIDENTIFIED MODEL LIMITS
Inadequate understanding of AI model
boundary conditions may produce
unintended and suboptimal outcomes
Risk factors
• Neglecting system limitations
awareness and boundary
condition considerations
• Lack of ongoing monitoring
• Lack of interdisciplinary
collaboration
• Ethical misalignment in using AI
Controls
• Develop ongoing boundary testing
processes during project framing
• Define monitoring procedures
throughout all the lifecycle
• Form multi-disciplinary
development teams
UNIDENTIFIED MODEL LIMITS
• Behaviors may exhibit substantial
cultural and contextual variations,
rendering generalized predictions
challenging
• Predictive models may struggle to
extrapolate behavior patterns
consistently across diverse
individuals
• Behaviors may be dynamic and can
undergo significant transformations
over time
Challenges
• Human decision-making may be shaped
by a multitude of complex factors,
including subjective experiences and
individual interpretations
• Behaviors may defy predictability
owing to the presence of free will,
cognitive biases, and the impact of
unique and unforeseen stimuli
• Behavioral data may engender
concerns regarding ethical, compliance,
and privacy considerations
Model Security Gaps
Inadequate security requirements may
expose AI models to adversarial attacks
and loss of data integrity and availability
Risk factors
• Incomplete security
specifications
• Lack of adversarial robustness
testing
• Neglected threat modeling
Controls
• Enhance security program with AI
considerations
• Address potential system
compromise methods
• Assess dynamic AI-related risks
• Integrate information security into
risk management policies
Algorithm-
Related
Risks
Ethics Oversight Gaps
Insufficient algorithm ethics checks may lead to discrimination and unreliable models unfit for predictions
Risk factors
• Lack of defined AI Ethics
Governance to address ethical
considerations in training data
• Inadequate controls in algorithm
development
• Neglecting hidden biases in
model outputs.
• Ethical misalignment with
agents' preferences
• Absence of AI ethics council
inspections
Controls
• Enhance algorithm development
controls to address ethics in
training data and mitigate hidden
biases
• Inspect algorithms through an AI
ethics council
• Combine game theory and machine
learning
Non Compliance
Limited insight into decision-making
processes may harbor hidden biases and
threaten the model credibility
Risk factors
• Lack of embedded compliance
checks
• Insufficient model validation for
regulatory requirements
• Lack of transparency in AI
decision-making
• Incomplete understanding of
decision processes
• Unexamined societal biases in
data and models
Controls
• Strengthen model validation,
emphasizing transparent decision
understanding
• Conduct extensive pre-deployment
testing of models
• Enhance algorithm validation to meet ethical AI standards and mitigate societal biases in data and models
Black box
Inscrutable machine learning algorithms
may prevent human understanding of the
decision-making processes
Risk factors
• Model bias and errors
• Lack of transparency and
accountability of business and
technical owners and
stakeholders
• Ethical and regulatory concerns
Controls
• Embed black box testing in the
model lifecycle to explain, debug,
and improve models for
stakeholders
• Require explanations from model
owners
• Operationalize tools for audits and
monitoring model impact on
humans
Algorithm Aversion
• End users may not rely on
recommendations when there isn't
a readily identifiable entity to hold
accountable for those
recommendations
• End users may be concerned about
the potential for algorithmic
discrimination or harm
• End users may prefer for their own
judgments, which can lead to
cognitive dissonance when
algorithms offer disparate
recommendations
Challenges
• End users may exhibit skepticism when
it comes to placing trust in AI
algorithm-generated decisions, even
when these decisions surpass human
judgment
• End users may harbor doubts about
algorithmic recommendations when
they lack clarity about the decision-
making process
• End users may view AI algorithms as
dehumanizing, which can result in a
sense of diminished control or the loss
of a personal touch
Model Instability
Changing relationships between the input
features, the data and the target variables
(concept and data drifts) may lead to
deteriorating model accuracy over time
Risk factors
• Changes in data distribution
• Altered input-output
relationships
• Multicollinearity-induced
parameter instability
• High redundancy in the model
structure focused on improving
the performance
Controls
• Embed model stability checks in
the model lifecycle
• Check for data distribution changes
and input-output relationship
shifts
• Conduct sensitivity analysis and
scenario testing to enhance model
robustness, accuracy, and
understanding
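A stability check like this can be sketched in R with a two-sample Kolmogorov-Smirnov test comparing a feature's training-time distribution against its production distribution. The simulated feature values and the 0.05 threshold below are illustrative assumptions, not a prescribed standard:

```r
# Sketch: flag data drift by comparing a feature's distribution at training
# time against its distribution in production (simulated values, illustrative)
set.seed(42)
training_feature <- rnorm(1000, mean = 0, sd = 1)      # feature as seen in training
production_feature <- rnorm(1000, mean = 0.5, sd = 1)  # same feature in production, shifted

# Two-sample Kolmogorov-Smirnov test for a change in distribution
drift_test <- ks.test(training_feature, production_feature)
drift_detected <- drift_test$p.value < 0.05
if (drift_detected) {
  message("Data drift detected: trigger a model review")
}
```

Embedding a test of this kind in scheduled monitoring jobs gives a concrete trigger for the model reviews described above.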
Model Misselection
Inadequate model selection may result in
suboptimal AI performance and
inaccurate decisions
Risk factors
• Poor candidate model
generation and development
• Neglect of key parameters
• Insufficient data and
considerations for analysis
• Inadequate system
understanding
Controls
• Tailor model selection to the
specific context
• Form diverse review teams for
model categorization
• Ensure models are fit-for-purpose,
explainable, reproducible, and
robust
Missing checks
Failure to validate and cross-validate the
model features may compromise the
integrity and lead to data-driven errors
Risk factors
• Weak governance tools for
model monitoring,
explainability, and biases
• Omission of feature validation
• Lack of cross-validation
• Limited stakeholder
involvement
Controls
• Integrate reasonability, accuracy
checks and cross-validation checks
into risk management
• Embed checks throughout the
model lifecycle
• Engage business and technical
stakeholders in issue mitigation
Overfitting AND underfitting
Excessively complex or simplistic modelling that fails to capture generalizable patterns in the training data may limit prediction accuracy
Risk factors
• Inadequate model complexity
control
• Limited data diversity
• Inadequate feature selection
• Data noise and anomalies
Controls
• Define and train risk management
procedures for fit risks
• Distinguish data issues from fit
issues
• Continuously assess fit's impact on
outputs and equity gaps
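One way to distinguish fit issues from data issues is to compare hold-out error across model complexities. A minimal R sketch, on synthetic data with illustrative polynomial degrees (the data-generating function and degrees are assumptions):

```r
# Sketch: underfitting vs. overfitting on synthetic data (illustrative)
set.seed(1)
d <- data.frame(x = runif(200, -3, 3))
d$y <- sin(d$x) + rnorm(200, sd = 0.3)  # true signal plus noise
train <- d[1:100, ]
test  <- d[101:200, ]

# Fit polynomials of increasing complexity and measure hold-out error
holdout_rmse <- sapply(c(1, 3, 12), function(degree) {
  fit <- lm(y ~ poly(x, degree), data = train)
  pred <- predict(fit, newdata = test)
  sqrt(mean((test$y - pred)^2))
})
names(holdout_rmse) <- c("underfit_d1", "fit_d3", "overfit_d12")
holdout_rmse  # degree 1 underfits; very high degrees tend to overfit
```

The degree-1 model cannot capture the curvature (underfitting), while very high degrees chase noise in the training data; comparing errors this way makes the fit risk measurable.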
Suboptimal Hyperparameter Configuration
Inadequate hyperparameter
configurations may impair AI model
performance, effectiveness and reliability
Risk factors
• Complex hyperparameter tuning
• Poor calibration understanding
• Lack of documentation
• Neglecting continuous
assessment
• Failure to capture decision
evidence
Controls
• Incorporate hyperparameter
specifications and calibration into
risk and impact assessments
• Document software package
choices and specific values
• Continuously evaluate and adapt
hyperparameters
Suboptimal Dimensionality Reduction
Dimensionality reduction, such as feature
selection and extraction, may hinder
interpretability
Risk factors
• Inadequate dimensionality
reduction techniques
• Incorrect feature selection
• Poorly chosen feature extraction
methods
• Lack of interpretability in
models
Controls
• Use linear discriminant analysis, t-distributed stochastic neighbor embedding, and autoencoders
• Optimize dimensionality reduction
• Justify and document the chosen
dimensionality reduction approach
OPERATION-
Related
Risks
AI Policy Gaps
Inadequate AI policies and procedures
may lead to unmanaged risks in AI
systems, hindering their benefits and
exposing to unforeseen challenges
Risk factors
• Absence of specific AI
governance policies
• Neglect of indirect AI technology
influences in policies
• Lack of infrastructure support
policies for AI at scale
• Insufficient consideration of AI
knowledge in supporting non-AI
functions
Controls
• Develop AI governance,
infrastructure and use policies with
articulated roles for model
developers, users, and validators
• Include indirect AI influences in
supporting technology policies
• Promote AI knowledge sharing in
broader organizational functions
Third-Party Failure
Inadequate third-party components and
vendors may compromise software
integrity, reliability, security, and
performance
Risk factors
• Lack of visibility into and
security vulnerabilities of open
source and commercial
components
• Incomplete or outdated
software inventory, source
components and licenses
• Inadequate prioritization of
third-party vulnerability
mitigation
Controls
• Create and maintain a detailed
software Bill of Materials
• Monitor external threats and
vulnerability disclosures
• Prioritize vulnerability mitigation
based on risk assessments
• Perform due diligence audits in
vendors
unattainable SCALABILITY
Deploying the AI solution to real data, users, and customers at scale may degrade its performance
Risk factors
• Inadequate infrastructure for
scaling
• Increased computing power
demands
• Model performance degradation
• Delayed model updates and
approvals
Controls
• Monitor the performance to
address slowdowns
• Ensure the AI system's scalability with appropriate servers, computing power and communication
• Automate model updates when
metrics deviate from performance
objectives
Big o notation in Scalability
Big O notation assesses the efficiency of searching, sorting, data manipulation, and other tasks within an AI algorithm. It quantifies efficiency by describing how processing time and memory usage grow with the size of the input data.
CONTROLS
• Performance impact
assessments for different
alternatives
• Benchmarking pre and post-
changes
• Selection of data structures
• Quick, merge and heap sorting
selection
• Linear, binary and tree searching
selection
Processes
• Assess and compare the trade-off
between efficiency and complexity
in the given use case
• Enhance the AI algorithm for
scalability and efficient memory,
arrays and resource allocation
• Ensure that the computation time
meets end-users' expectations
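The scaling classes above can be made concrete by counting comparisons. This R sketch (the two search functions are illustrative implementations, not from the deck) contrasts O(n) linear search with O(log n) binary search on sorted data:

```r
# Sketch: comparisons needed by linear vs. binary search on a sorted vector
linear_steps <- function(v, target) {
  for (i in seq_along(v)) {
    if (v[i] == target) return(i)  # i comparisons made so far
  }
  length(v)
}

binary_steps <- function(v, target) {
  lo <- 1; hi <- length(v); steps <- 0
  while (lo <= hi) {
    steps <- steps + 1
    mid <- (lo + hi) %/% 2
    if (v[mid] == target) return(steps)
    if (v[mid] < target) lo <- mid + 1 else hi <- mid - 1
  }
  steps
}

v <- 1:1000000
linear_steps(v, 1000000)  # 1,000,000 comparisons: grows linearly, O(n)
binary_steps(v, 1000000)  # about 20 comparisons: grows with log2(n), O(log n)
```

Doubling the input doubles the linear-search work but adds only one comparison to the binary search, which is why data-structure and algorithm selection dominates scalability.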
Big O notation in Scalability: time to complete vs. input size
• O(n²): recursion-heavy bubble, selection and insertion sorts (worst scaling)
• O(n log n): quicksort, mergesort
• O(n): scans over arrays and lists
• O(log n): binary, tree and heap searches (best scaling)
Infrastructure Malfunction
Infrastructure malfunctions stemming
from diverse factors may result in
software performance degradation or
data loss
Risk factors
• Inadequate IT support
• Outdated AI knowledge
• Undefined roles and
responsibilities
Controls
• Establish expert IT support for AI
systems
• Ensure ongoing AI training for IT
teams
• Define clear roles and
responsibilities
• Monitor the operation of
supporting infrastructure
INSECURE Edge Systems
Insecure edge hardware may be
compromised affecting the data residing
at the edge
Risk factors
• Physical components in edge
devices
• Data and model theft risks
• Interconnected systems
vulnerability
• Overlapping edge uses
Controls
• Implement remote disablement
methods
• Apply data and model obfuscation
techniques
• Isolate critical assets among
systems
• Minimize redundancy in edge use
cases
failed access controls
Granting excessive permissions in the AI
environments may expose to attack
vectors leading to data breaches and
security incidents
Risk factors
• Inadequate permission
governance tools, in particular
for cloud-services
• Permissions mischaracterized as
misconfigurations
• Anomalous activity monitoring
gaps
• Unauthorized users with
excessive permissions
• Overly broad permission grants
Controls
• Detect and rectify excessive
permissions, including
mischaracterized
misconfigurations
• Continuously monitor for
anomalous activities
• Minimize the gap between granted
and used permissions
Inadequate Fallback Systems
Weak backup and restoration processes
may affect the resilience and security of
the AI application
Risk factors
• Lack of backup systems, and
contingency plans
• Incomplete risk-tiering
• Insufficient data encryption for
backups
• Unassigned responsibilities
Controls
• Establish a comprehensive fallback
plan with responsibilities
• Define clear risk triggers and tiering
and contingency plans
• Implement varied data backup
schedules
Quantifying
AI risks
What is the probability of A?
P(A)
Probability
How much more likely is A than B?
pdf(A) / pdf(B)
Probability density functions
How is A parametrized?
MLE(A) & MAP(A)
Maximum Likelihood Estimation
Maximum A Posteriori
How is A approximated?
RV(A) & CLT(A)
Random Variable (Monte Carlo simulation)
Central Limit Theorem
Is A significant?
p-value(A), bootstrap resampling
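The bootstrap idea can be sketched in R. The simulated loss sample and the 100k USD tolerance below are illustrative assumptions:

```r
# Sketch: bootstrap resampling to judge whether the mean incident loss
# credibly exceeds a risk tolerance (simulated data, illustrative)
set.seed(7)
observed_losses <- rlnorm(50, meanlog = log(200000), sdlog = 1)  # 50 sampled losses

# Resample the data with replacement and recompute the mean each time
boot_means <- replicate(5000, mean(sample(observed_losses, replace = TRUE)))

ci_95 <- quantile(boot_means, c(0.025, 0.975))   # 95% bootstrap interval for the mean
p_within_tolerance <- mean(boot_means <= 100000) # share of resampled means under 100k USD
```

The bootstrap interval quantifies the uncertainty in the mean loss without assuming a parametric form, which suits the heavy-tailed losses typical of AI incidents.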
Losses from cyber incidents
Lognormal distribution fit
Events with losses > 20M USD
Median: 200k USD
Information Risk Insights Study (IRIS) 2017-2022
Event frequency model parameters:
Type         Mean μ   St. deviation σ
Upper bound  -2.28    0.87
Lower bound  -6.39    1.78
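Interpreting the study's μ and σ as the `meanlog` and `sdlog` of a lognormal distribution (an assumption here), the median falls out directly, since the median of a lognormal is exp(meanlog). A short R sketch (the severity `sdlog` of 1.5 is illustrative):

```r
# Sketch: the median of a lognormal distribution equals exp(meanlog)
exp(-2.28)                                  # implied median of the upper-bound model
qlnorm(0.5, meanlog = -2.28, sdlog = 0.87)  # same value via the quantile function

# A 200k USD median severity corresponds to meanlog = log(200000)
set.seed(10)
severity <- rlnorm(100000, meanlog = log(200000), sdlog = 1.5)  # sdlog 1.5 is illustrative
median(severity)  # close to 200,000
```

This parameterization lets the reported medians feed directly into Monte Carlo loss simulations.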
Losses from cyber incidents
IRIS 2022 Distribution of reported cyber event losses by company revenue
Confidentiality Integrity Availability
Disclosure: employee or
consultant may expose the
model or training data through
either negligence or intention
Oracle: attacker may query the
model and analyze its outputs,
aiming to reverse engineer and
extract either the model itself
or the training data
Poisoning: attacker may
manipulate the training data,
aiming to distort outputs,
introduce backdoors, or
sabotage the model, causing it
to produce inaccurate
predictions or classifications
Evasion: attacker may modify a
minor portion of the training
data to trigger significant
changes in outputs, thereby
influencing decisions made by
the model
Bias: developer may use
incomplete training data
leading to discriminatory
outcomes
Denial of service: attacker may
flood the system with numerous
requests or data to overwhelm
its capacity and slow down
service consumption
Perturbation: attacker may
input data to exploit the model's
fragility, intentionally inducing
errors or unexpected behavior
Abuse: user may misuse the
model for fraudulent or
unethical purposes
Third party: vendor may fail to
deliver expected supporting
services
QUantification
Compensations
Costs of rebuilding
Costs of remediation
Fairness
Loss of revenue
Loss productivity hours
Cost of changes
Performance
Penalties for privacy laws
Compensations
Costs of notifications
Legal fees
PRIVACY
Penalties for AI laws
Compensations
Legal fees
Increased customer acquisition costs
Regulations
Costs of response, restoration
and remediation
Loss of competitiveness (IPs)
Fraud losses
Robustness
Costs of wrong decision-
making
Compensations
Costs of reprogramming
Explainability
AGGREGATION
X1, X2, ..., Xn ~ Poisson (λ)
X1 + X2 ~ Poisson (2λ)
Likelihood of observing a
specific number of
Poisson-distributed
events when both X1 and
X2 are considered
together
Poisson
X1,..., Xn ~ Normal (μ, σ²)
X1 + X2 ~ Normal (2μ, 2σ²)
Combined impact or
exposure of a specific
number of Normal-
distributed events when
both X1 and X2 are
considered together
Normal
X1 ~ Binomial (n₁, p)
X2 ~ Binomial (n₂, p)
X1 + X2 ~ Binomial (n₁ + n₂, p)
Likelihood of observing a
specific number of
independent events when
both X1 and X2 are
considered together
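The aggregation identities above can be checked by simulation in R (sample sizes and parameter values are illustrative):

```r
# Sketch: verify the aggregation rules by simulation
set.seed(123)
n <- 100000

# Poisson: X1 + X2 ~ Poisson(2 * lambda)
lambda <- 3
x1 <- rpois(n, lambda); x2 <- rpois(n, lambda)
mean(x1 + x2)  # close to 2 * lambda = 6 (variance also close to 6, as Poisson requires)

# Normal: X1 + X2 ~ Normal(2 * mu, 2 * sigma^2)
mu <- 10; sigma <- 2
y1 <- rnorm(n, mu, sigma); y2 <- rnorm(n, mu, sigma)
mean(y1 + y2)  # close to 2 * mu = 20
var(y1 + y2)   # close to 2 * sigma^2 = 8

# Binomial: X1 + X2 ~ Binomial(n1 + n2, p)
b1 <- rbinom(n, 10, 0.3); b2 <- rbinom(n, 15, 0.3)
mean(b1 + b2)  # close to (10 + 15) * 0.3 = 7.5
```

Simulating the sums this way is a cheap sanity check before relying on closed-form aggregation in an exposure model.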
Law of Total Expectation
Calculate the expected loss average of a random variable by
considering all possible loss values and their associated
probabilities
E(X)=E(E(X∣Y))=∑E(X∣Y=y)⋅P(Y=y)
E(X) represents the expected value of the random variable X
E(X∣Y) represents the expected value of X given a specific value of the random variable Y
E(E(X∣Y)) means taking the expected value of E(X∣Y) over all possible values of Y
Decompose independent losses to be able to aggregate a total
exposure
Example IN R
P_B <- 0.05 # 5% probability of a security incident on the AI model causing a data breach
mean_compensation <- 50000 # Median compensation costs (customer compensations, legal and notification costs)
sd_compensation <- 2 # Geometric standard deviation of compensation costs
mean_regeneration <- 20000 # Median data regeneration costs (model rebuilding and data regeneration costs)
sd_regeneration <- 2 # Geometric standard deviation of data regeneration costs
# Generate random samples from lognormal distributions for compensation and regeneration costs
compensation_samples <- rlnorm(1000, log(mean_compensation), log(sd_compensation))
regeneration_samples <- rlnorm(1000, log(mean_regeneration), log(sd_regeneration))
# Calculate the expected losses using the Law of Total Expectation
Expected_Cost_Compensation <- P_B * mean(compensation_samples)
Expected_Cost_Regeneration <- P_B * mean(regeneration_samples)
Expected_Loss <- Expected_Cost_Compensation + Expected_Cost_Regeneration
# Print the results
cat("The expected compensation cost of a data breach is $", Expected_Cost_Compensation, "\n")
cat("The expected data regeneration cost of a data breach is $", Expected_Cost_Regeneration, "\n")
cat("The total expected loss of a data breach is $", Expected_Loss, "\n")
Example IN R
samples <- regeneration_samples + compensation_samples
hist(samples, breaks = 100, main = "Distribution of Total Losses",
xlab = "Total Loss", ylab = "Frequency", col = "lightblue")
Counting
Risk
scenario
Event A
Failed
Event B
Succeeded
P(A)
P(B)
Experiment Outcome
Model attack
Event A
Failed
Event B
Succeeded
P(A)=
95%
P(B)=
5%
Compensations
Regeneration
How many options?
Product Rule of Counting
If an experiment has two
independent parts, where the first
part can result in one of |m|
outcomes and the second part can
result in one of |n| outcomes, then
the total number of outcomes for
the experiment is |m| * |n|.
|m| * |n| = 2 * 2 = 4
(Fail, 0,0)
(Succ, Comp, Rege)
(Succ, 0, Rege)
(Succ, Comp, 0)
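The same enumeration can be generated in R with `expand.grid`, applying the product rule to the two independent impact events from the example above:

```r
# Product rule of counting: |m| * |n| outcomes for two independent parts
outcomes <- expand.grid(
  compensation = c("Yes", "No"),
  regeneration = c("Yes", "No")
)
outcomes        # the four (compensation, regeneration) combinations
nrow(outcomes)  # |m| * |n| = 2 * 2 = 4
```

Enumerating the outcome space explicitly like this is a useful check that a risk scenario's branches are mutually exclusive and collectively exhaustive.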
Probability
• our belief that an event 𝐸 occurs
• quantitative way to express our degree of belief or
confidence in the occurrence of an event
• number between 0 and 1 to which we ascribe meaning
• represented through probability distributions, which
describe how probabilities are distributed among different
possible outcomes
P(E) = n(E) / n, where n is the total number of trials and n(E) is the number of trials where E occurs
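The relative-frequency definition can be illustrated by simulation in R; the 5% success probability is taken from the model attack example above:

```r
# Estimate P(E) as n(E) / n over repeated trials
set.seed(99)
n_trials <- 100000
event_occurs <- runif(n_trials) < 0.05  # each trial: event E with true probability 5%

p_hat <- sum(event_occurs) / n_trials   # n(E) / n
p_hat  # converges to 0.05 as the number of trials grows
```

The estimate tightens as n grows, which is the Law of Large Numbers at work behind every Monte Carlo risk figure.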
Combinatorics & Bow Tie
Event
Cause A
Cause B
Cause A.A
Cause A.B
Cause B.A
Cause B.B
Cons 1
Cons 2
Cons 1.1
Cons 1.2
Cons 2.1
Cons 2.2
Tier 1 = 2 outcomes (direct or primary impact)
Tier 2 = 4 outcomes (indirect or secondary impact)
Tier 3 = 8 outcomes
Tier n = 2^n outcomes: exponential growth
Sum rule: if |A| = m, |B| = n and A ∩ B = ∅, then |A ∪ B| = m + n
Combinatorics & Bow Tie
Data Breach event tree:
• Subversion vs. no subversion
• Misconfiguration vs. no misconfiguration
• Regeneration cost: -20,000 (P = 20%)
• Compensation cost: -50,000 (P = 50%)
What is the probability of avoiding data regeneration and compensation costs?
80% * 50% = 40%
R code
# Set the parameters for risk assessment
Simulations <- 10000 # Number of random simulations
Loss1 <- 20000 # Estimated loss of the tier 1 impact
StDev1 <- 0.2 # Estimated standard deviation 1
Prob1 <- 0.2 # Probability of occurrence of the tier 1 impact
Loss2 <- 50000 # Estimated loss of the tier 2 impact
StDev2 <- 0.2 # Estimated standard deviation 2
Prob2 <- 0.5 # Probability of occurrence of the tier 2 impact
# Calculate and combine impacts
x1 <- rlnorm(Simulations, log(Loss1), StDev1) *
rbinom(Simulations, 1, Prob1)
x2 <- rlnorm(Simulations, log(Loss2), StDev2) *
rbinom(Simulations, 1, Prob2)
hist(x1 + x2, main="Risk Impact Aggregation",
xlab="Combined Impact")
The histogram spike at zero (about 40% of simulations) corresponds to outcomes where neither impact occurs.
Combinatorics & Impacts
Data
Breach
Product rule of counting
List of combinations of potential impacts without repetition
R code
impacts <- c("Regeneration", "Compensations",
"Remediation", "Fines")
all_combinations <- list()
for (n_impacts in 1:4) {
combinations <- combn(impacts, n_impacts, simplify =
FALSE)
all_combinations[[as.character(n_impacts)]] <- combinations
}
all_combinations
Regeneration
Compensations
Remediation
Fines
Combinatorics & Impacts
Data
Breach
$`1`[[1]] "Regeneration"
$`1`[[2]] "Compensations"
$`1`[[3]] "Remediation"
$`1`[[4]] "Fines"
$`2`[[1]] "Regeneration" "Compensations"
$`2`[[2]] "Regeneration" "Remediation"
$`2`[[3]] "Regeneration" "Fines"
$`2`[[4]] "Compensations" "Remediation"
$`2`[[5]] "Compensations" "Fines"
$`2`[[6]] "Remediation" "Fines"
$`3`[[1]] "Regeneration" "Compensations" "Remediation"
$`3`[[2]] "Regeneration" "Compensations" "Fines"
$`3`[[3]] "Regeneration" "Remediation" "Fines"
$`3`[[4]] "Compensations" "Remediation" "Fines"
$`4`[[1]] "Regeneration" "Compensations" "Remediation" "Fines"
Regeneration
Compensations
Remediation
Fines
Prof. Hernan
Huwyler, MBA CPA
Executive Education
Compliance, Risks, Controls
Cyber Security and Governance
/hernanwyler
/hewyler
Introduction to Information Security CSEIntroduction to Information Security CSE
Introduction to Information Security CSE
 
Intro.ppt
Intro.pptIntro.ppt
Intro.ppt
 
Cybersecurity Frameworks and You: The Perfect Match
Cybersecurity Frameworks and You: The Perfect MatchCybersecurity Frameworks and You: The Perfect Match
Cybersecurity Frameworks and You: The Perfect Match
 
CNIT 160 Ch 4a: Information Security Programs
CNIT 160 Ch 4a: Information Security ProgramsCNIT 160 Ch 4a: Information Security Programs
CNIT 160 Ch 4a: Information Security Programs
 
PMP.pptx
PMP.pptxPMP.pptx
PMP.pptx
 
Shruti ppt
Shruti pptShruti ppt
Shruti ppt
 

Más de Hernan Huwyler, MBA CPA

Asociacion Profesionistas de Compliance - Initiatives to Reduce the Cost of C...
Asociacion Profesionistas de Compliance - Initiatives to Reduce the Cost of C...Asociacion Profesionistas de Compliance - Initiatives to Reduce the Cost of C...
Asociacion Profesionistas de Compliance - Initiatives to Reduce the Cost of C...
Hernan Huwyler, MBA CPA
 
Model to Quantify Compliance Risks.pdf
Model to Quantify Compliance Risks.pdfModel to Quantify Compliance Risks.pdf
Model to Quantify Compliance Risks.pdf
Hernan Huwyler, MBA CPA
 
Prof Hernan Huwyler MBA CPA - Ditch your Heat Maps
Prof Hernan Huwyler MBA CPA - Ditch your Heat MapsProf Hernan Huwyler MBA CPA - Ditch your Heat Maps
Prof Hernan Huwyler MBA CPA - Ditch your Heat Maps
Hernan Huwyler, MBA CPA
 
Profesor Hernan Huwyler MBA CPA - Operacional Compliance
Profesor Hernan Huwyler MBA CPA - Operacional ComplianceProfesor Hernan Huwyler MBA CPA - Operacional Compliance
Profesor Hernan Huwyler MBA CPA - Operacional Compliance
Hernan Huwyler, MBA CPA
 
Hernan Huwyler - IE Compliance Corporate Risk Management Full 2023
Hernan Huwyler - IE Compliance Corporate Risk Management Full 2023 Hernan Huwyler - IE Compliance Corporate Risk Management Full 2023
Hernan Huwyler - IE Compliance Corporate Risk Management Full 2023
Hernan Huwyler, MBA CPA
 
The Behavioral Science of Compliance CUMPLEN.pdf
The Behavioral Science of Compliance CUMPLEN.pdfThe Behavioral Science of Compliance CUMPLEN.pdf
The Behavioral Science of Compliance CUMPLEN.pdf
Hernan Huwyler, MBA CPA
 
R is for Risk 2 Risk Management using R
R is for Risk 2 Risk Management using RR is for Risk 2 Risk Management using R
R is for Risk 2 Risk Management using R
Hernan Huwyler, MBA CPA
 
Compliance and the russian invasion - Prof Hernan Huwyler
Compliance and the russian invasion - Prof Hernan HuwylerCompliance and the russian invasion - Prof Hernan Huwyler
Compliance and the russian invasion - Prof Hernan Huwyler
Hernan Huwyler, MBA CPA
 
DPO Day Conference - Minimizing Privacy Risks
DPO Day Conference - Minimizing Privacy RisksDPO Day Conference - Minimizing Privacy Risks
DPO Day Conference - Minimizing Privacy Risks
Hernan Huwyler, MBA CPA
 
Master in Sustainability Leadership Sustainability Risks Prof Hernan Huwyler
Master in Sustainability Leadership Sustainability Risks Prof Hernan HuwylerMaster in Sustainability Leadership Sustainability Risks Prof Hernan Huwyler
Master in Sustainability Leadership Sustainability Risks Prof Hernan Huwyler
Hernan Huwyler, MBA CPA
 
Cyber Laundering and the AML Directives
Cyber Laundering and the AML DirectivesCyber Laundering and the AML Directives
Cyber Laundering and the AML Directives
Hernan Huwyler, MBA CPA
 
Hernan Huwyler - Iberoamerican Compliance Conference UCM Congreso Iberoameric...
Hernan Huwyler - Iberoamerican Compliance Conference UCM Congreso Iberoameric...Hernan Huwyler - Iberoamerican Compliance Conference UCM Congreso Iberoameric...
Hernan Huwyler - Iberoamerican Compliance Conference UCM Congreso Iberoameric...
Hernan Huwyler, MBA CPA
 
10 Mistakes in Implementing the ISO 37301
10 Mistakes in Implementing the ISO 3730110 Mistakes in Implementing the ISO 37301
10 Mistakes in Implementing the ISO 37301
Hernan Huwyler, MBA CPA
 
Qa Financials - 10 Smart Controls for Software Development
Qa Financials  - 10 Smart Controls for Software DevelopmentQa Financials  - 10 Smart Controls for Software Development
Qa Financials - 10 Smart Controls for Software Development
Hernan Huwyler, MBA CPA
 
Information Risk Management - Cyber Risk Management - IT Risks
Information Risk Management - Cyber Risk Management - IT RisksInformation Risk Management - Cyber Risk Management - IT Risks
Information Risk Management - Cyber Risk Management - IT Risks
Hernan Huwyler, MBA CPA
 
Stronger 2021 Building the Blocks to Quantify Cyber Risks - Prof hernan huwyler
Stronger 2021 Building the Blocks to Quantify Cyber Risks - Prof hernan huwylerStronger 2021 Building the Blocks to Quantify Cyber Risks - Prof hernan huwyler
Stronger 2021 Building the Blocks to Quantify Cyber Risks - Prof hernan huwyler
Hernan Huwyler, MBA CPA
 
IE Curso ISO 37301 Aseguramiento de Controles de Cumplimiento
IE Curso  ISO 37301 Aseguramiento de Controles de Cumplimiento IE Curso  ISO 37301 Aseguramiento de Controles de Cumplimiento
IE Curso ISO 37301 Aseguramiento de Controles de Cumplimiento
Hernan Huwyler, MBA CPA
 
Strategy Insights - How to Quantify IT Risks
Strategy Insights - How to Quantify IT Risks Strategy Insights - How to Quantify IT Risks
Strategy Insights - How to Quantify IT Risks
Hernan Huwyler, MBA CPA
 
Hernan Huwyler - Boards in a Digitalized World
Hernan Huwyler - Boards in a Digitalized WorldHernan Huwyler - Boards in a Digitalized World
Hernan Huwyler - Boards in a Digitalized World
Hernan Huwyler, MBA CPA
 
IDA DTU RiskLab How to validate your risk data
IDA DTU RiskLab How to validate your risk dataIDA DTU RiskLab How to validate your risk data
IDA DTU RiskLab How to validate your risk data
Hernan Huwyler, MBA CPA
 

Más de Hernan Huwyler, MBA CPA (20)

Asociacion Profesionistas de Compliance - Initiatives to Reduce the Cost of C...
Asociacion Profesionistas de Compliance - Initiatives to Reduce the Cost of C...Asociacion Profesionistas de Compliance - Initiatives to Reduce the Cost of C...
Asociacion Profesionistas de Compliance - Initiatives to Reduce the Cost of C...
 
Model to Quantify Compliance Risks.pdf
Model to Quantify Compliance Risks.pdfModel to Quantify Compliance Risks.pdf
Model to Quantify Compliance Risks.pdf
 
Prof Hernan Huwyler MBA CPA - Ditch your Heat Maps
Prof Hernan Huwyler MBA CPA - Ditch your Heat MapsProf Hernan Huwyler MBA CPA - Ditch your Heat Maps
Prof Hernan Huwyler MBA CPA - Ditch your Heat Maps
 
Profesor Hernan Huwyler MBA CPA - Operacional Compliance
Profesor Hernan Huwyler MBA CPA - Operacional ComplianceProfesor Hernan Huwyler MBA CPA - Operacional Compliance
Profesor Hernan Huwyler MBA CPA - Operacional Compliance
 
Hernan Huwyler - IE Compliance Corporate Risk Management Full 2023
Hernan Huwyler - IE Compliance Corporate Risk Management Full 2023 Hernan Huwyler - IE Compliance Corporate Risk Management Full 2023
Hernan Huwyler - IE Compliance Corporate Risk Management Full 2023
 
The Behavioral Science of Compliance CUMPLEN.pdf
The Behavioral Science of Compliance CUMPLEN.pdfThe Behavioral Science of Compliance CUMPLEN.pdf
The Behavioral Science of Compliance CUMPLEN.pdf
 
R is for Risk 2 Risk Management using R
R is for Risk 2 Risk Management using RR is for Risk 2 Risk Management using R
R is for Risk 2 Risk Management using R
 
Compliance and the russian invasion - Prof Hernan Huwyler
Compliance and the russian invasion - Prof Hernan HuwylerCompliance and the russian invasion - Prof Hernan Huwyler
Compliance and the russian invasion - Prof Hernan Huwyler
 
DPO Day Conference - Minimizing Privacy Risks
DPO Day Conference - Minimizing Privacy RisksDPO Day Conference - Minimizing Privacy Risks
DPO Day Conference - Minimizing Privacy Risks
 
Master in Sustainability Leadership Sustainability Risks Prof Hernan Huwyler
Master in Sustainability Leadership Sustainability Risks Prof Hernan HuwylerMaster in Sustainability Leadership Sustainability Risks Prof Hernan Huwyler
Master in Sustainability Leadership Sustainability Risks Prof Hernan Huwyler
 
Cyber Laundering and the AML Directives
Cyber Laundering and the AML DirectivesCyber Laundering and the AML Directives
Cyber Laundering and the AML Directives
 
Hernan Huwyler - Iberoamerican Compliance Conference UCM Congreso Iberoameric...
Hernan Huwyler - Iberoamerican Compliance Conference UCM Congreso Iberoameric...Hernan Huwyler - Iberoamerican Compliance Conference UCM Congreso Iberoameric...
Hernan Huwyler - Iberoamerican Compliance Conference UCM Congreso Iberoameric...
 
10 Mistakes in Implementing the ISO 37301
10 Mistakes in Implementing the ISO 3730110 Mistakes in Implementing the ISO 37301
10 Mistakes in Implementing the ISO 37301
 
Qa Financials - 10 Smart Controls for Software Development
Qa Financials  - 10 Smart Controls for Software DevelopmentQa Financials  - 10 Smart Controls for Software Development
Qa Financials - 10 Smart Controls for Software Development
 
Information Risk Management - Cyber Risk Management - IT Risks
Information Risk Management - Cyber Risk Management - IT RisksInformation Risk Management - Cyber Risk Management - IT Risks
Information Risk Management - Cyber Risk Management - IT Risks
 
Stronger 2021 Building the Blocks to Quantify Cyber Risks - Prof hernan huwyler
Stronger 2021 Building the Blocks to Quantify Cyber Risks - Prof hernan huwylerStronger 2021 Building the Blocks to Quantify Cyber Risks - Prof hernan huwyler
Stronger 2021 Building the Blocks to Quantify Cyber Risks - Prof hernan huwyler
 
IE Curso ISO 37301 Aseguramiento de Controles de Cumplimiento
IE Curso  ISO 37301 Aseguramiento de Controles de Cumplimiento IE Curso  ISO 37301 Aseguramiento de Controles de Cumplimiento
IE Curso ISO 37301 Aseguramiento de Controles de Cumplimiento
 
Strategy Insights - How to Quantify IT Risks
Strategy Insights - How to Quantify IT Risks Strategy Insights - How to Quantify IT Risks
Strategy Insights - How to Quantify IT Risks
 
Hernan Huwyler - Boards in a Digitalized World
Hernan Huwyler - Boards in a Digitalized WorldHernan Huwyler - Boards in a Digitalized World
Hernan Huwyler - Boards in a Digitalized World
 
IDA DTU RiskLab How to validate your risk data
IDA DTU RiskLab How to validate your risk dataIDA DTU RiskLab How to validate your risk data
IDA DTU RiskLab How to validate your risk data
 

Último

The Genesis of BriansClub.cm Famous Dark WEb Platform
The Genesis of BriansClub.cm Famous Dark WEb PlatformThe Genesis of BriansClub.cm Famous Dark WEb Platform
The Genesis of BriansClub.cm Famous Dark WEb Platform
SabaaSudozai
 
Satta Matka Dpboss Matka Guessing Kalyan Chart Indian Matka Kalyan panel Chart
Satta Matka Dpboss Matka Guessing Kalyan Chart Indian Matka Kalyan panel ChartSatta Matka Dpboss Matka Guessing Kalyan Chart Indian Matka Kalyan panel Chart
Satta Matka Dpboss Matka Guessing Kalyan Chart Indian Matka Kalyan panel Chart
➒➌➎➏➑➐➋➑➐➐Dpboss Matka Guessing Satta Matka Kalyan Chart Indian Matka
 
How MJ Global Leads the Packaging Industry.pdf
How MJ Global Leads the Packaging Industry.pdfHow MJ Global Leads the Packaging Industry.pdf
How MJ Global Leads the Packaging Industry.pdf
MJ Global
 
The Steadfast and Reliable Bull: Taurus Zodiac Sign
The Steadfast and Reliable Bull: Taurus Zodiac SignThe Steadfast and Reliable Bull: Taurus Zodiac Sign
The Steadfast and Reliable Bull: Taurus Zodiac Sign
my Pandit
 
The Heart of Leadership_ How Emotional Intelligence Drives Business Success B...
The Heart of Leadership_ How Emotional Intelligence Drives Business Success B...The Heart of Leadership_ How Emotional Intelligence Drives Business Success B...
The Heart of Leadership_ How Emotional Intelligence Drives Business Success B...
Stephen Cashman
 
Call8328958814 satta matka Kalyan result satta guessing
Call8328958814 satta matka Kalyan result satta guessingCall8328958814 satta matka Kalyan result satta guessing
Call8328958814 satta matka Kalyan result satta guessing
➑➌➋➑➒➎➑➑➊➍
 
一比一原版(QMUE毕业证书)英国爱丁堡玛格丽特女王大学毕业证文凭如何办理
一比一原版(QMUE毕业证书)英国爱丁堡玛格丽特女王大学毕业证文凭如何办理一比一原版(QMUE毕业证书)英国爱丁堡玛格丽特女王大学毕业证文凭如何办理
一比一原版(QMUE毕业证书)英国爱丁堡玛格丽特女王大学毕业证文凭如何办理
taqyea
 
Digital Transformation Frameworks: Driving Digital Excellence
Digital Transformation Frameworks: Driving Digital ExcellenceDigital Transformation Frameworks: Driving Digital Excellence
Digital Transformation Frameworks: Driving Digital Excellence
Operational Excellence Consulting
 
DearbornMusic-KatherineJasperFullSailUni
DearbornMusic-KatherineJasperFullSailUniDearbornMusic-KatherineJasperFullSailUni
DearbornMusic-KatherineJasperFullSailUni
katiejasper96
 
Profiles of Iconic Fashion Personalities.pdf
Profiles of Iconic Fashion Personalities.pdfProfiles of Iconic Fashion Personalities.pdf
Profiles of Iconic Fashion Personalities.pdf
TTop Threads
 
The APCO Geopolitical Radar - Q3 2024 The Global Operating Environment for Bu...
The APCO Geopolitical Radar - Q3 2024 The Global Operating Environment for Bu...The APCO Geopolitical Radar - Q3 2024 The Global Operating Environment for Bu...
The APCO Geopolitical Radar - Q3 2024 The Global Operating Environment for Bu...
APCO
 
一比一原版新西兰奥塔哥大学毕业证(otago毕业证)如何办理
一比一原版新西兰奥塔哥大学毕业证(otago毕业证)如何办理一比一原版新西兰奥塔哥大学毕业证(otago毕业证)如何办理
一比一原版新西兰奥塔哥大学毕业证(otago毕业证)如何办理
taqyea
 
Observation Lab PowerPoint Assignment for TEM 431
Observation Lab PowerPoint Assignment for TEM 431Observation Lab PowerPoint Assignment for TEM 431
Observation Lab PowerPoint Assignment for TEM 431
ecamare2
 
Top mailing list providers in the USA.pptx
Top mailing list providers in the USA.pptxTop mailing list providers in the USA.pptx
Top mailing list providers in the USA.pptx
JeremyPeirce1
 
Pitch Deck Teardown: Kinnect's $250k Angel deck
Pitch Deck Teardown: Kinnect's $250k Angel deckPitch Deck Teardown: Kinnect's $250k Angel deck
Pitch Deck Teardown: Kinnect's $250k Angel deck
HajeJanKamps
 
2024-6-01-IMPACTSilver-Corp-Presentation.pdf
2024-6-01-IMPACTSilver-Corp-Presentation.pdf2024-6-01-IMPACTSilver-Corp-Presentation.pdf
2024-6-01-IMPACTSilver-Corp-Presentation.pdf
hartfordclub1
 
Chapter 7 Final business management sciences .ppt
Chapter 7 Final business management sciences .pptChapter 7 Final business management sciences .ppt
Chapter 7 Final business management sciences .ppt
ssuser567e2d
 
Maksym Vyshnivetskyi: PMO KPIs (UA) (#12)
Maksym Vyshnivetskyi: PMO KPIs (UA) (#12)Maksym Vyshnivetskyi: PMO KPIs (UA) (#12)
Maksym Vyshnivetskyi: PMO KPIs (UA) (#12)
Lviv Startup Club
 
Anny Serafina Love - Letter of Recommendation by Kellen Harkins, MS.
Anny Serafina Love - Letter of Recommendation by Kellen Harkins, MS.Anny Serafina Love - Letter of Recommendation by Kellen Harkins, MS.
Anny Serafina Love - Letter of Recommendation by Kellen Harkins, MS.
AnnySerafinaLove
 
Business storytelling: key ingredients to a story
Business storytelling: key ingredients to a storyBusiness storytelling: key ingredients to a story
Business storytelling: key ingredients to a story
Alexandra Fulford
 

Último (20)

The Genesis of BriansClub.cm Famous Dark WEb Platform
The Genesis of BriansClub.cm Famous Dark WEb PlatformThe Genesis of BriansClub.cm Famous Dark WEb Platform
The Genesis of BriansClub.cm Famous Dark WEb Platform
 
Satta Matka Dpboss Matka Guessing Kalyan Chart Indian Matka Kalyan panel Chart
Satta Matka Dpboss Matka Guessing Kalyan Chart Indian Matka Kalyan panel ChartSatta Matka Dpboss Matka Guessing Kalyan Chart Indian Matka Kalyan panel Chart
Satta Matka Dpboss Matka Guessing Kalyan Chart Indian Matka Kalyan panel Chart
 
How MJ Global Leads the Packaging Industry.pdf
How MJ Global Leads the Packaging Industry.pdfHow MJ Global Leads the Packaging Industry.pdf
How MJ Global Leads the Packaging Industry.pdf
 
The Steadfast and Reliable Bull: Taurus Zodiac Sign
The Steadfast and Reliable Bull: Taurus Zodiac SignThe Steadfast and Reliable Bull: Taurus Zodiac Sign
The Steadfast and Reliable Bull: Taurus Zodiac Sign
 
The Heart of Leadership_ How Emotional Intelligence Drives Business Success B...
The Heart of Leadership_ How Emotional Intelligence Drives Business Success B...The Heart of Leadership_ How Emotional Intelligence Drives Business Success B...
The Heart of Leadership_ How Emotional Intelligence Drives Business Success B...
 
Call8328958814 satta matka Kalyan result satta guessing
Call8328958814 satta matka Kalyan result satta guessingCall8328958814 satta matka Kalyan result satta guessing
Call8328958814 satta matka Kalyan result satta guessing
 
一比一原版(QMUE毕业证书)英国爱丁堡玛格丽特女王大学毕业证文凭如何办理
一比一原版(QMUE毕业证书)英国爱丁堡玛格丽特女王大学毕业证文凭如何办理一比一原版(QMUE毕业证书)英国爱丁堡玛格丽特女王大学毕业证文凭如何办理
一比一原版(QMUE毕业证书)英国爱丁堡玛格丽特女王大学毕业证文凭如何办理
 
Digital Transformation Frameworks: Driving Digital Excellence
Digital Transformation Frameworks: Driving Digital ExcellenceDigital Transformation Frameworks: Driving Digital Excellence
Digital Transformation Frameworks: Driving Digital Excellence
 
DearbornMusic-KatherineJasperFullSailUni
DearbornMusic-KatherineJasperFullSailUniDearbornMusic-KatherineJasperFullSailUni
DearbornMusic-KatherineJasperFullSailUni
 
Profiles of Iconic Fashion Personalities.pdf
Profiles of Iconic Fashion Personalities.pdfProfiles of Iconic Fashion Personalities.pdf
Profiles of Iconic Fashion Personalities.pdf
 
The APCO Geopolitical Radar - Q3 2024 The Global Operating Environment for Bu...
The APCO Geopolitical Radar - Q3 2024 The Global Operating Environment for Bu...The APCO Geopolitical Radar - Q3 2024 The Global Operating Environment for Bu...
The APCO Geopolitical Radar - Q3 2024 The Global Operating Environment for Bu...
 
一比一原版新西兰奥塔哥大学毕业证(otago毕业证)如何办理
一比一原版新西兰奥塔哥大学毕业证(otago毕业证)如何办理一比一原版新西兰奥塔哥大学毕业证(otago毕业证)如何办理
一比一原版新西兰奥塔哥大学毕业证(otago毕业证)如何办理
 
Observation Lab PowerPoint Assignment for TEM 431
Observation Lab PowerPoint Assignment for TEM 431Observation Lab PowerPoint Assignment for TEM 431
Observation Lab PowerPoint Assignment for TEM 431
 
Top mailing list providers in the USA.pptx
Top mailing list providers in the USA.pptxTop mailing list providers in the USA.pptx
Top mailing list providers in the USA.pptx
 
Pitch Deck Teardown: Kinnect's $250k Angel deck
Pitch Deck Teardown: Kinnect's $250k Angel deckPitch Deck Teardown: Kinnect's $250k Angel deck
Pitch Deck Teardown: Kinnect's $250k Angel deck
 
2024-6-01-IMPACTSilver-Corp-Presentation.pdf
2024-6-01-IMPACTSilver-Corp-Presentation.pdf2024-6-01-IMPACTSilver-Corp-Presentation.pdf
2024-6-01-IMPACTSilver-Corp-Presentation.pdf
 
Chapter 7 Final business management sciences .ppt
Chapter 7 Final business management sciences .pptChapter 7 Final business management sciences .ppt
Chapter 7 Final business management sciences .ppt
 
Maksym Vyshnivetskyi: PMO KPIs (UA) (#12)
Maksym Vyshnivetskyi: PMO KPIs (UA) (#12)Maksym Vyshnivetskyi: PMO KPIs (UA) (#12)
Maksym Vyshnivetskyi: PMO KPIs (UA) (#12)
 
Anny Serafina Love - Letter of Recommendation by Kellen Harkins, MS.
Anny Serafina Love - Letter of Recommendation by Kellen Harkins, MS.Anny Serafina Love - Letter of Recommendation by Kellen Harkins, MS.
Anny Serafina Love - Letter of Recommendation by Kellen Harkins, MS.
 
Business storytelling: key ingredients to a story
Business storytelling: key ingredients to a storyBusiness storytelling: key ingredients to a story
Business storytelling: key ingredients to a story
 

Prof. Hernan Huwyler IE Law School - AI Risks and Controls.pdf

• 1. How to Control Risks in Artificial Intelligence
Prof. Hernan Huwyler MBA CPA
Executive Education
Governance, Compliance, and Cyber Security
• 2. Risk Process for AI
Management
• Perform an initial risk assessment and prioritize AI assets for further review
• Identify opportunities for targeted governance to reduce risks, optimize costs, and enhance the value derived from AI resources
Discovery
• Identify all AI assets and libraries, including shadow AI implementations
• Establish a searchable database of AI assets
• Foster a user community to promote collaboration and the adoption of AI best practices
Compliance
• Develop policies and procedures with AI-specific controls
• Incorporate AI applications into IT, risk, and compliance policies
• Train staff on compliance with AI-related controls
• Test the AI-specific controls
• 3. AI Model Risk Management
Implement
• Assess data, algorithmic, performance, computational feasibility, and vendor risks
• Enhance controls for change management, targeted model reviews, and ongoing monitoring
Design
• Embed AI into decision-making: innovation plans, product approvals, third-party sourcing, and end-user computing of AI-based software
• Improve risk criteria to cover materiality, customer and societal impacts, and complexity
Communicate
• Keep senior management updated on AI model development, review, and usage
• Report the testing and outcomes analysis for AI models
• Hold developers and model owners accountable for safe deployment
• 4. Objectives at Risk
Security
• Deployment systems
• Core systems
• Infrastructure
• Edge systems
Data
• Training data
• Use data
• Feedback data
• Model code
Performance
• Third parties
• Dependencies
• User experience
Explainability
• Fairness
• Infrastructure
• Incorrect feedback
Compliance
• AI and privacy regulations
• Contracts
• Insurance
Costs
• Budgeted resources
• Time objectives
• 5. Risk and Compliance
EU Artificial Intelligence Act
• Implement a risk management system to proactively mitigate liabilities
• Implement granular scenario analysis to address AI-specific threats
• Evaluate the potential human rights impacts when AI interacts with users as intended by the model (bias, privacy, explainability, robustness)
US Executive Orders 13859 on AI and 14028 on Cybersecurity
• Demonstrate controls on privacy, human rights, data, model, and cyber security risks
• Model threats, attacks, and attack surfaces to assess software risks (NIST SP 800-218 PW.1.1)
• Analyze vulnerabilities to gather risk data and plan responses (NIST SP 800-218 RV.2.1)
• Have a qualified, independent person review that the design addresses the identified risks (NIST SP 800-218 PW.2.1)
• Analyze the risk of applicable technology stacks
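The granular scenario analysis called for above can be sketched as a small Monte Carlo simulation. The two AI loss scenarios, their annual probabilities, and the loss ranges below are illustrative assumptions only, not figures from the deck.

```python
import random

def simulate_annual_loss(scenarios, years=10_000, seed=42):
    """Monte Carlo sketch of granular scenario analysis.

    Each scenario is (annual_probability, loss_low, loss_high).
    All figures are illustrative assumptions.
    """
    rng = random.Random(seed)
    totals = []
    for _ in range(years):
        loss = 0.0
        for prob, low, high in scenarios:
            if rng.random() < prob:             # did the scenario occur this year?
                loss += rng.uniform(low, high)  # severity drawn from the range
        totals.append(loss)
    return totals

# Hypothetical AI scenarios: a model-bias incident and a training-data breach
scenarios = [(0.10, 50_000, 250_000), (0.02, 200_000, 1_000_000)]
losses = simulate_annual_loss(scenarios)
expected_loss = sum(losses) / len(losses)
loss_at_95 = sorted(losses)[int(0.95 * len(losses))]  # 95th-percentile annual loss
```

The simulated distribution supports risk criteria based on expected loss and tail loss rather than a single heat-map rating.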
• 6. New Skills for Risk Managers
Understand Scenarios
• AI-specific threat vectors
• The performance-versus-interpretability trade-off
• Degradation and flagging
Understand Practices
• AI scope reviews
• Bias testing
• Explainability reports
• Data features and testing
• Statistical data checks
• Model output reviews
• Bayesian hypothesis testing
Understand Structure
• Roles of data modelers and analysts
• Model and use documentation
• Escalation mechanisms and workflows
• AI use cases
• 7. AI: Beyond Machine Learning
• AI combines computing power, probabilities, and data into a probabilistic model for predictions
• Data must be curated, classified, complete, clean, and consistent
• 9. Inadequate Data Representativeness
Risk
• Uneven training data from skewed sampling may lead to erroneous AI model predictions
Factors
• Inadequate data preprocessing
• Biased data collection methods
• Unrepresentative training datasets
• Selection biases in sampling
• Mishandled data augmentation
• Lack of diversity in input sources
• Data drift over time
Controls
• Verify data representativeness
• Conduct sensitivity analysis
• Implement advanced modeling techniques to mitigate selection bias
• Integrate fairness-aware AI algorithms to identify and rectify bias
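The "verify data representativeness" control can be sketched as a share comparison between the training sample and a reference population. The group names, population shares, and 5% tolerance below are illustrative assumptions.

```python
from collections import Counter

def representativeness_gaps(sample_groups, population_shares, tolerance=0.05):
    """Flag subgroups whose share in the training sample deviates from the
    reference population by more than `tolerance` (absolute difference)."""
    counts = Counter(sample_groups)
    total = sum(counts.values())
    gaps = {}
    for group, expected_share in population_shares.items():
        observed = counts.get(group, 0) / total
        if abs(observed - expected_share) > tolerance:
            gaps[group] = round(observed - expected_share, 3)
    return gaps

# Training sample heavily skewed toward group "A" (hypothetical groups)
sample = ["A"] * 80 + ["B"] * 15 + ["C"] * 5
population = {"A": 0.50, "B": 0.30, "C": 0.20}
gaps = representativeness_gaps(sample, population)
```

A non-empty result would trigger resampling, reweighting, or the fairness-aware techniques listed above.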
• 10. Low Data Quality
Risk
• Incomplete, inconsistent, or incorrect data without detection processes may lead to inaccurate AI model predictions
Factors
• Lack of data quality standards
• Absence of standardized data improvement processes
• Data privacy and ownership issues
• Weak data fitness for the use case
• High data customization
• Reuse of prior model-derived data
Controls
• Establish data quality rules
• Enhance remediation processes
• Centralize data management
• Test data formatting, metadata completeness, and indexing
• Address root causes of errors
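One way to operationalize data quality rules is a small rule engine that applies named predicates to each record and collects violations for remediation. The field names and rules here are illustrative assumptions.

```python
def run_quality_checks(records, rules):
    """Apply each named rule predicate to each record; return (index, rule)
    pairs for every violation found."""
    violations = []
    for i, record in enumerate(records):
        for name, predicate in rules.items():
            if not predicate(record):
                violations.append((i, name))
    return violations

# Illustrative rules: completeness, range, and format checks
rules = {
    "customer_id present": lambda r: bool(r.get("customer_id")),
    "amount is non-negative": lambda r: isinstance(r.get("amount"), (int, float)) and r["amount"] >= 0,
    "country is ISO-2": lambda r: isinstance(r.get("country"), str) and len(r["country"]) == 2,
}

records = [
    {"customer_id": "C1", "amount": 120.0, "country": "DK"},
    {"customer_id": "",   "amount": -5,    "country": "Spain"},
]
violations = run_quality_checks(records, rules)
```

Routing the violation list into a remediation workflow addresses root causes instead of silently dropping bad rows.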
• 11. Data Scarcity
Risk
• Inadequate selection, size, and relevance of datasets may lead to inaccurate AI model predictions
Factors
• Privacy, data source, and logistical constraints
• Limited available data due to the unique nature of the domain
• Lack of resources to process large volumes of data
• Need for long-term historical data
• Use of third-party data
Controls
• Define standard identifiers and consistent definitions
• Integrate physics-informed machine learning
• Review training, synthetic, and augmented data for accuracy and relevance
• 12. Insufficient External Data Quality
Risk
• A data vendor's inadequate lineage information, components, and processes may compromise data quality and model performance
Factors
• Lack of vendor due diligence and requirements
• Insufficient quality control measures
• Data volume exceeding infrastructure capacity
• Inadequate response time optimization
Controls
• Enforce model risk management standards
• Set service level agreements with provenance data
• Monitor aggregation
• Validate inputs and reliability
• Track response times
• 14. Misjudged AI Risk Ratings
Risk
• Inadequate risk assessments of model pattern recognition, probability theory, and engineering may lead to vulnerabilities and implementation issues
Factors
• Lack of a traditional statistical foundation for AI modeling
• Complex AI model patterns
• Probability-based decision-making
• Model engineering nuances
Controls
• Integrate standards into modeling processes, approvals, third-party sourcing, and IT
• Maintain an AI model inventory with metadata
• Adapt design reviews to model-specific risks
• 15. Missing Business Requirements
Risk
• Inadequate review of the use case, operationalization, and consumption during AI model development may overlook business requirements
Factors
• Incomplete use case analysis
• Lack of operationalization clarity
• Insufficient consumption strategy
Controls
• Assess AI's potential for unique behavior beyond documented requirements
• Identify and address capability gaps across the conceptualization, pilot, and operationalization phases before project initiation
• 16. Model Usage Misunderstanding
Risk
• Inadequate understanding of the intended uses of the model may hinder informed decision-making, leading to potential human rights harm and compliance losses
Factors
• Lack of context understanding
• Unrecognized impact on different groups
• Absence of a continuous assessment plan
• Failure to consider the broader system impact
Controls
• Perform impact assessments considering the context to address vulnerabilities and avoid unfair outputs
• Implement a continuous assessment plan throughout the model's lifecycle
• Consider the broader system impact during model integration
• 17. Unidentified Model Limits
Risk
• Inadequate understanding of AI model boundary conditions may produce unintended and suboptimal outcomes
Factors
• Neglecting awareness of system limitations and boundary conditions
• Lack of ongoing monitoring
• Lack of interdisciplinary collaboration
• Ethical misalignment in using AI
Controls
• Develop ongoing boundary testing processes during project framing
• Define monitoring procedures throughout the entire lifecycle
• Form multidisciplinary development teams
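Ongoing boundary testing can be sketched as probing a model at the extremes of its input domain and flagging outputs that leave the valid range. The toy scoring function, the boundary values, and the valid range are assumptions for illustration.

```python
def check_boundaries(predict, boundary_inputs, valid_range):
    """Probe a model at boundary inputs and collect (input, output) pairs
    whose output falls outside the valid range."""
    low, high = valid_range
    failures = []
    for x in boundary_inputs:
        y = predict(x)
        if not (low <= y <= high):
            failures.append((x, y))
    return failures

# Toy scoring model: sensible mid-range, but unbounded outside the trained region
def toy_score(x):
    return 0.01 * x * x

boundaries = [0, 1, 10, 100]  # include the extremes of the input domain
failures = check_boundaries(toy_score, boundaries, valid_range=(0.0, 1.0))
```

Re-running the probe on a schedule turns a one-off design review into the ongoing monitoring the slide calls for.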
• 18. Unidentified Model Limits (continued)
Challenges
• Behaviors may exhibit substantial cultural and contextual variations, rendering generalized predictions challenging
• Predictive models may struggle to extrapolate behavior patterns consistently across diverse individuals
• Behaviors may be dynamic and can undergo significant transformations over time
• Human decision-making may be shaped by a multitude of complex factors, including subjective experiences and individual interpretations
• Behaviors may defy predictability owing to free will, cognitive biases, and the impact of unique and unforeseen stimuli
• Behavioral data may raise ethical, compliance, and privacy concerns
• 19. Model Security Gaps
Risk
• Inadequate security requirements may expose AI models to adversarial attacks and loss of data integrity and availability
Factors
• Incomplete security specifications
• Lack of adversarial robustness testing
• Neglected threat modeling
Controls
• Enhance the security program with AI considerations
• Address potential system compromise methods
• Assess dynamic AI-related risks
• Integrate information security into risk management policies
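Adversarial robustness testing can be approximated, at sketch level, by checking whether small random input perturbations flip a model's decision. Real adversarial testing would use gradient- or query-based attacks; the toy classifier, the test point, and the perturbation budget here are assumptions.

```python
import random

def perturbation_test(predict, x, epsilon=0.1, trials=200, seed=0):
    """Estimate how often random perturbations of size <= epsilon flip the
    model's decision at input x (a crude robustness proxy)."""
    rng = random.Random(seed)
    base = predict(x)
    flips = 0
    for _ in range(trials):
        noisy = [v + rng.uniform(-epsilon, epsilon) for v in x]
        if predict(noisy) != base:
            flips += 1
    return flips / trials

# Toy linear classifier evaluated near its decision boundary
def toy_classifier(x):
    return 1 if (0.5 * x[0] + 0.5 * x[1]) >= 1.0 else 0

flip_rate = perturbation_test(toy_classifier, [1.0, 1.02], epsilon=0.1)
```

A high flip rate near the boundary signals inputs where an adversary needs only a tiny nudge to change the outcome.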
• 21. Ethics Oversight Gaps
Risk
• Insufficient algorithm ethics checks may lead to discrimination and unreliable models unfit for predictions
Factors
• Lack of defined AI ethics governance to address ethical considerations in training data
• Inadequate controls in algorithm development
• Neglect of hidden biases in model outputs
• Ethical misalignment with agents' preferences
• Absence of AI ethics council inspections
Controls
• Enhance algorithm development controls to address ethics in training data and mitigate hidden biases
• Inspect algorithms through an AI ethics council
• Combine game theory and machine learning
• 22. Non-Compliance
Risk
• Limited insight into decision-making processes may harbor hidden biases and threaten model credibility
Factors
• Lack of embedded compliance checks
• Insufficient model validation for regulatory requirements
• Lack of transparency in AI decision-making
• Incomplete understanding of decision processes
• Unexamined societal biases in data and models
Controls
• Strengthen model validation, emphasizing transparent decision understanding
• Conduct extensive pre-deployment testing of models
• Enhance algorithm validation to meet ethical AI standards and mitigate societal biases in data and models
• 23. Black Box
Risk
• Inscrutable machine learning algorithms may prevent human understanding of the decision-making process
Factors
• Model bias and errors
• Lack of transparency and accountability of business and technical owners and stakeholders
• Ethical and regulatory concerns
Controls
• Embed black-box testing in the model lifecycle to explain, debug, and improve models for stakeholders
• Require explanations from model owners
• Operationalize tools for audits and for monitoring model impact on humans
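Black-box testing that explains model behavior can be sketched with permutation feature importance, which needs only query access to the model: shuffle one feature's values and measure the accuracy drop. The toy model and data below are illustrative assumptions.

```python
import random

def permutation_importance(predict, X, y, feature, trials=30, seed=1):
    """Average accuracy drop when the values of one feature are shuffled;
    near-zero means the model barely uses that feature."""
    rng = random.Random(seed)

    def accuracy(rows):
        return sum(predict(r) == t for r, t in zip(rows, y)) / len(y)

    base = accuracy(X)
    drops = []
    for _ in range(trials):
        column = [row[feature] for row in X]
        rng.shuffle(column)
        shuffled = [row[:feature] + [v] + row[feature + 1:]
                    for row, v in zip(X, column)]
        drops.append(base - accuracy(shuffled))
    return sum(drops) / trials

# Toy model that only uses feature 0; feature 1 is ignored
def toy_model(row):
    return 1 if row[0] > 0 else 0

X = [[-2, 5], [-1, 1], [1, 9], [2, 3]]
y = [0, 0, 1, 1]
importance_f0 = permutation_importance(toy_model, X, y, feature=0)
importance_f1 = permutation_importance(toy_model, X, y, feature=1)
```

Reporting these importances gives model owners a concrete, reproducible explanation to hand to auditors and stakeholders.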
• 24. Algorithm Aversion
Challenges: • End users may exhibit skepticism when it comes to placing trust in AI algorithm-generated decisions, even when these decisions surpass human judgment • End users may harbor doubts about algorithmic recommendations when they lack clarity about the decision-making process • End users may view AI algorithms as dehumanizing, which can result in a sense of diminished control or the loss of a personal touch • End users may not rely on recommendations when there isn't a readily identifiable entity to hold accountable for those recommendations • End users may be concerned about the potential for algorithmic discrimination or harm • End users may prefer their own judgments, which can lead to cognitive dissonance when algorithms offer disparate recommendations
• 25. Model Instability
Risk: Changing relationships between the input features, the data and the target variables (concept and data drift) may lead to deteriorating model accuracy over time.
Factors: • Changes in data distribution • Altered input-output relationships • Multicollinearity-induced parameter instability • High redundancy in a model structure focused on improving performance
Controls: • Embed model stability checks in the model lifecycle • Check for data distribution changes and input-output relationship shifts • Conduct sensitivity analysis and scenario testing to enhance model robustness, accuracy, and understanding
• 26. Model Misselection
Risk: Inadequate model selection may result in suboptimal AI performance and inaccurate decisions.
Factors: • Poor candidate model generation and development • Neglect of key parameters • Insufficient data and considerations for analysis • Inadequate system understanding
Controls: • Tailor model selection to the specific context • Form diverse review teams for model categorization • Ensure models are fit-for-purpose, explainable, reproducible, and robust
• 27. Missing Checks
Risk: Failure to validate and cross-validate the model features may compromise integrity and lead to data-driven errors.
Factors: • Weak governance tools for model monitoring, explainability, and biases • Omission of feature validation • Lack of cross-validation • Limited stakeholder involvement
Controls: • Integrate reasonability, accuracy, and cross-validation checks into risk management • Embed checks throughout the model lifecycle • Engage business and technical stakeholders in issue mitigation
• 28. Overfitting and Underfitting
Risk: Excessively complex or simplistic modelling unable to capture genuine patterns in the training data may limit prediction accuracy.
Factors: • Inadequate model complexity control • Limited data diversity • Inadequate feature selection • Data noise and anomalies
Controls: • Define and train risk management procedures for fit risks • Distinguish data issues from fit issues • Continuously assess the impact of fit on outputs and equity gaps
• 29. Suboptimal Hyperparameter Configuration
Risk: Inadequate hyperparameter configurations may impair AI model performance, effectiveness and reliability.
Factors: • Complex hyperparameter tuning • Poor calibration understanding • Lack of documentation • Neglecting continuous assessment • Failure to capture decision evidence
Controls: • Incorporate hyperparameter specifications and calibration into risk and impact assessments • Document software package choices and specific values • Continuously evaluate and adapt hyperparameters
• 30. Suboptimal Dimensionality Reduction
Risk: Dimensionality reduction, such as feature selection and extraction, may hinder interpretability.
Factors: • Inadequate dimensionality reduction techniques • Incorrect feature selection • Poorly chosen feature extraction methods • Lack of interpretability in models
Controls: • Use linear discriminant analysis, t-distributed stochastic neighbor embedding (t-SNE) and autoencoders • Optimize dimensionality reduction • Justify and document the chosen dimensionality reduction approach
• 32. AI Policy Gaps
Risk: Inadequate AI policies and procedures may lead to unmanaged risks in AI systems, hindering their benefits and exposing organizations to unforeseen challenges.
Factors: • Absence of specific AI governance policies • Neglect of indirect AI technology influences in policies • Lack of infrastructure support policies for AI at scale • Insufficient consideration of AI knowledge in supporting non-AI functions
Controls: • Develop AI governance, infrastructure and use policies with articulated roles for model developers, users, and validators • Include indirect AI influences in supporting technology policies • Promote AI knowledge sharing in broader organizational functions
• 33. Third-Party Failure
Risk: Inadequate third-party components and vendors may compromise software integrity, reliability, security, and performance.
Factors: • Lack of visibility into security vulnerabilities of open-source and commercial components • Incomplete or outdated inventory of software, source components and licenses • Inadequate prioritization of third-party vulnerability mitigation
Controls: • Create and maintain a detailed software Bill of Materials • Monitor external threats and vulnerability disclosures • Prioritize vulnerability mitigation based on risk assessments • Perform due diligence audits on vendors
• 34. Unattainable Scalability
Risk: Deploying the AI solution against actual data, users, and customers may degrade its performance.
Factors: • Inadequate infrastructure for scaling • Increased computing power demands • Model performance degradation • Delayed model updates and approvals
Controls: • Monitor performance to address slowdowns • Ensure the AI system's scalability with appropriate servers, computing power and communication • Automate model updates when metrics deviate from performance objectives
• 35. Big O Notation in Scalability
Big O notation assesses the efficiency of searching, sorting, data manipulation, and other tasks within an AI algorithm. It quantifies efficiency by relating processing time and memory usage to the size of the input data.
Controls: • Performance impact assessments for different alternatives • Benchmarking before and after changes • Selection of data structures • Selection among quick, merge and heap sorts • Selection among linear, binary and tree searches
Processes: • Assess and compare the trade-off between efficiency and complexity in the given use case • Enhance the AI algorithm for scalability and efficient memory, array and resource allocation • Ensure that the computation time meets end users' expectations
• 36. Big O Notation in Scalability
Time to complete vs. input size, from slowest- to fastest-growing class: • O(n²): recursion, bubble, selection and insertion sorts • O(n log n): quicksort, mergesort • O(n): arrays and lists • O(log n): binary, tree and heap searches. Quadratic algorithms scale poorly as input grows; logarithmic algorithms scale best.
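The O(log n) class in the chart can be illustrated with a short R sketch (the function name and input sizes here are illustrative assumptions, not from the slides): counting the comparisons made by a binary search shows that making the input 1,000 times larger adds only about ten comparisons.

```r
# Count comparisons made by a binary search over a sorted vector
binary_search_steps <- function(x, target) {
  lo <- 1; hi <- length(x); steps <- 0
  while (lo <= hi) {
    steps <- steps + 1
    mid <- (lo + hi) %/% 2
    if (x[mid] == target) return(steps)
    if (x[mid] < target) lo <- mid + 1 else hi <- mid - 1
  }
  steps  # target not found after ~log2(n) comparisons
}

# Searching for the last element is a worst case: steps grow as log2(n) + 1
for (n in c(1e3, 1e6)) {
  cat("n =", n, "worst-case steps =", binary_search_steps(seq_len(n), n), "\n")
}
```

A linear scan over the same inputs would need up to n comparisons, which is why the search-structure choices on the previous slide matter at scale.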
• 37. Infrastructure Malfunction
Risk: Infrastructure malfunctions stemming from diverse factors may result in software performance degradation or data loss.
Factors: • Inadequate IT support • Outdated AI knowledge • Undefined roles and responsibilities
Controls: • Establish expert IT support for AI systems • Ensure ongoing AI training for IT teams • Define clear roles and responsibilities • Monitor the operation of supporting infrastructure
• 38. Insecure Edge Systems
Risk: Insecure edge hardware may be compromised, affecting the data residing at the edge.
Factors: • Physical components in edge devices • Data and model theft risks • Interconnected systems vulnerability • Overlapping edge uses
Controls: • Implement remote disablement methods • Apply data and model obfuscation techniques • Isolate critical assets among systems • Minimize redundancy in edge use cases
• 39. Failed Access Controls
Risk: Granting excessive permissions in AI environments may expose the organization to attack vectors leading to data breaches and security incidents.
Factors: • Inadequate permission governance tools, in particular for cloud services • Permissions mischaracterized as misconfigurations • Anomalous activity monitoring gaps • Unauthorized users with excessive permissions • Overly broad permission grants
Controls: • Detect and rectify excessive permissions, including mischaracterized misconfigurations • Continuously monitor for anomalous activities • Minimize the gap between granted and used permissions
• 40. Inadequate Fallback Systems
Risk: Weak backup and restoration processes may affect the resilience and security of the AI application.
Factors: • Lack of backup systems and contingency plans • Incomplete risk tiering • Insufficient data encryption for backups • Unassigned responsibilities
Controls: • Establish a comprehensive fallback plan with responsibilities • Define clear risk triggers, tiering and contingency plans • Implement varied data backup schedules
• 42. What is the probability of A? P(A): probability. How much more likely is A than B? pdf(A) / pdf(B): ratio of probability density functions. How is A parametrized? MLE(A) and MAP(A): Maximum Likelihood Estimation and Maximum A Posteriori. How is A approximated? RV(A) and CLT(A): random variables (Monte Carlo simulation) and the Central Limit Theorem. Is A significant? p-value(A): bootstrap resampling
• 43. Losses from cyber incidents: lognormal distribution fit to events with losses > 20M USD; median 200k USD (Information Risk Insights Study 2017-2022). Event frequency model parameters: upper bound mean μ = -2.28, st. deviation σ = 0.87; lower bound mean μ = -6.39, st. deviation σ = 1.78
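The fitted parameters can be turned into a sampling model in R (a minimal sketch; the units and interpretation of the IRIS parameters are assumptions here, not stated on the slide). A useful sanity check: the median of a lognormal is exp(μ), so the upper-bound fit implies a median of exp(-2.28) ≈ 0.102 in the fitted units.

```r
# Sample from the upper-bound lognormal fit reported on the slide
set.seed(1)
mu <- -2.28; sigma <- 0.87                       # upper-bound fit parameters
losses <- rlnorm(1e5, meanlog = mu, sdlog = sigma)

median(losses)                                   # close to exp(mu) ~ 0.102
mean(losses)                                     # pulled above the median by the heavy right tail
quantile(losses, 0.95)                           # tail percentile, typical focus of cyber loss models
```

The gap between the median and the mean is the reason cyber losses are usually modeled with skewed distributions rather than averages alone.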
  • 44. Losses from cyber incidents IRIS 2022 Distribution of reported cyber event losses by company revenue
• 45. Confidentiality, Integrity, Availability • Disclosure: an employee or consultant may expose the model or training data through negligence or intent • Oracle: an attacker may probe the model and analyze its outputs, aiming to reverse engineer and extract either the model itself or the training data • Poisoning: an attacker may manipulate the training data, aiming to distort outputs, introduce backdoors, or sabotage the model, causing it to produce inaccurate predictions or classifications • Evasion: an attacker may modify a minor portion of the training data to trigger significant changes in outputs, thereby influencing decisions made by the model • Bias: a developer may use incomplete training data, leading to discriminatory outcomes • Denial of service: an attacker may flood the system with numerous requests or data to overwhelm its capacity and slow down service consumption • Perturbation: an attacker may input data that exploits the model's fragility, intentionally inducing errors or unexpected behavior • Abuse: a user may misuse the model for fraudulent or unethical purposes • Third party: a vendor may fail to deliver expected supporting services
• 46. Quantification • Fairness: compensations, costs of rebuilding, costs of remediation • Performance: loss of revenue, lost productivity hours, cost of changes • Privacy: penalties under privacy laws, compensations, costs of notifications, legal fees • Regulations: penalties under AI laws, compensations, legal fees, overcosts of customer acquisition • Robustness: costs of response, restoration and remediation, loss of competitiveness (IP), losses from fraud • Explainability: costs of wrong decision-making, compensations, costs of reprogramming
• 47. Aggregation • Poisson: X1, ..., Xn ~ Poisson(λ) gives X1 + X2 ~ Poisson(2λ), the likelihood of observing a specific number of Poisson-distributed events when X1 and X2 are considered together • Normal: X1, ..., Xn ~ Normal(μ, σ²) gives X1 + X2 ~ Normal(2μ, 2σ²), the combined impact or exposure of Normal-distributed events when X1 and X2 are considered together • Binomial: X1 ~ Binomial(n₁, p) and X2 ~ Binomial(n₂, p) give X1 + X2 ~ Binomial(n₁ + n₂, p), the likelihood of observing a specific number of independent events when X1 and X2 are considered together
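The Poisson aggregation rule above can be checked with a quick R simulation (the rate λ = 3 and sample size are arbitrary choices for illustration): the sum of two independent Poisson(λ) counts should have both mean and variance close to 2λ, the signature of a Poisson(2λ) variable.

```r
# Verify by simulation that X1 + X2 ~ Poisson(2 * lambda)
set.seed(42)
lambda <- 3                       # assumed event rate per period
x1 <- rpois(1e5, lambda)          # events of type 1
x2 <- rpois(1e5, lambda)          # independent events of type 2
s  <- x1 + x2                     # aggregated event count

c(mean = mean(s), var = var(s))   # both close to 2 * lambda = 6
```

The same simulation pattern works for the Normal and Binomial rules, which makes it a cheap validation step before aggregating loss frequencies in a risk model.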
• 48. Law of Total Expectation
Calculate the expected loss of a random variable by considering all possible loss values and their associated probabilities: E(X) = E(E(X∣Y)) = ∑ E(X∣Y=y) ⋅ P(Y=y), where E(X) is the expected value of the random variable X, E(X∣Y) is the expected value of X given a specific value of the random variable Y, and E(E(X∣Y)) takes the expected value of E(X∣Y) over all possible values of Y. Decompose independent losses to be able to aggregate a total exposure.
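The decomposition can be applied in two lines of R (the figures are illustrative, chosen to echo the data-breach scenario used later in the deck): condition on whether a breach Y occurs, then weight each conditional expected loss by its probability.

```r
# E(X) = sum over y of E(X | Y = y) * P(Y = y), with Y = breach / no breach
p_breach <- 0.05              # P(Y = breach), assumed
loss_given_breach <- 70000    # E(X | Y = breach): e.g. compensation plus regeneration costs
loss_given_none   <- 0        # E(X | Y = no breach)

expected_loss <- loss_given_breach * p_breach + loss_given_none * (1 - p_breach)
expected_loss                 # 70000 * 0.05 = 3500
```

Because the conditional terms are separate, each impact type can be estimated independently and the totals still aggregate cleanly.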
• 49. Example in R
P_B <- 0.05  # 5% probability of a security incident on the AI model causing a data breach
mean_compensation <- 50000  # Mean compensation costs (customer compensations, legal and notification costs)
sd_compensation <- 2  # Standard deviation of compensation costs
mean_regeneration <- 20000  # Mean data regeneration costs (model rebuilding and data regeneration costs)
sd_regeneration <- 2  # Standard deviation of data regeneration costs
# Generate random samples from lognormal distributions for compensation and regeneration costs
compensation_samples <- rlnorm(1000, log(mean_compensation), log(sd_compensation))
regeneration_samples <- rlnorm(1000, log(mean_regeneration), log(sd_regeneration))
# Calculate the expected losses using the Law of Total Expectation
Expected_Cost_Compensation <- P_B * mean(compensation_samples)
Expected_Cost_Regeneration <- P_B * mean(regeneration_samples)
Expected_Loss <- Expected_Cost_Compensation + Expected_Cost_Regeneration
# Print the results
cat("The expected compensation cost of a data breach is $", Expected_Cost_Compensation, "\n")
cat("The expected data regeneration cost of a data breach is $", Expected_Cost_Regeneration, "\n")
cat("The total expected loss of a data breach is $", Expected_Loss, "\n")
• 50. Example in R
samples <- regeneration_samples + compensation_samples
hist(samples, breaks = 100, main = "Distribution of Total Losses",
     xlab = "Total Loss", ylab = "Frequency", col = "lightblue")
• 51. Counting
Risk scenario: the experiment outcome is either Event A (attack failed) or Event B (attack succeeded), with probabilities P(A) and P(B). Model attack example: P(A) = 95% failed, P(B) = 5% succeeded, with potential impacts Compensations and Regeneration.
How many options? Product rule of counting: if an experiment has two independent parts, where the first part can result in one of |m| outcomes and the second part in one of |n| outcomes, then the total number of outcomes for the experiment is |m| * |n|. Here |m| * |n| = 2 * 2 = 4: (Fail, 0, 0), (Succ, Comp, Rege), (Succ, 0, Rege), (Succ, Comp, 0)
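The product rule can be enumerated directly in R with expand.grid (the outcome labels are illustrative placeholders): two independent parts with two outcomes each yield 2 * 2 = 4 combined outcomes.

```r
# Product rule of counting: |m| * |n| outcomes for two independent parts
part1 <- c("Fail", "Succeed")        # first part: attack outcome
part2 <- c("NoImpact", "Impact")     # second part: impact outcome (illustrative)

outcomes <- expand.grid(attack = part1, impact = part2)
nrow(outcomes)                       # 2 * 2 = 4
outcomes                             # the four combined outcomes
```

expand.grid builds the full Cartesian product, so the same one-liner scales to three or more independent parts without manual enumeration.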
• 52. Probability • Our belief that an event 𝐸 occurs • A quantitative way to express our degree of belief or confidence in the occurrence of an event • A number between 0 and 1 to which we ascribe meaning • Represented through probability distributions, which describe how probabilities are distributed among different possible outcomes • With n total trials and n(E) trials where 𝐸 occurs, P(E) ≈ n(E) / n
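The frequency reading of P(E) ≈ n(E) / n can be seen in a short simulation (the 5% event probability and trial count are assumptions for illustration): the observed frequency settles near the true probability as trials accumulate.

```r
# Frequency interpretation of probability: P(E) is approximated by n(E) / n
set.seed(7)
n <- 1e5
event <- runif(n) < 0.05   # each trial: event E occurs with true probability 5%
n_E <- sum(event)          # n(E): number of trials where E occurred

n_E / n                    # observed frequency, close to 0.05
```

With only a handful of trials the estimate is noisy; this is why simulation-based risk models (such as the Monte Carlo examples in this deck) use thousands of iterations.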
• 53. Combinatorics & Bow Tie
An event with causes A and B (branching into causes A.A, A.B, B.A, B.B) and consequences 1 and 2 (branching into consequences 1.1, 1.2, 2.1, 2.2). Tier 1 = 2 outcomes (direct or primary impact); Tier 2 = 4 outcomes (indirect or secondary impact); Tier 3 = 8 outcomes. With |A| = m, |B| = n and A ∩ B = ∅, tier n yields 2^n outcomes: exponential growth!
• 54. Combinatorics & Bow Tie
Data breach scenario: subversion P = 20% vs. no subversion P = 80%; within each branch, misconfiguration P = 50% vs. no misconfiguration P = 50%. Impacts: regeneration -20,000, compensations -50,000. What is the probability of avoiding both data regeneration and compensation costs? P(no subversion) * P(no misconfiguration) = 80% * 50% = 40%
• 55. R code
# Set the parameters for risk assessment
Simulations <- 10000  # Number of random simulations
Loss1 <- 20000  # Estimated loss of the tier 1 impact
StDev1 <- 0.2  # Estimated standard deviation 1
Prob1 <- 0.2  # Probability of occurrence of the tier 1 impact
Loss2 <- 50000  # Estimated loss of the tier 2 impact
StDev2 <- 0.2  # Estimated standard deviation 2
Prob2 <- 0.5  # Probability of occurrence of the tier 2 impact
# Calculate and combine impacts
x1 <- rlnorm(Simulations, log(Loss1), StDev1) * rbinom(Simulations, 1, Prob1)
x2 <- rlnorm(Simulations, log(Loss2), StDev2) * rbinom(Simulations, 1, Prob2)
hist(x1 + x2, main="Risk Impact Aggregation", xlab="Combined Impact")
# About 40% of simulations show zero combined impact: (1 - Prob1) * (1 - Prob2) = 0.8 * 0.5 = 40%
• 56. Combinatorics & Impacts
Data breach, product rule of counting: list the combinations of potential impacts (Regeneration, Compensations, Remediation, Fines) without repetition.
R code:
impacts <- c("Regeneration", "Compensations", "Remediation", "Fines")
all_combinations <- list()
for (n_impacts in 1:4) {
  combinations <- combn(impacts, n_impacts, simplify = FALSE)
  all_combinations[[as.character(n_impacts)]] <- combinations
}
all_combinations
• 57. Combinatorics & Impacts
Output for the data breach impacts (Regeneration, Compensations, Remediation, Fines):
$`1`: "Regeneration" | "Compensations" | "Remediation" | "Fines"
$`2`: "Regeneration" "Compensations" | "Regeneration" "Remediation" | "Regeneration" "Fines" | "Compensations" "Remediation" | "Compensations" "Fines" | "Remediation" "Fines"
$`3`: "Regeneration" "Compensations" "Remediation" | "Regeneration" "Compensations" "Fines" | "Regeneration" "Remediation" "Fines" | "Compensations" "Remediation" "Fines"
$`4`: "Regeneration" "Compensations" "Remediation" "Fines"
• 58. Prof. Hernan Huwyler, MBA CPA Executive Education Compliance, Risks, Controls, Cyber Security and Governance /hernanwyler /hewyler