I gave this presentation at the Mendoza College of Business at the University of Notre Dame on September 20th, 2018.
The presentation covers five major ethical themes in AI's application to business, along with simple frameworks to help executives assess those risks and the potential costs they might imply.
Managing the Risks of AI - A Planning Guide for Executives
1. Managing the Risks of AI - A Planning Guide for Executives
With Daniel Faggella
CEO at TechEmergence (Emerj)
2. Overview
• Framing AI and ethics in the context of
business strategy and business value
• Highlight a specific ethical issue (transparency,
accountability, job loss, etc.)
– Provide a simple framework for beginning to
tackle this ethical issue, examples included
• End
4. Promises
• By the end of this talk you’ll be able to:
– (a) Determine what ethical concerns are relevant
for your business
– (b) Work through many of the potentially relevant
ethical concerns with simple frameworks
• My job is to train your antennae to pick up on
what matters, and feel just fine ignoring what
doesn’t – and you’ll have a better sense of
what to pay attention to by the end of this talk
5. About TechEmergence
• Largest audience of AI-focused executives on earth:
– 250,000 AI-focused business readers per month, large
email list and podcast listenership
– We cover the most important sectors of AI in industry
(banking, pharma, etc), and make 95% of our research free
– Focus: Possibilities / Probabilities of AI in Industry
• Our work:
– Commissioned market research for business and
government clients (World Bank, others)
– AI strategy development with a focus on aligning company
goals with existing, powerful AI trends in the market
• Covering the ethical concerns of AI since 2012
7. Tropes
• “AI needs to have human values”
• “Needs to be free from bias”
• “AI should be used for good”
9. Business Context Comes First
• 1 - Strategic direction of the company: What thriving looks
like in the future (growth, profit, positioning / place in the
market)
• 2 - What are the critical initiatives to get us to those goals,
what would need to happen / change in the years ahead?
• 3 - What can AI do… where can it fit into this mix? (this
involves informing 2, but also adding new ideas to it…
occasionally even adding to 1)
– 3a - Determine the possible applications for “1” and “2”
– 3b - Determine future of job roles / department structures
– 3c - Determine potential ethical considerations that might
matter
10. Business Context Comes First
• The “ethics” conversation - like the “AI” conversation -
can’t be addressed on its own simply because it’s a “hot
topic.”
– AI applications for their own sake are shameworthy insofar
as they are an attempt to appear “cool”
– AI ethics conversations at an executive level are
shameworthy insofar as they are an attempt to appear
“good”
11. 1 - Transparency
• Decision-making criterion: when is transparency
(interpretability) needed?
– Not Needed: Content / product recommendation
– Needed: Mutual fund allocations. Cancer diagnoses.
– ??: Loan or financing risk assessment
• What are the decisions of algorithms based on? Are
those the criteria we’d choose? Will there be legal or
PR trouble if people realized what these systems were
trained on?
– Income levels. Location of residence. Race. Gender. Age.
Religion.
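The transparency check above can be sketched as a simple feature audit: flag any model input that is a protected attribute or a likely proxy for one. The feature names and category sets here are hypothetical, not from the presentation:

```python
# Minimal sketch: flag model input features that could create legal or
# PR risk if used as decision criteria. Names are illustrative only.
PROTECTED_ATTRIBUTES = {"race", "gender", "age", "religion"}
SENSITIVE_PROXIES = {"income_level", "residence_location"}

def audit_features(model_features):
    """Return the features that warrant a transparency review."""
    return {
        "protected": sorted(set(model_features) & PROTECTED_ATTRIBUTES),
        "proxy": sorted(set(model_features) & SENSITIVE_PROXIES),
    }

features = ["credit_history", "income_level", "age", "employment_years"]
print(audit_features(features))
# → {'protected': ['age'], 'proxy': ['income_level']}
```

Any feature the audit flags is a candidate for the "would we choose this criterion?" conversation above.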
12. 2 - Accuracy
• If you need 100% accuracy, you probably aren’t looking
for machine learning as a solution. Machine learning
is a statistical approach. If you need a simple “yes”
or “no” to be final, then just program regular
software to do exactly that: hard-code the rules.
• If it’s impossible to hard-code the rules, and it’s
impossible for an expert human - on their BEST
day - to be 100% correct about an issue - then
indeed we can’t expect a machine learning system
to be any more accurate.
13. 2 - Accuracy
• Better than a baseline (such as better than current human
performance):
– Customer service email tickets (DigitalGenius)
– Paperwork filing or processing (RAGE Frameworks)
– Fraud detection for payments or fake user accounts (SiftScience)
• Long feedback loop:
– Loan risk (must “trickle in” over time… very hard to call early on)
– Hiring or recruiting (Humalyze)
• Must be 100% accurate:
– Presenting financial data in the form of reports to an executive (Ezeop)
• Not applicable:
– Recommendations for products or content (measured by response, not
by % correct) (LiftIgniter)
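One way to operationalize the "better than a baseline" category above: before deploying, compare the model's measured accuracy against the current human or rule-based baseline, with a margin to account for measurement noise. A minimal sketch, with purely illustrative numbers:

```python
# Minimal sketch of the "better than a baseline" test: a model earns
# deployment only if it beats the current baseline by some margin.
def beats_baseline(model_accuracy, baseline_accuracy, margin=0.0):
    """True if the model outperforms the baseline by at least `margin`."""
    return model_accuracy >= baseline_accuracy + margin

# e.g. fraud detection: model measured at 94% vs. human review at 88%
print(beats_baseline(0.94, 0.88, margin=0.02))
# → True
```

For the "long feedback loop" category (loan risk, hiring), the catch is that `model_accuracy` itself can't be measured until outcomes trickle in, which is why those applications are hard to call early on.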
16. 3 - Accountability
• What kinds of issues are “owned” by whom? When
something goes wrong, who takes the blame and
who is responsible for fixing it?
– Generally, the AI team who builds the model and then
continually stays on top of its algorithmic “drifting”
should have to own the issues that come up
• If an AI application is developed by an outside vendor, there
must be some ongoing maintenance, testing, and upkeep
on the part of the vendor and the buyer… creating a joint
team responsible for the performance of the applications,
and responsible for addressing the issues with the software.
– Often, this should be drawn up in the contractual
agreement with a vendor
17. 4 – Decisions: Man or Machine?
• What can be handled completely by a machine?
• What must be handled completely by a human?
• Middle ground:
– Sometimes a decision must be handed to a human IF a
certain criterion is met, such as IF:
• A system is less than 90% sure about its customer support ticket
reply
• A system is less than 80% sure about its risk assessment for a
loan
– Can be informed by a machine, but decision made by a
human
• Informing a plan for treatment - IBM Watson Health
• Stock trading
• Note: Many processes involve multiple decisions, and
the role of man and machine will vary for each.
20. 5 – Job Security and AI
• Big firms:
– Hide as much of the conversation about restructuring or layoffs as you can
– (Possibly) Seem “progressive” by talking about these topics and doing retraining
– (Definitely) Think about what each of your roles look like in the next 5-10 years
– (Definitely) Hire what you need to hire, not as you have in the past. Hire tech
whenever you can.
• The only consideration big firms have to make here is with regard to
perception. Manage it well:
– Seem ready and willing to embrace disruption with a strong vision for the future
where you WIN, not just survive.
– Have - and express confidence in - a plan to use talent well in the re-oriented job
functions you’ll have in the future.
– Don’t avoid planning for potential layoffs and re-orgs, but
obviously keep that planning under wraps.
– If you have to do layoffs or re-orgs, do it with strong reference to the
points above… i.e. a strong vision for thriving in the future.
22. 5 – Job Security and AI
• Positions requiring management skills, or social
connection / social tact are rather challenging to
automate
• Positions of the highest risk for automation are
positions that require minimal context on
anything other than the inputs and outputs of the
role
– High context vs low context
– Plumber vs welder in a manufacturing line
– Financial manager in procurement vs auditor
23. 5 – Job Security and AI
https://www.youtube.com/watch?v=4NEIwKooOoI
26. 5 – Job Security and AI
• Determine what the future of the various departments
and job roles might look like, including:
– Which roles might require more employees, and which
might require fewer
– Critical talent acquisition needs in the next 2-5 years
– How to re-train current employees to meet future demand
– Initial pilot programs for re-skilling
• Hypothetical examples:
– Certain developers learning basic Python skills
– Customer support agents learning to handle more
edge-cases and high-touch troubleshooting
27. Take-Aways for Leaders
• Keep these conversations housed squarely
within your existing strategic planning process
- as an extension and enhancement thereto,
not as a distraction from it
• Ethics can be framed as a consideration for
decision-making, for:
– Imbuing company values into our tech and
products
– Determining our resource requirements to bring
AI’s value to life