DWX 2018 Session about Artificial Intelligence, Machine and Deep Learning

  1. 1. Artificial Intelligence – Why should we care, how does it work, and what benefits can we get from it? Mykola Dobrochynskyy, Software Factories, 2018, ceo@soft-fact.de 1
  2. 2. What is this session about? 2 Agenda This talk is like a thin rope pulling your big ship (of AI, ML & Deep Learning knowledge).
  3. 3. The three “golden” circles – start with Why! 3 Agenda What? (Agenda 6-7) How? (Agenda 2-5) Why? (Agenda 1) As coined by Simon Sinek
  4. 4. Agenda 1. Motivation 2. History, present and the future 3. AI - Artificial Intelligence 4. ML - Machine Learning 5. DL - Deep Learning 6. Sandbox playing 7. Chances and Risks 8. Start now! (Resources and references) 9. Questions and Answers. * AWS TTS service "Polly": https://console.aws.amazon.com/polly * see demo/Joanna_Intro.txt 4 Agenda
  5. 5. Entropy of a (Software) System 5 Physical entropy: S = k_B · ln Ω, where Ω (N) is the number of states. Software entropy: H ~ ln N; the chart plots H(S) over time. Software entropy (H) grows over time, which is why the complexity of an IT system and the probability of information loss increase. To counteract this, we must reduce software entropy! 1. Motivation
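Restated cleanly (same symbols as on the slide, nothing added beyond its claim):

```latex
% Physical (Boltzmann) entropy, \Omega = number of microstates:
S = k_B \,\ln \Omega
% Software entropy, N = number of possible states of the system; H grows over time:
H \sim \ln N
```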
  6. 6. How to combat software and data erosion? 6 Optimizing IT infrastructure, Test-Driven Development, Software Refactoring, Optimizing Software Architecture, Model-Driven Development, AI, Machine & Deep Learning, Optimizing processes (e.g. Agile), ALM – Application Lifecycle Management, Continuous Integration & Delivery 1. Motivation
  7. 7. 7 1. Motivation Objective reasons for the AI revolution: exponential data growth, large amounts of unstructured data, short-lived live data. Exponential data growth – companies have recognized the value of big data and do not want to delete or “forget” it (the way the human brain does); data is the gold of the 21st century! Large amounts of unstructured data – many areas such as IoT, weather, physics, chemistry, biology, transport (autonomous driving), etc. collect lots of unstructured data, e.g. measurements. This “dark matter” of data must be represented in a meaningful way and/or classified by AI. Short-lived live data – for example, sensor data used to forecast the replacement of a technical part are useless if that part breaks “earlier”.
  8. 8. 8 1. Motivation AI - Economy Forecasts
  9. 9. A dream of a ‘thinking figure’ 9 2. History & future Pygmalion and Galatea; Pandora and her box
  10. 10. The AI story began with a negative statement 10 Augusta Ada (Byron) King, Countess of Lovelace, English mathematician and writer: “The Analytical Engine* has no pretensions whatever to originate anything. It can do whatever we know how to order it to perform. It can follow analysis, but it has no power of anticipating any analytical relations or truths. Its province is to assist us in making available what we are already acquainted with.” (Ada Lovelace, 1843) * – the “Analytical Engine” was a proposed mechanical general-purpose computer designed by the English mathematician and computer pioneer Charles Babbage. 2. History & future
  11. 11. … and was continued with a double negation a century later 11 Alan Mathison Turing: „The Analytical Engine was a universal digital computer, so that, if its storage capacity and speed were adequate, it could by suitable programming be made to mimic the machine in question**. Probably this argument did not occur to the Countess (Ada Lovelace) or to Babbage”* * § 6. (6) in A. Turing. Computing Machinery and Intelligence. Mind, 1950: https://www.csee.umbc.edu/courses/471/papers/turing.pdf ** the “machine in question” means a digital “participant” of the Imitation Game, better known as the Turing Test (see participant “A” on the next slide). 2. History & future
  12. 12. Can machines "think"? 12 „The new form of the problem can be described in terms of a game which we call the 'imitation game'. It is played with three people, a man (A), a woman (B), and an interrogator (C) who may be of either sex. The interrogator stays in a room apart from the other two. The object of the game for the interrogator is to determine which of the other two is the man and which is the woman. … We now ask the question, "What will happen when a machine takes the part of A in this game?" Will the interrogator decide wrongly as often when the game is played like this as he does when the game is played between a man and a woman? These questions replace our original, "Can machines think?" “ * * A. Turing. Computing Machinery and Intelligence. Mind, 1950: https://www.csee.umbc.edu/courses/471/papers/turing.pdf
  13. 13. AI History 13 Artificial Intelligence On September 2, 1955, the project was formally proposed by McCarthy, Marvin Minsky, Nathaniel Rochester and Claude Shannon. “We propose that a 2 month, 10 man study of artificial intelligence be carried out during the summer of 1956 at Dartmouth College in Hanover, New Hampshire. The study is to proceed on the basis of the conjecture that every aspect of learning or any other feature of intelligence can in principle be so precisely described that a machine can be made to simulate it. An attempt will be made to find how to make machines use language, form abstractions and concepts, solve kinds of problems now reserved for humans, and improve themselves. We think that a significant advance can be made in one or more of these problems if a carefully selected group of scientists work on it together for a summer.” * Timeline source: K.E. Park
  14. 14. AI and the 4th Industrial Revolution Artificial Intelligence is the “electricity” of the 4th Industrial Revolution 14 Source: Alan Murray, Fortune.com 2. History & future
  15. 15. Key AI success factors 15 2. History & future In addition to the well-founded academic AI, Machine and Deep Learning theory since the mid-1950s and the objective reasons in the field, there are four key factors that drive the AI revolution: 1. Moore's Law (CPU / GPU / TPU / HPC / Cloud) 2. Big Data (training input & subject goal) 3. Falling error rates (e.g. ImageNet) 4. Rising investments / sales
  16. 16. AI success factors – Moore's Law 16 Source: https://humanswlord.files.wordpress.com 2. History & future
  17. 17. AI success factors – Big Data 17 2. History & future
  18. 18. AI success factors – Moore's Law 18 Source: https://www.quora.com 2. History & future
  19. 19. AI success factors – special hardware 19 2. History & future
  20. 20. CPU vs. GPU 20 2. History & future
  21. 21. CPU vs. GPU vs. TPU 21 Source: GOOGLE CLOUD BIG DATA AND MACHINE LEARNING BLOG 2. History & future
  22. 22. AI Definition According to John McCarthy, Artificial Intelligence (AI) is an information and engineering science dedicated to the production of "intelligent" machines and especially "intelligent" computer programs. The research area wants to use computer intelligence to understand human intelligence, but does not have to limit itself to the methods that are biologically observed in human intelligence. Different kinds and degrees of intelligence occur in humans, in many animals, and in some machines. According to McCarthy, the computational part of intelligence is the ability to achieve goals in the world. In other words, a computer is built and/or programmed (trained) in such a way that it can independently solve problems, learn from mistakes, make decisions, perceive its surroundings, and communicate with people in a natural way (for example, linguistically). 22 3. Artificial Intelligence
  23. 23. Ontology of the Human Intelligence 23 Senses: see, hear, feel. Understanding: analyze, compare/recognize, search, translate, link. Knowledge: learn, remember, discover, observe, associate. Creativity: facts/solutions, predict, judge, abstract/compose. Action: re-use solutions, decide, experiment, manipulate, speak/gesticulate/emotions. 3. Artificial Intelligence
  24. 24. AWI - Artificial weak Intelligence Artificial weak (or narrow) intelligence does not cover the whole human intelligence ontology, but only a given narrow range of it. Narrow AI is concerned with simulating a certain range of intelligent behavior with the aid of mathematics and computer science. 24 3. Artificial Intelligence
  25. 25. AHI - Artificial hybrid Intelligence 25 Hybrid artificial intelligence does not cover all AI domains, but addresses several of them in parallel – those that are crucial for the problem domain – and can be combined with human intelligence and interaction. It is a combination of several simulations of intelligent behavior with one another and (in some cases) with human intelligence. 3. Artificial Intelligence
  26. 26. ASI - Artificial strong Intelligence Artificial strong intelligence, a.k.a. the AI singularity, aims to create an artificial intelligence that "mechanizes" human thinking, consciousness and emotions. Even after decades of research, the questions of strong AI are not fully understood philosophically, and the objectives remain largely visionary. According to some predictions, however, the AI singularity could be reached in a few decades or even sooner. As a powerful technology, ASI could be a very good or a very bad thing for human beings. 26 3. Artificial Intelligence
  27. 27. AI, ML & Deep Learning Ontology 27 Source: www.deeplearningbook.org 3. Artificial Intelligence
  28. 28. Machine Learning - Definition 28 4. Machine Learning Machine Learning (ML) is a general term for the artificial generation of knowledge from experience. An artificial system learns from examples and can generalize after completing the learning phase. That is, it does not just memorize the examples, but recognizes regularities in the training data. For software that means, according to Thomas Mitchell: "A computer program is said to learn from experience E with respect to some class of tasks T and performance measure P if its performance at tasks in T, as measured by P, improves with experience E".
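A minimal sketch of Mitchell's definition in code (scikit-learn and the iris toy data are assumptions of this example, not part of the talk): the task T is classification, the experience E is the labeled training set, and the performance measure P is accuracy on held-out data.

```python
# Sketch of Mitchell's T / E / P (scikit-learn is assumed, not referenced in the slides).
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression

X, y = load_iris(return_X_y=True)                    # experience E: labeled examples
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = LogisticRegression(max_iter=1000)            # task T: classify iris species
model.fit(X_train, y_train)                          # learning phase

# performance measure P: accuracy on data the model has never seen (generalization)
print("accuracy:", model.score(X_test, y_test))
```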
  29. 29. Machine Learning - Ontology 29 4. Machine Learning Machine Learning: Supervised Learning (Classification, Regression, Ranking); Unsupervised Learning (Clustering, Segmentation, Dimension Reduction); Reinforcement Learning (Decision process, Reward system, Recommendation system)
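To contrast the supervised sketch above with the unsupervised branch of this taxonomy, a minimal clustering example (again a hedged illustration using scikit-learn's KMeans, not material from the talk):

```python
# Unsupervised learning sketch: clustering without labels (scikit-learn assumed).
from sklearn.datasets import load_iris
from sklearn.cluster import KMeans

X, _ = load_iris(return_X_y=True)                    # labels deliberately ignored
kmeans = KMeans(n_clusters=3, random_state=0).fit(X)
print(kmeans.labels_[:10])                           # cluster assignments found from structure alone
```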
  30. 30. Machine Learning Tasks 30 4. Machine Learning
  31. 31. ML – Reinforcement Learning 31 4. Machine Learning
  32. 32. Machine Learning Algorithms 32 4. Machine Learning By Girisch Khanzode
  33. 33. Deep Learning Advances Timeline 33 Source - Keynote: Deep Learning Frameworks - Yoshua Bengio #reworkDL 5. Deep Learning
  34. 34. The new AI Paradigm – replace Programming with Training 34 5. Deep Learning
  35. 35. Biological neuron 35 Source: https://www.embedded-vision.com 5. Deep Learning
  36. 36. Neuron Mathematical Model 36 Source: https://www.embedded-vision.com 5. Deep Learning
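The slide only shows a figure; as a rough sketch of the usual mathematical neuron model it depicts (weighted sum of inputs plus bias, passed through an activation function; the numbers below are illustrative):

```python
# Sketch of a single artificial neuron: y = sigmoid(w·x + b). Values are illustrative.
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def neuron(x, w, b):
    return sigmoid(np.dot(w, x) + b)   # weighted sum of inputs plus bias, then activation

x = np.array([0.5, -1.2, 3.0])         # inputs
w = np.array([0.8, 0.1, -0.4])         # weights
b = 0.2                                # bias
print(neuron(x, w, b))
```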
  37. 37. Artificial Neural Network 37 Source: https://www.embedded-vision.com 5. Deep Learning Important Deep Learning architectures: see here
  38. 38. Backpropagation – adjust weights through gradient descent 38 5. Deep Learning Source - Geoffrey Hinton: The Foundations of Deep Learning
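A minimal sketch of the idea in the slide title – adjusting weights through gradient descent. A single sigmoid neuron with a squared-error loss is assumed here; it is not a reconstruction of anything shown in the talk:

```python
# Gradient-descent weight update for one sigmoid neuron (illustrative toy example).
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

x, target = np.array([0.5, -1.2, 3.0]), 1.0   # one training example
w, b, lr = np.zeros(3), 0.0, 0.1              # weights, bias, learning rate

for step in range(100):
    y = sigmoid(np.dot(w, x) + b)             # forward pass
    loss = 0.5 * (y - target) ** 2            # squared-error loss
    dz = (y - target) * y * (1 - y)           # chain rule: dL/dz
    w -= lr * dz * x                          # dL/dw = dz * x
    b -= lr * dz                              # dL/db = dz

print("final loss:", loss)
```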
  39. 39. Training of a Neural Network 39 Source: https://www.embedded-vision.com 5. Deep Learning
  40. 40. How a Deep Learning Model is trained 40 5. Deep Learning
  41. 41. The breakthrough in Computer Vision 41 5. Deep Learning In 2012, Alex Krizhevsky, Ilya Sutskever and Geoffrey Hinton won the ImageNet competition by a wide margin. Dropout regularization and ReLU activations were used, and GPUs were employed for model training. Deep Convolutional Neural Network (CNN) - AlexNet Source: A. Krizhevsky, I. Sutskever, G. Hinton. ImageNet Classification with Deep Convolutional Neural Networks.
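Not AlexNet itself, but a minimal Keras sketch of the ingredients the slide names – convolution, ReLU activations and dropout regularization (input shape, layer sizes and class count are illustrative assumptions):

```python
# Small CNN with ReLU + dropout (illustrative sizes, not AlexNet).
from keras.models import Sequential
from keras.layers import Conv2D, MaxPooling2D, Flatten, Dense, Dropout

model = Sequential([
    Conv2D(32, (3, 3), activation='relu', input_shape=(64, 64, 3)),
    MaxPooling2D((2, 2)),
    Conv2D(64, (3, 3), activation='relu'),
    MaxPooling2D((2, 2)),
    Flatten(),
    Dense(128, activation='relu'),
    Dropout(0.5),                     # dropout regularization
    Dense(10, activation='softmax'),  # 10 illustrative classes
])
model.compile(optimizer='adam', loss='categorical_crossentropy', metrics=['accuracy'])
model.summary()
```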
  42. 42. Test images with matching labels 42 5. Deep Learning Source: A. Krizhevsky, I. Sutskever, G. Hinton. ImageNet Classification with Deep Convolutional Neural Networks.
  43. 43. Deep Learning Libs 43 5. Deep Learning • Keras is able to run seamlessly on both CPUs and GPUs • TensorFlow, Theano and the Microsoft Cognitive Toolkit (CNTK) backends • CUDA (Compute Unified Device Architecture) – Nvidia GPU library • cuDNN (CUDA Deep Neural Network library) – Nvidia GPU library • TensorFlow itself wraps a low-level library for tensor operations called Eigen • BLAS (Basic Linear Algebra Subprograms) – linear algebra libraries
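A quick check of which of the listed backends a Keras 2.x installation actually uses (just a convenience snippet, not from the slides):

```python
import keras
print(keras.backend.backend())   # prints 'tensorflow', 'theano' or 'cntk'
```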
  44. 44. Progress in Deep Learning • Speech recognition • Computer vision • Machine translation • Reasoning, attention and memory • Reinforcement learning (Games, Go etc.) • Robotics & control • Long-term dependencies, very deep nets 44 5. Deep Learning
  45. 45. Deep Learning Success drivers • Lots and lots of data • Very flexible ML models • Enough computing power • Computationally efficient inference • Powerful priors that can beat the dimensionality problem through composition (like human abstractions) • Deep ML architectures with multiple levels 45 5. Deep Learning
  46. 46. Demo. Alexa Playground 46 6. Sandbox playing AWS Cloud architectural diagram of the Alexa PowerPoint skill. “Alexa, trigger presentation start” – pull the next Alexa command from the message queue
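The skill's real code is not included in the deck; purely as an illustration of the “pull the next Alexa command from the message queue” step, a polling loop against an AWS SQS queue might look like the sketch below (queue URL, region and message format are hypothetical):

```python
# Hypothetical sketch: poll an SQS queue for the next Alexa command (boto3 assumed).
import boto3

sqs = boto3.client('sqs', region_name='eu-west-1')   # region is illustrative
queue_url = 'https://sqs.eu-west-1.amazonaws.com/123456789012/alexa-commands'  # hypothetical

while True:
    resp = sqs.receive_message(QueueUrl=queue_url,
                               MaxNumberOfMessages=1,
                               WaitTimeSeconds=20)    # long-poll for the next command
    for msg in resp.get('Messages', []):
        print('Next Alexa command:', msg['Body'])     # e.g. a "next slide" trigger
        sqs.delete_message(QueueUrl=queue_url,
                           ReceiptHandle=msg['ReceiptHandle'])
```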
  47. 47. Demo Azure ML-Studio. Income-Prediction with Two-Class Decision-Jungle. 47 6. Sandbox playing
  48. 48. Demo Azure ML-Studio. Income-Prediction with Two-Class Neural Network. 48 6. Sandbox playing
  49. 49. Demo. AI stars recognition. AWS – Round 1. 49 6. Sandbox playing
  50. 50. Demo. AI stars recognition. Azure Cognitive Services – Round 2. 50 6. Sandbox playing
  51. 51. Classification-Demo playground.tensorflow.org 51 6. Sandbox playing
  52. 52. Regression-Demo playground.tensorflow.org 52 6. Sandbox playing
  53. 53. Demo. AI just for Fun! 53 • AI Experiments-Collection • Music MixLab • Quick-draw - guess what I've drawn! • X-Degrees Separation 6. Sandbox playing
  54. 54. AI Applications • Computer vision (Security, healthcare, IoT, science …) • Machine translation • Natural Language Processing & Speech (i.e. Alexa, Siri etc.) • Search / Suggestions / Analytics (Google, Amazon, financials …) • Robotics & control (industry, aero-space, public sector…) • Autonomous vehicles (Mars-Rover, Self- driving cars …) 54 7. Chances and Risks
  55. 55. From AI to AGI / ASI • Exponential data growth: big data, weather, science, entertainment, unstructured and short-lived data • Complexity: climate, energy, resources, economics, physics etc. • Solving AI as Artificial General Intelligence (AGI) is potentially the meta-solution to all these problems • The goal is to make AI science and/or AI-assisted science come true • Artificial Strong Intelligence (ASI), aka the AI singularity, with human-level intelligence and beyond, could be a big meta AI network of the AI/AGI domains. • ASI could come faster than we think! It could be very powerful and useful (and scary!), so it should be used ethically and responsibly. • Philosophical problems of ASI 55 7. Chances and Risks
  56. 56. ML Adoption Matrix – where do you see yourself? 56 8. Start now! Quadrants: ML Ignorer, ML Adopter, ML Driver, ML Provider; axes: ML Adoption and ML Development
  57. 57. Recommended Links 57 • Materials of this session: https://bizzdozer.com/dwx2018 • ML online course: http://course.fast.ai/ • Artificial Intelligence. MIT OpenCourseWare. MIT, 2015: https://ocw.mit.edu/courses/electrical-engineering-and-computer-science/6-034-artificial-intelligence-fall-2010/ • Kaggle – a place for data science projects: https://www.kaggle.com 8. Start now!
  58. 58. Recommended Publications 58 1. Ian Goodfellow, Yoshua Bengio, Aaron Courville. Deep Learning (Adaptive Computation and Machine Learning). MIT Press, 2016: http://www.deeplearningbook.org 2. Francois Chollet. Deep Learning with Python. Manning Publications, 2017 3. Santanu Pattanayak. Pro Deep Learning with TensorFlow: A Mathematical Approach to Advanced Artificial Intelligence in Python. Apress, 2017. 4. Mykola Dobrochynskyy. “Deep Learning” articles in dotnetpro-Magazin (from issue 2018/09) 8. Start now!
  59. 59. Conclusion • You need a concrete AI plan/strategy (just as "Mobile first" in the past decade has turned into "AI first") in order to keep pace with competitors. • AI converts information into knowledge and programmers into data scientists. • AI learns differently than a human – AI by training on big data, the human from small chunks of data, learned experiences and abstractions, as well as from information derived from the genome. • Most of the value (for now) is generated by supervised learning models (e.g. cognitive services). • The AI singularity is not expected in the near future, but things can change quickly (e.g. a machine algorithm winning at Go was expected to be at least 10-15 years away, but the big sensation happened in March 2016, when the AlphaGo program* beat Lee Sedol, winner of 18 world titles). 59 Artificial Intelligence * – There are an astonishing 10^170 possible board configurations – more than the number of atoms in the known universe!
  60. 60. Thank you! 60 Mykola Dobrochynskyy is Managing Director of Software Factories. His focus and interests are Model-driven Software Development, Code Generation, Artificial Intelligence, Machine and Deep Learning, as well as Cloud and Service-oriented Software Architectures. Artificial Intelligence ceo@soft-fact.de @my_dobro
