Application and Data Services

The new data-driven industrial revolution highlights the need for big data technologies to unlock the potential of various application domains. The insurance and finance services industry is being rapidly transformed by data-intensive operations and applications. FinTech and InsuranceTech combine very large datasets from legacy banking systems with other data sources, such as financial markets data, regulatory datasets, real-time retail transactions, and more, improving financial services and activities for customers.

  1. Application and Data Services. Dr. Richard McCreadie, University of Glasgow.
  2. Background in Container Orchestration. Current best practice in industry is that application deployment should be managed using containers where possible: containers provide a standard format for all compute, are shareable, and many turn-key solutions already exist for common applications. In cloud/cluster environments, an additional container management platform is needed to schedule container execution on available resources, enable communication between different containers, manage storage, and enable application scaling and load balancing. Kubernetes is currently the most popular of these, with ~45% market share; a minimal sketch of creating such a containerised deployment with the Kubernetes Python client is given after the slide list. (Sources: https://www.datadoghq.com/docker-adoption/ https://www.datadoghq.com/container-report/)
  3. Example Application: Insurance Loss Estimation. Aim: for a customer with an insurance policy and a number of past claims, predict the monetary gain/loss to the insurance company at the end of the policy lifetime. Gain/loss is calculated at the end of each policy year using a deep neural network, and the network needs to be updated with new claim information at the end of each week (a sketch of scheduling such a weekly job appears after the slide list). (Slide diagram: continuous claim storage through a Claim API backed by an SQL database receiving new claims; continuous policy gain/loss prediction through a Policy Monitor and Policy Loss Estimation producing sales team alerts; and per-week deep learning model learning.)
  4. Knowledge and Technology Gap. Platforms like Kubernetes do not provide effective tools to deploy and manage complex applications: there is no central management of multi-component applications, no in-built sequencing of operations within an application, no identification of the correct amount of resources to assign to application components, and no monitoring of application- or container-level metrics and quality of service.
  5. Application and Data Services. The BigDataStack Application and Data Services aim to provide an additional tool-set that makes tackling these use-cases faster and easier: grouping of Kubernetes components into applications via BigDataStack Playbooks; support for re-usable Operation Sequences with common operations such as Apply, Wait-For and Execute-Command; semi-automated resource estimation for containers; standardized metric collection, storage and visualisation; and an Application API providing direct access to both application-level metrics and the available operations/sequences.
  6. Architecture. (Slide diagram: the Application and Data Services run alongside OpenShift/Kubernetes, Monitoring and Users, and comprise an Application State DB, OpenShift Client, Event Exchange, Application API, Global Decision Tracker, Realization UI, Resource Estimation, and a Prometheus Metric Store.)
  7. Example Application: Insurance Loss Estimation, Advantages. All three components and their desired processing properties can be defined within a single BigDataStack Playbook, then deployed and managed automatically. Operation Sequences can be used to order component deployment dependencies, e.g. the database needs to be running and in a 'ready' state before launching model learning (see the Wait-For sketch after the slide list). During container deployment, resource estimation will set CPU, memory and GPU requests automatically, and can learn over time how to avoid wasting resources. Standardized metric collection enables both component up-time and model performance to be monitored and visualised, in addition to being used as a trigger. The Application API can be used to enable custom business-logic services to control the application at run time, e.g. scale up/down or trigger model learning (a sketch of metric-driven scaling is given after the slide list).
  8. Open Source Release. The build is currently in alpha testing for the product recommendation and insurance use-cases within BigDataStack, with an open source release planned by the end of the year.
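
The following is a minimal sketch of the kind of deployment the orchestration layer described on slide 2 manages, using the official Kubernetes Python client. It is an illustration only: the component name, image, namespace and resource figures are placeholders, not values from the BigDataStack deployment.

```python
from kubernetes import client, config

# Load credentials from ~/.kube/config; inside a pod, use config.load_incluster_config().
config.load_kube_config()
apps = client.AppsV1Api()

# A single-container Deployment for a hypothetical "claim-api" component.
deployment = client.V1Deployment(
    metadata=client.V1ObjectMeta(name="claim-api"),
    spec=client.V1DeploymentSpec(
        replicas=2,  # the platform load-balances traffic across replicas via a Service
        selector=client.V1LabelSelector(match_labels={"app": "claim-api"}),
        template=client.V1PodTemplateSpec(
            metadata=client.V1ObjectMeta(labels={"app": "claim-api"}),
            spec=client.V1PodSpec(containers=[
                client.V1Container(
                    name="claim-api",
                    image="registry.example.com/claim-api:1.0",  # placeholder image
                    # The requests/limits below are the values that BigDataStack's
                    # resource estimation aims to fill in automatically; on clusters
                    # with the NVIDIA device plugin, a GPU request such as
                    # "nvidia.com/gpu": "1" would also go here.
                    resources=client.V1ResourceRequirements(
                        requests={"cpu": "500m", "memory": "512Mi"},
                        limits={"cpu": "1", "memory": "1Gi"},
                    ),
                )
            ]),
        ),
    ),
)

apps.create_namespaced_deployment(namespace="insurance", body=deployment)
```

Kubernetes then schedules the replicas onto available nodes and restarts them if they fail; a BigDataStack Playbook would group several such components and manage them at the application level rather than one by one.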
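
Slide 3 states that the deep neural network must be updated with new claim information at the end of each week. One common way to realise a per-week task on Kubernetes is a CronJob; the sketch below is an assumption-laden illustration (image name, namespace and schedule are placeholders) using the Kubernetes Python client, and it assumes a client version where CronJob is served under batch/v1.

```python
from kubernetes import client, config

config.load_kube_config()
batch = client.BatchV1Api()

# Weekly model-learning job: runs every Sunday at 23:00.
cron_job = client.V1CronJob(
    metadata=client.V1ObjectMeta(name="loss-model-learning"),
    spec=client.V1CronJobSpec(
        schedule="0 23 * * 0",
        job_template=client.V1JobTemplateSpec(
            spec=client.V1JobSpec(
                backoff_limit=2,
                template=client.V1PodTemplateSpec(
                    spec=client.V1PodSpec(
                        restart_policy="OnFailure",
                        containers=[client.V1Container(
                            name="model-learning",
                            # Placeholder image; the training container would read the
                            # week's new claims from the SQL database and publish an
                            # updated loss-estimation model.
                            image="registry.example.com/loss-model-learning:1.0",
                        )],
                    ),
                ),
            ),
        ),
    ),
)

batch.create_namespaced_cron_job(namespace="insurance", body=cron_job)
```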
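
Slide 7 notes that the database must be running and ready before model learning is launched. The sketch below illustrates the idea behind a Wait-For operation by polling Deployment status through the Kubernetes Python client; the component names and timeout are hypothetical, and this is only an illustration of the concept, not the BigDataStack implementation of Operation Sequences.

```python
import time

from kubernetes import client, config

config.load_kube_config()
apps = client.AppsV1Api()


def wait_for_ready(name: str, namespace: str, timeout_s: int = 300) -> None:
    """Block until a Deployment reports all desired replicas ready, or raise."""
    deadline = time.time() + timeout_s
    while time.time() < deadline:
        dep = apps.read_namespaced_deployment(name=name, namespace=namespace)
        desired = dep.spec.replicas or 0
        ready = dep.status.ready_replicas or 0
        if desired > 0 and ready == desired:
            return
        time.sleep(5)
    raise TimeoutError(f"{name} in {namespace} not ready after {timeout_s}s")


# Hypothetical sequencing for the insurance example:
# the SQL database must be ready before the weekly model-learning job is applied.
wait_for_ready("claims-db", namespace="insurance")
# ... apply the model-learning manifest here ...
```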
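
Slide 7 also mentions using collected metrics as a trigger and the Application API to scale components at run time. As an illustration of that idea only, the sketch below queries Prometheus (the metric store named on slide 6) over its HTTP API and scales a Deployment directly through the Kubernetes API; the metric name, threshold, Prometheus address and replica count are all assumptions.

```python
import requests
from kubernetes import client, config

# Assumed in-cluster Prometheus address; adjust to the actual metric store endpoint.
PROMETHEUS_URL = "http://prometheus.monitoring.svc:9090"


def query_metric(promql: str) -> float:
    """Run an instant PromQL query and return the first value (0.0 if empty)."""
    resp = requests.get(f"{PROMETHEUS_URL}/api/v1/query", params={"query": promql})
    resp.raise_for_status()
    result = resp.json()["data"]["result"]
    return float(result[0]["value"][1]) if result else 0.0


config.load_kube_config()
apps = client.AppsV1Api()

# Hypothetical application-level metric: backlog of unprocessed claims.
backlog = query_metric('sum(claims_pending_total{app="claim-api"})')
if backlog > 1000:
    # Scale the claim API out when the backlog grows too large.
    apps.patch_namespaced_deployment_scale(
        name="claim-api",
        namespace="insurance",
        body={"spec": {"replicas": 4}},
    )
```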
