
Threat Detection and Response at Scale with Dominique Brezinski

Security monitoring and threat response place diverse processing demands on large volumes of log and telemetry data. Processing requirements span from low-latency stream processing to interactive queries over months of data. To make things more challenging, we must keep the data accessible for a retention window measured in years. Having tackled this problem before in a massive-scale environment using Apache Spark, when it came time to do it again, there were a few things I knew worked and a few wrongs I wanted to right.

We approached Databricks with a set of challenges to collaborate on: provide a stable and optimized platform for Unified Analytics that allows our team to focus on value delivery using streaming, SQL, graph, and ML; leverage decoupled storage and compute while delivering high performance over a broad set of workloads; use S3 notifications instead of list operations; remove the Hive Metastore from the write path; and approach indexed response times for our more common search cases, without hard-to-scale index maintenance, over our entire retention window. This talk covers the fruit of that collaboration.
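One of the challenges above, using S3 notifications instead of list operations, amounts to reacting to per-object events rather than repeatedly scanning bucket prefixes, which gets slow and expensive at >100TB/day. A minimal sketch of extracting newly written objects from an S3 event notification payload (the message shape follows S3's documented event structure; the bucket and key names are invented for illustration):

```python
import json

# An S3 event notification, as delivered via SQS or SNS. Each record names
# the bucket and object key that was just written, so an ingestion job can
# read exactly the new files instead of listing the whole prefix.
def new_object_keys(notification_json: str) -> list:
    payload = json.loads(notification_json)
    keys = []
    for record in payload.get("Records", []):
        # Only object-creation events represent new data to ingest.
        if record.get("eventName", "").startswith("ObjectCreated"):
            s3 = record["s3"]
            keys.append("s3://%s/%s" % (s3["bucket"]["name"],
                                        s3["object"]["key"]))
    return keys

# Invented example payload with one create and one delete event.
example = json.dumps({
    "Records": [
        {"eventName": "ObjectCreated:Put",
         "s3": {"bucket": {"name": "telemetry-logs"},
                "object": {"key": "dns/2018/06/05/part-0001.json.gz"}}},
        {"eventName": "ObjectRemoved:Delete",
         "s3": {"bucket": {"name": "telemetry-logs"},
                "object": {"key": "dns/old.json.gz"}}},
    ]
})

print(new_object_keys(example))
```

Only the `ObjectCreated` record survives; the delete event is ignored, so downstream ingestion sees a clean stream of new-file paths.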

Published in: Data & Analytics

  1. Dominique Brezinski, Apple Information Security: Threat Detection and Response at Scale
  2. This is about the data platform aspect, not the specific analytics
  3. Agenda: Use Cases, Scale, and Challenges/Solutions
  4. Enabling Detection and Analytics
  5. Diverse threats require diverse data sets
  6. Streams left-joined with context and filtered, or inner-joined with indicators
  7. Large time window, multi-dataset graphs
  8. Enabling Triage and Containment
  9. Search and Query
  10. WHERE date > current_date() - 30 days
  11. Scale
  12. >100 TB of new data a day
  13. >300 billion events per day
  14. Most queried table: 504,761,911,529,518 bytes, 11,149,012,553,409 rows. Yeah, trillions!
  15. Streaming Ingestion Architecture
  16. WHERE src_ip = x AND dst_ip = y. Total data size: 504 terabytes, 11,149,387,374,965 rows. Scanned data size: 36.5 terabytes, 722,630,063,648 rows. Additional reduction thanks to data skipping: 92.4% (bytes), 93.2% (rows)
  17. Simple. Unified.
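Slide 6 names two enrichment shapes for incoming event streams: a left join against context data (every event survives, annotated where context exists) and an inner join against threat indicators (only matching events survive). In the real pipeline these would be Spark stream-static joins; the pure-Python sketch below shows just the two join semantics, with invented field names and values:

```python
# Context table: attributes about known hosts (invented values).
context = {"10.0.0.5": {"host": "build-server", "owner": "infra"}}

# Indicator set: threat-intel IPs to match against (invented value).
indicators = {"198.51.100.7"}

# A micro-batch of stream events (invented values).
events = [
    {"src_ip": "10.0.0.5", "dst_ip": "198.51.100.7"},
    {"src_ip": "10.0.0.9", "dst_ip": "93.184.216.34"},
]

# Left join with context: every event is kept; the context column is
# None when no context row matches.
enriched = [
    dict(e, src_host=context.get(e["src_ip"], {}).get("host"))
    for e in events
]

# Inner join with indicators: only events hitting an indicator are kept.
hits = [e for e in events if e["dst_ip"] in indicators]
```

The first shape feeds broad analytics (filter later, keep everything); the second emits a small, high-signal stream of indicator matches suitable for alerting.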

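The data-skipping numbers on slide 16 come from per-file column statistics: each data file carries the minimum and maximum value of each column it contains, and a point query such as `WHERE src_ip = x` can skip any file whose [min, max] range cannot contain x, without an index to maintain. A toy sketch of that pruning decision, with invented integer statistics standing in for the real column stats:

```python
# Per-file column statistics (invented values). In practice these are
# collected at write time, one [min, max] pair per column per file.
files = [
    {"path": "part-0001", "min": 100, "max": 250},
    {"path": "part-0002", "min": 400, "max": 900},
    {"path": "part-0003", "min": 200, "max": 500},
]

def files_to_scan(x):
    # A file can only contain x if x falls inside its [min, max] range;
    # every other file is skipped without being read.
    return [f["path"] for f in files if f["min"] <= x <= f["max"]]

print(files_to_scan(450))  # part-0001 is skipped: 450 > its max of 250
```

Skipping effectiveness depends on how well values cluster within files, which is why the slide reports ~92-93% reduction rather than the near-100% an exact index would give; the trade is that the stats are maintained for free as a side effect of writing.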