Scalable and Reliable Logging at Pinterest

At Pinterest, hundreds of services and third-party tools, implemented in various programming languages, generate billions of events every day. Achieving scalable and reliable low-latency logging poses several challenges: (1) uploading logs that are generated in various formats from tens of thousands of hosts to Kafka in a timely manner; (2) running Kafka reliably on Amazon Web Services, where virtual instances are less reliable than on-premises hardware; (3) moving tens of terabytes of data per day from Kafka to cloud storage reliably and efficiently, while guaranteeing exactly-once persistence per message.

In this talk, we will present Pinterest’s logging pipeline, and share our experience addressing these challenges. We will dive deep into the three components we developed: data uploading from service hosts to Kafka, data transportation from Kafka to S3, and data sanitization. We will also share our experience in operating Kafka at scale in the cloud.

Published in: Data & Analytics

Scalable and Reliable Logging at Pinterest

  1. 1. Scalable and Reliable Logging at Pinterest Krishna Gade krishna@pinterest.com Yu Yang yuyang@pinterest.com
  2. 2. Agenda • What is Pinterest? • Logging Infrastructure Deep-dive • Managing Log Quality • Summary & Questions
  3. 3. What is Pinterest?
  4. 4. What is Pinterest? Pinterest is a discovery engine
  5. 5. What is the weather in SF today?
  6. 6. What is central limit theorem?
  7. 7. What do I cook for dinner today?
  8. 8. What’s my style?
  9. 9. Where shall we travel this summer?
  10. 10. Pinterest is solving this discovery problem
  11. 11. Humans + Algorithms
  12. 12. Data Architecture (diagram labels: App, Kafka, Singer, Qubole (Hadoop, Spark), Merced, Pinball, Skyline, Redshift, Pinalytics, Product, Storm, Stingray, A/B Testing)
  13. 13. Logging Infrastructure
  14. 14. Logging Infrastructure Requirements • High availability • Minimum data loss • Horizontal scalability • Low latency • Minimum operation overhead
  15. 15. Pinterest Logging Infrastructure • thousands of hosts • >120 billion messages, tens of terabytes per day • Kafka as the central message transportation hub • >500 Kafka brokers • home-grown technologies for logging to Kafka and moving data from Kafka to cloud storage (diagram: app servers → events → Kafka → cloud storage)
  16. 16. Logging Infrastructure v1 (diagram: host apps → data uploader → Kafka 0.7 → real-time consumers)
  17. 17. Problems with Kafka 0.7 pipelines • Data loss • Kafka 0.7 broker failure —> data loss • high back pressure —> data loss • Operability • broker replacement —> reboot all dependent services to pick up the latest broker list
  18. 18. Challenges with Kafka with replication support • Multiple copies of each message exist among brokers • cannot copy broker data directly to S3 and still guarantee exactly-once persistence • Cannot randomly pick Kafka brokers to write to • Need to find the leader of each topic partition • Handle various corner cases
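
With replication, a producer can no longer write to an arbitrary broker; each write has to reach the leader of its topic partition. As a minimal illustration (ours, not code from the talk), a recent Kafka Java client exposes that leader metadata and keeps it refreshed itself; the topic name and bootstrap broker below are hypothetical.

```java
// Illustration only (not from the talk): with replication, every write must go
// to the partition leader. A recent Kafka Java client tracks that metadata
// itself; here we just print which broker currently leads each partition.
import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.common.PartitionInfo;

public class LeaderLookup {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "kafkabroker01:9092");   // hypothetical broker
        props.put("key.serializer",
                "org.apache.kafka.common.serialization.ByteArraySerializer");
        props.put("value.serializer",
                "org.apache.kafka.common.serialization.ByteArraySerializer");

        try (KafkaProducer<byte[], byte[]> producer = new KafkaProducer<>(props)) {
            // The client refreshes this metadata and re-routes records when a
            // leader moves, which is one of the corner cases a hand-rolled
            // uploader would otherwise have to handle itself.
            for (PartitionInfo p : producer.partitionsFor("event_log")) {
                System.out.printf("partition %d -> leader %s%n", p.partition(), p.leader());
            }
        }
    }
}
```
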
  19. 19. Logging Infrastructure v2 (diagram: host apps → log files → Singer → Kafka 0.8 → Secor/Merced, sanitizer, and real-time consumers)
  20. 20. Logging Agent Requirements • reliability • high throughput, low latency • minimal computation resource usage • support for various log file formats (text, Thrift, etc.) • fair scheduling
  21. 21. Singer Logging Agent • Simple logging mechanism • applications log to disk • Singer monitors file-system events and uploads logs to Kafka • Isolate applications from Singer agent failures • Isolate applications from Kafka failures • >100 MB/second for Thrift log files • Production environment support • dynamic configuration detection • adjustable log-uploading latency • auditing • heartbeat mechanism (diagram: host app → log files → Singer)
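
A minimal sketch of the Singer idea, not Singer's actual implementation: assuming newline-delimited text logs, a hypothetical log directory, and a hypothetical Kafka topic, the application only appends to a local file while a separate agent process tails the file and forwards new lines to Kafka, which is what keeps the application isolated from agent and Kafka failures.

```java
// Minimal sketch of a Singer-style agent (not Pinterest's implementation):
// the app only appends to a local file; this process tails the file and
// forwards new lines to Kafka, so app threads never block on Kafka.
import java.io.RandomAccessFile;
import java.nio.file.*;
import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;

public class FileTailUploader {
    public static void main(String[] args) throws Exception {
        Path dir = Paths.get("/var/log/myapp");           // hypothetical log directory
        Path file = dir.resolve("events.log");            // hypothetical log file
        Properties props = new Properties();
        props.put("bootstrap.servers", "kafkabroker01:9092");
        props.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer");
        props.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer");
        KafkaProducer<String, String> producer = new KafkaProducer<>(props);

        long offset = 0;                                   // a real agent persists this watermark
        WatchService watcher = FileSystems.getDefault().newWatchService();
        dir.register(watcher, StandardWatchEventKinds.ENTRY_MODIFY);

        while (true) {
            WatchKey key = watcher.take();                 // block until the directory changes
            try (RandomAccessFile raf = new RandomAccessFile(file.toFile(), "r")) {
                raf.seek(offset);                          // resume where we left off
                String line;
                while ((line = raf.readLine()) != null) {
                    producer.send(new ProducerRecord<>("event_log", line));
                }
                offset = raf.getFilePointer();
            }
            key.reset();
        }
    }
}
```

A production agent would additionally persist the read offset across restarts, handle log rotation, batch and compress sends, and apply fair scheduling across multiple log streams.
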
  22. 22. Singer Internals (architecture diagram): configuration watcher, log configurations, LogStream monitor, log repository, and LogStream processors with reader/writer pairs per log stream (A-1, A-2, B-1, C-1); built as a staged event-driven architecture
  23. 23. Running Kafka in the Cloud • Challenges • brokers can die unexpectedly • EBS I/O performance can degrade significantly due to resource contention • Avoid co-locating virtual hosts on the same physical host • faster recovery
  24. 24. Running Kafka in the Cloud • Initial settings • c3.2xlarge + EBS • Current settings • d2.xlarge • local disks help avoid the EBS contention problem • minimize data on each broker for faster recovery • availability-zone-aware topic partition allocation • multiple small clusters (20-40 brokers) for topic isolation
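
One of the bullets above is availability-zone-aware topic partition allocation. The sketch below is our illustration of the basic idea rather than Pinterest's tooling: group brokers by zone, interleave them, and assign consecutive brokers from that order to each partition's replicas so the replicas land in different zones. Broker IDs and zone names are made up.

```java
// Sketch of availability-zone-aware replica assignment (illustrative only):
// interleave brokers round-robin across zones so the replicas of any
// partition land in different zones. Broker IDs and zones are made up.
import java.util.*;

public class AzAwareAssignment {
    public static void main(String[] args) {
        // zone -> brokers in that zone (hypothetical layout)
        Map<String, List<Integer>> zones = new LinkedHashMap<>();
        zones.put("us-east-1a", Arrays.asList(1, 2, 3));
        zones.put("us-east-1b", Arrays.asList(4, 5, 6));
        zones.put("us-east-1c", Arrays.asList(7, 8, 9));

        // Flatten by taking one broker from each zone in turn: 1, 4, 7, 2, 5, 8, ...
        List<Integer> interleaved = new ArrayList<>();
        int maxPerZone = zones.values().stream().mapToInt(List::size).max().orElse(0);
        for (int i = 0; i < maxPerZone; i++) {
            for (List<Integer> brokers : zones.values()) {
                if (i < brokers.size()) interleaved.add(brokers.get(i));
            }
        }

        int partitions = 6, replicationFactor = 3;
        for (int p = 0; p < partitions; p++) {
            List<Integer> replicas = new ArrayList<>();
            for (int r = 0; r < replicationFactor; r++) {
                // with equal-sized zones, consecutive slots come from different zones
                replicas.add(interleaved.get((p + r) % interleaved.size()));
            }
            System.out.println("partition " + p + " -> brokers " + replicas);
        }
    }
}
```
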
  25. 25. Scalable Data Persistence • Strong consistency: each message is saved exactly once • Fault tolerance: any worker node is allowed to crash • Load distribution • Horizontal scalability • Configurable upload policies (diagram: Kafka 0.8 → Secor/Merced)
  26. 26. Secor • Uses the Kafka high-level consumer • Strong consistency: each message is saved exactly once • Fault tolerance: any worker node is allowed to crash • Load distribution • Configurable upload policies (diagram: Kafka 0.8 → Secor)
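
The exactly-once guarantee depends on making uploads idempotent. Below is a sketch of the general technique, not necessarily Secor's exact mechanism: derive the output object key deterministically from topic, Kafka partition, and the first offset in the file, so a retried upload after a crash rewrites the same key instead of creating a duplicate.

```java
// Sketch of the deterministic-naming idea behind exactly-once persistence
// (not Secor's actual code): the output object key is a pure function of
// topic, Kafka partition, and the first offset in the file, so retrying an
// upload after a crash overwrites the same S3 key with identical content
// instead of creating a duplicate.
public class OutputPath {
    static String s3Key(String topic, int kafkaPartition, long firstOffset) {
        // e.g. raw_logs/event_log/3/00000000000123456789.seq  (layout is illustrative)
        return String.format("raw_logs/%s/%d/%020d.seq", topic, kafkaPartition, firstOffset);
    }

    public static void main(String[] args) {
        System.out.println(s3Key("event_log", 3, 123456789L));
        // Re-running the upload for the same (topic, partition, offset) range
        // always yields this exact key, which is what makes retries idempotent.
    }
}
```
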
  27. 27. Challenges with consensus-based workload distribution • Kafka consumer group rebalancing can prevent consumers from making progress • It is difficult to recover when the high-level consumer lags behind on some topic partitions • Manual tuning is required for workload distribution across multiple topics • Inconvenient to add new topics • Efficiency
  28. 28. Merced • central workload distribution • master creates tasks • master and workers communicate through ZooKeeper
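
A sketch of the central workload-distribution pattern described above; the ZooKeeper paths, task format, and hostnames are illustrative rather than Merced's real layout. The master publishes task nodes in ZooKeeper, and workers watch that directory to claim and execute them.

```java
// Sketch of central workload distribution through ZooKeeper (illustrative
// paths and task format, not Merced's real layout): the master publishes one
// persistent, sequenced task node per topic-partition offset range; workers
// watch the task directory, claim tasks, and report completion.
import java.nio.charset.StandardCharsets;
import java.util.concurrent.CountDownLatch;
import org.apache.zookeeper.*;

public class TaskMaster {
    public static void main(String[] args) throws Exception {
        CountDownLatch connected = new CountDownLatch(1);
        ZooKeeper zk = new ZooKeeper("zookeeper01:2181", 30000, event -> {
            if (event.getState() == Watcher.Event.KeeperState.SyncConnected) {
                connected.countDown();
            }
        });
        connected.await();   // wait for the session before issuing requests

        // Task payload: which topic-partition and offset range to persist (made up).
        byte[] task = "event_log,partition=3,startOffset=123456789,endOffset=123460000"
                .getBytes(StandardCharsets.UTF_8);

        // Assumes /merced and /merced/tasks already exist. PERSISTENT_SEQUENTIAL
        // yields unique, ordered paths such as /merced/tasks/task-0000000042.
        String path = zk.create("/merced/tasks/task-", task,
                ZooDefs.Ids.OPEN_ACL_UNSAFE, CreateMode.PERSISTENT_SEQUENTIAL);
        System.out.println("created " + path);
        zk.close();
    }
}
```
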
  29. 29. Merced
  30. 30. Log Quality
  31. 31. Log Quality Log quality can be broken down into two areas: • log reliability - Reliability is fairly easy to measure: did we lose any data? • log correctness - Correctness, on the other hand, is much more difficult as it requires the interpretation of data.
  32. 32. Challenges • Instrumentation is an afterthought for most feature developers • Features can ship with broken existing logging or with no logging at all • Once an iOS or Android release is out, it will keep generating bad data for weeks • Data quality bugs are harder to find and fix than code quality bugs
  33. 33. Tooling
  34. 34. Anomaly Detection • Started with a simple model based on the assumption that daily changes are normally distributed. • Revised that model until it produced only a few alerts, mostly real and important ones. • Hooked it up to a daily email to our metrics avengers.
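
A minimal sketch of that first model (our reconstruction, with made-up metric values): compute day-over-day changes, fit a mean and standard deviation to the history, and flag the latest change when it falls several standard deviations away.

```java
// A minimal sketch of the model described above (our reconstruction): treat
// day-over-day changes of a metric as roughly normal, and alert when the
// latest change is more than a few standard deviations from the mean.
public class DailyChangeAlert {
    static boolean isAnomalous(double[] dailyCounts, double zThreshold) {
        int n = dailyCounts.length;
        double[] changes = new double[n - 1];
        for (int i = 1; i < n; i++) {
            changes[i - 1] = dailyCounts[i] - dailyCounts[i - 1];
        }
        // Mean and standard deviation of all but the latest change.
        double mean = 0;
        for (int i = 0; i < changes.length - 1; i++) mean += changes[i];
        mean /= (changes.length - 1);
        double var = 0;
        for (int i = 0; i < changes.length - 1; i++) {
            var += (changes[i] - mean) * (changes[i] - mean);
        }
        double std = Math.sqrt(var / (changes.length - 1));
        double latest = changes[changes.length - 1];
        return Math.abs(latest - mean) > zThreshold * std;   // flag unusual daily change
    }

    public static void main(String[] args) {
        double[] signups = {100, 104, 98, 101, 103, 99, 102, 140};  // made-up metric
        System.out.println(isAnomalous(signups, 3.0));              // true: the +38 jump is an outlier
    }
}
```
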
  35. 35. How did we do? • Interest follows went up after we started emailing recommended interests to follow • Push notifications about board follows broke • Signups from Google changed as we ran experiments • Our tracking broke when we released a new repin experience • Our tracking of mobile web signups changed
  36. 36. Auditing Logs Manual audits have their limitations, especially with regard to coverage, but they will catch critical bugs. • However, we need two things: • Repeatable process that can scale • Tooling required to support the process • Regression Audit • Maintain a playbook of "core logging actions" • Use tooling to verify the output of the actions • New Feature Audit • Gather requirements for analysis and produce a list of events that need to be captured with the feature • Instrument the application • Test the logging output using existing tooling
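
A sketch of the kind of tooling that can back a regression audit; the event names and log format are hypothetical. After replaying a core logging action from the playbook, the check reports any expected event types that are missing from the captured output.

```java
// Sketch of regression-audit tooling (names and event format are hypothetical):
// after replaying a "core logging action", check that every expected event
// type shows up in the captured log lines.
import java.util.*;

public class LoggingAudit {
    static List<String> missingEvents(List<String> capturedLogLines, Set<String> expectedEventTypes) {
        List<String> missing = new ArrayList<>();
        for (String expected : expectedEventTypes) {
            boolean found = capturedLogLines.stream()
                    .anyMatch(line -> line.contains("\"event\":\"" + expected + "\""));
            if (!found) missing.add(expected);
        }
        return missing;
    }

    public static void main(String[] args) {
        // Playbook action "repin a pin" is expected to emit these events (illustrative).
        Set<String> expected = new HashSet<>(Arrays.asList("pin_repin", "pin_impression"));
        List<String> captured = Arrays.asList(
                "{\"event\":\"pin_impression\",\"pin_id\":42}",
                "{\"event\":\"pin_click\",\"pin_id\":42}");
        System.out.println("missing events: " + missingEvents(captured, expected));
        // -> missing events: [pin_repin]
    }
}
```
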
  37. 37. Summary • Invest in your logging infrastructure early on. • Kafka has matured a lot and, with some tuning, works well in the cloud. • Data quality is not free; you need to proactively ensure it. • Invest in automated tools to detect quality issues both pre- and post-release. • Culture building and education go a long way.
  38. 38. Thank you! Btw, we’re hiring :)
  39. 39. Questions?
