
101 ways to configure kafka - badly (Kafka Summit)

Top 5 mistakes we made with Kafka - and what to do instead.

  1. 101* ways to configure Kafka - badly. Audun Fauchald Strand, Lead Developer Infrastructure, @audunstrand (bio: gof, mq, ejb, mda, wli, bpel, eda, soa, ws*, esb, ddd). Henning Spjelkavik, Architect, @spjelkavik (bio: Skiinfo (Vail Resorts), enjoys reading jstacks).
  2. Agenda: introduction to Kafka; Kafka @ FINN.no; 101* mistakes; questions. “From a certain point onward there is no longer any turning back. That is the point that must be reached.” ― Franz Kafka, The Trial
  3. Top 5: 1. no consideration of data on the inside vs. data on the outside; 2. schema not externally defined; 3. same config for every client/topic; 4. 128 partitions as the default config; 5. running on 8 overloaded nodes.
  4. FINN.no: the 2nd largest website in Norway; classified ads (eBay and Zillow in one); 60 million pageviews a day; 80 microservices; 130 developers; 1000 deploys to production a week; 6 minutes from commit to deploy (median).
  5. #kafkasummit @spjelkavik @audunstrand. FINN.no is a part of Schibsted Media Group: 6800 people in 30 countries.
  6. Kafka @ FINN.no
  7. Kafka @ FINN.no: architecture, use cases, tools.
  8. In the beginning... The architecture governance board decided to use RabbitMQ as the message queue. Kafka was installed for a proof of concept after developers spotted it in January 2013.
  9. 2013 - POC: “high” volume; stream of classified ads; ad matching; ad indexing. [Diagram: 8 shared nodes (mod01-mod08), each running ZooKeeper and Kafka, split across dc 1 and dc 2.] Version 0.8.1; 4 partitions; common Java client library; Thrift.
  10. 2014 - Adoption and complaining: low volume / high reliability; ad insert; product orchestration; payment; build pipeline; click streams. [Diagram: the same 8 shared nodes (mod01-mod08), each running ZooKeeper and Kafka, across dc 1 and dc 2.] Version 0.8.1; 4 partitions; experimenting with configuration; common Java library.
  11. Tooling: alerting.
  12. 2015 - Migration and consolidation: “reliable messaging”; asynchronous communication between services; store and forward; Zipkin; Slack notifications. [Diagram: 5 dedicated brokers (broker01-broker05), each running ZooKeeper and Kafka, across dc 1 and dc 2.] Version 0.8.2; 5-20 partitions; multiple configurations.
  13. Tooling: Grafana dashboards visualizing JMX stats; kafka-manager; kafkacat.
  14. 2016 - Confluent Platform: schema registry; data replication; Kafka Connect; Kafka Streams. [Diagram: 5 Kafka brokers (broker01-broker05) and a separate 5-node ZooKeeper ensemble (zk01-zk05).]
  15. 101* mistakes. “God gives the nuts, but he does not crack them.” ― Franz Kafka
  16. Pattern language for each mistake: why is it a mistake; what is the consequence; what is the correct solution; what has FINN.no done.
  17. Top 5: 1. no consideration of data on the inside vs. data on the outside; 2. schema not externally defined; 3. same config for every client/topic; 4. 128 partitions as the default config; 5. running on 8 overloaded nodes.
  18. Mistake: no consideration of data on the inside vs. data on the outside.
  19. Why is it a mistake: everything published on Kafka (0.8.2) is visible to any client that can access the cluster.
  20. What is the consequence: direct reads across services/domains are quite normal in legacy and/or enterprise systems; coupling makes it hard to make changes; unknown and unwanted coupling has a cost; Kafka had no per-topic security - you had to add that yourself.
  21. What is the correct solution: consider what is data on the inside versus data on the outside; establish a convention for what is private data and what is public data; if you want to change your internal representation often, map it before publishing it publicly (an anti-corruption layer).
  22. What has FINN.no done: decided on a naming convention (e.g. Public.xyzzy) for public topics; the name communicates the intention (contract).
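A convention like this is cheap to enforce in tooling. A minimal sketch (not from the talk; only the `Public.` prefix comes from the slide, the helper names are invented):

```python
# Topics prefixed "Public." are the external contract; everything else
# is data on the inside and off-limits to other teams.
PUBLIC_PREFIX = "Public."

def is_public_topic(topic: str) -> bool:
    """Return True if the topic is part of the public contract."""
    return topic.startswith(PUBLIC_PREFIX)

def public_topics(topics):
    """Filter a topic listing down to the externally consumable ones."""
    return [t for t in topics if is_public_topic(t)]

print(public_topics(["Public.ad-created", "internal.ad-drafts"]))
# → ['Public.ad-created']
```

The same check can gate consumer-group ACLs or fail a build when a service subscribes to another team's private topic.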
  23. Mistake: schema not externally defined.
  24. Why is it a mistake: data and code need separate versioning strategies, and the version should be part of the data; defining the schema in a Java library makes it more difficult to access the data from non-JVM languages; very little discoverability of data, so people chose other means to get their data; difficult to create tools.
  25. What is the consequence: development speed outside the JVM has been slow; changing the data needs a coordinated deployment; no process for data versioning, such as backwards-compatibility checks; difficult to create tooling that needs to know the data format, like data-lake and database sinks.
  26. What is the correct solution: the Confluent platform has a separate schema registry; Apache Avro; multiple compatibility settings and evolution strategies; Kafka Connect; take complexity out of the applications.
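As an illustration of an externally defined schema, here is a sketch of a minimal Avro record (the record name, namespace, and fields are invented for this example, not from the talk). A schema registry can check a new version of such a schema for compatibility before producers start using it:

```json
{
  "type": "record",
  "name": "AdCreated",
  "namespace": "no.finn.example",
  "fields": [
    {"name": "adId", "type": "long"},
    {"name": "title", "type": "string"},
    {"name": "price", "type": ["null", "int"], "default": null}
  ]
}
```

Fields with defaults, like `price` here, can be added or removed without breaking existing readers under Avro's schema-resolution rules - exactly the kind of evolution strategy the slide refers to.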
  27. What has FINN.no done: still using the Java library, with schemas in builders; Confluent Platform 2.0 is planned as the next step, not (just) Kafka 0.9.
  28. Mistake: running mixed load with a single, default configuration.
  29. Why is it a mistake: historically there was One Big Database with an Expensive License; the database world split workloads into OLTP and OLAP; that changed with open-source software and the cloud; we tried to simplify the developer's day with a single config; but Kafka supports both very high throughput and very high reliability.
  30. What is the consequence: there is a trade-off between throughput and degree of reliability; with a single configuration, the last commit wins; you get either high throughput with a risk of loss, or something potentially too slow.
  31. What is the correct solution: understand your use cases and their needs; use proper per-topic configuration; consider splitting/isolation.
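Per-topic configuration can be applied at creation time. A sketch with the stock kafka-topics CLI (modern --bootstrap-server syntax rather than the 0.8-era --zookeeper flag; topic names and values are illustrative, not from the talk):

```shell
# A reliability-focused topic: few partitions, 3 replicas,
# and at least 2 in-sync replicas required for acknowledged writes.
kafka-topics --bootstrap-server localhost:9092 --create \
  --topic payments \
  --partitions 5 \
  --replication-factor 3 \
  --config min.insync.replicas=2

# A throughput-focused topic: more partitions, lighter replication,
# and one day of retention instead of stronger durability settings.
kafka-topics --bootstrap-server localhost:9092 --create \
  --topic clickstream \
  --partitions 20 \
  --replication-factor 2 \
  --config retention.ms=86400000
```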
  32. What has FINN.no done: defaults that are quite reliable; exposing configuration variables in the client; asking the questions: do you need at-least-once delivery? ordering - if you partition, what must keep strict ordering? is 99% delivery good enough? what level of throughput is needed?
  33. Configuration. For production: partitions; replicas (default.replication.factor); minimum ISR (min.insync.replicas); waiting for acknowledgement when producing messages (request.required.acks, block.on.buffer.full); retries; leader election. For the consumer: number of threads; when to commit (autocommit.enable vs. explicit consumer.commitOffsets).
  34. Gwen Shapira recommends: acks = all; block.on.buffer.full = true; retries = MAX_INT; max.in.flight.requests.per.connection = 1; call Producer.close(); replication-factor >= 3; min.insync.replicas = 2; unclean.leader.election.enable = false; auto commit off; commit after processing; monitor!
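Collected as a properties-file sketch (same values as the slide, spelled with current Kafka property names; treat it as a starting point, not a drop-in config - producer, broker, and consumer settings live in separate files in a real deployment):

```properties
# Producer: prefer durability over throughput
acks=all
retries=2147483647
max.in.flight.requests.per.connection=1

# Broker/topic: survive the loss of one replica without losing data
default.replication.factor=3
min.insync.replicas=2
unclean.leader.election.enable=false

# Consumer: no auto-commit; commit offsets only after processing
enable.auto.commit=false
```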
  35. Mistake: a default configuration of 128 partitions for each topic.
  36. Why is it a mistake: partitions are Kafka's way of scaling consumers, so 128 partitions can feed 128 consumer processes; in 0.8, a cluster could not reduce the number of partitions without deleting data; our highest number of consumers today is 20.
  37. What is the consequence: our 0.8 cluster was configured with 128 partitions as the default, for all topics; many partitions across many topics create many data points that must be coordinated; ZooKeeper must coordinate all of this, and a rebalance must balance all clients across all partitions; ZooKeeper and Kafka went down (May 2015), and users could not create ads for two days.
  38. What is the correct solution: a small number of partitions as the default; increase the number of partitions only for selected topics; understand your use case (throughput target); reduce the length of transactions on the consumer side. At most ~1500 partitions per broker was the advice in our case - we had 38k.
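A back-of-the-envelope sketch of why the 128-partition default hurt. The topic count and replication factor below are assumptions for illustration (only the 128-partition default and the ~1500-per-broker guideline come from the slides), but they land close to the 38k partitions mentioned above:

```python
# Rough partition math: total partition replicas spread evenly over brokers.
def partitions_per_broker(topics, partitions_per_topic, replication_factor, brokers):
    """Partition replicas each broker hosts, assuming an even spread."""
    return topics * partitions_per_topic * replication_factor / brokers

# ~100 topics at the old 128-partition default, replication factor 3, 8 brokers:
print(partitions_per_broker(100, 128, 3, 8))  # → 4800.0 per broker (38400 total)

# The same cluster with a 5-partition default stays far below ~1500 per broker:
print(partitions_per_broker(100, 5, 3, 8))    # → 187.5 per broker
```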
  39. What has FINN.no done: 5 partitions as the default; 2 heavy-traffic topics have more than 5 partitions.
  40. Mistake: deploying a proof-of-concept hack in production; i.e. why we had 8 ZooKeeper nodes.
  41. Why is it a mistake: Kafka was set up by Ops for a proof of concept, not for hardened production use; by coincidence we had 8 nodes for Kafka, and the same 8 nodes for ZooKeeper; ZooKeeper depends on a majority quorum and low latency between nodes; the 8 nodes were NOT dedicated - in fact, they were already overloaded.
  42. What is the consequence: ZooKeeper recommends 3 nodes for normal usage and 5 for high load - any more is questionable; more nodes mean more communication and a longer time to reach consensus; if we get a split between the data centers, there will be 4 nodes in each, and neither side has a majority; you should not run ZooKeeper across data centers, due to latency and the possibility of outages.
  43. What is the correct solution: have an odd number of ZooKeeper nodes - preferably 3, at most 5; don't cross data centers; check the documentation before deploying serious production load; don't run a sensitive service (ZooKeeper) on a server with 50 JVM-based services, 300% overcommitted on RAM; watch GC times.
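The recommended shape - a small, odd-sized, single-data-center ensemble - looks like this as a minimal zoo.cfg sketch (the zk01-zk03 hostnames echo the diagram on slide 14; the paths and timing values are ordinary defaults, not from the talk):

```properties
# zoo.cfg - a 3-node ensemble: tolerates one node failure (majority = 2)
tickTime=2000
initLimit=10
syncLimit=5
dataDir=/var/lib/zookeeper
clientPort=2181
server.1=zk01:2888:3888
server.2=zk02:2888:3888
server.3=zk03:2888:3888
```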
  44. What has FINN.no done: [Diagram: 5 dedicated brokers (broker01-broker05), each running ZooKeeper and Kafka, across dc 1 and dc 2.] Version 0.8.2; 5-20 partitions; multiple configurations.
  45. “They say ignorance is bliss... they're wrong.” ― Franz Kafka
  46. References / further reading: Designing Data-Intensive Applications, Martin Kleppmann; Data on the Outside versus Data on the Inside, Pat Helland; I Heart Logs, Jay Kreps; the Confluent blog; Kafka: The Definitive Guide.
  47. “It's only because of their stupidity that they're able to be so sure of themselves.” ― Franz Kafka, The Trial. Audun Fauchald Strand, @audunstrand; Henning Spjelkavik, @spjelkavik. Questions?
  48. Runner-up mistakes: using pre-1.0 software; no control of topic creation; forgetting that Kafka is storage - treat it like storage, also ops-wise; misunderstanding client-side rebalancing; committing on all consumer threads while believing you only committed on one.