Saturday, April 18, 2020

Performance instrumentation via DataDog

Recently my team was looking for a way to implement custom metrics in our Java microservices and ultimately feed them to DataDog. We explored the following options for adding custom performance instrumentation.
  • Using StatsD: StatsD is a network daemon that runs on the Node.js platform and listens for statistics, like counters and timers, sent over UDP or TCP, and forwards aggregates to one or more pluggable backend services (e.g., Graphite, DataDog). StatsD is very popular and has become a de facto standard for collecting metrics; open-source client libraries are available in all popular languages to define and collect metrics (a minimal example of the line protocol is sketched after this list). More information on StatsD can be found here - https://github.com/statsd/statsd
  • Using DogStatsD: DogStatsD is a custom daemon from DataDog. You can consider it an extension of StatsD with support for many more metric types. This daemon needs to be installed on the node where you want to collect metrics; if a DataDog agent is already installed on the node, the daemon is started by default. DataDog also provides a Java client library for interfacing with DogStatsD (a short usage sketch follows this list). More information can be found here - https://docs.datadoghq.com/developers/dogstatsd/
  • Using the DataDog HTTP API: DataDog also exposes a REST API that can be used to push metrics to the DataDog server. However, it does not make sense to push each and every metric over HTTP; we would need some kind of aggregator on the client side that collates all the data for a time period and then makes a single HTTP call to the DataDog server (a rough sketch of one such API call appears after this list). https://docs.datadoghq.com/api/
  • Using the DropWizard bridge: If you are already using the popular DropWizard metrics library, the developers at Coursera have created a neat open-source library that acts as a bridge between DropWizard and DataDog - https://github.com/coursera/metrics-datadog
  • Using the Micrometer metrics facade: If you are using Spring Boot, this is the most seamless option available. Spring Boot Actuator has default support for the Micrometer facade and auto-configures a DataDog meter registry (DatadogMeterRegistry) that pushes metrics to DataDog. The advantage of using the Micrometer facade is that we can easily switch to another metrics backend - e.g., from DataDog to AWS CloudWatch. We can also use a composite registry to publish the same metrics to multiple backends.
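As a quick illustration of the StatsD approach, the sketch below sends a counter and a timer datagram using the StatsD line protocol over plain UDP. The metric names are made up, and it assumes a StatsD daemon listening on localhost:8125 (the default port):

import java.net.DatagramPacket;
import java.net.DatagramSocket;
import java.net.InetAddress;
import java.nio.charset.StandardCharsets;

public class StatsDUdpExample {

    public static void main(String[] args) throws Exception {
        InetAddress statsdHost = InetAddress.getByName("localhost");
        int statsdPort = 8125; // default StatsD UDP port

        try (DatagramSocket socket = new DatagramSocket()) {
            // StatsD line protocol: <metric.name>:<value>|<type>
            send(socket, statsdHost, statsdPort, "orders.created:1|c");    // counter
            send(socket, statsdHost, statsdPort, "orders.latency:250|ms"); // timer in milliseconds
        }
    }

    private static void send(DatagramSocket socket, InetAddress host, int port, String metric) throws Exception {
        byte[] payload = metric.getBytes(StandardCharsets.UTF_8);
        socket.send(new DatagramPacket(payload, payload.length, host, port));
    }
}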
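Similarly, a minimal sketch of the DogStatsD option using DataDog's java-dogstatsd-client library. The metric and tag names are made up, the daemon is assumed to be listening on localhost:8125, and the exact client API may differ between library versions:

import com.timgroup.statsd.NonBlockingStatsDClient;
import com.timgroup.statsd.StatsDClient;

public class DogStatsDExample {

    public static void main(String[] args) {
        // The prefix "myapp" is prepended to every metric name
        StatsDClient statsd = new NonBlockingStatsDClient("myapp", "localhost", 8125);

        statsd.incrementCounter("orders.created", "env:dev", "region:us-east-1");
        statsd.recordGaugeValue("orders.queue.size", 42.0, "env:dev");
        statsd.recordExecutionTime("orders.processing.time", 250, "env:dev"); // milliseconds

        statsd.stop(); // flush pending metrics and close the client
    }
}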
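And for the HTTP API option, a rough sketch of pushing a single gauge data point to the v1 series endpoint using Java 11's HttpClient. In practice you would aggregate many points before posting; the payload layout here is an assumption based on the public v1 API documentation:

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.time.Instant;

public class DataDogHttpApiExample {

    public static void main(String[] args) throws Exception {
        String apiKey = System.getenv("DD_API_KEY");
        long now = Instant.now().getEpochSecond();

        // One gauge data point; a real client would batch many points per call
        String payload = "{\"series\":[{\"metric\":\"orders.queue.size\","
                + "\"points\":[[" + now + ",42]],"
                + "\"type\":\"gauge\","
                + "\"tags\":[\"env:dev\"]}]}";

        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create("https://api.datadoghq.com/api/v1/series"))
                .header("Content-Type", "application/json")
                .header("DD-API-KEY", apiKey)
                .POST(HttpRequest.BodyPublishers.ofString(payload))
                .build();

        HttpResponse<String> response = HttpClient.newHttpClient()
                .send(request, HttpResponse.BodyHandlers.ofString());
        System.out.println(response.statusCode() + " " + response.body());
    }
}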
We finally decided to use the Micrometer metrics library, as all our microservices were on Spring Boot. Spring Boot 2 ships with many OOTB metrics via Micrometer that are of tremendous value for DevOps teams - https://spring.io/blog/2018/03/16/micrometer-spring-boot-2-s-new-application-metrics-collector

Behind the scenes, the Micrometer DataDog registry uses the DataDog HTTP API to push metrics to the server. A background thread collects/aggregates the data and then makes a periodic call to the DataDog server. Perusing the following source files gives a good overview of how this works:
https://git.io/JfJDC
https://git.io/JfJD8

To configure DataDog in Spring Boot, you just need to set the following two properties:
# DataDog API key
management.metrics.export.datadog.api-key=YOUR_KEY
# Interval at which metrics are sent to DataDog
management.metrics.export.datadog.step=30s

It is also very easy to record custom metrics with Micrometer in Spring Boot.
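The sketch below shows a Spring service recording a counter and a timer via the auto-configured MeterRegistry (class, metric, and tag names are illustrative):

import io.micrometer.core.instrument.Counter;
import io.micrometer.core.instrument.MeterRegistry;
import io.micrometer.core.instrument.Timer;
import org.springframework.stereotype.Service;

@Service
public class OrderMetrics {

    private final Counter ordersCreated;
    private final Timer orderProcessingTimer;

    // Spring Boot auto-configures and injects a MeterRegistry
    // (a DatadogMeterRegistry when the datadog export properties are set)
    public OrderMetrics(MeterRegistry registry) {
        this.ordersCreated = Counter.builder("orders.created")
                .description("Number of orders created")
                .tag("region", "us-east-1")
                .register(registry);

        this.orderProcessingTimer = Timer.builder("orders.processing.time")
                .description("Time taken to process an order")
                .register(registry);
    }

    public void processOrder(String orderId) {
        // Record how long the processing took and count the order
        orderProcessingTimer.record(() -> doProcess(orderId));
        ordersCreated.increment();
    }

    private void doProcess(String orderId) {
        // business logic goes here
    }
}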

Wednesday, April 15, 2020

Kafka poll() vs heartbeat()

In older versions of Kafka, the consumer was responsible for polling the broker frequently to prove that it was still alive. If the consumer did not poll() within a specified time limit, the broker considered that consumer dead and started rebalancing its partitions to other consumers.

But in newer versions of the Kafka consumer, a dedicated background heartbeat thread is started. This heartbeat thread sends periodic heartbeats to the broker to say, "Hey, I am alive and kicking! I am processing messages and will poll() again soon."

Thus the newer versions of Kafka decouple the polling functionality from the heartbeat functionality. So now we have two threads running: the heartbeat thread and the processing (polling) thread.
The heartbeat frequency is defined by the heartbeat.interval.ms property (default = 3 secs), while session.timeout.ms (default = 10 secs) defines how long the broker waits for a heartbeat before declaring the consumer dead.

Since there is a separate heartbeat thread now, the poll timeout (attribute: max.poll.interval.ms) can be set very high; Kafka Streams, for example, defaulted it to Integer.MAX_VALUE.
Hence, no matter how long the processing takes (on the processing/polling thread), the Kafka broker will not consider the consumer dead as long as heartbeats keep arriving. Only if no poll() request is received within max.poll.interval.ms is the consumer considered dead and its partitions rebalanced.
Caveat: If your processing has a bug (e.g., an infinite loop, or a call to a third-party web service that is stuck), then the consumer will never be pronounced dead and messages will keep piling up in that partition. Hence, it may be a good idea to set a realistic value for max.poll.interval.ms, so that the partitions can be rebalanced to other consumers. An illustrative consumer configuration is sketched below.
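As an illustration of the settings discussed above, a consumer might be configured along these lines (topic name, group id, and timeout values are illustrative, not recommendations):

import java.time.Duration;
import java.util.Collections;
import java.util.Properties;
import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;

public class OrderConsumer {

    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
        props.put(ConsumerConfig.GROUP_ID_CONFIG, "order-processors");
        props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, "org.apache.kafka.common.serialization.StringDeserializer");
        props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, "org.apache.kafka.common.serialization.StringDeserializer");

        // Heartbeats are sent from the background thread every 3 secs;
        // the consumer is considered dead if none arrive within 10 secs
        props.put(ConsumerConfig.HEARTBEAT_INTERVAL_MS_CONFIG, "3000");
        props.put(ConsumerConfig.SESSION_TIMEOUT_MS_CONFIG, "10000");

        // Maximum time allowed between poll() calls before the consumer is
        // evicted from the group and a rebalance is triggered
        props.put(ConsumerConfig.MAX_POLL_INTERVAL_MS_CONFIG, "300000"); // 5 minutes

        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            consumer.subscribe(Collections.singletonList("orders"));
            while (true) {
                // Each poll() also signals to the broker that this consumer is still making progress
                ConsumerRecords<String, String> records = consumer.poll(Duration.ofMillis(500));
                for (ConsumerRecord<String, String> record : records) {
                    process(record.value()); // must finish the batch within max.poll.interval.ms
                }
            }
        }
    }

    private static void process(String value) {
        // business logic goes here
    }
}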

The following two Stack Overflow discussions were extremely helpful in understanding the above:

https://stackoverflow.com/questions/47906485/max-poll-intervals-ms-set-to-int-max-by-default
https://stackoverflow.com/questions/39730126/difference-between-session-timeout-ms-and-max-poll-interval-ms-for-kafka-0-10-0