This is Part 2 of the "Karafka to Async Journey" series. Part 1 covered WaterDrop's integration with Ruby's async ecosystem and how fibers can yield during Kafka dispatches. This article covers another improvement in this area: migration of the producer polling engine to file descriptor-based polling.
When I released WaterDrop's async/fiber support in September 2025, the results were promising - fibers significantly outperformed multiple producer instances while consuming less memory. But something kept nagging me.
Every WaterDrop producer spawns a dedicated background thread for polling librdkafka's event queue. For one or two producers, nobody cares. But Karafka runs in hundreds of thousands of production processes. Some deployments use transactional producers, where each worker thread needs its own producer instance. Ten worker threads means ten producers and ten background polling threads - each competing for Ruby's GVL, each consuming memory, each doing the same repetitive work. Things will get even more intense once the Karafka consumer becomes async-friendly, which is currently under development.
The Thread Problem
Every time you create a WaterDrop producer, rdkafka-ruby spins up a background thread (rdkafka.native_kafka#<n>) that calls rd_kafka_poll(timeout) in a loop. Its job is to check whether librdkafka has delivery reports ready and to invoke the appropriate callbacks.
With one producer, you get one extra thread. With 25, you get 25. Each consumes roughly 1MB of stack space. Each competes with your application threads for the GVL. And most of the time, they're doing nothing - sleeping inside poll(timeout), waiting for events that may arrive once every few milliseconds.
I wanted one thread that could monitor all producers simultaneously, reacting only when there's actual work to do.
How librdkafka Polling Works (and Why It's Wasteful)
librdkafka is inherently asynchronous. When you produce a message, it gets buffered internally and dispatched by librdkafka's own I/O threads. When the broker acknowledges delivery, librdkafka places a delivery report on an internal event queue. rd_kafka_poll() drains that queue and invokes your callbacks.
The problem is how rd_kafka_poll(timeout) waits. Calling rd_kafka_poll(250) blocks for up to 250 milliseconds. From Ruby's perspective, this is a blocking C function call. The rdkafka-ruby FFI binding releases the GVL during this call so other threads can run, but the calling thread is stuck until either an event arrives or the timeout expires.
Every rd_kafka_poll(timeout) call must release the GVL before entering C and reacquire it afterward. This cycle happens continuously, even when the queue is empty. With 25 producers, that's 25 threads constantly cycling through GVL release/reacquire. And there's no way to say "watch these 25 queues and wake me when any of them has events."
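Conceptually, each per-producer polling thread in thread mode does something like the sketch below. This is a simplified illustration rather than rdkafka-ruby's actual code; client stands in for the native librdkafka handle and closing? for a shutdown check.

# Simplified illustration of thread-mode polling: one dedicated thread
# per producer, blocking in rd_kafka_poll (via FFI) with a timeout.
polling_thread = Thread.new do
  until closing? # closing? is a hypothetical shutdown check
    # Blocks for up to 250ms. The GVL is released on entry to C and
    # reacquired on return, even when no events arrived.
    client.poll(250)
  end
end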
The File Descriptor Alternative
Luckily for me, librdkafka has a lesser-known API that solves both problems: rd_kafka_queue_io_event_enable().
You can create an OS pipe and hand the write end to librdkafka:
int pipefd[2];
pipe(pipefd); /* pipefd[0] is the read end, pipefd[1] the write end */
rd_kafka_queue_io_event_enable(queue, pipefd[1], "1", 1); /* write "1" on empty -> non-empty */
Whenever the queue transitions from empty to non-empty, librdkafka writes a single byte to the pipe. The actual events are still on librdkafka's internal queue - the pipe is purely a wake-up signal. This is edge-triggered: it only fires on the empty-to-non-empty transition, not per-event.
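From Ruby, that read descriptor can be wrapped in an IO object and waited on like any other file. A minimal sketch, assuming the raw descriptor number is available as read_fd (the name is illustrative):

# Wrap the pipe's read end in a Ruby IO object without taking ownership of the fd.
reader = IO.for_fd(read_fd, autoclose: false)

# Wait until librdkafka writes its wake-up byte, or 1 second passes.
# IO.select releases the GVL while waiting, so other threads keep running.
ready, = IO.select([reader], nil, nil, 1.0)

if ready
  reader.read_nonblock(16, exception: false) # drain the signal byte(s); contents don't matter
  # ...now drain the producer's event queue with a non-blocking poll...
end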
The read end of the pipe is a regular file descriptor that works with Ruby's IO.select. The Poller thread spends most of its time in IO.select, which handles GVL release natively. When a pipe signals readiness, we call poll_nb(0) - a non-blocking variant that skips GVL release entirely:
Over 100,000 iterations:

| Call | Time | Throughput | GVL |
|---|---|---|---|
| rd_kafka_poll | ~19ms | 5.1M calls/s | released |
| rd_kafka_poll_nb | ~12ms | 8.1M calls/s | kept |

poll_nb is ~1.6x faster.
Instead of 25 threads each paying the GVL tax on every iteration, one thread pays it once in IO.select and then drains events across all producers without GVL overhead.
One Thread to Poll Them All
By default, a singleton Poller manages all FD-mode producers in a single thread.
When a producer is created with config.polling.mode = :fd, it registers with the global Poller instead of spawning its own thread. The Poller creates a pipe for each producer and tells librdkafka to signal through it.
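In rough terms, registration boils down to creating a pipe, handing its write end to librdkafka, and remembering which reader belongs to which producer. The sketch below is illustrative; the method and attribute names are hypothetical, not WaterDrop's actual internals.

# Illustrative sketch of FD-mode registration; names are hypothetical.
def register(producer)
  reader, writer = IO.pipe

  # Ask librdkafka to write a wake-up byte to writer whenever this
  # producer's event queue goes from empty to non-empty.
  producer.enable_io_event(writer.fileno) # hypothetical wrapper around rd_kafka_queue_io_event_enable

  @producers[reader] = producer
  wakeup # hypothetical: interrupt the running IO.select so the new pipe is watched
end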
The polling loop calls IO.select on all registered pipes. When any pipe becomes readable, the Poller drains it and runs a tight loop that processes events until the queue is empty or a configurable time limit is hit:
def poll_drain_nb(max_time_ms)
  deadline = monotonic_now + max_time_ms

  loop do
    events = rd_kafka_poll_nb(0)

    return true if events.zero? # fully drained
    return false if monotonic_now >= deadline # hit time limit
  end
end
When IO.select times out (~1 second by default), the Poller does a periodic poll on all producers regardless of pipe activity - a safety net for edge cases like OAuth token refresh that may not trigger a queue write. Regular events, including statistics.emitted callbacks, do write to the pipe and wake the Poller immediately.
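Put together, the Poller's main loop is conceptually something like the sketch below. The names are illustrative (@producers maps each pipe reader to its producer), not the exact implementation.

# Simplified sketch of the Poller loop; names are illustrative.
def run
  loop do
    ready, = IO.select(@producers.keys, nil, nil, 1.0)

    if ready
      ready.each do |reader|
        reader.read_nonblock(16, exception: false) # consume the wake-up byte(s)
        @producers[reader].poll_drain_nb(100)      # drain within the configured max_time budget
      end
    else
      # Timeout: periodic safety-net poll for events that may not have
      # signalled the pipe (e.g. OAuth token refresh).
      @producers.each_value { |producer| producer.poll_drain_nb(100) }
    end
  end
end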
The Numbers
Benchmarked on Ruby 4.0.1 with a local Kafka broker, 1,000 messages per producer, 100-byte payloads:
| Producers | Thread Mode | FD Mode | Improvement |
|---|---|---|---|
| 1 | 27,300 msg/s | 41,900 msg/s | +54% |
| 2 | 29,260 msg/s | 40,740 msg/s | +39% |
| 5 | 27,850 msg/s | 40,080 msg/s | +44% |
| 10 | 26,170 msg/s | 39,590 msg/s | +51% |
| 25 | 24,140 msg/s | 36,110 msg/s | +50% |
39-54% faster across the board. The improvement comes from three things: immediate event notification via the pipe, the 1.6x faster poll_nb that skips GVL overhead, and consolidating all producers into a single polling thread that eliminates GVL contention.
The Trade-offs
Callbacks execute on the Poller thread. In thread mode, each producer's callbacks run on that producer's own polling thread. In FD mode with the default singleton Poller, all callbacks share the single Poller thread. Don't perform expensive or blocking operations inside message.acknowledged or statistics.emitted. This was never recommended in thread mode either, but FD mode raises the stakes - if your callback takes 500ms, it delays polling for all producers on that Poller, not just one.
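If a callback genuinely needs to do non-trivial work, one option is to hand the payload off to your own queue and process it on an application-owned thread, keeping the Poller free. A minimal sketch (the worker thread and process_delivery are your code, not part of WaterDrop):

# Keep instrumentation callbacks cheap: enqueue and return immediately.
reports = Queue.new

producer.monitor.subscribe('message.acknowledged') do |event|
  reports << event.payload # fast, never blocks the Poller thread
end

Thread.new do
  while (payload = reports.pop)
    # Expensive work (DB writes, HTTP calls, etc.) happens here,
    # away from the shared Poller thread.
    process_delivery(payload)
  end
end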
Don't close a producer from within its own callback when using FD mode. Callbacks execute on the Poller thread, and closing from within would cause synchronization issues. Close producers from your application threads.
How to Use It
producer = WaterDrop::Producer.new do |config|
  config.kafka = { 'bootstrap.servers': 'localhost:9092' }
  config.polling.mode = :fd
end
Pipe creation, Poller registration, lifecycle management - all handled internally.
You can differentiate priorities between producers:
high = WaterDrop::Producer.new do |config|
  config.polling.mode = :fd
  config.polling.fd.max_time = 200 # more polling time
end

low = WaterDrop::Producer.new do |config|
  config.polling.mode = :fd
  config.polling.fd.max_time = 50 # less polling time
end
max_time controls how long the Poller spends draining events for each producer per cycle. Higher values mean more events processed per wake-up but less fair scheduling across producers.
Dedicated Pollers for Callback Isolation
By default, all FD-mode producers share a single global Poller. If a slow callback in one producer risks starving others, you can assign a dedicated Poller via config.polling.poller:
dedicated_poller = WaterDrop::Polling::Poller.new

producer = WaterDrop::Producer.new do |config|
  config.kafka = { 'bootstrap.servers': 'localhost:9092' }
  config.polling.mode = :fd
  config.polling.poller = dedicated_poller
end
Each dedicated Poller runs its own thread (waterdrop.poller#0, waterdrop.poller#1, etc.). You can also share a dedicated Poller between a subset of producers to group them - for example, giving critical producers their own shared Poller while background producers use the global singleton. The dedicated Poller shuts down automatically when its last producer closes.
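For example, grouping two critical producers on one shared dedicated Poller while everything else stays on the global singleton could look like this; the variable names are illustrative:

# One dedicated Poller shared by the critical producers. Producers that
# leave config.polling.poller unset keep using the global singleton.
critical_poller = WaterDrop::Polling::Poller.new

payments, audit = Array.new(2) do
  WaterDrop::Producer.new do |config|
    config.kafka = { 'bootstrap.servers': 'localhost:9092' }
    config.polling.mode = :fd
    config.polling.poller = critical_poller
  end
end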
When config.polling.poller is nil (the default), the global singleton is used. Setting a custom Poller is only valid with config.polling.mode = :fd.
The Rollout Plan
I'm being deliberately cautious. Karafka runs in too many production environments to rush this.
Phase 1 (WaterDrop 2.8, now): FD mode is opt-in. Thread mode stays the default.
Phase 2 (WaterDrop 2.9): FD mode becomes the default. Thread mode remains available with a deprecation warning.
Phase 3 (WaterDrop 2.10): Thread mode is removed. Every producer uses FD-based polling.
A full major version cycle to test before it becomes mandatory.
What's Next: The Consumer Side
The producer was the easier target - simpler event loop, more straightforward queue management. I'm working on similar improvements for Karafka's consumer, where the gains could be even more significant. Consumer polling has additional complexity around max.poll.interval.ms and consumer group membership, but the core idea is the same: replace per-thread blocking polls with file descriptor notifications and efficient multiplexing.
Find WaterDrop on GitHub and check PR #780 for the full implementation details.