One Thread to Poll Them All: How a Single Pipe Made WaterDrop 50% Faster

This is Part 2 of the "Karafka to Async Journey" series. Part 1 covered WaterDrop's integration with Ruby's async ecosystem and how fibers can yield during Kafka dispatches. This article covers another improvement in this area: migration of the producer polling engine to file descriptor-based polling.

When I released WaterDrop's async/fiber support in September 2025, the results were promising - fibers significantly outperformed multiple producer instances while consuming less memory. But something kept nagging me.

Every WaterDrop producer spawns a dedicated background thread for polling librdkafka's event queue. For one or two producers, nobody cares. But Karafka runs in hundreds of thousands of production processes. Some deployments use transactional producers, where each worker thread needs its own producer instance. Ten worker threads means ten producers and ten background polling threads - each competing for Ruby's GVL, each consuming memory, each doing the same repetitive work. Things will get even more intense once the Karafka consumer becomes async-friendly, which is currently under development.

The Thread Problem

Every time you create a WaterDrop producer, rdkafka-ruby spins up a background thread (rdkafka.native_kafka#<n>) that calls rd_kafka_poll(timeout) in a loop. Its job is to check whether librdkafka has delivery reports ready and to invoke the appropriate callbacks.

With one producer, you get one extra thread. With 25, you get 25. Each consumes roughly 1MB of stack space. Each competes with your application threads for the GVL. And most of the time, they're doing nothing - sleeping inside poll(timeout), waiting for events that may arrive once every few milliseconds.

I wanted one thread that could monitor all producers simultaneously, reacting only when there's actual work to do.

How librdkafka Polling Works (and Why It's Wasteful)

librdkafka is inherently asynchronous. When you produce a message, it gets buffered internally and dispatched by librdkafka's own I/O threads. When the broker acknowledges delivery, librdkafka places a delivery report on an internal event queue. rd_kafka_poll() drains that queue and invokes your callbacks.

The problem is how rd_kafka_poll(timeout) waits. Calling rd_kafka_poll(250) blocks for up to 250 milliseconds. From Ruby's perspective, this is a blocking C function call. The rdkafka-ruby FFI binding releases the GVL during this call so other threads can run, but the calling thread is stuck until either an event arrives or the timeout expires.

Every rd_kafka_poll(timeout) call must release the GVL before entering C and reacquire it afterward. This cycle happens continuously, even when the queue is empty. With 25 producers, that's 25 threads constantly cycling through GVL release/reacquire. And there's no way to say "watch these 25 queues and wake me when any of them has events."
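
Conceptually, each producer's polling thread in thread mode boils down to a loop like this (a simplified sketch, not the actual rdkafka-ruby code; closed? and native_handle stand in for the library's internal state):

# Simplified sketch of thread-mode polling - one copy per producer.
Thread.new do
  until closed?
    # Blocks inside C for up to 250ms. The GVL is released on entry and
    # reacquired on exit - every iteration, even when no events arrived.
    Rdkafka::Bindings.rd_kafka_poll(native_handle, 250)
  end
end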

The File Descriptor Alternative

Luckily for me, librdkafka has a lesser-known API that solves both problems: rd_kafka_queue_io_event_enable().

You can create an OS pipe and hand the write end to librdkafka:

int pipefd[2];
pipe(pipefd);
rd_kafka_queue_io_event_enable(queue, pipefd[1], "1", 1);

Whenever the queue transitions from empty to non-empty, librdkafka writes a single byte to the pipe. The actual events are still on librdkafka's internal queue - the pipe is purely a wake-up signal. This is edge-triggered: it only fires on the empty-to-non-empty transition, not per-event.

The read end of the pipe is a regular file descriptor that works with Ruby's IO.select. The Poller thread spends most of its time in IO.select, which handles GVL release natively. When a pipe signals readiness, we call poll_nb(0) - a non-blocking variant that skips GVL release entirely:

100,000 iterations:
  rd_kafka_poll:    ~19ms (5.1M calls/s) - releases GVL
  rd_kafka_poll_nb: ~12ms (8.1M calls/s) - keeps GVL
  poll_nb is ~1.6x faster

Instead of 25 threads each paying the GVL tax on every iteration, one thread pays it once in IO.select and then drains events across all producers without GVL overhead.
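
In Ruby terms, the pipe is an ordinary IO pair. Only the write end's descriptor is handed to librdkafka; the Poller keeps the read end and drains the wake-up byte once it fires, otherwise IO.select would keep reporting the pipe as readable. A small sketch of that plumbing:

reader, writer = IO.pipe
# writer.fileno is the descriptor passed to rd_kafka_queue_io_event_enable.

# Once IO.select reports `reader` readable, consume the signal byte(s)
# so the pipe goes quiet until the next empty -> non-empty transition.
begin
  reader.read_nonblock(64)
rescue IO::WaitReadable
  # Already drained - a harmless race with another wake-up.
end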

One Thread to Poll Them All

By default, a singleton Poller manages all FD-mode producers in a single thread.

When a producer is created with config.polling.mode = :fd, it registers with the global Poller instead of spawning its own thread. The Poller creates a pipe for each producer and tells librdkafka to signal through it.

The polling loop calls IO.select on all registered pipes. When any pipe becomes readable, the Poller drains it and runs a tight loop that processes events until the queue is empty or a configurable time limit is hit:

def poll_drain_nb(max_time_ms)
  deadline = monotonic_now + max_time_ms
  loop do
    events = rd_kafka_poll_nb(0)
    return true if events.zero?       # fully drained
    return false if monotonic_now >= deadline  # hit time limit
  end
end

When IO.select times out (~1 second by default), the Poller does a periodic poll on all producers regardless of pipe activity - a safety net for edge cases like OAuth token refresh that may not trigger a queue write. Regular events, including statistics.emitted callbacks, do write to the pipe and wake the Poller immediately.
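
Put together, the Poller's main loop behaves conceptually like this (a simplified sketch of the behavior described above, not the actual WaterDrop code; pipes, drain_wakeup_byte and max_time_for are illustrative names, and poll_drain_nb is the per-producer drain shown earlier):

SELECT_TIMEOUT = 1.0 # seconds

loop do
  readable, = IO.select(pipes.keys, nil, nil, SELECT_TIMEOUT)

  if readable
    readable.each do |pipe|
      producer = pipes[pipe]
      drain_wakeup_byte(pipe)                        # consume the signal byte
      producer.poll_drain_nb(max_time_for(producer)) # drain events without GVL churn
    end
  else
    # select timed out: safety-net poll for events that never write to the
    # pipe, such as OAuth token refresh.
    pipes.each_value { |producer| producer.poll_drain_nb(max_time_for(producer)) }
  end
end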

The Numbers

Benchmarked on Ruby 4.0.1 with a local Kafka broker, 1,000 messages per producer, 100-byte payloads:

Producers   Thread Mode     FD Mode         Improvement
1           27,300 msg/s    41,900 msg/s    +54%
2           29,260 msg/s    40,740 msg/s    +39%
5           27,850 msg/s    40,080 msg/s    +44%
10          26,170 msg/s    39,590 msg/s    +51%
25          24,140 msg/s    36,110 msg/s    +50%

39-54% faster across the board. The improvement comes from three things: immediate event notification via the pipe, the 1.6x faster poll_nb that skips GVL overhead, and consolidating all producers into a single polling thread that eliminates GVL contention.

The Trade-offs

Callbacks execute on the Poller thread. In thread mode, each producer's callbacks run on its own polling thread. In FD mode with the default singleton Poller, all callbacks share the single Poller thread. Don't perform expensive or blocking operations inside message.acknowledged or statistics.emitted. This was never recommended in thread mode either, but FD mode makes it worse - if your callback takes 500ms, it delays polling for all producers on that Poller, not just one.
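
One way to keep callbacks cheap is to do nothing but hand the event off to a queue that an application thread processes. A sketch using WaterDrop's monitor subscriptions (update_metrics is a placeholder for your own work):

ack_events = Queue.new

# Runs on the Poller thread in FD mode - enqueue and return immediately.
producer.monitor.subscribe('message.acknowledged') do |event|
  ack_events << event.payload
end

# Heavy lifting happens on an application-owned thread instead.
Thread.new do
  while (payload = ack_events.pop)
    update_metrics(payload)
  end
end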

Don't close a producer from within its own callback when using FD mode. Callbacks execute on the Poller thread, and closing from within would cause synchronization issues. Close producers from your application threads.

How to Use It

producer = WaterDrop::Producer.new do |config|
  config.kafka = { 'bootstrap.servers': 'localhost:9092' }
  config.polling.mode = :fd
end

Pipe creation, Poller registration, lifecycle management - all handled internally.

You can differentiate priorities between producers:

high = WaterDrop::Producer.new do |config|
  config.polling.mode = :fd
  config.polling.fd.max_time = 200  # more polling time
end

low = WaterDrop::Producer.new do |config|
  config.polling.mode = :fd
  config.polling.fd.max_time = 50   # less polling time
end

max_time controls how long the Poller spends draining events for each producer per cycle. Higher values mean more events processed per wake-up but less fair scheduling across producers.

Dedicated Pollers for Callback Isolation

By default, all FD-mode producers share a single global Poller. If a slow callback in one producer risks starving others, you can assign a dedicated Poller via config.polling.poller:

dedicated_poller = WaterDrop::Polling::Poller.new

producer = WaterDrop::Producer.new do |config|
  config.kafka = { 'bootstrap.servers': 'localhost:9092' }
  config.polling.mode = :fd
  config.polling.poller = dedicated_poller
end

Each dedicated Poller runs its own thread (waterdrop.poller#0, waterdrop.poller#1, etc.). You can also share a dedicated Poller between a subset of producers to group them - for example, giving critical producers their own shared Poller while background producers use the global singleton. The dedicated Poller shuts down automatically when its last producer closes.

When config.polling.poller is nil (the default), the global singleton is used. Setting a custom Poller is only valid with config.polling.mode = :fd.

The Rollout Plan

I'm being deliberately cautious. Karafka runs in too many production environments to rush this.

Phase 1 (WaterDrop 2.8, now): FD mode is opt-in. Thread mode stays the default.

Phase 2 (WaterDrop 2.9): FD mode becomes the default. Thread mode remains available with a deprecation warning.

Phase 3 (WaterDrop 2.10): Thread mode is removed. Every producer uses FD-based polling.

Two full minor release cycles to test before it becomes mandatory.

What's Next: The Consumer Side

The producer was the easier target - simpler event loop, more straightforward queue management. I'm working on similar improvements for Karafka's consumer, where the gains could be even more significant. Consumer polling has additional complexity around max.poll.interval.ms and consumer group membership, but the core idea is the same: replace per-thread blocking polls with file descriptor notifications and efficient multiplexing.


Find WaterDrop on GitHub and check PR #780 for the full implementation details.

Claude on Incus – All the autonomy, securely

Here's claude-on-incus (or coi for short) - a tool for running Claude Code freely in isolated Incus containers.

If it's useful to you, a star helps.

Note: I'm also working on "code-on-incus" - a generalized version for running any AI coding assistant in isolated containers.

Why?

Three reasons: security, a clean host, and full contextual environments.

Security

Claude Code inherits your entire shell environment. Your SSH keys, git credentials, .env files with API tokens - everything. You either click "Allow" hundreds of times per session, or use --dangerously-skip-permissions and hope nothing goes wrong.

With coi, Claude runs in complete isolation. Your host credentials stay on the host. Claude can't leak what Claude can't see.

What remains exposed: The Claude API token must be present inside the container, and your mounted workspace files are accessible. A malicious or compromised model could theoretically exfiltrate these over the network. Network filtering to restrict outbound connections is under development.

Clean host, full capabilities

Claude loves installing things. Different Node versions, Python packages, Docker images, random build tools. On bare metal, this clutters your system with dependencies you may not actually need.

With coi, Claude can install and run whatever the task requires - without any of it touching your host. Need a specific Ruby version for one project? A Rust toolchain for another? Let Claude set it up in the container. Keep it if useful, throw it away if not.

VM-like isolation, Docker-like speed. Containers start in ~2 seconds.

Contextual environments

Each project can have its own persistent container with Claude's installed context and setup. Your web project has Node 20 and React tools. Your data project has Python 3.11 with pandas and jupyter. Your embedded project has cross-compilers and debugging tools.

Claude remembers what it installed and configured - per project, completely isolated from each other.

Why Incus over Docker?

Claude often needs to run Docker itself. Docker-in-Docker is a mess - you either bind-mount the host socket (defeating isolation) or run privileged mode (no security). Incus runs system containers where Docker works natively without hacks.

Incus also handles UID mapping automatically. No more chown after every session.

Quick start

# Install (or build from sources if you prefer)
curl -fsSL https://raw.githubusercontent.com/mensfeld/claude-on-incus/master/install.sh | bash

# Build image (first time only)
coi build

# Start coding
cd your-project
coi shell

Features And Capabilities

  • Multi-slot sessions - Run parallel Claude instances for different tasks. Each slot has its own isolated home directory, so files don't leak between sessions:
      coi shell --slot 1  # Frontend work
      coi shell --slot 2  # API debugging
  • Session resume - Stop working, come back tomorrow, pick up where you left off with full conversation history. Sessions are workspace-scoped, so you'll never accidentally resume a conversation from a different project:
      coi shell --resume
  • Persistent containers - By default, containers are ephemeral but your workspace files always persist. Enable persistence to also keep your installed tools between sessions:
      coi shell --persistent
  • Detachable sessions - All sessions run in tmux, allowing you to detach from running work and reattach later without losing progress. Your code analysis or long-running task continues in the background:
      # Detach: Press Ctrl+b d
      # Reattach to running session
      coi attach

The "dangerous" flags are much safer now

Claude Code's --dangerously-skip-permissions flag has that name for good reason when running on bare metal. Inside a coi container, the threat model changes completely:

Risk                         Bare metal   Inside coi
SSH key exposure             Yes          No - keys not mounted
Git credential theft         Yes          No - credentials not present
Environment variable leaks   Yes          No - host env not inherited
Docker socket access         Yes          No - separate Docker daemon
Host filesystem access       Full         Only mounted workspace

The "dangerous" flags give Claude full autonomy to work efficiently. The container isolation ensures that autonomy can't be weaponized against you.

Summary

coi gives you secure, isolated Claude Code sessions that don't pollute your host. Install anything, experiment freely, keep what works, discard what doesn't.

The project is MIT licensed on GitHub.
