Running with Ruby

Lenovo ThinkPad X1 Carbon (6th Gen / 2018) Ubuntu 18.04 Tweaks

Warning: I’m not responsible for any damages or injury, including but not limited to special or consequential damages, that result from your use of these instructions.

Yesterday I finally received my new Lenovo ThinkPad X1 Carbon (6th gen) laptop. I cannot say anything bad about the hardware: it fits my needs and requirements exactly. Unfortunately, there are some flaws when it is used with Linux (Ubuntu in my case). Here are some hints on how to make things better.

Touchpad and Trackpoint under Linux

This is the most irritating issue that you will encounter.

Note: Try the first solution presented here first. If it doesn’t help, fall back to the general solution.

Solution working with Kernel 4.17.1-041701-generic

Note: with this solution you may lose the “tap to click” functionality from time to time (until a reboot).

  1. Edit the /etc/modprobe.d/blacklist.conf file and comment out the following line:

    # This line needs to be commented out
    # blacklist i2c_i801
  2. Edit the /etc/default/grub file and change this line:



    GRUB_CMDLINE_LINUX_DEFAULT="quiet splash psmouse.synaptics_intertouch=1"
  3. Run the following command:

    sudo update-grub
  4. Install xserver synaptics:

    sudo apt-get install xserver-xorg-input-synaptics
  5. Execute the following command (it may not be needed):

    # You may want to add it to your .bashrc to make it work after reboot
    synclient TapButton1=1 TapButton2=3 TapButton3=2
  6. Reboot

Old general solution

If you have a touchpad with NFC, you may observe the following behaviors:

  • it may not detect movements,
  • won’t work with tap-to-click,
  • will occasionally wake up for a couple of seconds and will stop working again.

Unfortunately, for the moment you will have to disable the trackpoint to make the touchpad work.

Here are the steps you need to follow.

Note: I’m also mentioning things you should not do, just in case you’ve followed other instructions that didn’t work.

  1. Disable trackpoint in the BIOS settings.
  2. Disable NFC in the BIOS settings.
  3. Don’t disable the trackpad in the BIOS settings (or enable it if you did) – this will make your touchpad’s embedded buttons work.
  4. You don’t have to use psmouse.synaptics_intertouch=1 for GRUB at all (no GRUB changes); if you’ve applied this change, please revert it.
  5. Don’t remove i2c_i801 from /etc/modprobe.d/blacklist.conf (it needs to be present and uncommented).
  6. Install the 4.17.0-41770rc5-generic (or newer) Linux kernel from the mainline based on the instructions presented below.
  7. Reboot your system with the new Kernel.
  8. Be happy with your working touchpad with the bottom physical buttons.

Installing the mainline kernel

  1. Go here:
  2. Download all the files for a selected kernel version from the Build for amd64 group, except those with lowlatency in their name. For me, that was 4 files in total.
  3. Open a terminal and go to the location where you’ve downloaded the files.
  4. Run the following command:
    sudo dpkg -i *.deb
  5. Reboot.

Low cTDP and trip temperature in Linux

This problem is related to thermal throttling limits on Linux, which are set much lower than the Windows values. This will cause your laptop to run much slower than it could under heavy load.

Before you attempt to apply this solution, please make sure that the problem still exists when you read this. To do so, open a Linux terminal and run the following commands:

sudo apt-get install msr-tools
sudo rdmsr -f 29:24 -d 0x1a2

If you see 3 as the result value (or 15 when running on battery), you don’t have to do anything. Otherwise:

  1. Disable Secure Boot in the BIOS (it won’t work otherwise)
  2. Run this command:
    sudo apt install git virtualenv build-essential python3-dev \
      libdbus-glib-1-dev libgirepository1.0-dev libcairo2-dev
  3. Clone this git repository and enter it:
    git clone
    cd lenovo-throttling-fix/
  4. Install the patches:
    sudo ./
  5. Check again that the result of running the rdmsr command is 3
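For the curious, the rdmsr invocation above reads the MSR_TEMPERATURE_TARGET register (0x1a2) and extracts bits 24–29, the TCC activation offset (the number of degrees below Tjmax at which throttling kicks in). The same bit twiddling can be sketched in a few lines of Ruby (the sample register value below is made up for illustration):

```ruby
# Extracts bits 24..29 (the TCC activation offset) from a raw
# MSR_TEMPERATURE_TARGET (0x1a2) value, mirroring `rdmsr -f 29:24`
def tcc_offset(msr_value)
  (msr_value >> 24) & 0x3F
end

# 0x03640000: Tjmax = 0x64 (100°C) sits in bits 16..23, offset 3 in bits 24..29,
# so the CPU would start throttling at 100 - 3 = 97°C
tcc_offset(0x03640000) # => 3
```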

Personally, I use slightly lower temperature levels to preserve battery life at the expense of some performance. If you want to change the default values, you need to edit the /etc/lenovo_fix.conf file and set Trip_Temp_C for both battery and AC the way you want:

[BATTERY]
# Other options here...
Trip_Temp_C: 80

[AC]
# Other options here...
Trip_Temp_C: 90

Battery charging thresholds

There are a lot of theories and information about ThinkPad charging thresholds. Some theories say thresholds are needed to keep the battery healthy, some think they are useless and the battery will work the same just as it is. In this article I will try not to settle that argument. 🙂 Instead I try to tell how and why I use them, and then proceed to show how they can be changed in different versions of Windows, should you still want to change these thresholds.

Description taken from: ThinkPad battery charging thresholds (for Windows).

I always stick with the following settings for my laptops (and somehow I feel that it works):

  • Start threshold: 45%
  • Stop threshold: 75%

This means that charging will start only if the battery level drops below 45% and will stop at 75%. This prevents the battery from being charged too often and from being charged beyond the recommended level.
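To make the start/stop behavior explicit, here is a toy Ruby sketch of the hysteresis described above (this is just an illustration of the logic, not anything TLP actually runs):

```ruby
START_THRESHOLD = 45 # charging begins only below this level
STOP_THRESHOLD  = 75 # charging ends once this level is reached

# Decides what the charger should do for a given battery level
def charger_action(battery_level, charging)
  if charging
    battery_level >= STOP_THRESHOLD ? :stop : :keep_charging
  else
    battery_level < START_THRESHOLD ? :start : :stay_idle
  end
end

charger_action(60, false) # => :stay_idle (still above the start threshold)
charger_action(40, false) # => :start
charger_action(74, true)  # => :keep_charging
charger_action(75, true)  # => :stop
```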

To achieve this on Linux-based machines, you need to install some packages by running:

sudo apt-get install tlp tlp-rdw acpi-call-dkms tp-smapi-dkms

After that, just edit the /etc/default/tlp file and set the following values:

# Uncomment both of them if commented out
START_CHARGE_THRESH_BAT0=45
STOP_CHARGE_THRESH_BAT0=75

Reboot, run:

sudo tlp-stat | grep tpacpi-bat

and check if the values are as you expect:

tpacpi-bat.BAT0.startThreshold          = 45 [%]
tpacpi-bat.BAT0.stopThreshold           = 75 [%]

Note that if you need to have your laptop fully charged, you can achieve that by running the following command while connected to AC:

tlp fullcharge

Custom battery monitor / indicator

As you’ve probably already noticed, I really like keeping my laptop batteries in good shape. That’s much easier when you are aware of the battery’s state, especially when it goes below 25% (as discharging that deep is unhealthy for it). To get this type of notification, you can just install the battery monitor app:

sudo add-apt-repository ppa:maateen/battery-monitor -y
sudo apt-get update
sudo apt-get install battery-monitor -y

After that, reboot the system and the app will start automatically.

Too small (or too big) letters for WQHD resolution

If you’ve got yourself the Carbon version with the WQHD screen, you may notice that everything is extremely big (or super small). That’s because of the scaling factor. Unfortunately, you cannot use fractional scaling (more details here), which means that you’ll end up either with everything being super small (100%) or super large (200%).

Luckily for you, there’s an easy way out.

Setting things from the console

Just run the following commands:

gsettings set org.gnome.desktop.interface text-scaling-factor 1.5
gsettings set org.gnome.nautilus.icon-view default-zoom-level standard
gsettings set org.gnome.desktop.interface cursor-size 32
gsettings set org.gnome.shell.extensions.dash-to-dock dash-max-icon-size 64

Setting things using the UI (Gnome Tweak Tool)

Install Gnome Tweak Tool as follows:

sudo apt-get install gnome-tweak-tool

run it and set the Font Scaling Factor to 1.50.

Unobtrusive mode

My previous Dell laptop had a great feature called Unobtrusive mode: pressing Fn+B would turn off the screen as well as the keyboard and touchpad. Although I was unable to mimic the whole behavior, you can assign the following command as a keyboard shortcut in Gnome to disable the screen upon pressing the Fn+B combination:

xset -display :0.0 dpms force off


I was kind of surprised by the amount of tuning required to make this laptop work with Ubuntu. I always considered Lenovo to be Linux friendly, especially as it is a brand loved by many programmers. On the other hand, maybe that’s exactly the reason why they didn’t put too much effort into making sure everything works out of the box. We’re programmers – we can fix that stuff on our own ;)

Anyhow, enjoy your X1 Carbon as much as I do!

Picture taken from the Lenovo website

Karafka framework 1.2.0 Release Notes (Ruby + Kafka)

Note: These release notes cover only the major changes. To learn about various bug fixes and changes, please refer to the change logs or check out the list of commits in the main Karafka repository on GitHub.

Note: 1.2 release is the last release that will require ActiveSupport to work.

Code quality

I will start with the same thing as with 1.1: we’re constantly working towards a better and easier code base. Despite many changes to our code-base stack, we were able to maintain a pretty decent offenses distribution and trend.

It’s worth pointing out that we’re now using many components of the Dry-Rb ecosystem much more extensively, and we love it!


This release brings significant performance improvements, allowing you to consume around 40-50k messages per second per single topic. We could squeeze out a bit more (around 5-10%) by using symbols as the default metadata params key names, but this would introduce a lot of complexity and confusion, since JSON parsing returns string keys. It would also introduce some problematic incompatibilities when using additional backend engines that serialize the whole params_batch and deserialize it back.
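The string-keys argument is easy to see in plain Ruby, where JSON parsing yields string keys by default:

```ruby
require 'json'

# JSON.parse returns string keys, so defaulting metadata to string keys
# keeps parsed payloads and params consistent without extra conversion
payload = JSON.parse('{"user":{"name":"Maciek"}}')

payload['user'] # => {"name"=>"Maciek"}
payload[:user]  # => nil - symbol keys simply are not there
```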

Karafka is a complex piece of software and benchmarking it can be tricky. There are many use cases that need to be considered: some of them single-threaded, some multi-threaded, some with non-parsed data rejections and some requiring multi-thread interactions. That’s why it is really hard to design a single benchmark that can compare multiple Kafka + Ruby frameworks in a fair way.

We’ve decided not to go that way, but rather to compare new releases with the previous ones. Here are the results of running the same logic with 1.1 and 1.2 multiple times (more is better):

For some edge cases, Karafka 1.2 can be up to 3x faster than 1.1.

If you are looking for some cross-framework benchmark results, they are available here.


Controllers are now Consumers

Initial versions of Karafka were built with the idea that we could ignore the transportation layer when working with data. Regardless of whether it was an HTTP request, a Kafka message or anything else, as long as the data was in a compatible format, we should not have to adapt our business logic to it.

That was the primary reason why, prior to Karafka 1.2, you would put logic in controllers that inherited from ApplicationController or KarafkaController. And this was a mistake.

More and more companies use Karafka within a typical Ruby on Rails stack, in which controllers are meant to be Rails controllers. Less experienced developers who encounter Karafka controllers within the Rails app/controllers namespace would often end up trying to use some Rails-controller-specific API magic without realizing that they’re within a Karafka controller’s scope. To eliminate this problem and to match Kafka naming conventions, the processing units responsible for feeding you with Kafka data have been renamed to consumers, and from now on there are no controllers in the Karafka ecosystem.

# Within app/consumers
class UsersCreatedConsumer < ApplicationConsumer
  def consume
    params_batch.each { |params| User.create!(params['user']) }
  end
end

New instrumentation engine using Dry-Monitor

Note: Dry-Monitor usage requires a separate article. Here’s just a brief summary of what we did with it.

The old Karafka monitor was too magical. It would auto-detect the context in which it was invoked, automatically build notification scopes and do a lot of other things. This was really cool, but it was:

  • Slow
  • Hard to maintain
  • Bug sensitive
  • Code change sensitive
  • Not isolated from the rest of the system
  • Hard to use with custom tools like NewRelic or Airbrake
  • Limited when it comes to instrumenting with multiple tools at the same time
  • Too custom to be easily replaced

We are proud to announce that, from now on, Dry-Monitor is the instrumentation backbone of the whole Karafka ecosystem. Here’s a simple example of what you can achieve using it:

Karafka.monitor.subscribe 'params.params.parse.error' do |event|
  puts "Oh no! An error: #{event[:error]} occurred!"
end

And, to be honest, the possibilities are endless: from simple logging, through in-production performance monitoring, up to multi-target complex instrumentation. Please refer to the Monitoring and logging section of the Karafka Wiki for more details.
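If you have never used a pub/sub-style monitor, the core idea can be sketched in a few lines of plain Ruby. This is a toy model of the subscribe/instrument pattern, not Dry-Monitor’s actual implementation:

```ruby
# A minimal event bus: many independent subscribers per event name,
# each notified with the same payload when the event is instrumented
class TinyMonitor
  def initialize
    @subscribers = Hash.new { |hash, key| hash[key] = [] }
  end

  def subscribe(event_name, &block)
    @subscribers[event_name] << block
  end

  def instrument(event_name, payload)
    @subscribers[event_name].each { |subscriber| subscriber.call(payload) }
  end
end

monitor = TinyMonitor.new
errors = []
monitor.subscribe('params.params.parse.error') { |event| errors << event[:error] }
monitor.instrument('params.params.parse.error', error: 'boom')
errors # => ["boom"]
```

Because subscribers are fully independent, hooking a logger, NewRelic and a custom metric collector to the same event is just three `subscribe` calls.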

Dynamic Karafka::Params::Params parent class

Karafka is designed to handle a lot of messages. Each incoming message is wrapped with a lazily evaluated hash-like object. Prior to 1.2, each params object was built based on ActiveSupport::HashWithIndifferentAccess. Truth be told, it is not the fastest library ever (benchmark details here), especially when compared to a PORO Hash:

Common Hash#[] access:  8306261.5 i/s
Common Hash#fetch access:  6053147.2 i/s - 1.37x slower
HashWithIndifferentAccess #[] String:  3803546.0 i/s - 2.18x slower
HashWithIndifferentAccess#fetch String:  1993671.6 i/s - 4.17x slower
HashWithIndifferentAccess#fetch Symbol:  1932004.0 i/s - 4.30x slower
HashWithIndifferentAccess #[] Symbol:  1422367.3 i/s - 5.84x slower
Hash#with_indifferent_access #[] String:   470876.8 i/s - 17.64x slower
Hash#with_indifferent_access #fetch String:   414701.6 i/s - 20.03x slower
Hash#with_indifferent_access #fetch Symbol:   410033.7 i/s - 20.26x slower
Hash#with_indifferent_access #[] Symbol:   381347.2 i/s - 21.78x slower

Now imagine that in some cases we create 50 000 objects like that per second. This had to have a serious impact on the framework’s performance. As always, there needs to be a trade-off: should we go with a Hash in the name of performance, or should we use HashWithIndifferentAccess for the sake of “simplicity”? We will let you choose whatever you find more suitable.

For that reason, we’ve provided a params_base_class config setting that you can use to set up the base params class from which Karafka::Params::Params will inherit. By default, it is a plain Hash.

require 'active_support/hash_with_indifferent_access'

class App < Karafka::App
  setup do |config|
    # Other settings...
    # config.params_base_class = Hash
    config.params_base_class = HashWithIndifferentAccess
  end
end

Keep in mind that you can use other base classes, for example a concurrent hash, to your advantage. The only requirement is that it needs to have the same API as a Ruby Hash.
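As an illustration of that requirement, here is a hypothetical Hash subclass (not a Karafka API, just an example of a class that would qualify as a base) that counts reads while keeping the full Hash interface:

```ruby
# A Hash that behaves exactly like Hash but tracks how many times
# values were read - any such Hash-compatible class could serve as
# the params base class
class CountingHash < Hash
  attr_reader :reads

  def initialize(*)
    super
    @reads = 0
  end

  def [](key)
    @reads += 1
    super
  end
end

h = CountingHash.new
h['a'] = 1
h['a']   # => 1
h.reads  # => 1
```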

System callbacks reorganization with multiple callbacks support

Note: This will be unified into a single set of events that you will be able to hook into in 1.3, using Dry-Events.

Because some things in Karafka happen outside of the consumer scope, there are two types of callbacks available:

  • Lifecycle callbacks – callbacks that are triggered during various moments in the Karafka framework lifecycle. They can be used to configure additional software dependent on Karafka settings or to do one-time stuff that needs to happen before consumers are created.
  • Consumer callbacks – callbacks that are triggered during various stages of the message flow

You can read more about them and how to use them in the Callbacks wiki section.

before_fetch_loop configuration block for early client usage (#seek, etc)

This new callback will be executed once per consumer group per process, before we start receiving messages. It is a great place to use Kafka’s #seek functionality if you need to reprocess already fetched messages.

Note: Keep in mind that this is a per-process configuration (not per consumer), so you need to check whether a provided consumer_group (if you use multiple) is the one you want to seek against.

class App < Karafka::App
  # Setup and other things...

  # Moves the offset back to message 100, so we can reprocess messages again
  # @note If you use multiple consumer groups, make sure you execute #seek on a client of
  #   a proper consumer group, not on all of them
  before_fetch_loop do |consumer_group, client|
    topic = 'my_topic'
    partition = 0
    offset = 100

    client.seek(topic, partition, offset)
  end
end

Rewritten NewRelic client

Thanks to NewRelic’s kindness, we were able to rewrite the whole listener, which can now collect various information about the Karafka data flow. It is super easy to use and extend. You can find it in the Monitoring and Logging wiki section.

Key and/or partition key support for responders

You can now provide key and/or partition_key when using responders:

module Users
  class CreatedResponder < KarafkaResponder
    topic :users_created

    def respond(user)
      # The key value below is an example - use any string that fits your partitioning
      respond_to :users_created, user, key: user.id.to_s
    end
  end
end

Alias for client#mark_as_consumed on a consumer level

Simple yet powerful. For maximum performance, you may use manual offset commit management. If you do, you can now use #mark_as_consumed directly, without having to refer to the #client object.

class UsersCreatedConsumer < ApplicationConsumer
  def consume
    params_batch.each { |params| User.create!(params['user']) }
    mark_as_consumed params_batch.last
  end
end

Incompatibilities and breaking changes

Controllers are now Consumers

Please refer to the features section for this one. It is both a feature and a breaking change at the same time.

after_fetched renamed to after_fetch to normalize the naming convention

class ExamplesConsumer < Karafka::BaseConsumer
  include Karafka::Consumers::Callbacks

  after_fetched do
    # Some logic here
  end

  def consume
    # some logic here
  end
end

is now:

class ExamplesConsumer < Karafka::BaseConsumer
  include Karafka::Consumers::Callbacks

  after_fetch do
    # Some logic here
  end

  def consume
    # some logic here
  end
end

received_at renamed to receive_time to follow ruby-kafka and WaterDrop conventions

The received_at params key is now receive_time. This means that two timestamp values are available for each params object:

  • receive_time – the moment the message was received by our Karafka process
  • create_time – the moment the message was created in the producer
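Having both timestamps makes it easy to measure producer-to-consumer lag. A quick sketch with made-up values:

```ruby
require 'time'

create_time  = Time.parse('2018-02-27 18:53:30 +0100') # set by the producer
receive_time = Time.parse('2018-02-27 18:53:31 +0100') # set by the Karafka process

# Subtracting two Time objects yields the difference in seconds
lag_in_seconds = receive_time - create_time # => 1.0
```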

Hash is now the default params base class in favor of ActiveSupport::HashWithIndifferentAccess

Long story short: performance and fewer dependencies. You can still use it, though:

require 'active_support/hash_with_indifferent_access'

class App < Karafka::App
  setup do |config|
    # Other settings...
    config.params_base_class = HashWithIndifferentAccess
  end
end

All metadata keys are strings by default

Since the default params class is now a Hash, we had to pick either symbols or strings as the key names for all the metadata attributes. We’ve decided to go with strings, as they are more serialization friendly and cooperate better with the various backends used with Karafka.

Note: If you use HashWithIndifferentAccess, nothing really changes for you.

def consume
  params_batch.first.keys #=> ["parser", "partition", "offset", "key", "create_time", ...]
end

JSON parsing defaults now to string keys

Since there is no indifferent access by default, when lazily parsing the JSON Kafka data, Karafka will default to string keys that are merged into the params object. If you’re not planning to use HashWithIndifferentAccess, make sure that your code base is ready for this change.

Karafka 1.1:

class UsersCreatedConsumer < ApplicationConsumer
  def consume
    # Assuming user data is in the 'user' json scope
    params_batch.each do |params|
      params[:user] #=> { name: 'Maciek' }
      params['user'] #=> { name: 'Maciek' }
      params['receive_time'] #=> 2018-02-27 18:53:31 +0100
    end
  end
end

Karafka 1.2:

class UsersCreatedConsumer < ApplicationConsumer
  def consume
    # Assuming user data is in the 'user' json scope
    params_batch.each do |params|
      params[:user] #=> nil
      params['user'] #=> { name: 'Maciek' }
      # Note, that system keys are strings as well
      params['receive_time'] #=> 2018-02-27 18:53:31 +0100
    end
  end
end

Configurators removed in favor of the after_init block configuration

What were configurators? Let me quote the 1.1 wiki on that one:

For additional setup and/or configuration tasks you can create custom configurators. Similar to Rails these are added to a config/initializers directory and run after app initialization.

Due to the changed lifecycle of the Karafka process, more things are built dynamically upon boot. This means that, in order to run initializers properly, we would have to control the load order in a more granular way. That’s why this functionality has been replaced with an after_init callback declaration:

class App < Karafka::App
  # Setup and other things...

  # Once everything is loaded and done, assign the Karafka app logger as the Sidekiq logger
  # @note This example does not use config details, but you can use all the config values
  #   to set up your external components
  after_init do |_config|
    Sidekiq::Logging.logger = Karafka::App.logger
  end
end

Note: you can have as many callbacks of any type as you want. They can also be objects, as long as they respond to a #call method.
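That “anything responding to #call” convention means blocks, lambdas and plain objects are interchangeable as callbacks. A small sketch (the class and variable names here are made up for illustration):

```ruby
# A callable object: state lives in the instance, the framework only needs #call
class AuditCallback
  def initialize(log)
    @log = log
  end

  def call(config)
    @log << "audited: #{config[:env]}"
  end
end

log = []

# Objects and lambdas can be mixed freely in the same callback list
callbacks = [
  AuditCallback.new(log),
  ->(config) { log << "lambda saw: #{config[:env]}" }
]

config = { env: 'production' }
callbacks.each { |callback| callback.call(config) }

log # => ["audited: production", "lambda saw: production"]
```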

Karafka ecosystem gems versioning convention

Karafka is composed of several independent libraries. The most important ones are:

  • Karafka – The main gem that is used to build Karafka applications that consume messages
  • WaterDrop – a standalone Karafka component library for generating Kafka messages
  • Capistrano-Karafka – Integration for deployment using Capistrano
  • Karafka Sidekiq Backend – an optional proxy that will pass messages received from Karafka into Sidekiq jobs

Some Karafka users had problems with mismatched versions of those gems. From now on, they will all be released in sync up to the minor version. This means that if you decide to use Karafka 1.2 with other ecosystem libraries, you should match them to 1.2.* as well.

Note: This should be resolved automatically, as we locked all the proper versions within the gemspec, but it is still worth mentioning.


Our Wiki has been updated to match the 1.2 status. You may want to look at the rewritten Monitoring and logging section and the new Testing guide, which illustrates how you can test various Karafka ecosystem components.

Upgrade guide

Controllers are now Consumers

The following steps are required to move from controllers to consumers:

  1. Create an app/consumers directory
  2. Rename ApplicationController (or KarafkaController) to ApplicationConsumer / KarafkaConsumer
  3. Move the ApplicationConsumer and all Karafka consumers to app/consumers
  4. Rename files and classes by replacing “Controller” with “Consumer”
  5. If you use callbacks, don’t forget about Karafka::Consumers::Callbacks
  6. Do exactly the same with your specs/tests
  7. Replace controller with consumer in the consumer groups definitions in the karafka.rb file
  8. Rename all the remaining “Controller” occurrences in the karafka.rb file to “Consumer”

Karafka, WaterDrop and friends version match

This should be resolved automatically, but if you prefer, you can always lock all the Karafka ecosystem gems in your Gemfile:

gem 'karafka', '~> 1.2'
gem 'karafka-sidekiq-backend', '~> 1.2'
gem 'capistrano-karafka', '~> 1.2'

Ruby on Rails HashWithIndifferentAccess params compatibility mode

If you still want to use HashWithIndifferentAccess, feel free to do so:

require 'active_support/hash_with_indifferent_access'

class App < Karafka::App
  setup do |config|
    # Other settings...
    # config.params_base_class = Hash
    config.params_base_class = HashWithIndifferentAccess
  end
end

Default monitor and logger update

Please refer to the Monitoring and logging Wiki section for details on how both of those things work now. If you used the default monitoring and logging without any customization, all you need to do is subscribe the default listener in your karafka.rb file after the setup part, as shown in that Wiki section.


NewRelic client update

If you use our NewRelic example client, please take a look at the new one and upgrade accordingly.

Callbacks rename

class ExamplesConsumer < Karafka::BaseConsumer
  include Karafka::Consumers::Callbacks

  # Rename this
  after_fetched do
    # Some logic here
  end

  # To this
  after_fetch do
    # Some logic here
  end
end

Karafka params received_at renamed to receive_time

Again, just a name change: if you used the ‘received_at’ params timestamp, you’ll now find it under the ‘receive_time’ key.

Getting started with Karafka

If you want to get started with Kafka and Karafka as fast as possible, the best idea is to just clone our example repository:

git clone ./example_app

then bundle install all the dependencies:

cd ./example_app
bundle install

and follow the instructions from the example app Wiki.


Copyright © 2018 Running with Ruby
