Tag: open-uri

Rails + Paperclip + Open-Uri – Downloading files from the internet and saving them with Paperclip

I don't like hotlinking. Being dependent on someone else's infrastructure makes me a bit worried, especially when I don't know who this "someone" is. That's why I've decided to download all the images that users embed in comments. After downloading them, I attach them with Paperclip to their comments. Thanks to that, I can be sure that they will always be served from my own servers (and I can use them later for some fancy stuff).

Theoretically, to achieve this, we could do something like this:

# We will use open-uri to download embedded images
require "open-uri"

file = open(image_url)
current_comment.pictures.create!(file: file)

Unfortunately, this will only "almost" work. If we try to get the file URL:

current_picture.file.url #=> '/images/pictures/12313/file.'

we'll get it without a file extension. This happens because open-uri creates a Tempfile without a file extension, and unfortunately Paperclip uses the file name to determine the extension. To get around this issue, we need to create a new Tempfile with a valid extension and copy the open-uri Tempfile's content into the newly created one:

# We will use open-uri to download embedded images
require "open-uri"
require "tempfile"

# Create a Tempfile whose name ends with the original file extension
file = Tempfile.new ['', ".#{image_url.split('.').last}"]
file.binmode # note that our tempfile must be in binary mode
file.write open(image_url).read
file.rewind
current_comment.pictures.create!(file: file)

Update - Paperclip can do this on its own!

Posted by Aditya Sanghi (thanks a lot!):

current_comment.pictures.create!(file: URI.parse(image_url))

Keep in mind, though, that you still need to handle 404, 500 and similar errors (Paperclip can raise them).
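For example, one way to wrap that is shown below (a sketch; attach_remote_image and its arguments are illustrative names, not part of Paperclip's API):

require "open-uri"

# A minimal sketch of such error handling - attach_remote_image
# and its arguments are illustrative names
def attach_remote_image(comment, image_url)
  comment.pictures.create!(file: URI.parse(image_url))
rescue OpenURI::HTTPError => e
  # Covers HTTP-level failures such as 404 and 500 responses
  Rails.logger.warn "Could not fetch #{image_url}: #{e.message}"
  nil
rescue SocketError, Timeout::Error => e
  # Covers network-level failures: DNS problems, timeouts, etc.
  Rails.logger.warn "Connection error for #{image_url}: #{e.message}"
  nil
end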

Using Redis as a temporary cache for data shared by multiple independent processes

There are times when you need to share some data between multiple processes, and one of the ways to do it is to use Redis. Not so long ago I was working with an application which performs several types of calculations based on crawled websites. A crawler pulls URLs from a page and sends them to RabbitMQ. A worker then fetches those URLs and ignites multiple calculation processes (each of them is independent of the others - they just use the same dataset). They should use the same dataset, but to be honest... they didn't. Each process was downloading the same website over and over again, so instead of a page being downloaded once, it was downloaded at least N times (where N is the number of processes working with that page). That was insane! Each page was analyzed by each process only once, and the maximum time needed to handle those calculations was approximately 10-15 minutes (with 10 processes). A large part of that time was spent just downloading pages.

Below you can see how it was working before the optimization:

[Diagram: before - each process downloads every page directly from the Internet]

And how it should work:

[Diagram: after - processes fetch pages from a shared Redis cache, downloading only on a cache miss]

As you can see above, there is one method which communicates with Redis. It should try to retrieve a cached page (or any other resource you want to store), and if that fails, it should download the page directly from the Internet. As mentioned before, data should stay in Redis only for a certain amount of time (max 15 minutes). Furthermore, I don't want to take care of removing expired data myself. Wouldn't it be great if the data could just "magically" disappear? It certainly would!

Redis key expiration

On each Redis key we can set a timeout. After the timeout has expired, the key will automatically be deleted. You can set an expiration time like this (example from the Redis docs):

redis>  SET mykey "Hello"
OK
redis>  EXPIRE mykey 10
(integer) 1
redis>  TTL mykey
(integer) 10
redis>  SET mykey "Hello World"
OK
redis>  TTL mykey
(integer) -1
redis>

We will store pages in Redis and set the TTL to 15*60 seconds (15 minutes).
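Since we always set a key and its expiration together, Redis also offers SETEX, which does both in a single atomic command - for example (the key name is illustrative):

redis>  SETEX webpage:some_key 900 "<html>...</html>"
OK
redis>  TTL webpage:some_key
(integer) 900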

Connecting Ruby to Redis

Making Ruby work with Redis is really easy. There is a gem called "redis", which is quite simple to use:

require 'rubygems'
require 'redis'

options = {
  :timeout     => 120,
  :thread_safe => true
}

# ENV values are strings, so cast the port and db number to integers
host = ENV["REDIS_HOST"] || 'localhost'
port = (ENV["REDIS_PORT"] || 6379).to_i
db   = (ENV["REDIS_DB"]   || 2).to_i

options.merge!(:host => host, :port => port, :db => db)

redis = Redis.new(options)

redis.set "my_key", "value"
p redis.get "my_key" # => "value"

Creating our main class

The source code is self-explanatory:

require 'digest/md5'
require 'open-uri'

class WebPage
  attr_reader :redis, :ttl

  def initialize(redis, ttl = 5)
    @redis = redis
    @ttl = ttl
  end

  # Read a page - from the cache when possible
  def read(url)
    # Try to get the page content from Redis
    # and if that fails
    unless d = get(url)
      # Just download and read it
      d = open(url).read
      set(url, d)
    end
    d
  end

  private

  # Cache the page content under the url's key
  def set(url, page)
    k = key(url)
    @redis.set(k, page)
    @redis.expire(k, @ttl)
  end

  # Get data from a Redis key (nil if missing or expired)
  def get(url)
    @redis.get(key(url))
  end

  # Prepare the Redis key for a given url
  def key(url)
    "webpage:#{Digest::MD5.hexdigest(url)}"
  end
end
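Here is a short usage sketch, assuming the redis connection from the previous section and the 15-minute TTL discussed earlier:

# 900 seconds = 15 minutes
web_page = WebPage.new(redis, 15 * 60)

# The first call downloads the page and caches it in Redis;
# later calls within the TTL are served straight from the cache
content = web_page.read("http://example.com/")
puts content.bytesize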

Of course, if you want to use this in production, you should also add SSL (HTTPS) support and error handling (404s, 500s, timeouts, etc.).
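A minimal sketch of what that error handling could look like (FetchError is a hypothetical class added for this example, not part of the original code; open-uri itself can open https URLs when Ruby is built with OpenSSL support):

# Hypothetical error class, so callers can rescue a single thing
class FetchError < StandardError; end

class WebPage
  # One illustrative way to harden #read against remote failures
  def read(url)
    unless d = get(url)
      d = open(url).read
      set(url, d)
    end
    d
  rescue OpenURI::HTTPError => e
    # HTTP-level failures such as 404 and 500 responses
    raise FetchError, "HTTP error for #{url}: #{e.message}"
  rescue SocketError, Timeout::Error, Errno::ECONNREFUSED => e
    # Network-level failures: DNS errors, timeouts, refused connections
    raise FetchError, "network error for #{url}: #{e.message}"
  end
end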

Performance benchmark

I've prepared a simple chart where you can see the performance boost when requesting the same page over and over again (Redis cache benchmark source code):

[Chart: time of 100 requests with and without the Redis cache]

To minimize temporary CPU load overhead (random requests hitting my dev server, etc.), I've repeated the test set 10 times and averaged the data, so it should be quite reliable.

Making 100 requests takes about 6.5 seconds. Making 100 requests with the cache takes about 0.09 seconds. That is about 72 times faster!
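The measurement itself boils down to timing repeated read calls, e.g. with Ruby's Benchmark module (a sketch reusing the web_page instance from above, not the exact benchmark code behind the chart):

require 'benchmark'
require 'open-uri'

url = "http://example.com/"

without_cache = Benchmark.realtime do
  100.times { open(url).read } # every request hits the network
end

with_cache = Benchmark.realtime do
  100.times { web_page.read(url) } # only the first request hits the network
end

puts "without cache: #{without_cache.round(2)}s"
puts "with cache:    #{with_cache.round(2)}s"
puts "speedup:       #{(without_cache / with_cache).round}x"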

Summary

Caching data is really important, and the differences can be huge (10, 100, even 1000 times faster with caching enabled!). Whatever type of application you design, always think about caching as one of the ways to speed it up.
