
Running a script on startup before X starts in Ubuntu Linux

I have two xorg.conf files that I use, depending on where I am. Lately I got sick of switching them manually like this:

sudo cp /etc/X11/xorg.conf.h /etc/X11/xorg.conf
sudo pkill X

So I decided to add a switching script to the rc.local file. Unfortunately, rc.local runs after all the other services, including X. It would still work if I kept sudo pkill X in the script, but to be honest, killing X every time I turn on my computer is a bit lame.

Run levels to the rescue

You can read about run levels here. For now, it's enough to check your current run level by running this command:

runlevel

You'll get something like this:

[~]$ runlevel 
N 2
[~]$ 

My Ubuntu uses run level 2 by default, so I know that I need to hook in before X starts in this run level. To do this, you need to create a script in the /etc/rc2.d/ directory. If you run ls -al in any of the rc*.d directories, you'll see that all the scripts there are just symlinks to scripts in /etc/init.d/. It's convenient to store them all there, because they can be reused in other run levels.

Naming convention

All scripts (symlinks) that should be executed in a given run level have an 'SNUMBER' prefix. The S stands for "Start" and the number determines the order in which the scripts will be executed (LSB headers can also "disrupt" the specified order, but fortunately not in our case).

Example of rc2.d:

lrwxrwxrwx   1 root root    15 2012-04-25 16:42 S20mysql -> ../init.d/mysql
lrwxrwxrwx   1 root root    15 2011-06-29 09:55 S20nginx -> ../init.d/nginx
lrwxrwxrwx   1 root root    17 2011-11-14 21:25 S20postfix -> ../init.d/postfix
lrwxrwxrwx   1 root root    22 2011-06-29 02:03 S20redis-server -> ../init.d/redis-server
lrwxrwxrwx   1 root root    16 2012-04-28 21:51 S20tcpspy -> ../init.d/tcpspy
lrwxrwxrwx   1 root root    17 2011-06-22 01:18 S20vboxdrv -> ../init.d/vboxdrv
lrwxrwxrwx   1 root root    17 2011-06-21 22:40 S20winbind -> ../init.d/winbind
lrwxrwxrwx   1 root root    19 2011-06-22 00:05 S25bluetooth -> ../init.d/bluetooth
lrwxrwxrwx   1 root root    20 2011-06-22 00:05 S50pulseaudio -> ../init.d/pulseaudio
lrwxrwxrwx   1 root root    15 2011-06-22 00:05 S50rsync -> ../init.d/rsync
lrwxrwxrwx   1 root root    15 2011-06-22 00:05 S50saned -> ../init.d/saned
lrwxrwxrwx   1 root root    19 2011-06-22 00:05 S70dns-clean -> ../init.d/dns-clean
lrwxrwxrwx   1 root root    18 2011-06-22 00:05 S70pppd-dns -> ../init.d/pppd-dns
lrwxrwxrwx   1 root root    14 2012-05-27 19:33 S75sudo -> ../init.d/sudo
lrwxrwxrwx   1 root root    17 2011-06-21 22:39 S91apache2 -> ../init.d/apache2
lrwxrwxrwx   1 root root    22 2011-06-22 00:05 S99acpi-support -> ../init.d/acpi-support
lrwxrwxrwx   1 root root    21 2011-06-22 00:05 S99grub-common -> ../init.d/grub-common
lrwxrwxrwx   1 root root    18 2011-06-22 00:05 S99ondemand -> ../init.d/ondemand
lrwxrwxrwx   1 root root    18 2011-06-22 00:05 S99rc.local -> ../init.d/rc.local

X conf switching script

I've named this script 'xselector' and placed it in /etc/init.d with the rest of the scripts. Remember to make the script executable (chmod +x)!

#!/bin/sh
video_home(){
  rm /etc/X11/xorg.conf
  cp /etc/X11/xorg.conf.dom /etc/X11/xorg.conf
}

video_work(){
  rm /etc/X11/xorg.conf
  cp /etc/X11/xorg.conf.praca /etc/X11/xorg.conf
}

DAY=$(date +"%u")
HOUR=$(date +"%H")

# If this is a work day
if [ "$DAY" -lt 6 ]; then
  # And these are hours when I'm @ work
  if [ "$HOUR" -gt 7 -a "$HOUR" -lt 18 ]; then
    video_work
  else
    video_home
  fi
else
  video_home
fi
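The date format specifiers do the heavy lifting here: %u gives the ISO day of the week (1 = Monday … 7 = Sunday) and %H the zero-padded hour (00-23). A quick sanity check of the logic with GNU date (the sample timestamp is arbitrary):

```shell
# Evaluate the specifiers for a fixed timestamp.
# 2012-06-01 was a Friday, so %u prints 5; %H prints the hour 09.
date -d "2012-06-01 09:30" +"%u %H"
```

With that timestamp, DAY=5 and HOUR=09, so the script would pick video_work (work day, within 8-17).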

After creating the script, we just need to link it into the run level directory:

sudo ln -s /etc/init.d/xselector /etc/rc2.d/S15xselector

Using MongoDB to store and retrieve CSV files content in Ruby

There are cases when we want to store the contents of CSV (or similar) files in a database. The problem occurs when the input files differ (they might have different columns). This would not be an issue if we knew each file's specification before parsing it and inserting it into the DB. One solution (the fastest) would be to store each file type in its own DB table. But what should we do when the number of columns and their names are unknown? We could use an SQL database with one table for the data and another table mapping DB columns to CSV columns. This might work, but it would not be an elegant solution. So what can we do?

MongoDB to the rescue!

This case is just perfect for MongoDB. MongoDB is a scalable, high-performance, open source NoSQL database that lets us create documents with different attributes assigned to them. To make it work with Ruby, you just need to add this to your Gemfile:

gem "mongoid"

Then you need to load a Mongoid YAML config file and you are ready to go (MongoDB installation instructions can be found here):

Mongoid.load!("./mongoid.yml", :production)

Mongoid example config file

Here is a really small Mongoid config file:

production:
  sessions:
    default:
      hosts:
        - localhost:27017
      database: csv_files
      username: csv
      password: csv
  options:
    allow_dynamic_fields: true
    raise_not_found_error: false
    skip_version_check: false

You can use your own config file, just remember to set allow_dynamic_fields to true!
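If the idea of dynamic fields seems odd, plain Ruby has a similar concept in OpenStruct: attributes come into existence on first assignment instead of being declared up front (this is just an analogy, not Mongoid code):

```ruby
require 'ostruct'

# Each CSV row could carry a different set of columns
row = OpenStruct.new
row.name   = 'invoice.csv'
row.format = 'csv'

row.respond_to?(:name)    # => true
row.respond_to?(:missing) # => false
```

With allow_dynamic_fields, Mongoid documents behave in a similar way: any attribute passed to create! is persisted, declared or not.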

CSV parsing

We will do some really simple CSV parsing. All values will be stored as strings, so we just need to read the CSV file and create the needed attributes on the objects representing each file row:

require 'csv'

class StoredCSV
  include Mongoid::Document
  include Mongoid::Timestamps

  def self.import!(file_path)
    columns = []
    instances = []
    CSV.foreach(file_path) do |row|
      if columns.empty?
        # We don't want whitespace in attribute names
        columns = row.collect { |c| c.downcase.gsub(' ', '_') }
        next
      end

      instances << create!(build_attributes(row, columns))
    end
    instances
  end

  # Map the normalized column names to the row values
  def self.build_attributes(row, columns)
    attrs = {}
    columns.each_with_index do |column, index|
      attrs[column] = row[index]
    end
    attrs
  end
  private_class_method :build_attributes
end
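The header normalization and row mapping can be exercised without Mongoid at all; here's the same transformation applied to an in-memory CSV string (the sample data is made up):

```ruby
require 'csv'

csv_text = "Product Name,Unit Price\nWidget,9.99\n"
rows     = CSV.parse(csv_text)

# Same normalization as in import!: lowercase, underscores for spaces
columns = rows.first.collect { |c| c.downcase.gsub(' ', '_') }
attrs   = columns.zip(rows[1]).to_h

attrs # => {"product_name"=>"Widget", "unit_price"=>"9.99"}
```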

That's all! Instead of creating SQL tables and doing mappings, we just let MongoDB dynamically create all the fields needed for a given CSV file.

Usage

StoredCSV.import!('data.csv')
stored_data = StoredCSV.all

Checking attribute names for a given object - don't use Mongoid's attribute_names method

You need to remember that different instances might have different attributes (since they come from different files), so you cannot just assume that all of them will have a "name" field. Mongoid has an attribute_names method, but it returns only the predefined attributes:

StoredCSV.first.attribute_names => ["_type", "_id", "created_at", "updated_at"]

To obtain all the fields of a given instance, use the keys of its attributes hash:

StoredCSV.first.attributes.keys => ["_id", "name", "format", "description"]

Summary

This was just a simple example, but it should be a good base for a bigger and better solution. A more complex key-extraction mechanism with a prefix should be implemented (this would protect reserved fields like "_id" from being overwritten), along with a whole bunch of other improvements ;)

Copyright © 2025 Closer to Code
