
Expanding relative and absolute URLs in Python

Today I've got quite an interesting piece of code, used to expand URLs (to join them when they come in parts). For example:

expand_url('http://www.test.com/', '/me') 
  => http://www.test.com/me
expand_url('http://www.test.com/abc', 'test') 
  => http://www.test.com/abc/test

The code was quite "crazy" (not mine):

import re
from urlparse import urlparse, urljoin, urlunparse

def expand_url(home, url):
    if re.match(r"^\w+\://", url):
        return url
    else:
        parts = home.split('/')
        if len(parts) > 2:
            if re.match(r"^/", url):
                return "%s//%s%s" % (parts[0], parts[2], url)
            else:
                url = url.split('/')
                if url[0] == '.':
                    del(url[0])
                proto = parts.pop(0)
                return "%s//%s" % (proto, "/".join(parts[1:-1] + url))
        else:
            return False

It had one big disadvantage: when expanding URLs with relative parts, it didn't resolve the "." and ".." hierarchy levels, so the output URLs looked like this:

expand_url('http://www.test.com/', './../me')
  => http://www.test.com/./../me
expand_url('http://www.test.com/abc', './../../test') 
  => http://www.test.com/abc/./../../test

A little messy, I think. After some googling I found a nicer, smaller expanding method (which works like a charm):

import posixpath
from urlparse import urlparse, urljoin, urlunparse

def expand_url(home, url):
    # Join the relative URL with the base, then collapse "." and ".." segments
    joined = urljoin(home, url)
    parts = urlparse(joined)
    path = posixpath.normpath(parts.path)

    return urlunparse(
        (parts.scheme, parts.netloc, path, parts.params, parts.query, parts.fragment)
    )
expand_url('http://www.test.com/', './../me') 
  => http://www.test.com/me
expand_url('http://www.test.com/abc', './../../test') 
  => http://www.test.com/test
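The same approach carries over to Python 3, where the urlparse module became urllib.parse. A small sketch (not from the original post):

```python
import posixpath
from urllib.parse import urlparse, urljoin, urlunparse

def expand_url(home, url):
    # Join the relative URL against the base, then collapse any
    # remaining "." and ".." segments in the path
    parts = urlparse(urljoin(home, url))
    path = posixpath.normpath(parts.path)
    return urlunparse(
        (parts.scheme, parts.netloc, path, parts.params, parts.query, parts.fragment)
    )

expand_url('http://www.test.com/abc', './../../test')
# => 'http://www.test.com/test'
```

Since Python 3.6, urljoin already removes dot segments per RFC 3986, so posixpath.normpath mostly acts as a safety net here.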

Rails + SEO = Fancy URLs

Today I've been thinking about "SEOing" my project and I decided to tune it up a little. It would be nice to have URLs like this:

2452254234-rails-seo-czyli-ladniejsze-adresy-url

Of course, this kind of thing is really easy to do in Rails.

First of all, when you use the find method in Rails, it casts your string into an integer, so you don't need to strip the SEO part when using params[:id]:

Something.find(params[:id])
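The cast behaves roughly like Ruby's String#to_i, which reads the leading digits and ignores everything after the first non-digit character, so the record ID survives the appended slug:

```ruby
# String#to_i parses leading digits and drops the rest of the string
id = "2452254234-rails-seo-czyli-ladniejsze-adresy-url".to_i
puts id  # => 2452254234
```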

So it is really easy to use (nothing changes on the lookup side), but how do we create such a URL?

We take one of the attributes (let's assume name), remove all "non-URL" characters (spaces, special characters, etc.) and downcase the result. Once we have our "SEO part", we prepend the object's ID and that's all. I've developed a simple method on String that converts it into a URL-friendly form:

  def to_url
    temp = self.downcase
    # Transliterate accented characters to their ASCII equivalents
    temp.gsub!(/[âäàãáåăąǎǟǡǻȃȧẵặ]/, 'a')
    temp.gsub!(/[ëêéèẽēĕėẻȅȇẹȩęḙḛếễểḕḗệ]/, 'e')
    temp.gsub!(/[ìíîĩīĭïỉịįȉȋḭɨḯ]/, 'i')
    temp.gsub!(/[òóôõȯöőǒơǫɵøồốỗổȱȫȭṑṓớỡởợǭộǿ]/, 'o')
    temp.gsub!(/[ùúûũūŭüủůűǔȕȗưụṳųṷṵṹṻǖǜǘǚừứữửự]/, 'u')
    temp.gsub!(/[ỳýŷỹȳÿỷẙƴỵ]/, 'y')
    temp.gsub!(/[ñǹń]/, 'n')
    temp.gsub!(/[çć]/, 'c')
    temp.gsub!(/[ß]/, 'ss')
    temp.gsub!(/[œ]/, 'oe')
    temp.gsub!(/[ĳ]/, 'ij')
    temp.gsub!(/[ł]/, 'l')
    temp.gsub!(/[ś]/, 's')
    temp.gsub!(/[źż]/, 'z')
    temp.gsub!(/[^a-z0-9 ]/, '')  # strip anything that is still not URL-safe
    temp.gsub!(/\s/, '-')         # spaces become dashes
    temp.gsub!(/\-+$/, '')        # trim trailing dashes
    temp.gsub!(/^\-+/, '')        # trim leading dashes
    temp
  end

Performing:

"Rails + SEO = Fancy URLs".to_url

will give me:

"rails-seo-fancy-urls"

Now let's create a method called url:

      # Create a fancy, SEO-friendly URL
      def url
        "#{self.id}-#{self.name.to_url}"
      end

And finally we override Rails' default to_param method:

      def to_param
        self.url
      end
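Putting the pieces together, here is a minimal plain-Ruby sketch (a hypothetical Post struct, no Rails required; the slug logic is a condensed stand-in for to_url above):

```ruby
# Hypothetical stand-in for an ActiveRecord model; to_param is what
# Rails' URL helpers call when building a path for the object
Post = Struct.new(:id, :name) do
  def to_param
    # Collapse runs of non-alphanumerics to dashes, then trim the edges
    slug = name.downcase.gsub(/[^a-z0-9]+/, '-').gsub(/\A-+|-+\z/, '')
    "#{id}-#{slug}"
  end
end

post = Post.new(2452254234, "Rails + SEO = Fancy URLs")
puts post.to_param  # => "2452254234-rails-seo-fancy-urls"
```

In a real Rails app, helpers such as link_to post.name, post would then emit /posts/2452254234-rails-seo-fancy-urls automatically.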

Now you can use nice, good-looking URLs without any additional changes in your application. Rails magic rulez :) If you want to use this in a number of models, you may be interested in my Acts more SEO gem.

Copyright © 2024 Closer to Code
