Android setup part 1 - Apps

Intro

I, like probably you, like to customize every aspect of my daily routine to be as good and efficient as possible. The smartphones that everyone is walking around with today have far more potential than what most users actually use them for.

I will write a couple of blog posts about my own setup, arrived at after a great amount of customizing, buying apps (which wasn't always worth it), reflashing, redesigning and a lot of trial and error. This first post is simply a list of my favorite apps.

Note: I don't use any screenshots, but I will provide links and descriptions. Note 2: this is my own personal perfect setup, the best I've found at the moment. Please comment if you have better ideas!

General setup

  • Phone: [Samsung Galaxy S3]
  • Android: 4.1
  • Rooted: yes!
  • Mod: [Cyanogenmod 10]

Apps

PS: I have left out some apps that are good but not relevant to this blog post, like 1Weather, Gmail, LinkedIn, GitHub, Google Street View, Google Maps, Google Drive, Dropbox, Firefox, Google Goggles, Evernote and so on. This list is about getting the most out of the phone in general, not about which todo-list, mail or hobby apps I use :) I also left out Felleskatalogen and Gule Sider, which are both Norwegian apps.

PS2: Sorry about the Norwegian currency (converted manually to USD as well), so prices are listed as NOK/USD. If there is a way to force the Google Play store into displaying prices in USD, please share!

PS3: The "needs root" notes might be a little incomplete. But that doesn't matter, because everyone should root their phone :)

PS4: Most of the paid apps in this list have free versions as well. Try those first!

  • Android Tuner

    • Price: 55,- / $9.5
    • Info: An expensive but complete tool for all your Android system tuning
    • root: some
  • Better Terminal

    • 22,- / $3.5
    • root: some
  • Business Calendar

    • 32,- / $5.5
    • Info: An advanced calendar with nice views and nice widgets.
  • DoggCatcher

    • 28,- / $4.5
    • Info: I haven't tried any others in a while, but this is a good podcast client.
  • Endomondo PRO

    • 30,- / $5
    • Info: A very good sports-tracker with a nice web overview.
  • Extended Controls

    • 7,- / $1
    • Info: Lots of button widgets that you can toggle and change almost any system setting with
  • ezPDF Reader

    • 23,- / $5
    • Info: A nice pdf reader on steroids.
  • Folder Organizer

    • 8,- / $1
    • Info: Create "tag" folders, nest them, star them and get control over where your apps are located.
  • KeePassDroid

    • Info: The most trusted solution for storing passwords safely, locally, and encrypted
  • Last Call Widget

    • Info: A widget that displays the last contact you were in touch with. You can call back with one touch
  • Light Flow

    • 15,- / $2.5
    • Info: Take control of your phone's LED and make it useful. This app can control every "alarm" aspect for MANY other apps
    • root: some
  • Locus Map Tweek

    • Info: Adds extra maps and some tweaks to Locus (like Google Maps, which is gone by default)
  • Locus Pro

    • 46,- / $8
    • Info: The most advanced map app you will find on the phone. Not the typical navigation app, but stacked with map/gps features!
  • MathStudio

    • 111,- / $19
    • Info: If you are looking to replace your high-end calculator, this is the app for you!
  • Multicon

    • Info: Widgets that make a 1x1 widget space able to contain 4 icons. Very useful for not polluting your homescreen with shortcuts
  • Pocket

    • Info: Read long articles from the web that you have "saved" earlier using read-it-later/pocket in your browser.
  • QR Droid

    • Info: An overall good QR reader
  • Root Explorer

    • 23,- / $5
    • Info: A very good file explorer designed for root users.
    • root: yes
  • Screebl Pro

    • 12,- / $2
    • Info: Detects when you are using/holding your phone to make sure the screen doesn't turn off.
  • Screen Filter

    • Info: Dims the screen below the normal minimum brightness with a tap on an icon. Useful e.g. at the cinema to reply to important texts.
  • Secure Settings

    • Info: Makes Tasker able to change deep system settings automatically, like which keyboard is the default on your phone.
    • root: yes
  • SQLite Editor

    • 17,- / $3
    • Info: Scans and lists all apps' internal databases, which you can then look at and edit
    • root: some
  • SwiftKey

    • 12,- / $2
    • Info: Probably the best keyboard out there..
  • Tapatalk

    • 17,- / $3
    • Info: A client for connecting to different online forums (if they support Tapatalk, which they probably do).
  • Tasker

    • 35,- / $6
    • Info: Lets you automate "anything", based on "anything".
    • root: some
  • Terminal Emulator

    • root: some
  • Terminal IDE

    • root: some
  • Titanium Backup

    • 37,- / $6 (in-app)
    • Info: A complete root level backup of your phone. Makes small packages of your apps and their settings which you can restore later
    • root: yes
  • Titanium Media Sync

    • 19,- / $3
    • Info: Automatically syncs files (like the Titanium Backup directory) to a remote server over e.g. SSH, FTP or similar.
    • root: some
  • Widget Locker

    • 17,- / $3
    • Info: Lets you have widgets on your lockscreen. You can even interact with them if you want.
  • Wolfram Alpha

    • 17,- / $3
    • Info: Wolfram computes everything! Even though it uses computing power online, it is good to have this app from time to time
  • Zoom

    • Info: Create small widgets on your phone that integrate heavily with Tasker.

Making your Linux prompt as useful as possible

There are a lot of colorful Linux prompts out there already, but most of them tend to be all about being colorful, not useful. Ninjab tries to be as useful as it can be in your day-to-day Linux management. It is made to be configurable, and it is easy to add your own bash hacks.

This blog post is mostly to get some screenshots out; more info is available on the GitHub page and in the bash scripts themselves.

Here are a couple of examples of how it behaves in different situations.

In a normal writable folder, as a normal (green) user, with an undefined http_proxy (red @), and over ssh (cyan hostname).

[screenshot: normal]

Same as above, but truncated (directory is max 1/3 of screen width)

[screenshot: truncated]

After a long running process

[screenshot: long_running_proc]

Inside a clean git folder in the master branch

[screenshot: git]

Inside a dirty (uncommitted changes) git repo, after a failed command (with exit code != 0) that took a long time to run

[screenshot: git_dirty_ec_longrun]

Inside a git repo with +1 committed change (ahead of the remote), with a tmux session running (that we are not attached to), 3 background processes, and no write access to the current folder

[screenshot: git_tmux_bg_nowrite]

Everything in the prompt means something: the color of the username, the @, and the hostname.

The prompt is just one part of ninjab. It will also set a couple of aliases, functions and shell settings. Take a look at the files in parts/* for more info about them. There are also a lot of configuration options for ninjab in the "config" file.

If you want your own bash stuff loaded by ninjab, just put your files in the "parts" folder.

More documentation is available on the GitHub page.


Tagging files and folders using hashtags and symlinks

There are lots of tools out there that let you organise files (especially your picture archive). However, they all depend on some sort of database and one master computer to add the tags from, and you can't browse the organised files in their organised structure from all devices.

I made this project because I had this exact problem organising my own pictures. I wanted something which:

  • Lets you tag pictures as close as possible to where you look at them (i.e., in the file browser itself).
  • Is platform independent.
    • Like really platform independent! I wanted to browse these tags on my TV!
  • Isn't another thing to back up. I am already backing up the pictures themselves.
  • Supports organising whole folders, not just single files.

There are probably many more people than myself annoyed by this problem, therefore I will share my solution: a Python script that goes through all the files in a directory, looks at the filenames and searches for hashtags. It sits in my storage NAS's crontab and runs every hour.
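
For the curious, here is a minimal sketch of that core idea: walk a directory tree and pull the hashtags out of file and folder names. This is only an illustration and not taggo's actual source; the helper names (extract_tags, find_tagged) and the exact regex are my own assumptions, only the hashtag convention itself comes from the post.

# Illustrative sketch only -- not the real taggo code.
import os
import re

TAG_RE = re.compile(r'#(\w[\w-]*)')  # matches "#Business", "#USA", "#People-Lars", ...

def extract_tags(name):
    """Return all hashtags found in a single file or folder name."""
    return TAG_RE.findall(name)

def find_tagged(content_dir):
    """Yield (path, tags) for every file or folder whose name carries a hashtag."""
    for root, dirs, files in os.walk(content_dir):
        for name in dirs + files:
            tags = extract_tags(name)
            if tags:
                yield os.path.join(root, name), tags

if __name__ == '__main__':
    for path, tags in find_tagged(os.path.expanduser('~/Documents/my_pictures')):
        # A sub tag like "People-Lars" would later be split on "-" into People/Lars.
        print('%s -> %s' % (path, ', '.join(tags)))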

Example

File structure

xeor@omi { ~/Documents/my_pictures }$ find .
.
./2012 #Business trip to #USA
./2012 #Business trip to #USA/dcim0123 #People-Lars.jpg
./2012 #Business trip to #USA/dcim0124.jpg
./2012 #Business trip to #USA/dcim0125.jpg
./2012 #Business trip to #USA/dcim0126 #Conference.jpg

Running taggo

taggo run_once

Tags created

xeor@omi { ~/Documents/tags }$ find . 
.
./Business
./Business/root - 2012 #Business trip to #USA
./Conference
./Conference/2012 #Business trip to #USA - dcim0126 #Conference.jpg
./People
./People/Lars
./People/Lars/2012 #Business trip to #USA - dcim0123 #People-Lars.jpg
./USA
./USA/root - 2012 #Business trip to #USA

Explanation

As you can see in the file structure, we created one folder and 4 files. The folder itself, 2012 #Business trip to #USA, has two tags, #Business and #USA (as you probably already knew :) ). The dcim0123 file has a tag like #People-Lars, which means that taggo should treat it as a sub tag.

The tags created are just a bunch of symlinks to the original files. ./USA/root - 2012 #Business trip to #USA is a link to the folder called 2012 #Business trip to #USA, and the same goes for ./Business/root - 2012 #Business trip to #USA. For our sub tag, you can see that it ends up in the directory People/Lars: ./People/Lars/2012 #Business trip to #USA - dcim0123 #People-Lars.jpg.

Configuration

In the file called taggo.cfg you can define things like the tag indicator (the hashtag), the sub tag separator, what filenames the symlinks should get (the default is %(rel_folders)s - %(basename)s), what to replace / with in tag filenames, the content folder and the tag folder.

Taggo will automatically create the taggo.cfg file when you run it the first time. (Just do a ./taggo)
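
To make the default filename pattern a bit more concrete, here is a hedged sketch of how %(rel_folders)s - %(basename)s could expand into the names seen in the example output above. This is a guess at the mechanics, not taggo's real code; in particular the "root" fallback for top-level items is only inferred from that output.

# Illustrative sketch of the default filename pattern -- not taggo's real code.
import os

def symlink_name(rel_path, pattern='%(rel_folders)s - %(basename)s'):
    folder, basename = os.path.split(rel_path)
    # Assumption: the top level is called "root", and nested folders are joined with " - ".
    rel_folders = folder.replace(os.sep, ' - ') if folder else 'root'
    return pattern % {'rel_folders': rel_folders, 'basename': basename}

print(symlink_name('2012 #Business trip to #USA'))
# root - 2012 #Business trip to #USA
print(symlink_name('2012 #Business trip to #USA/dcim0123 #People-Lars.jpg'))
# 2012 #Business trip to #USA - dcim0123 #People-Lars.jpg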

Usage

Using taggo is simple: just put it in any directory and add something like 22 * * * * /usr/bin/python /path/to/taggo run_once to the crontab. It will make sure that new symlinks are created.

If you rename a file, the symlink will die. But when you use the run_once parameter, taggo will automatically delete the invalid symlinks. I have been very careful when creating the delete function: it will only delete symlinks where the path they point to does not exist. And to delete the empty directories, we use os.rmdir, which is a Python function made to delete empty directories only.
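
Here is a minimal sketch of that cleanup logic, just to illustrate the approach (it is not the actual delete function, and the bottom-up directory walk is my own assumption):

# Illustrative cleanup sketch -- not taggo's actual delete function.
import os

def cleanup_tags(tag_dir):
    # Walk bottom-up so empty directories can be removed after their contents.
    for root, dirs, files in os.walk(tag_dir, topdown=False):
        for name in files + dirs:
            path = os.path.join(root, name)
            # Only remove symlinks whose target no longer exists.
            if os.path.islink(path) and not os.path.exists(path):
                os.remove(path)
        if root != tag_dir:
            try:
                os.rmdir(root)  # os.rmdir only deletes empty directories.
            except OSError:
                pass  # Directory still holds valid symlinks; leave it alone.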

To find and use the project, check out the Github link at the top of this article.


Two factor ssh login, Google authenticator and SELinux

Too many people literally hate SELinux, come to the conclusion that it is way too complicated or unfriendly, and just end up turning it off instead of trying to fix it so they can live with it.

I want two-factor authentication on one of my ssh servers. Google Authenticator is today's perfect solution for this. It is well made, supports many different platforms on the client side, has a PAM module, is open source and built on open standards. Most major Linux distros have a package for it, so it should be easy enough to install. On Scientific Linux (a great distro, by the way) or any other Red Hat based distro it is already in the EPEL repository, called google-authenticator.

This article isn't about setting up Google Authenticator; there are plenty of blog articles about that already, on How-To Geek, mnxsolutions or other places Google will take you.

When you are done with the initial Google Authenticator setup, you should have a file in your home directory called .google_authenticator. This file contains your authenticator secret and other information the authenticator needs to log you in and keep track of which tokens are valid.

But getting it to work with SELinux can be a little more tricky. Here is my process from start to finish.

Making a plan

If you try to log in now, you probably won't even see Google Authenticator ask for the token. This is because SELinux blocks sshd from reading random files in the user's home directory. You can normally see this in /var/log/secure with an entry like this:

Nov 12 23:34:49 omi sshd(pam_google_authenticator)[3350]: Failed to read "/home/xeor/.google_authenticator"

One way to find out what went wrong is to use SELinux's audit2allow tool like this:

root@omi { ~ }# grep ssh /var/log/audit/audit.log | audit2allow
#============= sshd_t ==============
#!!!! The source type 'sshd_t' can write to a 'file' of the following types:
# user_tmp_t, auth_cache_t, faillog_t, ssh_home_t, pam_var_run_t, pcscd_var_run_t, sshd_var_run_t, gitosis_var_lib_t, sshd_tmpfs_t, var_auth_t, root_t, krb5_host_rcache_t

allow sshd_t user_home_dir_t:file { rename write getattr read create unlink open };

As you can see, this is the SELinux module you would have to make to fix the "failed to read" error. There are several things to take note of based on this output and our current findings:

  • These are the errors generated by SELinux while it is already in enforcing (denying) mode! This means this is probably just the first of many errors we will meet.
  • The allow rule audit2allow recommends is way too wide. It basically suggests that anything running as the sshd type should be able to read/write/delete/etc. files in the whole user home directory. This would defeat the purpose of having all these rules locking sshd down.
  • sshd already has write access to a bunch of file types. We will use semanage to find out exactly which directories these are.

To get a list of paths that SELinux sets contexts on, use semanage fcontext -l. The list you get is the list for your current user. In the list below you can see that the SELinux type ssh_home_t belongs to files in /root/; that is because /root is my current home directory. More on this magic later in the article.

root@omi { ~ }# semanage fcontext -l | grep -E "user_tmp_t|auth_cache_t|faillog_t|ssh_home_t|pam_var_run_t|pcscd_var_run_t|sshd_var_run_t|gitosis_var_lib_t|sshd_tmpfs_t|var_auth_t|root_t|krb5_host_rcache_t"
/                                                  directory          system_u:object_r:root_t:s0 
/initrd                                            directory          system_u:object_r:root_t:s0 
/root/\.shosts                                     all files          system_u:object_r:ssh_home_t:s0 
/root/\.ssh(/.*)?                                  all files          system_u:object_r:ssh_home_t:s0 
/var/cache/coolkey(/.*)?                           all files          system_u:object_r:auth_cache_t:s0 
/var/cache/krb5rcache(/.*)?                        all files          system_u:object_r:krb5_host_rcache_t:s0 
/var/lib/abl(/.*)?                                 all files          system_u:object_r:var_auth_t:s0 
/var/lib/amanda/\.ssh(/.*)?                        all files          system_u:object_r:ssh_home_t:s0 
/var/lib/gitolite(/.*)?                            all files          system_u:object_r:gitosis_var_lib_t:s0 
/var/lib/gitolite/\.ssh(/.*)?                      all files          system_u:object_r:ssh_home_t:s0 
/var/lib/gitosis(/.*)?                             all files          system_u:object_r:gitosis_var_lib_t:s0 
/var/lib/pam_shield(/.*)?                          all files          system_u:object_r:var_auth_t:s0 
/var/lib/pam_ssh(/.*)?                             all files          system_u:object_r:var_auth_t:s0 
/var/log/btmp.*                                    regular file       system_u:object_r:faillog_t:s0 
/var/log/faillog                                   regular file       system_u:object_r:faillog_t:s0 
/var/log/tallylog                                  regular file       system_u:object_r:faillog_t:s0 
/var/run/faillock(/.*)?                            all files          system_u:object_r:faillog_t:s0 
/var/run/pam_mount(/.*)?                           all files          system_u:object_r:pam_var_run_t:s0 
/var/run/pam_ssh(/.*)?                             all files          system_u:object_r:var_auth_t:s0 
/var/run/pcscd\.comm                               socket             system_u:object_r:pcscd_var_run_t:s0 
/var/run/pcscd\.events(/.*)?                       all files          system_u:object_r:pcscd_var_run_t:s0 
/var/run/pcscd\.pid                                regular file       system_u:object_r:pcscd_var_run_t:s0 
/var/run/pcscd\.pub                                regular file       system_u:object_r:pcscd_var_run_t:s0 
/var/run/sepermit(/.*)?                            all files          system_u:object_r:pam_var_run_t:s0 
/var/run/sshd\.init\.pid                           regular file       system_u:object_r:sshd_var_run_t:s0 
/var/run/sudo(/.*)?                                all files          system_u:object_r:pam_var_run_t:s0 
/var/tmp/HTTP_23                                   regular file       system_u:object_r:krb5_host_rcache_t:s0 
/var/tmp/host_0                                    regular file       system_u:object_r:krb5_host_rcache_t:s0

OK, so ~/.ssh/ kind of looks good, but that won't work without some PAM configuration. Let's try to create an SELinux module for sshd instead and use the default /home/username/.google_authenticator location.

Making a SELinux module

audit2allow supports an -M modulename option that creates the module for you based on what you pipe to it. However, we will make this one from scratch, since it's easier to learn from, and easier to test and maintain.

First, create a folder for our module; let's just put it in /root/selinux/modules/sshd_google_authenticator/ for now, and go into it. Make sure you can find the SELinux devel makefile, usually located at /usr/share/selinux/devel/Makefile. On Scientific Linux, yum provides /usr/share/selinux/devel/Makefile tells me that it comes with the package selinux-policy. You can now use this Makefile to compile the SELinux policy module. A good tip is to create an alias called semake like this:

alias semake='make -f /usr/share/selinux/devel/Makefile'

Still in your sshd_google_authenticator folder, either use audit2allow to get the basic rules, or create them from scratch. A quick heads-up about the file extensions:

  • .te is the type enforcement file. This contains all the rules and code to confine the application. This is the main file.
  • .fc is a list of paths and files that should get a specific context. You can list all of them using semanage fcontext -l (that's a lowercase L).
  • .if is the interface file, used to expose information to other domains. Don't mind this file for now.

All you really need is the .te file, and this is how it looks in our sshd Google Authenticator module (it can be named anything, as long as it ends with .te).

# Name and version, every module should have this.
policy_module(sshd_google_authenticator, 0.0.1)

# List of the types, class and everything else you are going to use in your module that is not defined in this .te file.
# If you are getting any errors when you compile your module that it is unable to find a type, you probably forgot to declare it here.
require {
  type sshd_t;
}

# This is where we define our type. A good practice is to append _t to all types.
# This is the type we are going to give our .google_authenticator file.
type sshd_google_authenticator_t;

# What role our type should have. This is almost always going to be object_r
role object_r types sshd_google_authenticator_t;

# What sshd_t (the context the ssh daemon runs as) should be able to do with our type (sshd_google_authenticator_t),
# as a file. rename, create and unlink are base definitions, rw_file_perms is a set of rules.
# The rw_file_perms group is defined in /usr/share/selinux/devel/include/support/obj_perm_sets.spt with a lot of other
# groups. Reading this file gives you a good overview of what they allow.
allow sshd_t sshd_google_authenticator_t:file { rename create unlink rw_file_perms };

# Without this, SELinux will be way too strict as default, as it won't know what this type really is.
# Remember that SELinux doesn’t only deal with files, but sockets and other filetypes as well.
# Leaving this out will still allow sshd_t to do its stuff, but you, in your shell will see a weird file.
# The only thing you will see is the file name. Even permissions will be hidden from you. (a fun trick to pull on your friends.. :] )
# An overview of this is located at http://oss.tresys.com/docs/refpolicy/api/kernel_files.html.
files_type(sshd_google_authenticator_t)

Now that this file is created, create another file with the same name but with the .fc extension. This should contain one line:

HOME_DIR/\.google_authenticator     --  gen_context(system_u:object_r:sshd_google_authenticator_t,s0)

This file has 3 parts: path, file type and context.

The first part is the path. For home directories, the home directory is replaced with HOME_DIR; this is all taken care of by SELinux, which generates it from the home folders in passwd (by default). Don't use things like /home/*/.google_authenticator here, it won't work as expected. However, other paths that are not in anyone's home directory should be fine to enter here. Besides the magic HOME_DIR alias there are HOME_ROOT, ROLE_…, and user_… (I think), but these are almost never used, and documentation for them is hard to find.

The second part is the file type. -- means that this is a regular file, -d means it is a directory (there are more options as well). Nothing special about this.

The last part is the context our file should get, and the MLS level (the s0 part; don't worry about this).

Now that you have these two files, check that they compile using our semake alias. Just type semake. If you see something like this, congrats:

Compiling targeted sshd_google_authenticator module
/usr/bin/checkmodule:  loading policy configuration from tmp/sshd_google_authenticator.tmp
/usr/bin/checkmodule:  policy configuration loaded
/usr/bin/checkmodule:  writing binary representation (version 10) to tmp/sshd_google_authenticator.mod
Creating targeted sshd_google_authenticator.pp policy package
rm tmp/sshd_google_authenticator.mod.fc tmp/sshd_google_authenticator.mod

If not, go through your two files and try to find the error. If your output was similar, you should now have a file with the same name as the others but with a .pp extension. This is your compiled module!

To enable it, type semodule -i sshd_google_authenticator.pp, and it should load. semodule -r sshd_google_authenticator will remove the module again. Also note that loading the module will not change the context of your file, but SELinux will now use your .fc file with restorecon or other relabelling commands. Run restorecon /home/username/.google_authenticator to do that now, and ls -lZ /home/username/.google_authenticator to verify that the file has its new context.

And in case you wonder, your module will survive a reboot :)

Our module has a problem

Now that we have a module that works as expected, I have one piece of good news and one piece of bad news. The bad one: our module won't solve our sshd Google Authenticator problem. The good one: you now hopefully know how to create your own basic modules for whatever you want.

The problem is not SELinux related; SELinux works as it should. However (as you might already have spotted), we needed to give sshd unlink and rename permissions on our file. We can verify that the file is replaced with a new one by looking at its inode: run ls -li /home/username/.google_authenticator before and after logging in with a one-time password. The inode changes because the file is recreated, which means it loses its SELinux context and ruins the whole plan.

SELinux doesn't really care about file paths. I don't know if there is any way to have a file get the default context we want here, without putting it in another folder first and making a rule that says "every file created in this folder should get our_context_t". New files created in your home folder get the user_home_t context, which is NOT something we want sshd to have full control over. That would break the whole idea…

Update 18. Nov 2012: Look at the section We can use plan A after all to solve this problem without going to plan B.

Plan B

We already know that /home/username/.ssh is a folder that acts the way we want. So our plan B is to put our .google_authenticator file in there instead.

The Google Authenticator libpam README says that we can manually set the location of the secret. Looks like our solution is simpler than first thought.

Replace the entry in our pam.d file with:

auth       required     pam_google_authenticator.so secret=/home/${USER}/.ssh/.google_authenticator

That's all; it should now work in perfect harmony with SELinux, even without our newly created module! For obvious reasons this solution won't work for the root user, because of the home path. But you should disable root login anyway. Use sudo!

We can use plan A after all (update 18. Nov 2012)

Thanks to Matthew Ife (#MatthewIfe) for pointing out this tip in the comment section.

If we use filetrans_pattern (which is available from Fedora 15+) we can get around the relabelling problem fairly elegantly. Add

filetrans_pattern(sshd_t, user_home_dir_t, sshd_google_authenticator_t, file, ".google_authenticator")

to the bottom of your .te file, and it will make sure that every file named .google_authenticator that sshd_t creates in a folder with the type user_home_dir_t is labeled sshd_google_authenticator_t. This is just what we want!

If you do this, remember the following:

  • Don't add the secret option to pam.d
  • Add type user_home_dir_t; to the require {} section of your .te file.

One extra paranoid tip

Arguably, letting the user have access to the .google_authenticator file is a security risk, since an attacker can steal or change the secret if they are able to get to your logged-in terminal. Sure, but then you have already lost. Still, if you are using this for only one user and want that extra paranoid setup, check out this blog post at Axivo, especially the Dark side part.


SELinux - Finding info

If you look around the interwebs for SELinux information, you will probably search for a while and after some hours ask yourself why most of it is from 2006. To be honest, I really don't know why. But if you think SELinux is dead because of this, think again; SELinux is very much alive. One reason might be that SELinux is really stable and has proven so over a long time, so my theory is that it is more or less done. There is no big need to add features. That being said, a nice interface for writing SELinux policies would be more than welcome.

I will write a couple of blog articles about SELinux from time to time. This entry is just to show you where I have found different pieces of information, and why I looked for exactly that info.

  • Eli Billauer has a really great blog post about creating SELinux policies. Read it!
  • danwalsh blog is a blog by Dan Walsh mostly about SELinux stuff.
  • tresys refpolicy api is the one and only reference guide.
  • tresys refpolicy kernel files is a list of the most used interfaces, like files_type() and its siblings. You can't really create policy modules without them, as they define how your file/socket/whatever should act. Deserves its own mention.
  • tresys refpolicy is a reference policy you can download.
  • tresys slide looks like a promising IDE, but was last updated in 2009 or something.
  • nsa docs is a list of resources.
  • nsa policy language is one of the very few places you can actually get a list of available policy module macros.
  • fedora policygentools is a list of some tools to help you create policies. Mostly old stuff.

Added: 16. Nov 2012


Python threading example, creating Pinger.py

Update 18. Nov 2012: Cleaned up some comments about cores. To make it clear, this will only run on 1 core!

Threading in Python can be confusing in the beginning. Many examples out there are overly complicated, so here is another example that I have tried to keep simple.

Here, I want a fast way to ping every host/IP in a list: as fast as we can, threaded, and finally returning a dict with two items, a list of dead nodes and a list of nodes that answer ping.

Example:

In [1]: from pinger import Pinger
In [2]: ping = Pinger()
In [3]: ping.thread_count = 8
In [4]: ping.hosts = ['10.0.0.1', '10.0.0.255', '10.0.0.100', 'google.com', 'nonexisting', '*not able to ping!*', '8.8.8.8']
In [5]: ping.start()
Out[5]: 
{'alive': ['10.0.0.255', '10.0.0.1', 'google.com', '8.8.8.8'],
 'dead': ['*not able to ping!*', 'nonexisting', '10.0.0.100']}

The example above will ping 8 hosts at a time and save the results until the end. We set thread_count to 8 in this example, which means that Python will have 8 ping commands running at the same time.

The whole source of the Pinger class looks like this, read the comments and you will see how it works:

#!/usr/bin/env python

import subprocess
import threading

class Pinger(object):
    status = {'alive': [], 'dead': []} # Populated while we are running
    hosts = [] # List of all hosts/ips in our input queue

    # How many ping process at the time.
    thread_count = 4

    # Lock object to protect the shared host list from race conditions between threads.
    lock = threading.Lock()

    def ping(self, ip):
        # Use the system ping command with count of 1 and wait time of 1.
        ret = subprocess.call(['ping', '-c', '1', '-W', '1', ip],
                              stdout=open('/dev/null', 'w'), stderr=open('/dev/null', 'w'))

        return ret == 0 # Return True if our ping command succeeds

    def pop_queue(self):
        ip = None

        self.lock.acquire() # Grab or wait+grab the lock.

        if self.hosts:
            ip = self.hosts.pop()

        self.lock.release() # Release the lock, so another thread could grab it.

        return ip

    def dequeue(self):
        while True:
            ip = self.pop_queue()

            if not ip:
                return None

            result = 'alive' if self.ping(ip) else 'dead'
            self.status[result].append(ip)

    def start(self):
        threads = []

        for i in range(self.thread_count):
            # Create self.thread_count number of threads that together will
            # cooperate removing every ip in the list. Each thread will do the
            # job as fast as it can.
            t = threading.Thread(target=self.dequeue)
            t.start()
            threads.append(t)

        # Wait until all the threads are done. .join() is blocking.
        [ t.join() for t in threads ]

        return self.status

if __name__ == '__main__':
    ping = Pinger()
    ping.thread_count = 8
    ping.hosts = [
        '10.0.0.1', '10.0.0.2', '10.0.0.3', '10.0.0.4', '10.0.0.0', '10.0.0.255', '10.0.0.100',
        'google.com', 'github.com', 'nonexisting', '127.0.1.2', '*not able to ping!*', '8.8.8.8'
        ]

    print ping.start()

Blog technology

After going back and forth on what technology I wanted behind my blog, I decided on:

Pelican as the static blog generator

Pelican is written in Python, is very extensible with plugins, and makes it easy to create themes. It is also easy to configure and use. The main reasons I went with Pelican are its simplicity and the possibilities for customization.

There are already other blogs out there that explain Pelican's advantages and disadvantages, and others with info about using GitHub Pages with Pelican, so that is not something I will spend time on here. But if you like to blog using plain text, Python, HTML/JS/CSS customization and a powerful generator to turn it into a blog, Pelican might be something to check out.

Multimarkdown as the writing "format"

MultiMarkdown is an extension of Markdown. Markdown is a structured way of writing articles, snippets, mail or even whole books. It was created as a way to write plain text which can later be converted to HTML/PDF/ODT/LaTeX or whatever you want, while keeping the structure you want.

To be honest, everyone who sends mail on a daily basis should at least look into this, or at least think about it. Getting mails that contain a lot of text and no structure is painful to read.

When it comes to Markdown vs. reStructuredText, I ended up with Markdown because it feels much bigger than rst. Even though rst is something the Python community uses a lot, it just feels a little dead. I have even tried to use rst for a long time, but it is missing some love from other people.

GitHub Pages for hosting the generated HTML files

I love using GitHub for my open source projects, so it felt very natural to use their Pages feature to host the HTML files for my blog. It is free, easy to publish to, and stable. I don't really have much more to say on this, but if my blog were not going to be a bunch of static files, I would probably have used Heroku.


Another blog is born

Date: Sun 21 October 2012 | Category: misc | Tags: blog

Starting a new blog in late 2012, I feel like I am a little too late. But there are several reasons I haven't started blogging before now.

  • I don't think anyone really cares about my day-to-day stuff, so why write about it?
  • I think it is more exciting to work on my own projects than to write about them.
  • I don't want to spend a lot of time writing something only to lock it down deep inside a database.
  • I've always felt like it is too late to start writing a blog. No kidding, I have had that feeling since 2003.

So, the reasons I will try to write a couple of blog articles from time to time now are:

  • The blog project on my todo list has been inactive for too long now. I want those todos out of my head :)
  • I want a personal place where I can look up my own projects, and a good place to document problems. Because of that, this blog will be more of a polished notebook for me.
  • I think it is time to teach myself more Markdown. Hell, everyone should learn some Markdown! And/or MultiMarkdown.
  • Two things usually make me happy when I try to figure out a problem using Google: finding a Stack Overflow post, or a blog post. As I already contribute to Stack Overflow, I figured a blog would also be nice to have.

Most of the stuff I am going to write about will be in the very geeky genre. All the geeky goodness will be in the geeky category. There will also be priv and misc categories, but not many more.

All tags and categories will have their own ATOM feeds as well.