Organizing C/C++ includes

After starting my new job programming in a big software project, I spent some thought on organizing includes and came up with a recommendation. As always, some things are obvious, some are not…

  1. You should only include what is necessary to compile the source code. Adding unnecessary
    includes means a longer compilation time, especially in large projects.
  2. Each header and its corresponding source file should compile cleanly on
    their own. That means: a source file that includes only its corresponding
    header should compile without errors, and the header should include no
    more than what is necessary for that.
  3. Try to use forward declarations as much as possible. If you’re using a
    class, but the header file only deals with pointers/references to
    objects of that class, then there’s no need to include the definition of
    the class. That is what forward declarations are designed for!

    // Forward declaration -- the full definition of MyClass is not needed here
    class MyClass;

  4. Note that some system headers may include others, but apart from a few
    exceptions this is not guaranteed. So if you need both
    <iostream> and <string>, include both, even if the code happens to
    compile with only one of them.
  5. To prevent multiple inclusion, with include loops and all such attendant horrors, use an #ifndef guard:

    #ifndef FOO_H
    #define FOO_H
      …contents of header file…
    #endif // FOO_H

    (Avoid guard names like _FOO_H: identifiers starting with an underscore followed by a capital letter are reserved for the implementation.)

  6. The order in which the includes appear (system includes and user includes) is up to the coding standard you follow.
  7. If you have the choice of picking a coding standard regarding the order at the beginning of a new project, I recommend going from local to global, each
    subsection in alphabetical order. That way you avoid introducing
    hidden dependencies. If you reverse the order and, e.g., myclass.cpp
    includes <string> before <myclass.h>, there is no way to catch
    at build time that myclass.h may itself depend on <string>. If later someone includes myclass.h but does not need <string>, they will
    get an error that has to be fixed either in the .cpp file or in the header.
  8. So the recommended order would be:
    • header file corresponding to its .cpp file
    • headers from the same component
    • headers from other components
    • system headers

If you use the Eclipse IDE (which I highly recommend), you can use a very nice feature that helps you organize includes (“Source -> Organize Includes”).

5 Steps: Build a custom kernel in Debian Wheezy (…including NVIDIA drivers)

There may be many reasons to build your own custom kernel: you have bought a new piece of hardware that is not supported by your current distribution, you want to enable or disable certain features, or just because you can.

Here are five easy steps to build a custom kernel on a Debian system, including the NVIDIA kernel module. There may be far better tutorials on building a custom kernel out there; I just want to give you the essential steps that I use…

As I am writing this, the latest mainline kernel is version 3.5. The latest stable release, which is what I recommend building, is 3.4.6. The following steps, however, build the latest mainline release.
So become root by typing “su” and entering your password in the console.
Make sure you have the requirements for building a kernel and the NVIDIA modules installed. If not, you can install them by running:
$ apt-get install -f module-assistant build-essential

1. Download the full source code from kernel.org and extract it to /usr/src:

$ cd /usr/src/

$ wget
$ tar -jxf linux-3.5.tar.bz2

2. Change into the source directory and take over the configuration of your current kernel (make oldconfig needs an existing .config; Debian stores the running kernel’s configuration in /boot):

$ cd linux-3.5
$ cp /boot/config-$(uname -r) .config
$ make oldconfig

The script asks you about settings for features that were not available in your current kernel and therefore have no entry in the copied configuration. I recommend selecting the default values (by simply pressing ENTER several times).

3. Edit the kernel configuration

$ make xconfig

This will bring up a GUI that makes it easy to edit the kernel configuration.

4. Build the kernel, install the modules and install the kernel

$ make
$ make modules_install
$ make install

Then reboot the system. Boot the new kernel you installed by selecting it from the GRUB boot menu. The system will not boot to a graphical desktop unless you have configured X to use the nouveau driver. So the last step is installing the NVIDIA driver.

5. Install the NVIDIA driver

Log in as root and run the module assistant to compile and install the NVIDIA kernel module.

$ m-a auto-install nvidia-kernel
After the installation is finished, reboot the system and you’re done 🙂

Update: How to patch the custom kernel 

Kernel 3.5.1 was released. There is no need to download the full source: you can simply download the patch and apply it to your source tree. Become root by typing “su” and your password, and change to /usr/src.
Download the patch:
$ wget
Extract it: 
$ bunzip2 patch-3.5.1.bz2
Change to source dir and apply the patch:
$ cd linux-3.5
$ patch -p1 < ../patch-3.5.1
Configure, build and install the updated kernel:
$ make oldconfig
$ make
$ make modules_install
$ make install
You also need to repeat the last step from above to rebuild the NVIDIA driver:
$ m-a auto-install nvidia-kernel
Reboot and enjoy the latest version of Linux 🙂 

TeX Live 2012

Last weekend, TeX Live 2012 was released. Most Linux distributions are still stuck with TeX Live 2009. If you want to enjoy the latest versions of all TeX packages, you can download it from the TeX Live website.
Another huge advantage over the package that comes with the Linux distribution is that you can use tlmgr, the TeX Live package manager.
After unpacking the archive, change to the resulting
install-tl-* subdirectory and start the installation by running install-tl. Leave everything at the default values and press “I” to start the installation.
This will take some time since the installer has to download all packages.
Then add the following to your .bashrc or .zshrc, depending on whether you use bash or zsh.
export PATH="/usr/local/texlive/2012/bin/x86_64-linux:$PATH"
export MANPATH="/usr/local/texlive/2012/texmf/doc/man:$MANPATH"
export INFOPATH="/usr/local/texlive/2012/texmf/doc/info:$INFOPATH"
export TEXMFHOME="/usr/local/texlive/2012/texmf"
export TEXMFCNF="/usr/local/texlive/2012/texmf/web2c"
I suggest adding these lines to root’s .bashrc or .zshrc as well.
If you want to update your TeX Live distribution later, you can now simply type:
tlmgr update --all

Spotify on Debian Wheezy (testing)

As I have a premium account for Spotify, I was disappointed to discover that the client does not install on a recent Debian testing installation.
It depends on libssl0.9.8, which is not available in Wheezy anymore. The problem is solved by adding a source for Debian stable, in addition to the Spotify source, to the file /etc/apt/sources.list:
    deb stable non-free
    deb stable non-free

After fetching the new sources with

    apt-get update

run

    apt-get install spotify-client

to install Spotify 🙂

Ubuntu One on Debian Wheezy


I tried to update the electron repulsion integral handler libint in our
quantum chemical code AICCM. But the linking against gmp failed on my
Ubuntu 12.04 LTS. After discovering that it would build on Debian Wheezy, I
decided to install Debian on my computer. Since I am using the Ubuntu One cloud
service to sync all my data across various machines, including
smartphone and tablet, I was disappointed to find out that there are no
packages available for Debian. I did not want to abandon the service since I am a paid subscriber; it is only $29.99 per year for 25 GB.

After searching through forums and trying
several failed approaches to use the binary packages from Ubuntu 12.04 I
decided to build it from the sources.

In this blog entry I would like to give a detailed description how to build and install Ubuntu One on a Debian system. If you have any questions or comments please do not hesitate to contact me.

Create a subdirectory in your $HOME where you want to install Ubuntu One:

mkdir $HOME/UbuntuOne
cd $HOME/UbuntuOne

and install the dependencies:

apt-get install python-twisted pyqt4-dev-tools bzr python-lazr.restfulclient python-oauth python-pyinotify python-protobuf gnome-common gobject-introspection xutils-dev libnautilus-extension-dev libgconf2-dev libebook1.2-dev gnome-settings-daemon-dev python-twisted-names python-libproxy python-distutils-extra python-setuptools

There are two ways to obtain the source code (1b is recommended):

1a. From bazaar repository (latest development version)

bzr branch lp:configglue

bzr branch lp:dirspec
bzr branch lp:ubuntuone-client
bzr branch lp:ubuntuone-storage-protocol
bzr branch lp:ubuntu-sso-client

If you use this option, omit the version numbers in the following steps.

1b. Download the tarball from launchpad (latest stable version)


If you have downloaded the latest stable version, you have to extract all downloaded archives:

tar -zxvf configglue-1.0.3.tar.gz
tar -zxvf dirspec-3.0.0.tar.gz
tar -zxvf ubuntuone-client-3.0.1.tar.gz
tar -zxvf ubuntuone-storage-protocol-3.0.0.tar.gz
tar -zxvf ubuntu-sso-client-1.3.3.tar.gz

2. Set the $PYTHONPATH

As most parts of the Ubuntu One client are written in Python, you need to add the folders to your $PYTHONPATH in your shell configuration file: .zshrc if you use zsh, .bashrc if you use bash. The files live in your $HOME directory. If you do not know which shell you use, it is probably bash; in that case replace .zshrc with .bashrc in the following.

So open the file in your favorite editor (I use vim):

vim ~/.zshrc

And add the following lines:
export PYTHONPATH="$HOME/UbuntuOne/configglue-1.0.3:$PYTHONPATH"
export PYTHONPATH="$HOME/UbuntuOne/dirspec-3.0.0:$PYTHONPATH"
export PYTHONPATH="$HOME/UbuntuOne/ubuntuone-client-3.0.1:$PYTHONPATH"
export PYTHONPATH="$HOME/UbuntuOne/ubuntuone-storage-protocol-3.0.0:$PYTHONPATH"
export PYTHONPATH="$HOME/UbuntuOne/ubuntu-sso-client-1.3.3:$PYTHONPATH"

The $PATH and $LD_LIBRARY_PATH variables need modification, too. So also add this:

# Ubuntu One PATH

export PATH="$HOME/UbuntuOne/ubuntu-sso-client-1.3.3/bin:$PATH"
export PATH="$HOME/UbuntuOne/ubuntuone-client-3.0.1/bin:$PATH"
export LD_LIBRARY_PATH="$HOME/UbuntuOne/ubuntuone-client-3.0.1/libsyncdaemon:$LD_LIBRARY_PATH"

Then reload the settings:

source ~/.zshrc

3. Build the client:

The next step is to build the client. Note that you do not need to run make install and pollute your $HOME 🙂

cd $HOME/UbuntuOne/configglue-1.0.3
python setup.py build
cd $HOME/UbuntuOne/dirspec-3.0.0
python setup.py build
cd $HOME/UbuntuOne/ubuntuone-storage-protocol-3.0.0
python setup.py build
cd $HOME/UbuntuOne/ubuntu-sso-client-1.3.3
python setup.py build

cd $HOME/UbuntuOne/ubuntuone-client-3.0.1
./configure && make

3. Get an auth token

Download this script from an Ubuntu One developer (Roman Yepishev) and run it to generate an auth token:

Ubuntu SSO Login: **your Ubuntu SSO Login**
Password: **your Ubuntu SSO Password**

4. Copy the configuration file and add an auth token 

Create the directory and copy the syncdaemon configuration file:

mkdir -p ~/.config/ubuntuone
cp $HOME/UbuntuOne/ubuntuone-client-3.0.1/data/syncdaemon.conf $HOME/.config/ubuntuone/syncdaemon.conf
Open the file and add the auth token:

This part (3. & 4.) is best described here.

5. Wrapper files

To be able to run the client on a headless server or via ssh you need to create 3 files, put them in $HOME/bin and add that directory to your $PATH variable.

The three files (with suggested names u1sdtool-wrapper, syncdaemon-wrapper and sso-login-wrapper) look like this:

#!/bin/sh
# u1sdtool wrapper for headless Ubuntu One
ENVVAR=DBUS_SESSION_BUS_ADDRESS
if [ -z "$DBUS_SESSION_BUS_ADDRESS" ]; then
    # pick up the dbus address from a running syncdaemon, if any
    eval $(ps xe | grep "[u]buntuone-syncdaemon.*$ENVVAR" |
           sed -E "s/.*($ENVVAR=[^ ]+).*/\1/g")
    if [ -z "$DBUS_SESSION_BUS_ADDRESS" ]; then
        # Ubuntu One is not running and we don't have a dbus daemon
        eval `dbus-launch --sh-syntax`
    fi
fi
exec u1sdtool "$@"

#!/bin/sh
# syncdaemon wrapper for headless Ubuntu One
ENVVAR=DBUS_SESSION_BUS_ADDRESS
if [ -z "$DBUS_SESSION_BUS_ADDRESS" ]; then
    eval $(ps xe | grep "[u]buntuone-syncdaemon.*$ENVVAR" |
           sed -E "s/.*($ENVVAR=[^ ]+).*/\1/g")
    if [ -z "$DBUS_SESSION_BUS_ADDRESS" ]; then
        # Ubuntu One is not running and we don't have a dbus daemon
        eval `dbus-launch --sh-syntax`
    fi
fi
ubuntuone-syncdaemon $HOME/.config/ubuntuone/syncdaemon.conf &

#!/bin/sh
# ubuntu-sso-login wrapper for headless Ubuntu One
ENVVAR=DBUS_SESSION_BUS_ADDRESS
if [ -z "$DBUS_SESSION_BUS_ADDRESS" ]; then
    eval $(ps xe | grep "[u]buntuone-syncdaemon.*$ENVVAR" |
           sed -E "s/.*($ENVVAR=[^ ]+).*/\1/g")
    if [ -z "$DBUS_SESSION_BUS_ADDRESS" ]; then
        # Ubuntu One is not running and we don't have a dbus daemon
        eval `dbus-launch --sh-syntax`
    fi
fi
exec ubuntu-sso-login "$@"

6. Start the syncdaemon


7. Use the u1sdtool-wrapper

This shows you the available commands:

u1sdtool-wrapper --help

These might be the ones you need most:

u1sdtool-wrapper --status
u1sdtool-wrapper --current-transfers

u1sdtool-wrapper --list-folders


1. Add a folder to be synced (that is already in the cloud)


u1sdtool-wrapper --list-folders

Folder list:
  id=2ce31368-0a79-411e-XXX subscribed=False path=/home/mpei/Xoom
u1sdtool-wrapper --subscribe-folder=2ce31368-0a79-411e-XXX

Known issues:

1. network-manager reports that there is no network connection

I have configured my network manually via config files. u1sdtool-wrapper --status reported that I was not connected to the network. This was due to the fact that network-manager was telling everybody via dbus that the computer does not have an internet connection.
Uninstalling network-manager via 
apt-get purge network-manager

fixed the problem for me 🙂

IPython adds Web Interface to Python Programs or HowTo Run AICCM (ab initio Cyclic Cluster Model) from a Webbrowser

This December, the long awaited version 0.12 of IPython [1], an interactive Python shell, has been released. The major highlight of this release is the IPython Notebook, an interactive Python
interface running in the browser that connects to a running IPython kernel. In principle, this adds a web interface, called a Notebook, to every Python program. To understand how amazing this is, just imagine how much it would cost to develop a professional web interface for an application. The only thing you need is the Tornado Web Server [2].
I will quickly demonstrate how to install Tornado and IPython, and then show how to run a Python program from the web interface by running a quantum chemical calculation with AICCM.
AICCM (ab initio Cyclic Cluster Model) [3] is an open source, object oriented,
educational quantum chemical program written in Python with C extensions. It is developed by me and my colleagues in the group of Prof. Bredow at the Mulliken Center for Theoretical Chemistry at the University of Bonn. In the following HowTo it is assumed that AICCM is installed on your system. For download and installation instructions on AICCM please go to the AICCM website.
1. Installation of Tornado:
- Get the latest version (2.1.1) of Tornado from the Tornado website.
- Then unpack and install it:

tar -zxvf tornado-2.1.1.tar.gz
cd tornado-2.1.1
python setup.py build install --home=$HOME

- If you haven’t already done so, you need to add lib/python to your PYTHONPATH.

export PYTHONPATH=”$HOME/lib/python:$PYTHONPATH”

To make this permanent, add this line to your .bashrc or .zshrc, depending on the shell you use.

2. Installation of IPython
- Get IPython 0.12
- Install IPython:

tar -zxvf ipython-0.12.tar.gz
cd ipython-0.12
python setup.py build install --home=$HOME

3. Start IPython with the web interface enabled:
ipython notebook
IPython starts a web browser displaying the dashboard where all Notebooks are listed.

Click on “New Notebook”. A new window opens where you can enter your Python code.

4. Running a calculation
A minimal input for AICCM that calculates the Hartree-Fock total energy of the nitrogen molecule is:

import ase
import aiccm

# nitrogen molecule with a bond length of 1.1 Angstrom
molecule = ase.Atoms('2N', [(0., 0., 0.), (0., 0., 1.1)])
calc_AICCM = aiccm.aseinterface.AICCM()
molecule.set_calculator(calc_AICCM)  # attach the calculator to the molecule
e_molecule = molecule.get_potential_energy()
print 'Total Energy: %5.2f Hartree' % e_molecule

Copy and paste this into the notebook cell in the browser. The code is executed by pressing CTRL+ENTER.

If the session is saved it appears in your IPython dashboard after you hit the reload button!

This is awesome! A big thanks to the developers of IPython!


Update to Ubuntu 11.10 breaks i386 applications (Crossover Office, Skype, etc)

The reason is that there are x86_64 packages for applications like Skype and Crossover Office that are in reality built for the i386 architecture, and Ubuntu does not know how to resolve their dependencies.
One solution is to use the i386 packages and let Ubuntu try to resolve the dependencies.
The second solution, which is the one I tried, is to enable multiarch. This is how it is done:

echo foreign-architecture i386 | sudo tee /etc/dpkg/dpkg.cfg.d/multiarch

Then install the required libraries for the i386 architecture:

sudo apt-get install libxss1:i386 libqtcore4:i386 libqt4-dbus:i386

Logitech MX1000 bluetooth mouse fails after system update

My good old Bluetooth mouse “Logitech MX1000 Laser” stopped working after the last system upgrade to Ubuntu 11.10. The same thing happened after the previous upgrade, and this time I decided to blog about it so it might help others.
The problem is that the dongle switches to Bluetooth mode and the system somehow does not manage to pair it successfully.
This is why it might be best to disable Bluetooth for this mouse. Mine is very old, so newer models may well behave better.
If you get tired of disabling Bluetooth in the applet every time with the help of a second mouse 😉, just do the following:

1. As root edit the file:

sudo vim /lib/udev/rules.d/62-bluez-hid2hci.rules

2. Find the lines:
# Logitech devices
KERNEL=="hiddev*", ATTRS{idVendor}=="046d", ATTRS{idProduct}=="c70[345abce]|c71[34bc]",
  RUN+="hid2hci --method=logitech-hid --devpath=%p"

and change them into:

KERNEL=="hidraw*", ATTRS{idVendor}=="046d", ATTRS{idProduct}=="c70[345abce]|c71[34bc]",
  RUN+="hid2hci --method=logitech-hid --devpath=%p"

After a reboot everything should be fine!

Winner of the Nobel Prize in Physics 2011 Saul Perlmutter in Bonn

The American astrophysicist and winner of the Nobel Prize in Physics 2011, Saul Perlmutter, visited Bonn as part of the ‘Supernova Factory’ collaboration, in which
Prof. Marek Kowalski of the Institute of Physics in Bonn takes part.
He gave a stunning lecture in the overcrowded Wolfgang Paul Lecture Hall.
It was his first public lecture after receiving this outstanding award together with
Brian P. Schmidt and Adam G. Riess “for the discovery of the accelerating expansion of the Universe through observations of distant supernovae”.