vibe-qc day 2: closing out Ewald and laying the symmetry foundation

vibe-qc is an open-source quantum chemistry and solid-state code with a C++17 numerical backend, a Python frontend via pybind11, and an ASE Calculator interface for geometry optimization and structure I/O. It is being written in the open, one coding session at a time, with Claude Opus as the implementation engine and me steering at the architecture level. Day 1 shipped the molecular HF/DFT/MP2 stack (v0.1.0, April 18) and then pushed into periodic territory: one-electron lattice integrals, Gamma-only and multi-k RHF, multi-k KS-DFT, and the first Ewald infrastructure through Madelung-cancellation helpers. Day 2 closed out the v0.2.0 Ewald milestone and planted the first stake for v0.2.5 space-group symmetry. Four deliverables, 26 new tests (720 to 746 total), Sphinx docs building clean.

Phase 12e-c-4: end-to-end EWALD_3D dispatch

The previous session had the Ewald machinery in place but not yet wired to the user-facing entry points. Today that changed. Both run_rhf_periodic_scf(...) and run_rhf_periodic_gamma_scf(...) now branch on options.lattice_opts.coulomb_method: pass DIRECT_TRUNCATED and you get the existing C++ driver unchanged; pass EWALD_3D and you get the new Python driver, with nuclear_repulsion_per_cell automatically rerouted through Ewald summation as well. That last part matters: an Ewald electronic energy paired with a direct-truncation nuclear repulsion would be self-consistent in neither the long-range cancellation nor the ω-dependence, and the resulting total energy would be meaningless. Electronic and nuclear sides now always agree on which summation scheme is in use.
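
The dispatch itself is conceptually simple. A minimal Python sketch of the branching (the stand-in function names are hypothetical; only coulomb_method and the two enum values come from the text above):

```python
from enum import Enum, auto

class CoulombMethod(Enum):
    DIRECT_TRUNCATED = auto()   # existing C++ driver
    EWALD_3D = auto()           # new Python driver

# Stand-ins for the real nuclear-repulsion routines; the point being
# illustrated is that the electronic driver and the nuclear term are
# always selected together, never mixed.
def nuclear_repulsion_direct():
    return "E_nuc(direct)"

def nuclear_repulsion_ewald():
    return "E_nuc(ewald)"

def dispatch(coulomb_method):
    if coulomb_method is CoulombMethod.DIRECT_TRUNCATED:
        return ("scf_direct", nuclear_repulsion_direct())
    if coulomb_method is CoulombMethod.EWALD_3D:
        return ("scf_ewald", nuclear_repulsion_ewald())
    raise ValueError(f"unknown coulomb_method: {coulomb_method!r}")
```

A mixed pairing (Ewald electrons with direct-truncation nuclei) simply cannot be expressed through this entry point, which is the design intent.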

The benchmark suite that shipped alongside covers H₂ crystal, LiH rocksalt, MgO rocksalt, and Ne FCC — 11 tests in total, checking convergence, $\omega$-invariance (energy independence of the Ewald splitting parameter), an equation-of-state scan, and the equivalence of a [1,1,1] multi-k mesh with a Gamma-only calculation. All pass.

One finding worth documenting: atom positions relative to the FFT Poisson grid origin are not numerically irrelevant. Atoms sitting at box corners, rather than centered in the cell, inflate the $\omega$-invariance residual by roughly two orders of magnitude. The physics is correct either way, but the numerical conditioning is not. This is now flagged in the docs and the benchmark suite uses centered geometries as the reference.

Phase 12e-c-4c-iii-c: multi-k Pulay DIIS

The multi-k Ewald driver shipped without DIIS. For loose cells and [1,1,1] meshes that was acceptable — pure density damping converged, slowly. For tight cells with denser k-meshes it was not. An H₂ chain at a = 10 bohr with a [2,2,2] mesh simply plateaued under damping and never converged.

The fix is Pulay DIIS extended to the Brillouin zone. The commutator error vector at each k-point is:

$$\mathbf{e}(\mathbf{k}) = \mathbf{F}(\mathbf{k})\mathbf{D}(\mathbf{k})\mathbf{S}(\mathbf{k}) - \mathbf{S}(\mathbf{k})\mathbf{D}(\mathbf{k})\mathbf{F}(\mathbf{k})$$

and the Pulay B matrix couples iterations i and j with a k-weighted inner product:

$$B_{ij} = \sum_{\mathbf{k}} w_{\mathbf{k}}\, \mathrm{Re}\,\mathrm{tr}\!\left[\mathbf{e}_i(\mathbf{k})^\dagger\, \mathbf{e}_j(\mathbf{k})\right]$$

The k-weights are the standard Monkhorst-Pack weights that already appear in the energy and density assembly, so no new parameters appear. The DIIS coefficients solve the same linear system as in the molecular case; the only change is that the scalar error measure is now a sum over the zone rather than a single matrix norm.
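
A NumPy sketch of the k-weighted B matrix and the standard constrained solve (illustrative, not the vibe-qc source; array shapes and names are my own):

```python
import numpy as np

def diis_b_matrix(errors, weights):
    """Pulay B matrix with the k-weighted inner product
    B_ij = sum_k w_k Re tr(e_i(k)^dagger e_j(k)).

    errors : list over DIIS iterations; each entry is a list over
             k-points of complex error matrices e(k) = FDS - SDF.
    weights: Monkhorst-Pack k-point weights (summing to 1).
    """
    m = len(errors)
    B = np.zeros((m, m))
    for i in range(m):
        for j in range(m):
            B[i, j] = sum(
                w * np.real(np.trace(ei.conj().T @ ej))
                for w, ei, ej in zip(weights, errors[i], errors[j])
            )
    return B

def diis_coefficients(B):
    """Solve the usual Pulay system with the Lagrange-multiplier
    constraint sum_i c_i = 1; unchanged from the molecular case."""
    m = B.shape[0]
    A = np.zeros((m + 1, m + 1))
    A[:m, :m] = B
    A[m, :m] = A[:m, m] = -1.0
    rhs = np.zeros(m + 1)
    rhs[m] = -1.0
    return np.linalg.solve(A, rhs)[:m]
```

The extrapolated Fock matrices are then assembled per k-point with these shared coefficients, which is what ties the zone together.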

The numbers are clean. H₂ chain, [1,1,1]: 13 iterations under damping, 7 with DIIS. H₂ chain, [2,2,2]: previously non-converging under damping, now 4 iterations with DIIS. This is the same qualitative behavior the molecular DIIS delivered on H₂O back on day 1 (52 iterations to 9), just extended to k-space.

Phase 12f: periodic Becke partition for tight-cell DFT

The molecular Becke fuzzy-cell partition (J. Chem. Phys. 88, 2547, 1988) assigns each grid point a weight that partitions unity across atomic cells using a smooth step function of interatomic distances. In a periodic system with a tight unit cell, image atoms from neighboring cells fall within the smoothing radius of the step function, and ignoring them breaks the partition: the weights no longer sum to the cell volume.

The fix is to extend the partition denominator over home atoms plus image atoms within a user-specified radius. The implementation is wired into run_rks_periodic via two new options: PeriodicKSOptions.use_periodic_becke (boolean, default False) and becke_image_radius_bohr (float). Default behavior is unchanged from v0.1.0; the periodic extension is an explicit opt-in for users who know they are working in a tight cell.
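
A minimal NumPy sketch of the extended partition (illustrative, not the C++ implementation; the image search here is a brute-force cube of lattice shifts rather than the real radius test):

```python
import numpy as np
from itertools import product

def becke_step(mu, k=3):
    # Becke's switch function, iterated k times (k = 3 in the 1988 paper)
    for _ in range(k):
        mu = 1.5 * mu - 0.5 * mu ** 3
    return 0.5 * (1.0 - mu)

def becke_weights(r, atoms, lattice=None, image_radius=0.0):
    """Partition-of-unity weights of grid point r over the home atoms.

    Without a lattice this is the molecular Becke partition.  With
    lattice/image_radius set, the cell products run over home atoms plus
    lattice images, and the weight of every image is folded back onto
    its home atom -- the periodic extension described above.
    """
    r = np.asarray(r, float)
    home = [(a, np.asarray(p, float)) for a, p in enumerate(atoms)]
    sites = home
    if lattice is not None:
        L = np.asarray(lattice, float)
        n = int(np.ceil(image_radius / np.linalg.norm(L, axis=1).min()))
        sites = [(a, p + np.asarray(s, float) @ L)
                 for s in product(range(-n, n + 1), repeat=3)
                 for a, p in home]
    P = np.empty(len(sites))
    for i, (_, pa) in enumerate(sites):
        cell = 1.0
        for j, (_, pb) in enumerate(sites):
            if i == j:
                continue
            mu = (np.linalg.norm(r - pa) - np.linalg.norm(r - pb)) \
                 / np.linalg.norm(pa - pb)
            cell *= becke_step(mu)
        P[i] = cell
    w = np.zeros(len(atoms))
    for (a, _), Pa in zip(sites, P):
        w[a] += Pa
    return w / P.sum()
```

In a loose box the image terms are all negligible and the molecular partition is recovered, which is why the default can stay unchanged.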

The empirical witness is a 5-bohr cubic H₂ cell. Without periodic Becke, the molecular partition produces a total grid weight of 18,526 — wildly wrong for a cell with volume 125 bohr³. With periodic Becke, the total grid weight comes out as 126, matching V_cell = 125 bohr³ to within the grid quadrature error. The discrepancy between the two modes (18,526 vs 126) is not a subtle numerical issue; it is a categorical correctness failure of the molecular partition in tight periodic geometry. Any DFT calculation on a tight crystal without this correction is integrating a density that does not integrate to the correct electron count per cell.

Phase SYM3a: orbit-reduced storage for lattice integrals

The v0.2.5 symmetry milestone starts here. The one-electron lattice integrals $S(\mathbf{g})$, $T(\mathbf{g})$, $V(\mathbf{g})$ are indexed by a cell offset vector $\mathbf{g} = (g_1, g_2, g_3)$. For a system with space-group symmetry, many (g, atom-pair) combinations are related by point-group operations and carry the same numerical content up to a known rotation. SYM3a exploits this to compress storage.

The machinery builds on SYM2c, which already identifies atom-pair and cell-index orbits for arbitrary (including non-origin-fixed) structures like NaCl. SYM3a takes those orbits and stores one representative sub-block of shape $(n_{\mathrm{AO},a},\, n_{\mathrm{AO},b})$ per orbit, rather than a full $(n_{\mathrm{bf}} \times n_{\mathrm{bf}})$ block at every cell triple. A round-trip compress/reconstruct test on simple cubic Pm-3m He at STO-3G, with a 10 bohr cutoff, partitions 33 cell triples into 5 orbits and achieves 6.6× memory reduction. Reconstruction is exact to 10⁻¹⁶.
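
The storage pattern can be sketched in a few lines (illustrative Python, not the C++ layer; orbit identification and the AO rotation matrices U are taken as given inputs, since SYM2c provides them):

```python
import numpy as np

class OrbitStore:
    """Orbit-compressed storage for lattice-integral sub-blocks.

    orbits maps each member key (cell triple g, atom pair) to a tuple
    (rep_key, U_a, U_b) such that the member's block equals
    U_a @ block[rep_key] @ U_b.T.
    """
    def __init__(self, orbits):
        self.orbits = orbits          # member -> (rep, U_a, U_b)
        self.rep_blocks = {}          # rep    -> stored representative block

    def compress(self, blocks):
        # keep exactly one sub-block per orbit representative; the rest
        # of the full block table can be discarded
        for member, (rep, _, _) in self.orbits.items():
            if rep not in self.rep_blocks:
                self.rep_blocks[rep] = blocks[rep]

    def block(self, member):
        # reconstruct any member block from its representative on demand
        rep, U_a, U_b = self.orbits[member]
        return U_a @ self.rep_blocks[rep] @ U_b.T
```

The memory saving is the ratio of members to representatives (33 cell triples into 5 orbits in the He test), and reconstruction is a pair of small matrix products.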

What SYM3a does not yet do is reduce compute: the integral kernels still evaluate every shell-pair times cell-triple combination before handing off to the compressor. That is SYM3b, the next item on the list. SYM3b is where the |G|-fold reduction hits wall-clock time rather than just peak memory, because the kernel skips shell-pair/cell-triple combinations that are equivalent under the point group instead of evaluating and then discarding them. The storage handle SYM3a provides is what SYM3b will plug into.

Where v0.2.0 stands

The v0.2.0 contract is “ω-invariant 3D periodic SCF that reduces to molecular HF in the loose-box limit and integrates the unit-cell volume correctly in tight cells.” As of today that contract is end-to-end testable with two option flags:

options.lattice_opts.coulomb_method = EWALD_3D
options.ks_opts.use_periodic_becke = True

Internal-consistency benchmarks are in place across four crystal systems. The remaining gating item for the v0.2.0 tag is a cross-check pass against published CRYSTAL reference energies on LiH, NaCl, MgO, and Si at published geometries. That is a validation exercise against an external reference rather than new implementation work, and it is next on the list.

What is next

Immediately: SYM3b, the kernel-level compute reduction that makes the |G|-fold saving from space-group symmetry show up in wall-clock time. The substrate — Wigner D-matrices (SYM1), AO permutation matrices and orbit identification (SYM2a/b), atom-pair-resolved orbits for non-origin-fixed structures (SYM2c), and now the compressed storage layer (SYM3a) — is all in place. SYM3b is the payoff.

Further out, the headline feature of the entire project remains the cyclic cluster model. The roadmap runs from v1.0 (feature-complete molecular and periodic HF/DFT/MP2) through v2.0 (HF-CCM in 3D) and on through a sequence of independently publishable steps: MP2-CCM, local-MP2-CCM, CCSD-CCM, and eventually projection-based embedding where a CCM-correlated central cluster sits inside a DFT-treated periodic environment. The capstone is multi-scale defect chemistry at chemical accuracy on systems with hundreds of atoms in the cluster. Every piece shipped in the first two days of this project is load-bearing for that goal.

The code is at vibe-qc.com. Licensed MPL 2.0.

I vibe-coded a quantum-chemical program in one day

During my PhD at the Mulliken Center for Theoretical Chemistry in Bonn, I wrote a quantum-chemical code called AICCM, an object-oriented educational implementation of the Cyclic Cluster Model (CCM) at the Hartree-Fock level. If you want the formal reference, it is described in:

M. F. Peintinger, T. Bredow, The cyclic cluster model at Hartree-Fock level, Journal of Computational Chemistry 35 (11), 839-846 (2014). DOI: 10.1002/jcc.23550

That work took a good chunk of my PhD. Years of implementation, debugging, validating against CRYSTAL and other periodic codes, reading obscure papers on Wigner-Seitz integration and boundary handling of three- and four-center integrals. Real work. The kind that leaves scars.

Alongside AICCM I also developed the pob-TZVP basis sets, a consistent triple-zeta valence family tuned specifically for periodic solid-state calculations. They were built for CRYSTAL, but from the start I had AICCM in mind as a second home for them:

M. F. Peintinger, D. V. Oliveira, T. Bredow, Consistent Gaussian basis sets of triple-zeta valence with polarization quality for solid-state calculations, Journal of Computational Chemistry 34 (6), 451-459 (2013). DOI: 10.1002/jcc.23153

These basis sets matter for the story later in this post, because vibe-qc already has a slot in its basis library waiting for them.

This week I listened to a podcast on vibe-coding, and the thought stuck in my head: what happens if I hand a modern coding agent a task I genuinely understand, something deeply non-trivial, and then resist the urge to help code? Not a toy CRUD app. Not a weekend script. A quantum chemistry program. The kind of thing where a single wrong sign in a density matrix silently gives you garbage for three hours of debugging before you notice.

So I created a blank git repo on my GitLab server, called it vibe-qc, and gave Claude the task. Just the task. No starter code, no reference implementation to crib from, no hand-holding on the math. I steered at the architecture level and made scoping decisions, but I did not write code. Not one line.

What I expected vs. what happened

I honestly expected it to crash out somewhere around the two-electron integrals, or to produce a plausible-looking SCF loop that silently disagreed with the reference by a few millihartree. That is the usual failure mode when someone writes this code for the first time: it runs, it looks right, and the total energy is wrong in the fourth digit.

Instead, by the end of the day we had a working program that agreed with PySCF to machine precision. It did fall into one trap I found amusing: at one point it tried to run restricted Hartree-Fock on lithium. Lithium has three electrons. You cannot do closed-shell RHF on an odd number of electrons. You need an open-shell method. We added UHF a few commits later and the problem went away, but it is telling that an AI coding agent can reproduce the exact kind of mistake a first-year grad student makes.

What got built in one day

16 commits. Here is the inventory.

Architecture

Python frontend with a C++17 numerical core, bridged by pybind11. CMake plus scikit-build-core for the build system, so pip install -e . just works. About 3500 lines of C++ for integrals, SCF drivers, gradients, and DFT machinery. libint2 v2.13.1 built from source under third_party/libint/ with max_am=5 and first-order derivatives enabled. Homebrew libxc 7.0 for the 500+ exchange-correlation functionals.

The GIL is released during native work, so the Python frontend is not a bottleneck. I had asked about this up front and was reassured that Python would not become a restriction for multi-core scaling later.

Hartree-Fock, closed and open shell

  • RHF with Hcore or SAD initial guess, symmetric orthogonalization via $\mathbf{S}^{-1/2}$, optional density damping.
  • UHF with separate alpha and beta densities, reporting <S²> as a spin-contamination diagnostic. Closed-shell UHF collapses to RHF at 1e-14.
  • Pulay DIIS convergence accelerator. On H₂O we measured 52 iterations (with damping 0.5) collapsing to 9 iterations (with DIIS). Per-spin DIIS for UHF.
  • SAD initial guess (Superposition of Atomic Densities) that runs a fractional-occupation atomic SCF per unique element. This was added specifically after UHF hit a wrong local minimum on OH in 6-31G*. The fix was immediate.
  • Analytic nuclear gradients for both RHF and UHF via the Pople-Binkley formula with an energy-weighted density matrix. Verified against PySCF at 1e-8 Ha/bohr.
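
The symmetric orthogonalization mentioned in the first bullet is a one-screen routine; a sketch of the Löwdin construction (not the vibe-qc source):

```python
import numpy as np

def lowdin_orthogonalizer(S):
    """X = S^(-1/2) via the eigendecomposition of the overlap matrix S.
    MO coefficients C = X @ C_prime then satisfy C.T @ S @ C = I."""
    evals, evecs = np.linalg.eigh(S)
    return evecs @ np.diag(evals ** -0.5) @ evecs.T
```

(In production code one would also screen near-zero overlap eigenvalues to handle linear dependence in the basis.)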

Density Functional Theory

  • Numerical integration grid: Treutler-Ahlrichs M4 radial (Chebyshev-2nd-kind nodes) combined with a Gauss-Legendre-in-cos(θ) and uniform-φ angular product, partitioned across atoms by Becke fuzzy cells with the iterated switch function. The default medium grid integrates the H-1s density to 1e-10.
  • AO evaluation on the grid: both $\chi_\mu(\mathbf{r})$ and $\nabla\chi_\mu(\mathbf{r})$ at every grid point, supporting Cartesian or pure spherical shells. Analytic AO gradients verified against finite differences at 1e-10.
  • libxc wrapper accepting alias names (“LDA”, “PBE”, “B3LYP”) or explicit comma-separated XC_… IDs, with hybrid fractions detected automatically.
  • RKS SCF driver with full energy decomposition $(E_{\mathrm{core}},\, E_J,\, \alpha E_K,\, E_{\mathrm{xc}},\, E_{\mathrm{nuc}})$. Matches PySCF to 5e-11 Ha on LDA and to grid accuracy (~1e-6 Ha) on B3LYP and PBE.
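
The product angular grid in the first bullet is easy to reproduce; a sketch (Gauss-Legendre nodes in cos θ crossed with a uniform φ grid, not the actual implementation):

```python
import numpy as np

def angular_grid(n_theta, n_phi):
    """Unit-sphere quadrature: Gauss-Legendre in cos(theta) x uniform phi.
    Returns (points, weights) with sum(weights) = 4*pi."""
    ct, w_ct = np.polynomial.legendre.leggauss(n_theta)   # nodes in cos(theta)
    phi = 2.0 * np.pi * np.arange(n_phi) / n_phi
    st = np.sqrt(1.0 - ct ** 2)
    pts = np.array([[sx * np.cos(p), sx * np.sin(p), cx]
                    for cx, sx in zip(ct, st) for p in phi])
    wts = np.repeat(w_ct, n_phi) * (2.0 * np.pi / n_phi)
    return pts, wts
```

A Lebedev grid achieves the same angular accuracy with fewer points, which is exactly the sophistication that was descoped for the MVP.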

ASE integration

This was the decision I am most happy about in retrospect. Rather than write a custom input parser, we wired the code into ASE (the Atomic Simulation Environment) as a proper Calculator subclass. That gave us, for free:

  • Input readers for XYZ, CIF, PDB, POSCAR, Gaussian inputs, and basically every format chemists actually use.
  • Geometry optimization via ase.optimize.BFGS. H₂O in HF/STO-3G with a distorted starting geometry converges in 7 BFGS steps to r(OH) = 0.989 Å and angle HOH = 100.0°, exactly the known literature value.
  • Implicit method routing: HF with multiplicity 1 goes to RHF with forces, HF with multiplicity > 1 goes to UHF with forces, DFT with multiplicity 1 goes to RKS (energy-only at day’s end).
  • Clean unit handling via ase.units.Bohr and ase.units.Hartree.
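
The implicit routing in the third bullet fits in a few lines; an illustrative sketch (function and string names are hypothetical, the logic is as described):

```python
def route_method(method, multiplicity):
    """Map (method, multiplicity) to an SCF driver, as described above."""
    if method == "HF":
        return "RHF" if multiplicity == 1 else "UHF"
    if method == "DFT":
        if multiplicity != 1:
            raise NotImplementedError("UKS was deliberately deferred")
        return "RKS"  # energy-only at day's end
    raise ValueError(f"unknown method: {method}")
```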

Basis set library

90 standard Gaussian .g94 basis sets shipped by libint are exposed automatically (STO-3G through aug-cc-pVQZ, the def2 family, plus the JKFIT, JFIT, and CABS auxiliary sets). On top of that there is a basis_library/custom/ directory where a user can drop their own .g94 files. A setup script assembles the union, with custom files overriding standard ones by name. This path is ready for dropping in my own pob-TZVP and pob-TZVP-D basis sets (the ones I developed for solid-state calculations) once I dig up the files.

Tests

131 tests across 13 files, full suite runs in about 10 seconds. Every numerical feature is cross-checked against PySCF as the reference. A representative slice of what actually passes:

  • H₂ / STO-3G at R = 1.4 bohr: E_HF = −1.116714325063 Ha, difference from PySCF = −1.87e-14.
  • H₂O / STO-3G at experimental geometry: E_HF = −74.964453863067 Ha, difference from PySCF = −2.84e-14.
  • OH doublet / STO-3G UHF: <S²> = 0.7533 (ideal 0.75, about 0.3% spin contamination).
  • H₂O / STO-3G / LDA: 5e-11 Ha vs PySCF.
  • H₂O / STO-3G / B3LYP: 1.3e-7 Ha (grid-accuracy limited).
  • CH₄ / 6-31G* / PBE: under 3e-6 Ha.

Documentation

A README with quick-start examples for both macOS and Linux, a detailed installation doc covering Homebrew, Debian/Ubuntu, Fedora/RHEL, and Conda paths, a quickstart tour, and a basis-library README. All written by the same agent, in one day, alongside the code.

What I deliberately did not add yet

UKS (unrestricted Kohn-Sham), analytic DFT gradients, ROHF, OpenMP inside the Fock and ERI-gradient loops, MPI for multi-node, post-HF correlation methods like MP2 and CCSD, and of course the actual cyclic cluster model. Every foundation for CCM is now in place: libint with periodicity hooks available, ASE’s CIF reader already wired in, and a validated HF/DFT SCF stack on finite molecules that I trust.

How I steered it

If you are thinking about trying this yourself, the interesting part was not the code, it was the scoping. A few patterns I noticed in my own behavior over the day:

Anticipatory infrastructure. Early in the morning, when the code was still just producing one-electron integrals, I made the decision to build libint from source with first-order derivatives enabled. Homebrew’s libint would have been faster to set up, but it did not include the derivative integrals. I knew we would need gradients for geometry optimization later, and I did not want to tear down and rebuild the install halfway through the day. That single 10-minute decision paid off twice: once for HF gradients, once for UHF gradients.

Validation before features. After RHF produced a plausible number, I pushed for formal pytest coverage against PySCF before we added DIIS. That caught two real bugs immediately: a Cartesian-vs-spherical d-orbital basis-purity issue, and a last-iteration MO self-consistency bug. Both were silent errors. Neither would have shown up if I had just eyeballed the total energy.

End-user surface before internals. We wired in the ASE Calculator and the logging integration before adding forces, before DFT, before SAD. The moment I caught the calculator running silently without emitting any trace, I paused the next phase and inserted a logging phase. It is much easier to debug a 50-iteration SCF when you can see the DIIS error norm falling.

Realistic descoping. My opening plan was “UHF and ROHF, then DFT.” By mid-afternoon it was “UHF only, ROHF can wait.” On the DFT grid, I accepted a Gauss-Legendre product angular grid instead of a proper Lebedev grid for the MVP. These were not compromises on correctness, they were compromises on sophistication, and they let us ship a working DFT implementation the same day.

Commit cadence. Every phase got its own focused commit, with a descriptive message, passing the full test suite. No force-pushes, no amends. 16 clean revertable checkpoints.

Standing on shoulders

None of this would have been possible in a day if the hard parts had not already been solved by other people. The integral engine is libint (Ed Valeev), the 500+ DFT functionals come from libxc (Susi Lehtola and co.), linear algebra is Eigen, the Python bindings are pybind11, the user-facing surface is ASE (Larsen et al.), and every single number in the test suite is cross-checked against PySCF (Qiming Sun et al.) as the reference oracle. On the methods side we are implementing formulas from Pulay (DIIS), Pople, Krishnan, Schlegel and Binkley (analytic HF gradients), Treutler and Ahlrichs (radial grid), and Becke (atomic partitioning). Full citations and a guided tour of the stack are in the companion post.

Watching it work was the best part

I need to say this plainly: I had a blast. I spent the day with Anthropic’s Claude Opus 4.7 as a pair-programmer, and watching the model work through this problem was genuinely thrilling. Every time I came back from a coffee or a meeting, there were new commits, clean ones, with descriptive messages, and a test suite that had grown and still passed. The SCF converged on H₂O. Then DIIS landed and the iteration count dropped from 52 to 9. Then gradients showed up and BFGS walked the water molecule to its equilibrium geometry. Then DFT arrived and B3LYP matched PySCF to grid accuracy. Each of those moments felt like a little gift.

What took me multiple years during my PhD, Opus 4.7 produced in a day. Not because vibe-qc is AICCM, it is not. AICCM implements real periodic CCM with four-center integrals at Wigner-Seitz boundaries, which is a genuinely hard problem and is not in vibe-qc yet. But the molecular HF + DFT + gradients + ASE integration layer, which in a traditional PhD timeline is itself a one- or two-year project, collapsed into a single afternoon. That is a kind of leverage I have never experienced before, and I am grinning about it.

The PhD was not wasted, quite the opposite. The reason I could steer this build effectively, and the reason I knew to demand a SAD guess the moment UHF stalled, and the reason I caught the lithium-on-RHF mistake instantly, is because I spent those years learning quantum chemistry from the inside. The expertise did not become obsolete. The expertise became the steering wheel, and it turns out steering is a deeply satisfying way to use what you know.

The really exciting part is what this unlocks. Ideas that used to be “interesting but I cannot justify six months to try it” suddenly become “let us see what happens this weekend.” The research questions I filed away after my PhD, the ones I always meant to come back to, are now within reach in a way they were not last year. That is a good day.

The next step, when I find another day, is the cyclic cluster model. That is the part where I find out whether the model can actually extend beyond well-trodden textbook ground into the corner of the literature that I personally carved out a decade ago. I am genuinely curious. I will report back.

The code lives at vibe-qc.com. If you want to reproduce the experiment yourself, the recipe is: a blank repo, a clear scoping brief, and the discipline to let the model cook.

Converting Microsoft Word Files (doc, docx) to reStructuredText (rst)

This article describes how to convert Microsoft Word documents to reStructuredText. Everything should be done within a temporary directory with simplified filenames. So let’s assume you want to convert ‘am.docx’ to reStructuredText. The Word document can contain images.
You need pandoc, unzip, and ImageMagick on your PATH (plus Sphinx if you want to check the build at the end).

A few simple steps:
  1. On the command line (either the old cmd or the PowerShell) go to the temporary directory that contains the Word document (e.g. C:\temp):
    cd c:\temp
  2. Convert ‘am.docx’ to ‘am.rst’ using pandoc
    pandoc.exe -f docx am.docx -t rst -o am.rst
  3. Extract the media files (e.g. images) from the Word document
    unzip .\am.docx

    and move them to the current working directory

    mv .\word\media .
  4. All image files should be in the same file format, so convert emf and gif files to png.
    cd media

    to jump into the directory, then

    dir

    to list all files.

    a) Either by hand:

    convert .\image2.gif .\image2.png
    convert .\image1.emf .\image1.png

    b) Or automatically by using mogrify (also part of ImageMagick):

    mogrify.exe -format png *.emf
    mogrify.exe -format png *.gif

  5. Clean up the originals:

    rm *.gif
    rm *.emf
  6. Do not forget to search and replace .emf and .gif with .png in the .rst file with the editor of your choice (gvim or notepad++)
  7. Check the build with a quick Sphinx project:
    run sphinx-quickstart (and follow the instructions)
    copy the .rst file over the main document in the source directory
    copy the media folder to source
    run “make.bat html” to build the website and check the result.
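
The search-and-replace step can also be scripted instead of done by hand in the editor; a small Python sketch (filenames as in the example above):

```python
import re
from pathlib import Path

def fix_image_extensions(rst_text):
    """Rewrite .emf and .gif image references to .png."""
    return re.sub(r"\.(emf|gif)\b", ".png", rst_text, flags=re.IGNORECASE)

# usage:
# rst = Path("am.rst")
# rst.write_text(fix_image_extensions(rst.read_text()))
```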

Python 3.4 and Django on Ubuntu 13.10

Why bother with Python versions?

I recently started a new project creating a web application. As I have a lot of Python programming experience I chose Python with Django over Ruby on Rails. At the beginning of a new project I prefer using the latest versions of the frameworks the application will depend on. Starting now with Python 2.7 would mean that sooner or later there would be additional work porting the codebase to Python 3. Yesterday, Python 3.4 was released. One of the biggest improvements is that it has pip already included which makes handling virtual environments and installing the latest release of Django really easy.

Building Python from source

The downside is that Linux distributions do not include the latest Python release yet. Most of them still ship with Python 2.7 as the default version. The next Fedora and Ubuntu releases might change that, but for now you need to compile it from source. Luckily that is not a hard task. Go to the download page and grab the latest Python release (recommended if you read this post later and a newer version has been released), or paste the following commands into a terminal.

First make sure you have everything installed to compile Python from source.

sudo apt-get install build-essential

Before downloading create a temporary directory to make the cleanup easier. At the end you can just delete “tmpPython”.

mkdir tmpPython
cd tmpPython
wget --no-check-certificate https://www.python.org/ftp/python/3.4.0/Python-3.4.0.tgz
tar xvf Python-3.4.0.tgz

After the archive is extracted, cd into the source directory. Create a directory to install to, then run configure, build and install.

cd Python-3.4.0
sudo mkdir /opt/Python34
./configure --prefix=/opt/Python34 && make -j4
sudo make install

Now you have Python 3.4 installed on your system.
Add the path containing the executable to your environment.

export PATH=/opt/Python34/bin:$PATH

Also make sure to add this line to your .bashrc file (or .zshrc if you’re using zsh).

echo 'export PATH=/opt/Python34/bin:$PATH' >> $HOME/.bashrc

Creating a virtual environment

Go to the directory where you want to create the virtual environment. I recommend /opt if you collaborate with others within the environment (you have to create everything with sudo) or your home directory if you work alone. Then run pyvenv to create it.
pyvenv-3.4 djangoEnv
source djangoEnv/bin/activate

The bash prompt changes to

(djangoEnv) mpei@earth /opt

and that means that you are now within this virtual environment.
This command shows you what you have installed:

pip freeze

Installing Django

Just use pip to install the latest version of Django and the django-extensions package. Inside the activated virtual environment, plain pip already points at the environment, so no sudo is needed:
pip install django django-extensions

And you’re done! You can check the installed versions by running “pip freeze” again. Maybe another blog post on Django and databases? Or the first steps in Django? We’ll see… bye bye!

Organizing C/C++ includes

After starting my new job, where I program in a big software project, I gave some thought to organizing includes and want to share a recommendation. Here is what I have come up with. As always, some things are obvious, some are not…

  1. You should only include what is necessary to compile the source code. Adding unnecessary
    includes means a longer compilation time, especially in large projects.
  2. Each header and corresponding source file should compile cleanly on its own. That
    means, if you have a source file that includes only the corresponding header, it
    should compile without errors. The header file should include not more
    than what is necessary for that.
  3. Try to use forward declarations as much as possible. If you’re using a
    class, but the header file only deals with pointers/references to
    objects of that class, then there’s no need to include the definition of
    the class. That is what forward declarations are designed for!

    // Forward declaration

    class MyClass;

  4. Note that some system headers might include others, but apart from a few
    exceptions there is no guarantee that they do. So if you need both
    <iostream> and <string>, include both, even if the code happens to
    compile with only one of them.
  5. To prevent multiple inclusion, with include loops and all such attendant horrors, use an #ifndef guard. (Names like _FOO_H, with a leading underscore followed by an uppercase letter, are reserved for the implementation, so prefer a plain FOO_H.)

    #ifndef FOO_H
    #define FOO_H
      …contents of header file…
    #endif

  6. The order in which the includes appear (system includes and user includes) is up to the coding standard you follow.
  7. If you have the choice of picking a coding standard regarding include order at
    the beginning of a new project, I recommend going from local to global, each
    subsection in alphabetical order. That way you avoid introducing
    hidden dependencies. If you reverse the order and, for example, myclass.cpp
    includes <string> before <myclass.h>, there is no way to catch
    at build time that myclass.h may itself depend on string. If someone later
    includes myclass.h in a file that does not need string, they will
    get an error that has to be fixed either in the cpp or in the header
    itself.
  8. So the recommended order would be:
    • header file corresponding to its .cpp file
    • headers from the same component
    • headers from other components
    • system headers

If you use the Eclipse IDE (which I highly recommend), you can use a very nice feature that helps you organize includes (“Source -> Organize Includes”).

Updating Eclipse

Eclipse 3.7 Indigo has been released! I had a lot of add-ons and did not want to reinstall all of them.

Is an update from 3.6 to 3.7 possible?

Yes! Go to “Install New Software” and add

“http://download.eclipse.org/releases/indigo”

to “Available Software Sites”. Check the existing repositories and change them from “Helios” to “Indigo”. Then check for updates.

It worked like a charm!