vibe-qc day 3: setting the public-release bar and the first convergence fix

Left: SCF energy vs iteration showing oscillation without level shift and smooth convergence with level shift b=0.3. Right: vibe-qc v0.4.0 public release roadmap.

vibe-qc is an open-source quantum chemistry and solid-state code — C++17 backend, Python frontend via pybind11, ASE Calculator interface — being written in the open one session at a time. Day 1 shipped the molecular stack and the first Ewald infrastructure. Day 2 closed out v0.2.0 — end-to-end 3D Ewald dispatch, bulk benchmarks, multi-k DIIS, periodic Becke partition — and opened v0.2.5 with SYM3a orbit-reduced lattice-integral storage. Day 3 was narrower by design: a release-target decision and the first piece of the next milestone. 746 tests in, 754 out.

Setting the public-release bar at v0.4.0

The first question of the day was where to draw the public-release line. Three candidates.

v0.2.0 is technically coherent — “3D periodic bulk HF/DFT with quantitative Ewald Coulomb” is a complete story, and we are 95% of the way there (the remaining piece is tightening the CRYSTAL cross-check witness bounds on LiH, NaCl, MgO, and Si). But shipping there would hand a user a code that oscillates the moment they try a metal or a narrow-gap oxide. The DIIS accelerator that works beautifully on insulators stalls on systems with near-degenerate occupied and virtual orbitals near the Fermi level. That is not a missing feature — it is a known failure mode with known remedies, and shipping before those remedies exist would damage the project’s credibility faster than it builds it.

v0.3.0 adds visualisation polish: properties, cube files, better band plots. Useful, but it does not fix the convergence problem. A code that looks nicer while still oscillating on MgO is not a more useful code.

v0.4.0 is where vibe-qc becomes “I can do real solid-state chemistry with this.” The convergence-tooling track (C1: level shifting, Fermi-Dirac smearing, second-order SCF) handles the metals and tight-gap systems. Phase 14 adds effective-core potentials via libecpint, opening the pob-* ECP basis sets for Rb through Lu — metal oxides, perovskites, lanthanoids. Phase 15 adds periodic UHF/UKS for magnetic systems and spin-polarised transition-metal compounds. That combination covers the tutorial cases that currently bounce off the wall, and it matches the capability level where researchers in solid-state chemistry will actually trust the results enough to publish. That is the bar. The full roadmap is on vibe-qc.com.

Left: the level-shift effect on a tight-gap periodic cell — unshifted SCF oscillates; $b = 0.3$ converges in 9 iterations to the same energy. Right: the v0.4.0 public-release roadmap, with today’s C1a already green.

Phase C1a: Saunders-Hillier level shift in periodic Ewald SCF

The first piece of the v0.4.0 convergence track shipped today. The Saunders-Hillier level shift (Saunders and Hillier, 1973; also Goedecker, 1996) modifies the Fock matrix before diagonalisation:

$$\tilde{\mathbf{F}} = \mathbf{F} + b\left(\mathbf{S} - \tfrac{1}{2}\mathbf{S}\mathbf{D}\mathbf{S}\right)$$

where $b$ is the shift parameter and $\mathbf{D}$ is the density matrix. The effect is to raise virtual MO eigenvalues by $b$ while leaving the occupied block untouched. This widens the effective HOMO-LUMO gap seen by the diagonaliser each iteration, suppressing the near-degenerate orbital swaps that cause oscillation in small-gap systems. The SCF fixed point is unchanged — at convergence, $\mathbf{F}\mathbf{D} = \mathbf{D}\mathbf{F}$ (in the $\mathbf{S}$-metric), so the shift term vanishes and the converged density is the same regardless of $b$.
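The occupied/virtual split the shift produces is easy to verify numerically. Here is a minimal sketch, assuming an orthonormal basis ($\mathbf{S} = \mathbf{I}$) and a random symmetric stand-in for the Fock matrix — toy numbers, not anything from the vibe-qc drivers:

```python
import numpy as np

rng = np.random.default_rng(0)
n, nocc, b = 6, 2, 0.3

# Symmetric model "Fock" matrix in an orthonormal AO basis (S = I).
A = rng.standard_normal((n, n))
F = 0.5 * (A + A.T)
eps, C = np.linalg.eigh(F)

# Closed-shell density matrix: occupation 2 in the lowest nocc MOs.
D = 2.0 * C[:, :nocc] @ C[:, :nocc].T

# Saunders-Hillier shift: F' = F + b (S - (1/2) S D S), with S = I here.
F_shift = F + b * (np.eye(n) - 0.5 * D)
eps_shift, _ = np.linalg.eigh(F_shift)

print(np.allclose(eps_shift[:nocc], eps[:nocc]))       # → True: occupied block untouched
print(np.allclose(eps_shift[nocc:], eps[nocc:] + b))   # → True: virtuals raised by b
```

Because the density here is built from eigenvectors of $\mathbf{F}$, the matrices commute and the split is exact; mid-iteration the shift instead biases the diagonaliser away from occupied/virtual swaps.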

That last point is the correctness contract the test suite validates directly: on a tight H₂ chain at $a = 8$ bohr (Γ point) and at $a = 10$ bohr with a $[2,2,2]$ k-mesh, $b = 0.3$ reproduces the $b = 0$ total energy to within $10^{-9}$ Ha. The reported mo_energies in the result object are the unshifted physical orbital energies — the final self-consistency pass at convergence omits the shift, so the orbital spectrum is independent of the $b$ value chosen during iteration. Choosing $b$ too large just slows convergence; choosing it too small provides no benefit on the difficult cases. A value between 0.2 and 0.5 Ha covers most practical situations.

The implementation is wired into both the Γ-Ewald (run_rhf_periodic_gamma_ewald3d) and multi-k Ewald (run_rhf_periodic_multi_k_ewald3d) drivers via a new level_shift field on PeriodicRHFOptions, PeriodicSCFOptions, and PeriodicKSOptions. The default is 0.0 — the field is purely opt-in, and existing callers see bit-for-bit identical results. The SCF convergence section of the user guide covers when and how to use it. Eight new tests: field exposure on all three option structs, Γ-Ewald and multi-k inertness contracts, MO eigenvalues physical at convergence, default behaviour preserved. The molecular RHF/UHF/RKS/UKS level-shift wiring (C1a-2) needs the analogous change in four C++ drivers and is deferred to a follow-up commit.

A segfault, triaged but not fixed

Honest engineering means mentioning this. A crash surfaced in the molecular gradient code: compute_gradient(Molecule, BasisSet, RHFResult) → overlap_gradient_contribution → null function pointer inside __kmp_invoke_microtask. Multiple OpenMP worker threads hitting pc = 0x0 is a strong signal of either a libint2 Engine dispatch table that was not initialised when freshly-copied engines raced into .compute() from worker threads, or an out-of-bounds access into the engine pool when nested or dynamic OpenMP ran more threads than the pool was sized for.

The existing 32 gradient tests all pass cleanly, so the crash triggers on a specific input pattern — most likely a basis set with shells exceeding the prototype’s max_l, or a thread-count change between pool construction and the parallel region. The crash binary’s offsets shifted after rebuild, making direct symbolisation unhelpful. This is deferred: waiting on a reproducer that pins the exact shell configuration. It will be fixed before v0.4.0 tags, because the gradient code is load-bearing for geometry optimisation and eventually periodic forces.

What is next

Two immediate priorities. First, C1b — Fermi-Dirac smearing with fractional occupations and an electronic-entropy contribution to the free energy. This is the single biggest credibility-unblocker for v0.4.0: it covers metals, oxide surfaces, and defect cells with mid-gap states that DIIS plus level shifting still cannot handle. Second, the segfault reproducer — once we have an input that reliably triggers the crash, the fix should be straightforward.

SYM3b (kernel-level compute reduction — the $|G|$-fold wall-clock speedup from space-group symmetry, not just the memory saving from SYM3a) remains open on the v0.2.5 track and will slot in once C1b is done. The CRYSTAL cross-check pass that formally closes v0.2.0 is also still outstanding.

The code is at vibe-qc.com. MPL 2.0.

When the tutorial is the product: how vibe-qc treats documentation as a first-class deliverable

Split view: theory section of a vibe-qc tutorial on the left, running Python code on the right

vibe-qc is an open-source quantum chemistry and solid-state code: a C++17 numerical backend, a Python frontend via pybind11, and an ASE Calculator interface that connects it to the wider atomistic simulation ecosystem. The code is being written in the open, one session at a time, with the cyclic cluster model — a method for treating solid-state defects at post-HF accuracy — as its eventual destination. But the thing that distinguishes vibe-qc from other projects at this stage is not its roadmap length. It is that the tutorials are treated as first-class deliverables, written concurrently with the code they document, and held to the same quality gate as the implementation itself.

This post is about why that choice was made and what it looks like in practice.

What a tutorial is, in this project

Every tutorial in vibe-qc follows the same structure: a working Python script, representative output, an embedded figure where the calculation produces something visual, a theory section (roughly half a page, three to six equations), a references block, and a pointer to the next tutorial in the sequence. None of these sections are optional. A tutorial without a theory section is a recipe. A tutorial without a working script is an essay. The combination is what makes it an educational artefact rather than documentation.

The theory sections are typeset with explicit equations. Since peintinger.com does not currently run a MathJax or KaTeX plugin, the equations live in the docs themselves, rendered by MathJax inside the Sphinx build at vibe-qc.com. For example, the molecular Hartree-Fock tutorial works through the Roothaan equations in matrix form, $\mathbf{F}\mathbf{C} = \mathbf{S}\mathbf{C}\boldsymbol{\varepsilon}$, where $\mathbf{F}$ is the Fock matrix, $\mathbf{C}$ the MO coefficient matrix, $\mathbf{S}$ the overlap matrix, and $\boldsymbol{\varepsilon}$ the diagonal matrix of orbital energies.

The dual character of a vibe-qc tutorial: theory on the left (Bloch theorem, dispersion relation, DOS formula, foundation references), running code on the right. Both sections are required; neither is optional.
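The tutorial's subject compresses to a few lines of NumPy: solve $\mathbf{F}\mathbf{C} = \mathbf{S}\mathbf{C}\boldsymbol{\varepsilon}$ by Löwdin symmetric orthogonalisation. The 2×2 matrices below are toy numbers, not output from the tutorial itself:

```python
import numpy as np

# Toy "Fock" and overlap matrices (illustrative values, not a real molecule).
F = np.array([[-1.0, -0.5], [-0.5, -0.7]])
S = np.array([[1.0, 0.4], [0.4, 1.0]])

# Löwdin symmetric orthogonalisation: X = S^(-1/2).
s, U = np.linalg.eigh(S)
X = U @ np.diag(s**-0.5) @ U.T

# Transform, diagonalise, back-transform to solve FC = SCe.
eps, Cp = np.linalg.eigh(X.T @ F @ X)
C = X @ Cp

print(np.allclose(F @ C, S @ C @ np.diag(eps)))   # → True: Roothaan equations satisfied
print(np.allclose(C.T @ S @ C, np.eye(2)))        # → True: MOs are S-orthonormal
```

The same transform-diagonalise-back-transform step sits inside every SCF iteration; the tutorial's theory half explains why, the code half shows where.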

The citation discipline matters as much as the equations. Foundation-era papers — Roothaan 1951, Bloch 1928, Becke 1988, Pulay 1980 — are cited with full journal-volume-page triples drawn directly from the original. Post-2010 results that are harder to verify from memory carry a [Ref: verify] marker in the draft and are only promoted to full citation after the volume and page have been confirmed against the journal. This is not the standard practice for informal software documentation. It is standard practice for a journal article, and the tutorials are being written to that standard deliberately.

The parity matrix as a dual roadmap

The roadmap page carries two distinct structures. The first is an engineering milestone list tracking Ewald phases, symmetry sub-phases, and versioned capabilities. The second is a tutorial parity matrix that pins vibe-qc against ORCA’s molecular tutorial set and CRYSTAL’s solid-state tutorial set — the two reference programs that cover the two halves of what vibe-qc is trying to do.

Table showing vibe-qc tutorial parity against ORCA and CRYSTAL reference programs
The tutorial parity matrix from vibe-qc.com/roadmap.html. Green means a tutorial covering this capability is shipped and cross-checked against the reference program. Yellow means partial. Red means the capability is on the roadmap but the tutorial does not yet exist.

The value of the parity matrix is that it answers two different questions from a single table. For a user evaluating whether vibe-qc can support their work, it gives a one-page answer: “can I reproduce the ORCA tutorial on geometry optimisation with this code?” For a developer deciding what to build next, it gives a prioritised queue: the red cells in the CRYSTAL column are where the periodic capability is missing and the tutorials cannot yet be written. The engineering roadmap and the documentation roadmap are the same document.

The framing is deliberately not competitive. The parity matrix does not claim vibe-qc exceeds ORCA or CRYSTAL at anything. It says, plainly, where vibe-qc is today relative to what those programs can teach, and what it would take to close the gap. That honesty is part of the project’s credibility.

Fourteen tutorials, all shipped today

As of the current tutorial index, fourteen tutorials are live. They run from molecular Hartree-Fock — the entry point, which teaches the Roothaan equations, basis set input, and how to read an SCF trace — through open-shell methods, geometry optimisation, vibrational frequencies, thermodynamics at finite temperature, orbital visualisation, basis set convergence, and dispersion corrections, and then into the periodic side: periodic HF on a 1D hydrogen chain, periodic KS-DFT, Madelung constants via Ewald summation, and band structure and density of states.

The band structure tutorial is worth looking at as the canonical example of what the format can carry. It delivers a full Γ-to-X k-path calculation on the hydrogen chain, plots bands and DOS in a single combined figure, explains the Bloch theorem, $\psi_{n\mathbf{k}}(\mathbf{r}) = e^{i\mathbf{k}\cdot\mathbf{r}}u_{n\mathbf{k}}(\mathbf{r})$, and the origin of band dispersion in terms of tight-binding bonding and antibonding combinations, cites Bloch 1928 and Monkhorst-Pack 1976, and ends with a pointer forward to the Peierls distortion. The code block is short enough to read in under a minute. The theory section is long enough to understand what the code is computing and why the result looks the way it does.
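The dispersion the tutorial explains has a closed form in the nearest-neighbour tight-binding picture, $E(k) = \varepsilon_0 - 2t\cos(ka)$, and a few lines reproduce its shape over the same Γ-to-X path (the parameters here are illustrative, not vibe-qc output):

```python
import numpy as np

# Nearest-neighbour tight-binding band for a 1D chain: E(k) = eps0 - 2 t cos(k a).
a, t, eps0 = 1.0, 1.0, 0.0          # illustrative lattice constant and hopping
k = np.linspace(0.0, np.pi / a, 101)   # Gamma (k = 0) to X (k = pi/a)
E = eps0 - 2.0 * t * np.cos(k * a)

print(E[0], E[-1])    # band edges: -2t at Gamma (bonding), +2t at X (antibonding)
print(E.max() - E.min())   # bandwidth 4t
```

The sign of the cosine is the whole bonding/antibonding story: the zone-centre combination is in-phase and lowest in energy, the zone-boundary combination alternates sign on every site and is highest.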

That rhythm — code, output, figure, theory, references, next — is the same across all fourteen tutorials. A student who works through them in order gets a coherent course in molecular and periodic quantum chemistry. A researcher who arrives at a specific tutorial looking for a recipe gets that too, because the code blocks are self-contained. The two use cases are not in tension; they are served by the same document structure.

The no-code-required backlog

One discipline the roadmap makes explicit is the distinction between tutorials that require new code and tutorials that require only documentation work on existing capability. Several entries in the tutorial backlog fall into the second category: the capability is already in the code, the API is stable, and the only work is writing the tutorial itself — the explanation, the example script, the figure, the theory section, the references.

This is worth naming because it is easy to defer. The code works; the tutorial can wait. What the project has committed to instead is treating those deferred tutorials as a first-class backlog item, tracked in the parity matrix, with the same visibility as unimplemented features. A shipped capability without a tutorial is a capability that most users will never find.

The user guide plays a complementary role here. It covers the reference-level detail — how to specify k-point meshes, how to configure Ewald parameters, how to read the memory budget report — that would clutter a tutorial but that a user needs once they move past the introductory examples. The split between tutorial and user guide is deliberate: tutorials teach phenomena and procedures, the user guide documents options and behaviour.

Plots as the lesson, not decoration

Every tutorial that produces a figure ships that figure embedded in the page. The figures are not screenshots pasted in after the fact. They are generated by scripts in the repository, regenerable by any user who runs the tutorial code, and updated whenever the underlying calculation changes.

This matters because the figures are doing pedagogical work that the code blocks cannot. A code block teaches syntax and API. A band structure plot teaches what band dispersion looks like, where the gap sits, how the DOS integrates the band structure into a density, and what it means for a state at the zone boundary to be antibonding. The band structure tutorial includes both panels — bands and DOS in a single combined figure — because showing them side by side is how a reader learns to interpret both simultaneously.

The same logic applies to the dispersion tutorial, which shows a binding curve with and without D3-BJ correction, making visible the underbinding failure of uncorrected GGAs on weakly-bound systems. The figure is the argument. The code block shows how to reproduce it.

The doc build as a gate

The documentation is built with sphinx-build -W --keep-going: warnings are errors, and the build does not finish if any cross-reference is broken or any code block fails to parse. This is the same discipline applied to the test suite — 746 passing tests as of the most recent commit — extended to the documentation layer. A tutorial that links to a non-existent API page, or a code block that uses a function name that was renamed, fails the build and blocks the merge.

This is not the default posture for scientific software projects. The default is to build docs as a separate step, treat failures as non-blocking, and accumulate rot over time. The cost of running docs under the same gate as code is low — a few minutes added to CI. The benefit is that the tutorials stay synchronized with the implementation without anyone having to remember to update them.

Where this fits in the larger project

vibe-qc is aiming at the cyclic cluster model: a method that treats a finite cluster of atoms embedded in a periodic Madelung field, enabling post-HF correlation calculations on solid-state defects at a cost that scales with cluster size rather than unit cell size. The cyclic cluster model is v2.0 on the roadmap. Every tutorial from the molecular HF entry point through the periodic band structure is load-bearing preparation for the moment when a researcher can open a CIF file, define a vacancy cluster, run CCSD(T) on it, and understand what the result means.

The tutorials exist not to fill a documentation requirement but because the project’s audience includes people who need to understand what they are computing, not just how to invoke the code. That is a different audience from “users who already know periodic HF/DFT and just need the API reference.” The user guide and API reference serve that second audience. The tutorials serve everyone, and they are the harder document to write well.

The code lives at vibe-qc.com. MPL 2.0.

vibe-qc day 2: closing out Ewald and laying the symmetry foundation


vibe-qc is an open-source quantum chemistry and solid-state code with a C++17 numerical backend, a Python frontend via pybind11, and an ASE Calculator interface for geometry optimization and structure I/O. It is being written in the open, one coding session at a time, with Claude Opus as the implementation engine and me steering at the architecture level. Day 1 shipped the molecular HF/DFT/MP2 stack (v0.1.0, April 18) and then pushed into periodic territory: one-electron lattice integrals, Gamma-only and multi-k RHF, multi-k KS-DFT, and the first Ewald infrastructure through Madelung-cancellation helpers. Day 2 closed out the v0.2.0 Ewald milestone and planted the first stake for v0.2.5 space-group symmetry. Four deliverables, 26 new tests (720 to 746 total), Sphinx docs building clean.

Phase 12e-c-4: end-to-end EWALD_3D dispatch

The previous session had the Ewald machinery in place but not yet wired to the user-facing entry points. Today that changed. Both run_rhf_periodic_scf(...) and run_rhf_periodic_gamma_scf(...) now branch on options.lattice_opts.coulomb_method: pass DIRECT_TRUNCATED and you get the existing C++ driver unchanged; pass EWALD_3D and you get the new Python driver, with nuclear_repulsion_per_cell automatically rerouted through Ewald summation as well. That last part matters: an Ewald electronic energy paired with a direct-truncation nuclear repulsion would disagree in both the long-range cancellation and the ω-dependence, and the resulting total energy would be meaningless. Electronic and nuclear sides now always agree on which summation scheme is in use.

The benchmark suite that shipped alongside covers H₂ crystal, LiH rocksalt, MgO rocksalt, and Ne FCC — 11 tests in total, checking convergence, $\omega$-invariance (energy independence of the Ewald splitting parameter), an equation-of-state scan, and the equivalence of a [1,1,1] multi-k mesh with a Gamma-only calculation. All pass.
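The ω-invariance being checked is a generic property of Ewald summation: once the real- and reciprocal-space sums are both converged, the total cannot depend on the splitting parameter. A self-contained point-charge illustration (a plain NaCl Madelung sum, nothing from vibe-qc's periodic SCF) shows the same invariance — two different splitting parameters, one answer:

```python
import numpy as np
from math import erfc, pi, sqrt, exp

def madelung_nacl(alpha, nmax=2, kmax=6):
    """NaCl Madelung constant by Ewald summation of +/-1 point charges."""
    a = 2.0                               # conventional cell; nearest-neighbour distance 1
    pos = np.array([[0,0,0],[1,1,0],[1,0,1],[0,1,1],   # cations
                    [1,0,0],[0,1,0],[0,0,1],[1,1,1]], float)
    q = np.array([1,1,1,1,-1,-1,-1,-1], float)
    V = a**3

    # Real-space sum: screened Coulomb over lattice translations.
    E_real = 0.0
    rng_n = range(-nmax, nmax + 1)
    for n1 in rng_n:
        for n2 in rng_n:
            for n3 in rng_n:
                shift = a * np.array([n1, n2, n3], float)
                for i in range(8):
                    for j in range(8):
                        r = np.linalg.norm(pos[i] - pos[j] + shift)
                        if r > 1e-12:
                            E_real += 0.5 * q[i] * q[j] * erfc(alpha * r) / r

    # Reciprocal-space sum over k != 0 with structure factor S(k).
    E_recip = 0.0
    rng_k = range(-kmax, kmax + 1)
    for m1 in rng_k:
        for m2 in rng_k:
            for m3 in rng_k:
                if m1 == m2 == m3 == 0:
                    continue
                k = (2.0 * pi / a) * np.array([m1, m2, m3], float)
                k2 = k @ k
                Sk = np.sum(q * np.exp(1j * (pos @ k)))
                E_recip += (2.0 * pi / V) * exp(-k2 / (4.0 * alpha**2)) / k2 * abs(Sk)**2

    E_self = -alpha / sqrt(pi) * np.sum(q**2)
    E_cell = E_real + E_recip + E_self    # four ion pairs per conventional cell
    return -E_cell / 4.0

M1, M2 = madelung_nacl(1.5), madelung_nacl(2.0)
print(M1, M2)   # both ≈ 1.747565: independent of the splitting parameter
```

In vibe-qc the invariance residual is a benchmark witness rather than a print statement, but the contract is the same one this toy demonstrates.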

One finding worth documenting: atom positions relative to the FFT Poisson grid origin are not numerically irrelevant. Atoms sitting at box corners, rather than centered in the cell, inflate the $\omega$-invariance residual by roughly two orders of magnitude. The physics is correct either way, but the numerical conditioning is not. This is now flagged in the docs and the benchmark suite uses centered geometries as the reference.

Phase 12e-c-4c-iii-c: multi-k Pulay DIIS

The multi-k Ewald driver shipped without DIIS. For loose cells and [1,1,1] meshes that was acceptable — pure density damping converged, slowly. For tight cells with denser k-meshes it was not. An H₂ chain at a = 10 bohr with a [2,2,2] mesh simply plateaued under damping and never converged.

The fix is Pulay DIIS extended to the Brillouin zone. The commutator error vector at each k-point is:

$$\mathbf{e}(\mathbf{k}) = \mathbf{F}(\mathbf{k})\mathbf{D}(\mathbf{k})\mathbf{S}(\mathbf{k}) - \mathbf{S}(\mathbf{k})\mathbf{D}(\mathbf{k})\mathbf{F}(\mathbf{k})$$

and the Pulay B matrix couples iterations i and j with a k-weighted inner product:

$$B_{ij} = \sum_{\mathbf{k}} w_{\mathbf{k}}\, \mathrm{Re}\,\mathrm{tr}\!\left[\mathbf{e}_i(\mathbf{k})^\dagger\, \mathbf{e}_j(\mathbf{k})\right]$$

The k-weights are the standard Monkhorst-Pack weights that already appear in the energy and density assembly, so no new parameters appear. The DIIS coefficients solve the same linear system as in the molecular case; the only change is that the scalar error measure is now a sum over the zone rather than a single matrix norm.
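In toy form — random complex matrices standing in for the per-k commutator errors, where the real driver builds them from $\mathbf{F}$, $\mathbf{D}$, $\mathbf{S}$ at each k-point — the k-weighted B matrix and the constrained coefficient solve look like this:

```python
import numpy as np

rng = np.random.default_rng(1)
nk, n, niter = 3, 4, 3
w = np.full(nk, 1.0 / nk)            # Monkhorst-Pack k-weights (uniform mesh)

# Toy stand-ins for the stored error vectors e_i(k) = F D S - S D F.
errs = (rng.standard_normal((niter, nk, n, n))
        + 1j * rng.standard_normal((niter, nk, n, n)))

# k-weighted Pulay B matrix: B_ij = sum_k w_k Re tr[e_i(k)^dagger e_j(k)].
B = np.zeros((niter, niter))
for i in range(niter):
    for j in range(niter):
        B[i, j] = sum(w[k] * np.real(np.trace(errs[i, k].conj().T @ errs[j, k]))
                      for k in range(nk))

# Augmented DIIS system enforcing sum_i c_i = 1 via a Lagrange multiplier.
A = np.zeros((niter + 1, niter + 1))
A[:niter, :niter] = B
A[:niter, -1] = A[-1, :niter] = -1.0
rhs = np.zeros(niter + 1)
rhs[-1] = -1.0
c = np.linalg.solve(A, rhs)[:niter]
print(c.sum())   # sums to 1: the DIIS normalisation constraint
```

The linear system is identical to the molecular case; the only multi-k ingredient is the weighted trace that collapses the zone into a single scalar per iteration pair.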

The numbers are clean. H₂ chain, [1,1,1]: 13 iterations under damping, 7 with DIIS. H₂ chain, [2,2,2]: previously non-converging under damping, now 4 iterations with DIIS. This is the same qualitative behavior the molecular DIIS delivered on H₂O back on day 1 (52 iterations to 9), just extended to k-space.

Phase 12f: periodic Becke partition for tight-cell DFT

The molecular Becke fuzzy-cell partition (J. Chem. Phys. 88, 2547, 1988) assigns each grid point a weight that partitions unity across atomic cells using a smooth step function of interatomic distances. In a periodic system with a tight unit cell, image atoms from neighboring cells fall within the smoothing radius of the step function, and ignoring them breaks the partition: the weights no longer sum to the cell volume.

The fix is to extend the partition denominator over home atoms plus image atoms within a user-specified radius. The implementation is wired into run_rks_periodic via two new options: PeriodicKSOptions.use_periodic_becke (boolean, default False) and becke_image_radius_bohr (float). Default behavior is unchanged from v0.1.0; the periodic extension is an explicit opt-in for users who know they are working in a tight cell.

The empirical witness is a 5-bohr cubic H₂ cell. Without periodic Becke, the molecular partition produces a total grid weight of 18,526 — wildly wrong for a cell with volume 125 bohr³. With periodic Becke, the total grid weight comes out as 126, matching V_cell = 125 bohr³ to within the grid quadrature error. The discrepancy between the two modes (18,526 vs 126) is not a subtle numerical issue; it is a categorical correctness failure of the molecular partition in tight periodic geometry. Any DFT calculation on a tight crystal without this correction is integrating a density that does not integrate to the correct electron count per cell.
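The mechanism is easy to demonstrate in one dimension. The sketch below is my own toy Becke partition, not the vibe-qc implementation: once image atoms enter the denominator, the home atoms stop claiming the whole weight at points near the cell boundary, which is exactly what the molecular partition gets wrong in a tight cell.

```python
import numpy as np

def step(mu):
    # Becke's smoothing function: p(mu) = 1.5 mu - 0.5 mu^3, iterated three times.
    for _ in range(3):
        mu = 1.5 * mu - 0.5 * mu**3
    return 0.5 * (1.0 - mu)

def becke_weights(point, centers):
    # Fuzzy-cell weights: w_a = P_a / sum_b P_b, P_a a product of step functions.
    P = np.ones(len(centers))
    for a, ra in enumerate(centers):
        for b, rb in enumerate(centers):
            if a != b:
                mu = (abs(point - ra) - abs(point - rb)) / abs(ra - rb)
                P[a] *= step(mu)
    return P / P.sum()

# Two atoms in a 1D cell of length L = 5, evaluated near the cell boundary.
L = 5.0
home = [1.0, 4.0]
images = [x + s for x in home for s in (-L, L)]
pt = 0.2

w_home_only = becke_weights(pt, home)
w_full = becke_weights(pt, home + images)
print(w_home_only.sum())   # home-only partition: the two home atoms claim everything
print(w_full[:2].sum())    # with images in the denominator, their share drops below 1
```

Scaled up to 3D with atom-centered quadrature grids, that misassigned weight is what inflates the 5-bohr H₂ cell from 125 to 18,526.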

Phase SYM3a: orbit-reduced storage for lattice integrals

The v0.2.5 symmetry milestone starts here. The one-electron lattice integrals $S(\mathbf{g})$, $T(\mathbf{g})$, $V(\mathbf{g})$ are indexed by a cell offset vector $\mathbf{g} = (g_1, g_2, g_3)$. For a system with space-group symmetry, many (g, atom-pair) combinations are related by point-group operations and carry the same numerical content up to a known rotation. SYM3a exploits this to compress storage.

The machinery builds on SYM2c, which already identifies atom-pair and cell-index orbits for arbitrary (including non-origin-fixed) structures like NaCl. SYM3a takes those orbits and stores one representative sub-block of shape $(n_{\mathrm{AO},a},\, n_{\mathrm{AO},b})$ per orbit, rather than a full $(n_{\mathrm{bf}} \times n_{\mathrm{bf}})$ block at every cell triple. A round-trip compress/reconstruct test on simple cubic Pm-3m He at STO-3G, with a 10 bohr cutoff, partitions 33 cell triples into 5 orbits and achieves 6.6× memory reduction. Reconstruction is exact to 10⁻¹⁶.
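The compress-and-reconstruct contract can be sketched in miniature. The toy below uses the simplest relation available, $S_{\mu\nu}(\mathbf{g}) = S_{\nu\mu}(-\mathbf{g})$, to store one block per $\{\mathbf{g}, -\mathbf{g}\}$ orbit in 1D; SYM3a's real orbits involve point-group rotations of the sub-blocks, but the shape of the machinery is the same:

```python
import numpy as np

rng = np.random.default_rng(3)
nbf = 4
gs = [-2, -1, 0, 1, 2]            # 1D cell-offset indices for illustration

# Full store: one (nbf x nbf) block per offset, obeying S(g) = S(-g)^T.
full = {}
for g in [0, 1, 2]:
    B = rng.standard_normal((nbf, nbf))
    if g == 0:
        B = 0.5 * (B + B.T)       # the g = 0 block is symmetric
    full[g], full[-g] = B, B.T

# Orbit-reduced store: one representative per {g, -g} orbit.
reduced = {g: full[g] for g in gs if g >= 0}

def reconstruct(g):
    # Rebuild any block from its orbit representative.
    return reduced[g] if g >= 0 else reduced[-g].T

print(all(np.array_equal(reconstruct(g), full[g]) for g in gs))   # → True: exact round trip
print(len(reduced), "blocks stored instead of", len(full))
```

Reconstruction here is exact because nothing was approximated, only deduplicated — the same reason the SYM3a round trip closes to 10⁻¹⁶.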

What SYM3a does not yet do is reduce compute: the integral kernels still evaluate every shell-pair times cell-triple combination before handing off to the compressor. That is SYM3b, the next item on the list. SYM3b is where the |G|-fold reduction hits wall-clock time rather than just peak memory, because the kernel skips shell-pair/cell-triple combinations that are equivalent under the point group instead of evaluating and then discarding them. The storage handle SYM3a provides is what SYM3b will plug into.

Where v0.2.0 stands

The v0.2.0 contract is “ω-invariant 3D periodic SCF that reduces to molecular HF in the loose-box limit and integrates the unit-cell volume correctly in tight cells.” As of today that contract is end-to-end testable with two option flags:

options.lattice_opts.coulomb_method = EWALD_3D
options.ks_opts.use_periodic_becke = True

Internal-consistency benchmarks are in place across four crystal systems. The remaining gating item for the v0.2.0 tag is a cross-check pass against published CRYSTAL reference energies on LiH, NaCl, MgO, and Si at published geometries. That is a validation exercise against an external reference rather than new implementation work, and it is next on the list.

What is next

Immediately: SYM3b, the kernel-level compute reduction that makes the |G|-fold saving from space-group symmetry show up in wall-clock time. The substrate — Wigner D-matrices (SYM1), AO permutation matrices and orbit identification (SYM2a/b), atom-pair-resolved orbits for non-origin-fixed structures (SYM2c), and now the compressed storage layer (SYM3a) — is all in place. SYM3b is the payoff.

Further out, the headline feature of the entire project remains the cyclic cluster model. The roadmap runs from v1.0 (feature-complete molecular and periodic HF/DFT/MP2) through v2.0 (HF-CCM in 3D) and on through a sequence of independently publishable steps: MP2-CCM, local-MP2-CCM, CCSD-CCM, and eventually projection-based embedding where a CCM-correlated central cluster sits inside a DFT-treated periodic environment. The capstone is multi-scale defect chemistry at chemical accuracy on systems with hundreds of atoms in the cluster. Every piece shipped in the first two days of this project is load-bearing for that goal.

The code is at vibe-qc.com. Licensed MPL 2.0.

The giants vibe-qc stands on: a guided tour of the stack

In my previous post I wrote about vibe-coding a quantum-chemical program in one day. The code lives at vibe-qc.com. That post closed with a compact acknowledgment that none of it would have been possible without decades of prior work by a lot of people. This post is where I pay that debt properly: every library we used, what it does, why it matters, and how to cite it.

If you are building anything similar, treat this as a reading list. If you are just curious what sits underneath a modern quantum chemistry code, keep scrolling. I have organized things by role rather than alphabetically, because the interesting story is the layered structure of the stack.

The numerical core

libint

This is the beating heart of vibe-qc. Every one- and two-electron Gaussian integral, and every analytic derivative of those integrals, is computed by libint. It is a C++ library whose source is partly handwritten and partly emitted by its own code generator (which is itself a substantial piece of C++). We build version 2.13.1 from source with max_am=5 (angular momentum up to h-functions) and first-order derivatives enabled. The source build takes about 10 minutes on an M1, and you only do it once per checkout.

Without libint, implementing two-electron repulsion integrals with the right recurrence relations would consume weeks on its own. With libint, it is a function call.

Valeev, E. F. Libint: A library for the evaluation of molecular integrals of many-body operators over Gaussian functions, Version 2.13.1, 2024. https://libint.valeyev.net/

libxc

libxc is the library that turns vibe-qc’s DFT layer from “LDA only” into a proper general-purpose DFT code with 500+ functionals available via aliases like "PBE", "B3LYP", or "BLYP". Under the hood it exposes a uniform C API for evaluating exchange-correlation energy densities and their derivatives with respect to density and density gradient. Hybrid functionals report their exact-exchange fraction through xc_hyb_exx_coef, which is exactly what an SCF driver needs to know.

The “right” way to expose functionals in a new DFT code is to wrap libxc, not to reimplement functionals yourself. We did the right thing here.

Lehtola, S.; Steigemann, C.; Oliveira, M. J. T.; Marques, M. A. L. Recent developments in libxc, a comprehensive library of functionals for density functional theory. SoftwareX 7, 1-5 (2018). https://doi.org/10.1016/j.softx.2017.11.002

Marques, M. A. L.; Oliveira, M. J. T.; Burnus, T. Libxc: a library of exchange and correlation functionals for density functional theory. Comput. Phys. Commun. 183, 2272-2281 (2012). https://doi.org/10.1016/j.cpc.2012.05.007

Eigen

All dense linear algebra in vibe-qc goes through Eigen: matrix products, eigendecompositions (for the Fock diagonalization), linear solves (for the DIIS coefficient equation), and the energy-weighted density matrix construction in the gradient code. Eigen is header-only, which keeps the build simple, and the expression-template design means the code reads like math.

Guennebaud, G.; Jacob, B.; et al. Eigen v3. 2010. https://eigen.tuxfamily.org

OpenMP

Included in the library list because the infrastructure is wired, although the hot loops in vibe-qc are currently serial. Adding OpenMP to the Fock build and the ERI-gradient loops is on the day-two todo list. OpenMP is the de-facto standard for shared-memory parallelism in scientific C++ and it is what we will use.

Bindings and build

pybind11

pybind11 is what makes vibe-qc feel like a Python library even though all the numerical work is in C++. The bindings expose the C++ classes (Molecule, BasisSet, RHFResult, etc.) to Python with zero-copy NumPy interop for the heavy arrays, and the native work releases the GIL via py::gil_scoped_release, so the Python frontend is not a scaling bottleneck.

Jakob, W.; Rhinelander, J.; Moldovan, D. pybind11: Seamless operability between C++11 and Python. 2017. https://github.com/pybind/pybind11

scikit-build-core

The modern Python build backend that lets pip install -e . drive a CMake build under the hood. This is the glue that makes vibe-qc feel like a normal Python package even though it compiles ~3500 lines of C++ and links against three native libraries.

CMake and Ninja

CMake generates the build, Ninja runs it. Ninja is considerably faster than make for projects with thousands of small files (libint’s code generator emits a lot of them).

Boost and GMP

These are libint’s build-time dependencies, not vibe-qc’s. Boost provides header-only utilities (MPL, Type Traits, Preprocessor) that libint’s code generator relies on. GMP provides arbitrary-precision rational arithmetic, which the generator needs to emit exact coefficients in the integral recurrence relations. You install them, libint builds, and you never think about them again.

Granlund, T.; the GMP development team. GNU MP: The GNU Multiple Precision Arithmetic Library, Edition 6.3.0, 2023. https://gmplib.org

Python runtime

NumPy

Every matrix and vector that crosses the C++/Python boundary becomes a NumPy array, usually as a zero-copy view of the underlying Eigen data. NumPy’s array protocol is the reason pybind11 can do that zero-copy handoff cleanly.

Harris, C. R. et al. Array programming with NumPy. Nature 585, 357-362 (2020). https://doi.org/10.1038/s41586-020-2649-2

ASE (Atomic Simulation Environment)

ASE deserves its own paragraph because it is the decision that turned vibe-qc from “a Python library with an SCF loop” into “a program chemists can actually use.” By subclassing ase.calculators.calculator.Calculator, vibe-qc inherits:

  • Structure file readers for XYZ, CIF, PDB, POSCAR, Gaussian inputs, and basically every format chemists touch.
  • Geometry optimizers (BFGS, FIRE, LBFGS, conjugate gradient, etc.) that just work once forces are implemented.
  • Consistent unit handling via ase.units.
  • Visualization through ase.visualize.
  • An interface that every other major code in the ecosystem (VASP, Quantum ESPRESSO, GPAW, etc.) also implements, so users who already know ASE know vibe-qc.

If I were starting another scientific code tomorrow, I would wire it into ASE on day one, same as we did here.

Larsen, A. H. et al. The atomic simulation environment, a Python library for working with atoms. J. Phys.: Condens. Matter 29, 273002 (2017). https://doi.org/10.1088/1361-648X/aa680e

Reference and validation

PySCF

PySCF is the cross-check oracle for everything vibe-qc does. Every RHF, UHF, and RKS energy, every MO coefficient, every gradient component in the 131-test suite is compared against a PySCF calculation on the same system with the same basis set. HF agreement is at machine precision (around 1e-14 Ha), DFT at grid accuracy (down to 5e-11 Ha for LDA, 1.3e-7 Ha for B3LYP on the default medium grid).

If you are writing a new quantum chemistry code in 2026, you should be validating against PySCF. It is open, well-tested, and actively maintained. There is no excuse for shipping numbers that have not been cross-checked.

Sun, Q. et al. Recent developments in the PySCF program package. J. Chem. Phys. 153, 024109 (2020). https://doi.org/10.1063/5.0006074

Sun, Q. et al. PySCF: the Python-based simulations of chemistry framework. WIREs Comput. Mol. Sci. 8, e1340 (2018). https://doi.org/10.1002/wcms.1340
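The comparison pattern behind those numbers is a plain tolerance assertion. A sketch using the H₂/STO-3G values reported later in this post; the PySCF call itself is omitted, and `assert_energy_match` is a hypothetical helper, not part of the vibe-qc test suite:

```python
import math

def assert_energy_match(e_test, e_ref, tol=1e-12):
    """Hypothetical helper: fail loudly if two total energies (Hartree) disagree beyond tol."""
    delta = abs(e_test - e_ref)
    assert math.isclose(e_test, e_ref, abs_tol=tol), f"|dE| = {delta:.3e} Ha"

# H2 / STO-3G at R = 1.4 bohr, using the numbers reported in this post:
e_vibeqc = -1.116714325063
e_pyscf = e_vibeqc + 1.87e-14  # the reported deviation from PySCF
assert_energy_match(e_vibeqc, e_pyscf)
```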

pytest

The validation suite runs under pytest: 131 tests, completing in about ten seconds, with every numerical claim expressed as an assertion against a PySCF reference.

Methods we implemented from the literature

These are not libraries, they are the papers whose formulas are transcribed into vibe-qc’s source code. If you look at the gradient code and wonder where the energy-weighted density matrix came from, the answer is Pople, Krishnan, Schlegel and Binkley 1979. If you look at the Becke partitioning in the grid module, the answer is Becke 1988.

Hartree-Fock and Roothaan equations

Szabo, A.; Ostlund, N. S. Modern Quantum Chemistry: Introduction to Advanced Electronic Structure Theory. Dover, 1996.

The Szabo-Ostlund textbook is what most of us learned this from. It is the book that makes the Roothaan equations, $\mathbf{F}\mathbf{C} = \mathbf{S}\mathbf{C}\boldsymbol{\varepsilon}$, actually click.
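A worked NumPy sketch of the standard symmetric-orthogonalization route through the Roothaan equations, on a mock Fock/overlap pair (illustrative, not vibe-qc's code):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 4
F = rng.standard_normal((n, n)); F = 0.5 * (F + F.T)          # mock symmetric Fock matrix
A = rng.standard_normal((n, n)); S = A @ A.T + n * np.eye(n)  # mock SPD overlap matrix

# Symmetric (Loewdin) orthogonalization: X = S^(-1/2) from the eigendecomposition of S
s, U = np.linalg.eigh(S)
X = U @ np.diag(s ** -0.5) @ U.T

# Transformed problem F' C' = C' eps with F' = X F X, then back-transform C = X C'
eps, Cp = np.linalg.eigh(X @ F @ X)
C = X @ Cp

# C solves the Roothaan equations F C = S C eps, with S-orthonormal columns
assert np.allclose(F @ C, S @ C @ np.diag(eps))
assert np.allclose(C.T @ S @ C, np.eye(n))
```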

DIIS (Direct Inversion of the Iterative Subspace)

Pulay, P. Convergence acceleration of iterative sequences. The case of SCF iteration. Chem. Phys. Lett. 73, 393-398 (1980).

Pulay, P. Improved SCF convergence acceleration. J. Comput. Chem. 3, 556-560 (1982).

The commutator error measure $\mathbf{e} = \mathbf{F}\mathbf{D}\mathbf{S} - \mathbf{S}\mathbf{D}\mathbf{F}$ drives the extrapolation. The 5-to-6x speedup we measured on H₂O (52 iterations down to 9) is pure Pulay.
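The extrapolation step itself is a small constrained linear solve: minimize the norm of the combined error vector subject to the coefficients summing to one. A self-contained sketch on toy error vectors (not vibe-qc's implementation):

```python
import numpy as np

def diis_coefficients(errors):
    """Pulay DIIS: minimize |sum_i c_i e_i| subject to sum_i c_i = 1 via the
    augmented system [[B, -1], [-1, 0]] [c, lam] = [0, -1], with B_ij = <e_i, e_j>."""
    m = len(errors)
    B = np.empty((m + 1, m + 1))
    for i, ei in enumerate(errors):
        for j, ej in enumerate(errors):
            B[i, j] = np.dot(ei.ravel(), ej.ravel())
    B[:m, m] = B[m, :m] = -1.0
    B[m, m] = 0.0
    rhs = np.zeros(m + 1)
    rhs[m] = -1.0  # Lagrange-multiplier row enforces sum(c) = 1
    return np.linalg.solve(B, rhs)[:m]

c = diis_coefficients([np.array([1.0, 0.0]), np.array([0.0, 0.5])])
assert abs(c.sum() - 1.0) < 1e-12  # the constraint holds
```

The extrapolated Fock matrix is then the same linear combination of the stored Fock matrices, $\mathbf{F}_{\mathrm{DIIS}} = \sum_i c_i \mathbf{F}_i$.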

Analytic HF gradients (Pople-Binkley formulation)

Pople, J. A.; Krishnan, R.; Schlegel, H. B.; Binkley, J. S. Derivative studies in Hartree-Fock and Moller-Plesset theories. Int. J. Quantum Chem. Symp. 13, 225-241 (1979).

The gradient formula from this paper is

$$\frac{dE}{dR_A} = \frac{dE_{\mathrm{nuc}}}{dR_A} + \mathrm{tr}\!\left(\mathbf{D}\frac{d\mathbf{H}}{dR_A}\right) + \frac{1}{2}\mathrm{tr}\!\left(\mathbf{D}\frac{d\mathbf{G}(\mathbf{D})}{dR_A}\right) - \mathrm{tr}\!\left(\mathbf{W}\frac{d\mathbf{S}}{dR_A}\right)$$

where $\mathbf{W}$ is the energy-weighted density matrix. If you have ever wondered why the gradient code wants $\mathrm{tr}(\mathbf{W}\cdot d\mathbf{S})$ instead of something involving orbital derivatives directly, this is why.

DFT integration grid

Treutler, O.; Ahlrichs, R. Efficient molecular numerical integration schemes. J. Chem. Phys. 102, 346-354 (1995). https://doi.org/10.1063/1.469408

The M4 radial mapping with Chebyshev-2nd-kind nodes. Treutler-Ahlrichs is the standard choice for a reason.
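As a sketch, the M4 mapping takes Chebyshev nodes of the second kind on (−1, 1) to radial points on (0, ∞). The formula below is my reading of the 1995 paper (exponent α = 0.6, element-dependent scale ξ); treat it as an assumption and verify against the paper before reusing:

```python
import numpy as np

def m4_radial_points(n, xi=1.0, alpha=0.6):
    """Sketch of the Treutler-Ahlrichs M4 mapping as I read the 1995 paper:
    r(x) = (xi / ln 2) * (1 + x)^alpha * ln(2 / (1 - x)),
    with x the Chebyshev nodes of the second kind and xi an element-dependent scale."""
    i = np.arange(1, n + 1)
    x = np.cos(i * np.pi / (n + 1))  # Chebyshev nodes of the second kind in (-1, 1)
    r = (xi / np.log(2.0)) * (1.0 + x) ** alpha * np.log(2.0 / (1.0 - x))
    return np.sort(r)

r = m4_radial_points(30)
assert np.all(r > 0) and np.all(np.diff(r) > 0)  # positive, strictly increasing
```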

Becke, A. D. A multicenter numerical integration scheme for polyatomic molecules. J. Chem. Phys. 88, 2547-2553 (1988). https://doi.org/10.1063/1.454033

Becke’s fuzzy-cell partitioning with the iterated (3/2)μ − (1/2)μ³ switch function is what makes the grid work for polyatomics.
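The switch function is small enough to show whole. Because the iterated polynomial is odd, the two-center cell weights partition unity automatically; a sketch of the idea (not vibe-qc's grid code):

```python
import numpy as np

def becke_cell_weight(mu, k=3):
    """Becke's switch: iterate p(x) = (3/2) x - (1/2) x^3 k times (k = 3 in the
    1988 paper), then s(mu) = (1 - f(mu)) / 2 is the unnormalized cell function."""
    f = mu
    for _ in range(k):
        f = 1.5 * f - 0.5 * f ** 3
    return 0.5 * (1.0 - f)

# Two-center case: mu_BA = -mu_AB, and oddness of f makes the cells partition unity.
mu = np.linspace(-0.99, 0.99, 7)
assert np.allclose(becke_cell_weight(mu) + becke_cell_weight(-mu), 1.0)
assert becke_cell_weight(np.array([0.0]))[0] == 0.5  # midpoint splits evenly
```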

Basis sets

Pritchard, B. P.; Altarawy, D.; Didier, B.; Gibson, T. D.; Windus, T. L. New Basis Set Exchange: An Open, Up-to-date Resource for the Molecular Sciences Community. J. Chem. Inf. Model. 59, 4814-4820 (2019). https://doi.org/10.1021/acs.jcim.9b00725

libint ships a snapshot of the Basis Set Exchange collection. 90 sets including the STO-3G, 3-21G, 6-31G, 6-311G, cc-pVXZ, and def2 families, plus the JKFIT, JFIT, and CABS auxiliary sets.

vibe-qc also has infrastructure for dropping in the pob-TZVP family, which was developed by our group for periodic solid-state calculations but applies equally well on the molecular side:

Peintinger, M. F.; Oliveira, D. V.; Bredow, T. Consistent Gaussian basis sets of triple-zeta valence with polarization quality for solid-state calculations. J. Comput. Chem. 34, 451-459 (2013). https://doi.org/10.1002/jcc.23153

Once the raw .g94 files are dropped into basis_library/custom/, the setup script picks them up automatically.

Exchange-correlation functionals (exposed as named aliases)

  • Slater exchange: Slater, J. C. A Simplification of the Hartree-Fock Method. Phys. Rev. 81, 385 (1951).
  • VWN5 correlation: Vosko, S. H.; Wilk, L.; Nusair, M. Accurate spin-dependent electron liquid correlation energies for local spin density calculations. Can. J. Phys. 58, 1200 (1980).
  • PBE: Perdew, J. P.; Burke, K.; Ernzerhof, M. Generalized Gradient Approximation Made Simple. Phys. Rev. Lett. 77, 3865 (1996).
  • B88 exchange: Becke, A. D. Density-functional exchange-energy approximation with correct asymptotic behavior. Phys. Rev. A 38, 3098 (1988).
  • LYP correlation: Lee, C.; Yang, W.; Parr, R. G. Development of the Colle-Salvetti correlation-energy formula into a functional of the electron density. Phys. Rev. B 37, 785 (1988).
  • B3LYP: Stephens, P. J.; Devlin, F. J.; Chabalowski, C. F.; Frisch, M. J. Ab Initio Calculation of Vibrational Absorption and Circular Dichroism Spectra Using Density Functional Force Fields. J. Phys. Chem. 98, 11623 (1994).

Development tooling

  • Claude Code (Anthropic). The coding agent that paired with me on this. https://www.anthropic.com/claude-code
  • GitLab (self-hosted at gitlab.peintinger.com). Version control host.
  • Homebrew. macOS package manager used for libxc, Eigen, Boost, GMP, CMake, and Ninja.

Closing thought

The list above is long, and that is the point. “Vibe-coding a quantum chemistry program in one day” is a true description of what happened, but it is a misleading one if you read it as “a quantum chemistry program was created from nothing in one day.” What actually happened is that a coding agent (pairing with someone who knows the field) threaded together the work of hundreds of authors spanning seventy-five years of physics, chemistry, numerical analysis, and software engineering.

The compressible part was the glue. The giants were already there, standing still, waiting to be stood on.

I vibe-coded a quantum-chemical program in one day

During my PhD at the Mulliken Center for Theoretical Chemistry in Bonn, I wrote a quantum-chemical code called AICCM, an object-oriented educational implementation of the Cyclic Cluster Model (CCM) at the Hartree-Fock level. If you want the formal reference, it is described in:

M. F. Peintinger, T. Bredow, The cyclic cluster model at Hartree-Fock level, Journal of Computational Chemistry 35 (11), 839-846 (2014). DOI: 10.1002/jcc.23550

That work took a good chunk of my PhD. Years of implementation, debugging, validating against CRYSTAL and other periodic codes, reading obscure papers on Wigner-Seitz integration and boundary handling of three- and four-center integrals. Real work. The kind that leaves scars.

Alongside AICCM I also developed the pob-TZVP basis sets, a consistent triple-zeta valence family tuned specifically for periodic solid-state calculations. They were built for CRYSTAL, but from the start I had AICCM in mind as a second home for them:

M. F. Peintinger, D. V. Oliveira, T. Bredow, Consistent Gaussian basis sets of triple-zeta valence with polarization quality for solid-state calculations, Journal of Computational Chemistry 34 (6), 451-459 (2013). DOI: 10.1002/jcc.23153

These basis sets matter for the story later in this post, because vibe-qc already has a slot in its basis library waiting for them.

This week I listened to a podcast on vibe-coding, and the thought stuck in my head: what happens if I hand a modern coding agent a task I genuinely understand, something deeply non-trivial, and then resist the urge to help code? Not a toy CRUD app. Not a weekend script. A quantum chemistry program. The kind of thing where a single wrong sign in a density matrix silently gives you garbage for three hours of debugging before you notice.

So I created a blank git repo on my GitLab server, called it vibe-qc, and gave Claude the task. Just the task. No starter code, no reference implementation to crib from, no hand-holding on the math. I steered at the architecture level and made scoping decisions, but I did not write code. Not one line.

What I expected vs. what happened

I honestly expected it to crash out somewhere around the two-electron integrals, or to produce a plausible-looking SCF loop that silently disagreed with the reference by a few millihartree. That is the usual failure mode when someone writes this code for the first time: it runs, it looks right, and the total energy is wrong in the fourth digit.

Instead, by the end of the day we had a working program that agreed with PySCF to machine precision. It did fall into one trap I found amusing: at one point it tried to run restricted Hartree-Fock on lithium. Lithium has three electrons. You cannot do closed-shell RHF on an odd number of electrons. You need an open-shell method. We added UHF a few commits later and the problem went away, but it is telling that an AI coding agent can reproduce the exact kind of mistake a first-year grad student makes.

What got built in one day

16 commits. Here is the inventory.

Architecture

Python frontend with a C++17 numerical core, bridged by pybind11. CMake plus scikit-build-core for the build system, so pip install -e . just works. About 3500 lines of C++ for integrals, SCF drivers, gradients, and DFT machinery. libint2 v2.13.1 built from source under third_party/libint/ with max_am=5 and first-order derivatives enabled. Homebrew libxc 7.0 for the 500+ exchange-correlation functionals.

The GIL is released during native work, so the Python frontend is not a bottleneck. I had asked about this up front and was reassured that Python would not become a restriction for multi-core scaling later.

Hartree-Fock, closed and open shell

  • RHF with Hcore or SAD initial guess, symmetric orthogonalization via $\mathbf{S}^{-1/2}$, optional density damping.
  • UHF with separate alpha and beta densities, reporting <S²> as a spin-contamination diagnostic. Closed-shell UHF collapses to RHF at 1e-14.
  • Pulay DIIS convergence accelerator. On H₂O we measured 52 iterations (with damping 0.5) collapsing to 9 iterations (with DIIS). Per-spin DIIS for UHF.
  • SAD initial guess (Superposition of Atomic Densities) that runs a fractional-occupation atomic SCF per unique element. This was added specifically after UHF hit a wrong local minimum on OH in 6-31G*. The fix was immediate.
  • Analytic nuclear gradients for both RHF and UHF via the Pople-Binkley formula with an energy-weighted density matrix. Verified against PySCF at 1e-8 Ha/bohr.
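For reference, the ⟨S²⟩ diagnostic in that list follows the standard UHF expression. A sketch with a hypothetical helper (not vibe-qc's code); the occupied MO coefficients are assumed S-orthonormal:

```python
import numpy as np

def s_squared(Ca_occ, Cb_occ, S):
    """Standard UHF <S^2> from occupied MO coefficients and the AO overlap S:
    <S^2> = Sz(Sz + 1) + N_beta - sum_ij |<phi_i^alpha | phi_j^beta>|^2.
    Hypothetical helper for illustration."""
    na, nb = Ca_occ.shape[1], Cb_occ.shape[1]
    sz = 0.5 * (na - nb)
    Sab = Ca_occ.T @ S @ Cb_occ  # alpha-beta occupied overlap matrix
    return sz * (sz + 1.0) + nb - np.sum(Sab ** 2)

# Sanity check: a doublet with identical spatial orbitals gives the ideal 0.75
C = np.eye(3)[:, :2]  # two orthonormal alpha orbitals in an orthonormal AO basis
assert abs(s_squared(C, C[:, :1], np.eye(3)) - 0.75) < 1e-12
```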

Density Functional Theory

  • Numerical integration grid: Treutler-Ahlrichs M4 radial (Chebyshev-2nd-kind nodes) combined with a Gauss-Legendre-in-cos(θ) and uniform-φ angular product, partitioned across atoms by Becke fuzzy cells with the iterated switch function. The default medium grid integrates the H-1s density to 1e-10.
  • AO evaluation on the grid: both $\chi_\mu(\mathbf{r})$ and $\nabla\chi_\mu(\mathbf{r})$ at every grid point, supporting Cartesian or pure spherical shells. Analytic AO gradients verified against finite differences at 1e-10.
  • libxc wrapper accepting alias names (“LDA”, “PBE”, “B3LYP”) or explicit comma-separated XC_… IDs, with hybrid fractions detected automatically.
  • RKS SCF driver with full energy decomposition $(E_{\mathrm{core}},\, E_J,\, \alpha E_K,\, E_{\mathrm{xc}},\, E_{\mathrm{nuc}})$. Matches PySCF to 5e-11 Ha on LDA and to grid accuracy (~1e-6 Ha) on B3LYP and PBE.

ASE integration

This was the decision I am most happy about in retrospect. Rather than write a custom input parser, we wired the code into ASE (the Atomic Simulation Environment) as a proper Calculator subclass. That gave us, for free:

  • Input readers for XYZ, CIF, PDB, POSCAR, Gaussian inputs, and basically every format chemists actually use.
  • Geometry optimization via ase.optimize.BFGS. H₂O in HF/STO-3G with a distorted starting geometry converges in 7 BFGS steps to r(OH) = 0.989 Å and angle HOH = 100.0°, exactly the known literature value.
  • Implicit method routing: HF with multiplicity 1 goes to RHF with forces, HF with multiplicity > 1 goes to UHF with forces, DFT with multiplicity 1 goes to RKS (energy-only at day’s end).
  • Clean unit handling via ase.units.Bohr and ase.units.Hartree.

Basis set library

90 standard Gaussian .g94 basis sets shipped by libint are exposed automatically (STO-3G through aug-cc-pVQZ, the def2 family, plus the JKFIT, JFIT, and CABS auxiliary sets). On top of that there is a basis_library/custom/ directory where a user can drop their own .g94 files. A setup script assembles the union, with custom files overriding standard ones by name. This path is ready for dropping in my own pob-TZVP and pob-TZVP-D basis sets (the ones I developed for solid-state calculations) once I dig up the files.
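The override rule is just "last writer wins by filename." A sketch of the union logic with illustrative names (not the actual setup script):

```python
from pathlib import Path

def assemble_basis_library(standard_dir, custom_dir):
    """Union of .g94 basis files keyed by name; a custom file with the same
    filename overrides the standard one. Illustrative sketch, not the setup script."""
    library = {p.stem.lower(): p for p in Path(standard_dir).glob("*.g94")}
    library.update({p.stem.lower(): p for p in Path(custom_dir).glob("*.g94")})
    return library
```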

Tests

131 tests across 13 files, full suite runs in about 10 seconds. Every numerical feature is cross-checked against PySCF as the reference. A representative slice of what actually passes:

  • H₂ / STO-3G at R = 1.4 bohr: E_HF = −1.116714325063 Ha, difference from PySCF = −1.87e-14.
  • H₂O / STO-3G at experimental geometry: E_HF = −74.964453863067 Ha, difference from PySCF = −2.84e-14.
  • OH doublet / STO-3G UHF: <S²> = 0.7533 (ideal 0.75, about 0.3% spin contamination).
  • H₂O / STO-3G / LDA: 5e-11 Ha vs PySCF.
  • H₂O / STO-3G / B3LYP: 1.3e-7 Ha (grid-accuracy limited).
  • CH₄ / 6-31G* / PBE: under 3e-6 Ha.

Documentation

A README with quick-start examples for both macOS and Linux, a detailed installation doc covering Homebrew, Debian/Ubuntu, Fedora/RHEL, and Conda paths, a quickstart tour, and a basis-library README. All written by the same agent, in one day, alongside the code.

What I deliberately did not add yet

UKS (unrestricted Kohn-Sham), analytic DFT gradients, ROHF, OpenMP inside the Fock and ERI-gradient loops, MPI for multi-node, post-HF correlation methods like MP2 and CCSD, and of course the actual cyclic cluster model. Every foundation for CCM is now in place: libint with periodicity hooks available, ASE’s CIF reader already wired in, and a validated HF/DFT SCF stack on finite molecules that I trust.

How I steered it

If you are thinking about trying this yourself, the interesting part was not the code, it was the scoping. A few patterns I noticed in my own behavior over the day:

Anticipatory infrastructure. Early in the morning, when the code was still just producing one-electron integrals, I made the decision to build libint from source with first-order derivatives enabled. Homebrew’s libint would have been faster to set up, but it did not include the derivative integrals. I knew we would need gradients for geometry optimization later, and I did not want to tear down and rebuild the install halfway through the day. That single 10-minute decision paid off twice: once for HF gradients, once for UHF gradients.

Validation before features. After RHF produced a plausible number, I pushed for formal pytest coverage against PySCF before we added DIIS. That caught two real bugs immediately: a Cartesian-vs-spherical d-orbital basis-purity issue, and a last-iteration MO self-consistency bug. Both were silent errors. Neither would have shown up if I had just eyeballed the total energy.

End-user surface before internals. We wired in the ASE Calculator and the logging integration before adding forces, before DFT, before SAD. The moment I caught the calculator running silently without emitting any trace, I paused the next phase and inserted a logging phase. It is much easier to debug a 50-iteration SCF when you can see the DIIS error norm falling.

Realistic descoping. My opening plan was “UHF and ROHF, then DFT.” By mid-afternoon it was “UHF only, ROHF can wait.” On the DFT grid, I accepted a Gauss-Legendre product angular grid instead of a proper Lebedev grid for the MVP. These were not compromises on correctness, they were compromises on sophistication, and they let us ship a working DFT implementation the same day.

Commit cadence. Every phase got its own focused commit, with a descriptive message, passing the full test suite. No force-pushes, no amends. 16 clean revertable checkpoints.

Standing on shoulders

None of this would have been possible in a day if the hard parts had not already been solved by other people. The integral engine is libint (Ed Valeev), the 500+ DFT functionals come from libxc (Susi Lehtola and co.), linear algebra is Eigen, the Python bindings are pybind11, the user-facing surface is ASE (Larsen et al.), and every single number in the test suite is cross-checked against PySCF (Qiming Sun et al.) as the reference oracle. On the methods side we are implementing formulas from Pulay (DIIS), Pople, Krishnan, Schlegel and Binkley (analytic HF gradients), Treutler and Ahlrichs (radial grid), and Becke (atomic partitioning). Full citations and a guided tour of the stack are in the companion post.

Watching it work was the best part

I need to say this plainly: I had a blast. I spent the day with Anthropic’s Claude Opus 4.7 as a pair-programmer, and watching the model work through this problem was genuinely thrilling. Every time I came back from a coffee or a meeting, there were new commits, clean ones, with descriptive messages, and a test suite that had grown and still passed. The SCF converged on H₂O. Then DIIS landed and the iteration count dropped from 52 to 9. Then gradients showed up and BFGS walked the water molecule to its equilibrium geometry. Then DFT arrived and B3LYP matched PySCF to grid accuracy. Each of those moments felt like a little gift.

What took me multiple years during my PhD, Opus 4.7 produced in a day. Not because vibe-qc is AICCM, it is not. AICCM implements real periodic CCM with four-center integrals at Wigner-Seitz boundaries, which is a genuinely hard problem and is not in vibe-qc yet. But the molecular HF + DFT + gradients + ASE integration layer, which in a traditional PhD timeline is itself a one- or two-year project, collapsed into a single afternoon. That is a kind of leverage I have never experienced before, and I am grinning about it.

The PhD was not wasted, quite the opposite. The reason I could steer this build effectively, and the reason I knew to demand a SAD guess the moment UHF stalled, and the reason I caught the lithium-on-RHF mistake instantly, is because I spent those years learning quantum chemistry from the inside. The expertise did not become obsolete. The expertise became the steering wheel, and it turns out steering is a deeply satisfying way to use what you know.

The really exciting part is what this unlocks. Ideas that used to be “interesting but I cannot justify six months to try it” suddenly become “let us see what happens this weekend.” The research questions I filed away after my PhD, the ones I always meant to come back to, are now within reach in a way they were not last year. That is a good day.

The next step, when I find another day, is the cyclic cluster model. That is the part where I find out whether the model can actually extend beyond well-trodden textbook ground into the corner of the literature that I personally carved out a decade ago. I am genuinely curious. I will report back.

The code lives at vibe-qc.com. If you want to reproduce the experiment yourself, the recipe is: a blank repo, a clear scoping brief, and the discipline to let the model cook.

Ginger Grape Juice Recipe

I recently bought a juicer from an Austrian brand with a slow, cold-press masticating mechanism. Amazon currently has a promotion offering a $20 coupon, making it really cheap compared to others. The results are great and it is easy to clean. Finally I can drink my veggies. I have played around with recipes from the web and created a few of my own (which is really not hard; you just end up every once in a while with juice you would rather drink later…). I make a large enough amount to drink one glass and store about two bottles.

Here’s the recipe. All ingredient amounts are also given in metric units.

  • Bowl of grapes (500g)
  • Two stalks of celery (75g)
  • 1 cucumber (280g)
  • 4 limes (125g)
  • Piece of ginger root (25g)
  • 3 apples (350g)
  • 1 kiwi (70g)



This is the juicer:

These are the glass bottles I use:

Upgrading NextCloud on a PLESK Server

I encountered timeouts and problems with NextCloud’s web updater. The command-line updater is super easy to use, and since nothing like this has appeared here yet, I think this is the way to go. I run a root server with PLESK and use PHP 7.3, installed via PLESK, for the cloud subdomain.

Here are all the commands:

Change directory to wherever your NextCloud installation is located:

[code] cd /var/www/vhosts/DOMAIN/cloud [/code]

Your USER would be www-data or the owner of the domain in PLESK:

[code]
sudo -u USER /opt/plesk/php/7.3/bin/php updater/updater.phar
[/code]

Say “yes” to start the upgrade.

When the updater asks whether it should also run occ upgrade, say no! The system PHP may be different from the one used by Plesk, so run it explicitly with Plesk’s PHP instead:

[code]
sudo -u USER /opt/plesk/php/7.3/bin/php occ upgrade
[/code]

This checks whether any database maintenance (missing column indices) needs to be taken care of, and adds them:

[code]
sudo -u USER /opt/plesk/php/7.3/bin/php occ db:add-missing-indices
[/code]
    [/code]

Then switch off maintenance mode:

[code]
sudo -u USER /opt/plesk/php/7.3/bin/php occ maintenance:mode --off
[/code]
    [/code]

Wine (CrossOver 18) and Microsoft Office 2016

I use Ubuntu (18.10) for work. For RDP I use a virtual machine running Windows 10. While LibreOffice is great for many things, its Microsoft Office compatibility is limited; PowerPoint presentations in particular do not look great. Google Slides does a great job here, and as long as I have an internet connection it is my preferred office choice, also thanks to its collaboration features. Sometimes I still need the original Microsoft Office applications. Running them in a VM is one option, but the filesystem integration, especially with my Nextcloud, is poor.

Years ago I ran an older version of Office under WINE. I was not in the mood to play around with the config, so I bought CrossOver 18 and gave Microsoft Office 2016 (32-bit) a try. The good thing is that the money finds its way to the WINE project.

Out of the box Word, Excel, Access, Outlook and Publisher seem to work. PowerPoint did not. It took me a while to figure out and Google did not help much, but after disabling ‘Performance Enhanced Graphics’ for the bottle it launches.

Yabause 0.9.15 and (K)Ubuntu 18.10

I own a Sega Saturn which I did not bring to the US; I really do not want to mess around with PAL/NTSC conversion and the power adapter. Luckily there is a great emulator for that old gaming console: Yabause.

Ubuntu and its variants ship Yabause 0.9.14. Version 0.9.15 has been out since 2016 and brings several bug fixes, which are listed in the changelog.

I decided to build it from the source code. That required a few adjustments, which I am writing down more or less for myself for future reference.

The source code can be downloaded from here.

Extract the archive and create a build directory:

cd Downloads
tar xvf ~/Downloads/yabause-0.9.15.tar.gz
cd yabause-0.9.15
mkdir build
cd build

Install the dependencies:

sudo apt install freeglut3-dev libglew-dev libopenal-dev qtbase5-dev qtmultimedia5-dev cmake cmake-gui libqt5multimedia5 libqt5gamepad5-dev


Note that I might be missing some dependencies that were already installed on my system but might not be installed on yours. If you encounter errors, I am happy to help out in the comments.

Add the missing includes to the header files:

At line 25 of ../src/qt/ui/UICheats.h add
#include <QButtonGroup>

At line 25 of ../src/qt/ui/UIHexInput.h add
#include <QValidator>

Configure:

cmake-gui

Click configure and generate; this is where missing libraries show up. Then run cmake with the qt port selected and the SH2 dynarec disabled (otherwise linking will fail):

cmake -DYAB_PORTS=qt -DSH2_DYNAREC=OFF ../

Build and install:

make
sudo make install
/usr/local/bin/yabause