<?xml version="1.0" encoding="utf-8"?>
<feed xml:lang="en" xmlns="http://www.w3.org/2005/Atom"><title>Recent posts to blog</title><link href="https://sourceforge.net/p/wavepacket/cpp/blog/" rel="alternate"/><link href="https://sourceforge.net/p/wavepacket/cpp/blog/feed.atom" rel="self"/><id>https://sourceforge.net/p/wavepacket/cpp/blog/</id><updated>2024-05-12T10:32:00.846000Z</updated><subtitle>Recent posts to blog</subtitle><entry><title>Perils of compiling software under Windows</title><link href="https://sourceforge.net/p/wavepacket/cpp/blog/2024/05/perils-of-compiling-software-under-windows/" rel="alternate"/><published>2024-05-12T10:32:00.846000Z</published><updated>2024-05-12T10:32:00.846000Z</updated><author><name>Ulf Lorenz</name><uri>https://sourceforge.net/u/ulflor/</uri></author><id>https://sourceforge.net0953689b78cc12b36620c7531a8e26ea9416da20</id><summary type="html">&lt;div class="markdown_content"&gt;&lt;p&gt;I have been planning for years to write something in praise of vcpkg as making Windows compilation less unbearable. While &lt;a class="" href="https://sourceforge.net/p/wavepacket/cpp/wiki/WindowsBuild"&gt;building a binary package of&lt;br/&gt;
Wavepacket&lt;/a&gt; recently, I found, however, that there are also other issues when porting software from Unix, hence a shift in focus.&lt;/p&gt;
&lt;h3 id="introduction"&gt;Introduction&lt;/h3&gt;
&lt;p&gt;Only after programming C++ under Windows for some time did I fully grasp how much of a hacker's system Unix is. Once you have mastered a few concepts, such as how the linker works or which package to install for the library headers, adding external dependencies to your code becomes almost trivial, even for others. And things just work; you might spend ages blissfully unaware of problems such as ABI compatibility, symbol versioning, linker maps or the details of ELF symbol resolution.&lt;/p&gt;
&lt;p&gt;This merry life comes to an end as soon as you decide to compile anything under Windows. Fundamentally, this is caused by different requirements and ecosystems.&lt;/p&gt;
&lt;p&gt;Unix distributions thrive on Open Source Software (OSS). At their heart, they are a set of build scripts that download and build thousands of software packages. These packages are easily available through some package manager. Because of that, you can usually assume that every competent user has reasonably up-to-date system package versions and can install all dependencies.&lt;/p&gt;
&lt;p&gt;Windows, on the other hand, was meant as an operating system for proprietary software. You have some version of Windows, possibly outdated, on which you install possibly outdated proprietary software that may have been built for an earlier version of Windows. Even worse, the proprietary software may contain&lt;br/&gt;
proprietary components that are themselves outdated and have been built for yet another version of Windows.&lt;/p&gt;
&lt;p&gt;The net result of many of the subsequent design decisions is that building software on Windows is a moderate pain. This is, by the way, not restricted to OSS; commercial software has to fight these limitations, too. Let us have a look at a few consequences.&lt;/p&gt;
&lt;h3 id="problems-and-solutions-when-compiling-under-windows"&gt;Problems (and solutions) when compiling under Windows&lt;/h3&gt;
&lt;h5 id="1-no-package-management"&gt;1. No package management&lt;/h5&gt;
&lt;p&gt;Under Windows, the user is assumed to buy software and install it through the software's installer. Package management is not offered by the system itself.&lt;/p&gt;
&lt;p&gt;If you want to manage dependencies efficiently, your first step should be to use a package manager. Windows-native managers such as Chocolatey do not cover most development needs. For C/C++ development, you want a dedicated package manager that builds your direct and indirect dependencies from recent sources and makes them available to your build. From personal experience, I can recommend vcpkg, but I would assume that Conan is an equally good choice.&lt;/p&gt;
&lt;p&gt;The general approach of vcpkg is to offer a CMake (and Meson etc.) integration that mitigates many problems. For example, it sets prefix paths such that vcpkg-built packages are easily found by a package search, or it copies dlls into your output folder (see the next item).&lt;/p&gt;
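&lt;p&gt;For illustration, a minimal CMake setup with vcpkg might look like the following sketch; the package name (fmt), the target name and the vcpkg install path are just examples:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;# CMakeLists.txt: a vcpkg-provided dependency is found like any other package
find_package(fmt CONFIG REQUIRED)
target_link_libraries(my_app PRIVATE fmt::fmt)

# Configure with the vcpkg toolchain file (path is an example):
#   cmake -B build -S . -DCMAKE_TOOLCHAIN_FILE=C:/vcpkg/scripts/buildsystems/vcpkg.cmake
&lt;/code&gt;&lt;/pre&gt;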
&lt;h5 id="2-the-dynamic-loader-is-comparably-dumb"&gt;2. The dynamic loader is comparably dumb&lt;/h5&gt;
&lt;p&gt;If your code depends on other libraries, you need a dynamic loader to make these libraries available to your code at run time. As a rule of thumb, the Windows dynamic loader loads dlls only from the directory of the executable. It does not understand runpaths, nor does Windows have common directories like /usr/lib where the libraries that you need may have been installed by default (a slight simplification, but not relevant for your needs). Of course, user overrides for the dynamic loader like LD_LIBRARY_PATH do not exist, either.&lt;/p&gt;
&lt;p&gt;The only general solution for this limitation is to copy all dependencies into the directory of the executable. Even if you build a testrunner, you need to make sure that your library under test as well as all its dependencies are copied into the same directory as the testrunner. Welcome to the state of the art for professional software development under Windows!&lt;/p&gt;
&lt;p&gt;A common workaround is to build everything into one common output folder. Then you only need to copy the dependencies once, and they are available for all programs. Of course, this requires additional build-script infrastructure.&lt;/p&gt;
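&lt;p&gt;With CMake, for example, the common output folder can be a one-liner; this is a generic sketch, not taken from the Wavepacket build:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;# Put all executables and dlls (runtime artifacts) into one folder,
# so that the dynamic loader finds every dependency next to the binaries.
set(CMAKE_RUNTIME_OUTPUT_DIRECTORY ${CMAKE_BINARY_DIR}/bin)
&lt;/code&gt;&lt;/pre&gt;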
&lt;h5 id="3-the-dynamic-loader-may-get-in-the-way"&gt;3. The dynamic loader may get in the way&lt;/h5&gt;
&lt;p&gt;There is another problem with the dynamic loader and plugin architectures. You will usually encounter it if you build a (Python / Java / ...) extension module. Again skipping some complexities, the problem goes like this:&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;The dynamic loader identifies a dll by its filename.&lt;/li&gt;
&lt;li&gt;Dll filenames usually have no version number suffix; symlinks and the like also do not exist.&lt;/li&gt;
&lt;li&gt;A program can load each dll only once.&lt;/li&gt;
&lt;/ol&gt;
&lt;p&gt;Now imagine Python (which depends on sqlite.dll) loading an extension module that ships its own sqlite.dll. In general, this setup will crash because either Python or the extension module gets the wrong sqlite.dll. The same can even happen if two different extension modules need the same dll.&lt;/p&gt;
&lt;p&gt;The solution for this problem is called namespacing. If the dynamic loader is too dumb to distinguish different libraries with the same name, you make the library names unique. Hence the extension module would rename its sqlite.dll into something like sqlite_js9x81nsj.dll.&lt;/p&gt;
&lt;p&gt;Note that this situation is usually not a problem under Unix. Ignoring some details, each ELF shared object has its own list of dependencies to search for symbols, so a Python extension module would first look into its own sqlite.so before going up the hierarchy and looking into the dependencies of Python.&lt;/p&gt;
&lt;h5 id="4-licenses"&gt;4. Licenses&lt;/h5&gt;
&lt;p&gt;It should be clear by now that you cannot reasonably expect even advanced users to build your code under Windows. Instead, you have to build the code yourself, and distribute the resulting binary including all external dependencies.&lt;/p&gt;
&lt;p&gt;This opens a new can of worms: Licenses. You might not have noticed under Unix because &lt;em&gt;you&lt;/em&gt; only ever distribute your own source code. As soon as you distribute your binary, however, you need to fulfill all license obligations of the external dependencies.&lt;/p&gt;
&lt;p&gt;Make sure you supply all used licenses with the binary package. And make sure that you fulfill them. While doing so, you may come across enjoyable special cases; as a harmless example, the MSVC C++ standard libraries require you to adhere to the export restrictions of the United States. There may be worse conditions.&lt;/p&gt;
&lt;h5 id="5-miscellaneous-other-issues"&gt;5. Miscellaneous other issues&lt;/h5&gt;
&lt;p&gt;Last, but not least, Windows is simply a different platform, so you need to adapt your programming style.&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;When compiling a library, the Unix default is to expose all symbols (functions and globals) in the resulting shared object, while under Windows the default is to not expose any symbols. This difference in behavior makes programs difficult to port. You can change the defaults with flags (for example the CMake target property WINDOWS_EXPORT_ALL_SYMBOLS). However, if the code exports data, such as static variables, this workaround does not cut it, and you will need to explicitly declare the exports, as &lt;a class="" href="https://sourceforge.net/p/wavepacket/cpp/wiki/WindowsBuild/"&gt;I had to do with my underlying tensor library&lt;/a&gt;; a sketch of the usual export macro follows after this list.&lt;/li&gt;
&lt;li&gt;By default, what is a shared object file "mylib.so" under Unix is split into three different files under Windows: the runtime dependency "bin/mylib.dll", the debugging information "bin/mylib.pdb", and the import library with information for the linker, "lib/mylib.lib". For additional confusion, the latter has the same extension ".lib" as a static library.&lt;/li&gt;
&lt;li&gt;If you use the Windows API directly, always convert your strings into UTF-16 wide strings and use the "W" version of the Windows API functions. This guarantees consistent behavior; the narrow-string "Ansi" functions may depend on the user's localization. This is a leftover artefact of Windows' early support for Unicode.&lt;/li&gt;
&lt;/ul&gt;
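&lt;p&gt;As a sketch of the explicit exports mentioned in the first item (the macro and symbol names are invented for this example):&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;// mylib_export.h: the usual export/import macro pattern
#ifdef _WIN32
#  ifdef MYLIB_BUILDING                       // defined only while building mylib itself
#    define MYLIB_API __declspec(dllexport)
#  else
#    define MYLIB_API __declspec(dllimport)   // consumers import the symbols
#  endif
#else
#  define MYLIB_API                           // Unix: symbols are visible by default
#endif

MYLIB_API void doSomething();     // exported function
MYLIB_API extern int globalFlag;  // exported data needs the explicit declaration
&lt;/code&gt;&lt;/pre&gt;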
&lt;p&gt;Some other issues that I have never come across personally, but that you should be aware of:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Never pass memory ownership over Dll boundaries; allocation and deallocation must always be done in the same module. If you build a library that hands out allocated memory, offer an API to free this memory again (see the sketch after this list). The only exception is if you can guarantee that all affected modules are built against the same runtime. Failure to do so can lead to mystery crashes.&lt;br/&gt;
    The background is that different modules (possibly clients of your library) can be built against different runtimes that have different ideas about how to allocate memory. This problem is rather esoteric under Unix, where by default all shared objects use the same global libc. These differences can be a major pain when porting a library to Windows.&lt;/li&gt;
&lt;li&gt;You are more likely to encounter ABI compatibility problems under Windows, because the system has &lt;em&gt;not&lt;/em&gt; been compiled with essentially the same compiler. Plus, there are different calling conventions in use. Most likely, you will be fine if you consume or expose only C interfaces, but be aware of this issue.&lt;/li&gt;
&lt;/ul&gt;
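&lt;p&gt;A minimal sketch of the ownership rule from the first item, with invented names: whatever the library allocates, the library also frees.&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;// mylib.h: every allocating call has a matching deallocating call
extern "C" {
    struct Buffer;                        // opaque handle

    Buffer* mylib_createBuffer(int size); // allocated inside mylib.dll
    void mylib_freeBuffer(Buffer* buf);   // freed by the same module
}

// Client code never calls free() or delete on the returned pointer:
//   Buffer* buf = mylib_createBuffer(1024);
//   ...
//   mylib_freeBuffer(buf);
&lt;/code&gt;&lt;/pre&gt;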
&lt;h3 id="conclusions"&gt;Conclusions&lt;/h3&gt;
&lt;p&gt;This list of problems is by no means exhaustive, but it should cover the main pain points and give you enough&lt;br/&gt;
pointers and keywords to solve the issues.&lt;/p&gt;
&lt;p&gt;Personally, all the problems that I have also seen at work have made me into a very practical advocate of OSS. Once they are packaged, OSS components are easy to use and upgrade, usually make sense as a package, and do not require additional build infrastructure. This is notably different from the issues that we have at work with commercial software.&lt;/p&gt;&lt;/div&gt;</summary></entry><entry><title>Wavepacket 0.3.6 released</title><link href="https://sourceforge.net/p/wavepacket/cpp/blog/2024/05/wavepacket-036-released/" rel="alternate"/><published>2024-05-09T14:39:21.283000Z</published><updated>2024-05-09T14:39:21.283000Z</updated><author><name>Ulf Lorenz</name><uri>https://sourceforge.net/u/ulflor/</uri></author><id>https://sourceforge.net1ddec3552b016897506c7abe8434315f6bd4ba12</id><summary type="html">&lt;div class="markdown_content"&gt;&lt;p&gt;The main focus of this release was an improved compilation, in particular being able to provide Windows binaries, given that this is a recurring issue.&lt;/p&gt;
&lt;h5 id="1-windows-binaries-and-simpler-compilation"&gt;1. Windows binaries and simpler compilation&lt;/h5&gt;
&lt;p&gt;Version 0.3.6 is the first to ship a Windows distributable, consisting of a Python package with associated documentation and demos. Everything is still somewhat crude (documentation, setup, required preparation), but at least it works.&lt;/p&gt;
&lt;p&gt;To get there, a host of other changes had to be made. Wavepacket now depends on an upgraded version of the tensor library (which builds rather easily on Windows). Another side effect was that I had to revisit how to build Wavepacket easily under any operating system. The upgraded Vagrantfile now provides a template for compiling all dependencies that should be rather portable to arbitrary systems.&lt;/p&gt;
&lt;h5 id="2-2d-contour-plots"&gt;2. 2D contour plots&lt;/h5&gt;
&lt;p&gt;To slowly improve the plotting capabilities, this release adds another plotting type. The plot is still somewhat simple compared to the Matlab version, but it is a first step towards more complex plotting setups.&lt;/p&gt;
&lt;h5 id="3-a-solver-based-on-faber-polynomials"&gt;3. A solver based on Faber polynomials&lt;/h5&gt;
&lt;p&gt;This one has been on my personal agenda for years. I wanted to implement a polynomial solver similar to the Chebychev solver, but suitable for open quantum systems. The new Faber solver is more difficult to use than the Chebychev solver because we need to specify a domain of convergence in the complex plane, not just an interval on the real/imaginary axis. However, the final result should be reasonably straightforward to use. See &lt;a href="https://wavepacket.sourceforge.io/cpp-doc/current/demo_faber.html" rel="nofollow"&gt;https://wavepacket.sourceforge.io/cpp-doc/current/demo_faber.html&lt;/a&gt; for the ugly details.&lt;/p&gt;
&lt;p&gt;Because most of the effort in using the Faber solver is spent on figuring out the support of the Hamiltonian under study, this issue also spawned some additional functionality to get extremal potential values, to truncate simple operators (DvrOperator / FbrOperator, which are diagonal in the DVR or FBR), and to estimate the maximum eigenvalue with the power method.&lt;/p&gt;
&lt;h5 id="4-smaller-features"&gt;4. Smaller features&lt;/h5&gt;
&lt;p&gt;As usual, other issues got fixed along the way. I would just refer to the detailed changelog under &lt;a href="https://sourceforge.net/p/wavepacket/cpp/git/ci/master/tree/NEWS"&gt;https://sourceforge.net/p/wavepacket/cpp/git/ci/master/tree/NEWS&lt;/a&gt; instead of repeating it here.&lt;/p&gt;
&lt;h5 id="outlook"&gt;Outlook&lt;/h5&gt;
&lt;p&gt;I currently plan to finish the API, which will introduce some breaking changes over the next releases, and to implement the remaining functionality to be roughly on par with the Matlab version in terms of basic features. I also have a mild interest in fleshing out the Python interface more, because Python offers a more natural trial&amp;amp;error approach to running Wavepacket simulations.&lt;/p&gt;&lt;/div&gt;</summary></entry><entry><title>Wavepacket 0.3.5 released</title><link href="https://sourceforge.net/p/wavepacket/cpp/blog/2023/04/wavepacket-035-released/" rel="alternate"/><published>2023-04-10T21:05:58.021000Z</published><updated>2023-04-10T21:05:58.021000Z</updated><author><name>Ulf Lorenz</name><uri>https://sourceforge.net/u/ulflor/</uri></author><id>https://sourceforge.netc9cedfd17b282d0bce0715606e97cfd4c55c68ff</id><summary type="html">&lt;div class="markdown_content"&gt;&lt;p&gt;This release took unusually long; I usually strive for about two releases per year. This one was delayed by some work on getting the underlying tensor library to use CMake, and by a rather discouraging set of issues that I mostly dropped in favor of an alternative approach. Anyway. What &lt;em&gt;did&lt;/em&gt; change?&lt;/p&gt;
&lt;h5 id="1-much-better-support-for-open-system-dynamics"&gt;1. Much better support for open-system dynamics&lt;/h5&gt;
&lt;p&gt;Wavepacket can propagate a density matrix with some system-bath coupling completely in the DVR. That is pretty neat, but, as it turned out, mostly useless.&lt;/p&gt;
&lt;p&gt;When you use a reduced density matrix for open-system dynamics, you always set up your Lindblad or Redfield Liouvillian in the energy eigenstate basis. There are two reasons for this. One is that you generally go for the Caldeira-Leggett model, where the bath consists of uncoupled harmonic oscillators at different frequencies, which defines the coupling in an energy eigenstate basis. The other reason is that you normally want relaxation to thermal equilibrium as a limiting case, which is difficult to express outside of an energy eigenstate basis.&lt;/p&gt;
&lt;p&gt;Version 0.3.5 offers functionality where you only need to supply a system Hamiltonian and a spectral density to get a Lindblad Liouvillian with a moderate amount of work. See the example MolVibration/OH/3 for a slightly more involved example of that. This scheme, however, does not deal with the Lamb shift, which can be relevant but is tricky to calculate in any case, so I will probably stop at the current level of assistance.&lt;/p&gt;
&lt;h5 id="2-more-consistent-names"&gt;2. More consistent names&lt;/h5&gt;
&lt;p&gt;I was not happy with some of the names, and sometimes even with the order of arguments. So there was a first round of API-breaking changes: consistent naming and ordering.&lt;/p&gt;
&lt;p&gt;For the complete list of breaking changes, see the NEWS file. Just to highlight a few:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;"PropagatorPrimitives" are now called "Solvers", so for example ChebychevPrimitive =&amp;gt; ChebychevSolver&lt;/li&gt;
&lt;li&gt;dropped useless prefixes and suffixes. For example, "Interval1D" is now called "Interval", and "PlotObserver" in the Python module "wavepacket.plot" is now called "Observer" (the module name already says that it is about plotting)&lt;/li&gt;
&lt;li&gt;switched the order of arguments in the constructor of "State": first grid, then the coefficients. That is the order used everywhere else, and I always mistyped that.&lt;/li&gt;
&lt;/ul&gt;
&lt;h5 id="3-build-cleanups"&gt;3. Build cleanups&lt;/h5&gt;
&lt;p&gt;I went over the (CMake) build and cleaned it up in a lot of places. There should be fewer test errors, generally more stability, a few fewer dependencies, etc.&lt;/p&gt;
&lt;p&gt;The most noticeable change is that I now try to use the installed googletest version before downloading something.&lt;/p&gt;
&lt;h5 id="4-smaller-features"&gt;4. Smaller features&lt;/h5&gt;
&lt;p&gt;As usual, some things cropped up along the way:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;new functors: Lorentzian and SoftRectangularFunction&lt;/li&gt;
&lt;li&gt;function to get the lowest N functions from an eigenstate solver, function to orthonormalize a set of states&lt;/li&gt;
&lt;li&gt;function to construct the adjoint of an operator&lt;/li&gt;
&lt;li&gt;made Python demos always produce plot output where possible&lt;/li&gt;
&lt;/ul&gt;
&lt;h4 id="outook"&gt;Outook&lt;/h4&gt;
&lt;p&gt;At work, one of the highlights in the last year has been to add a Python interface to some code. As a consequence, I am now way more familiar with Python than before, and would like to redo the Python integration. Also, the underlying tensor library has been adapted to CMake, which makes it way easier to compile, but requires code adjustments. Put together, I hope to finally provide a Python wheel, also for Windows, for the next release.&lt;/p&gt;
&lt;p&gt;Breaking changes will continue, there is one large set of changes (Python is very opinionated about snake_case), and a few smaller issues with consistency, but I hope to get over these within the next two releases and finally keep the API mostly stable.&lt;/p&gt;
&lt;p&gt;Otherwise, I am going to add a few more usability goodies. Wavepacket is already pretty usable, except for being slow when doing open systems and poor in the plotting department, so these would be the other focus areas.&lt;/p&gt;&lt;/div&gt;</summary></entry><entry><title>Wavepacket 0.3.4 released</title><link href="https://sourceforge.net/p/wavepacket/cpp/blog/2022/01/wavepacket-034-released/" rel="alternate"/><published>2022-01-11T20:13:23.124000Z</published><updated>2022-01-11T20:13:23.124000Z</updated><author><name>Ulf Lorenz</name><uri>https://sourceforge.net/u/ulflor/</uri></author><id>https://sourceforge.net510585e7061ea6535865e501ab044700c9909507</id><summary type="html">&lt;div class="markdown_content"&gt;&lt;p&gt;New features in this version:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;There is now a much simpler approach to build Wavepacket with a&lt;br/&gt;
  virtual machine.&lt;/p&gt;
&lt;p&gt;Install two programs (Vagrant + Virtualbox), modify a config file,&lt;br/&gt;
open a console, type three to four commands, and you can execute&lt;br/&gt;
Wavepacket Python code in your Web browser. For more details, see&lt;br/&gt;
README_VirtualMachine in the root directory. Needs some more&lt;br/&gt;
polishing, but should be reasonably plug&amp;amp;play.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Plotting in Python was improved considerably.&lt;/p&gt;
&lt;p&gt;Plots are still limited to one dimension right now, but you can set&lt;br/&gt;
the units to use for coordinates, energies etc., and it is possible&lt;br/&gt;
to save the plots as animations.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Added exponentiation of operators.&lt;/p&gt;
&lt;p&gt;This feature is still in its infancy, but at least basic&lt;br/&gt;
functionality for certain operators is there. This will allow&lt;br/&gt;
absorbing boundaries for the efficient Chebychev propagators or the&lt;br/&gt;
split operator method.&lt;/p&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;I also removed some temporaries to speed up the Chebychev&lt;br/&gt;
propagators, and did some cleanup that will help with a Windows&lt;br/&gt;
release.&lt;/p&gt;&lt;/div&gt;</summary></entry><entry><title>Dependency management with DotNet or Why you should use Paket</title><link href="https://sourceforge.net/p/wavepacket/cpp/blog/2021/09/dependency-management-with-dotnet-or-why-you-should-use-paket/" rel="alternate"/><published>2021-09-09T21:17:20.795000Z</published><updated>2021-09-09T21:17:20.795000Z</updated><author><name>Ulf Lorenz</name><uri>https://sourceforge.net/u/ulflor/</uri></author><id>https://sourceforge.nete606bc641e2ab7130cb721aef6990c1fb2c3cfcc</id><summary type="html">&lt;div class="markdown_content"&gt;&lt;p&gt;Today something completely different. I wanted to write about this topic for a long time, and it also has nice overlap with vcpkg (which I also wanted to write up for some time now), so I finally decided to just do it.&lt;/p&gt;
&lt;p&gt;This blog post was inspired by me learning more DotNet internals than I would normally care about. One of the rather frustrating experiences during this process was the lack of readily available information on how DotNet deals with dependencies. Hence this blog post.&lt;/p&gt;
&lt;p&gt;On second thought, maybe the information was there, but, perfectionist that I am, it took me some time to accept the state of the art.&lt;/p&gt;
&lt;p&gt;One word of warning: I have set up our DotNet code at work to use Paket for dependency management and never looked back at Nuget. It might be that the situation discussed here has improved by now.&lt;/p&gt;
&lt;p&gt;The whole article is licensed under the &lt;a class="" href="https://creativecommons.org/publicdomain/zero/1.0/" rel="nofollow"&gt;CC0 license&lt;/a&gt;. Copy parts, whatever. It would be nice to receive attribution, but in the end I do not care.&lt;/p&gt;
&lt;h3 id="shortest-possible-summary"&gt;Shortest possible summary&lt;/h3&gt;
&lt;p&gt;The three takeaway messages (and sections) of this post are:&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;Dependency management is hard.&lt;br/&gt;
    Be careful and make conscious decisions, or you will be hurt in places you did not even know.&lt;/li&gt;
&lt;li&gt;Dependency management in DotNet is harder.&lt;br/&gt;
    You are limited in your options, and it is rather easy to break things.&lt;/li&gt;
&lt;li&gt;Ditch Nuget (the tool), choose Paket instead.&lt;br/&gt;
    Nuget is complicated, lacks functionality, and is terribly brittle in edge cases.&lt;/li&gt;
&lt;/ol&gt;
&lt;h3 id="1-dependency-management-is-hard"&gt;1. Dependency management is hard&lt;/h3&gt;
&lt;p&gt;When you have a project, you typically decompose it into coherent modules. These can be executables, test runners, libraries etc. The modules expose a public API that allows other, consuming modules to use the encapsulated functionality. This public API changes over time, either in syntax or in behavior. Hence, modules always reference other modules by a specific version, either implicitly (e.g. "the version from this commit") or explicitly through some version number, e.g., from &lt;a class="" href="https://semver.org" rel="nofollow"&gt;semantic versioning&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;In almost all non-trivial cases, not all code will be written by you / your team, and you will rely on external modules for certain functionality. In this case, you need to make sure that these provide the correct API version for error-free consumption by your code. That is, you need to &lt;em&gt;manage&lt;/em&gt; external dependencies of your code.&lt;/p&gt;
&lt;p&gt;Note that in the following, I will switch to VisualStudio / DotNet speak. VisualStudio calls projects "solutions", and the individual modules "projects" with "assemblies" as build output. Finally, assemblies can be wrapped in a (usually Nuget) "package" that contains additional metadata, in particular dependencies on other packages.&lt;/p&gt;
&lt;h5 id="example-1"&gt;Example 1&lt;/h5&gt;
&lt;p&gt;Consider the trivial case of a program P depending on an external assembly E, as depicted in figure 1.&lt;/p&gt;
&lt;p&gt;&lt;img alt="Figure1" src="./attachment/figure1.png"/&gt;&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;&lt;em&gt;Figure 1:&lt;/em&gt;&lt;/strong&gt; &lt;em&gt;simple dependency of a program P on an external assembly E in a specific version.&lt;/em&gt;&lt;/p&gt;
&lt;p&gt;All you need to do here is to choose the correct version of E and you are done.&lt;/p&gt;
&lt;p&gt;The dependency is evaluated in two contexts: During the build, and at run time. During the build, the compiler looks for the assembly E and essentially verifies that all referenced symbols with the correct signature exist. When running the program, the DotNet runtime loads the assembly E with the correct version when needed and redirects calls to the corresponding symbols in E. In-between, the symbols are stored essentially as strings.&lt;/p&gt;
&lt;h5 id="example-2"&gt;Example 2&lt;/h5&gt;
&lt;p&gt;Let us choose a more complex example, where P depends on two assemblies A, B, and these in turn depend on an external assembly E. See figure 2 for the scheme.&lt;/p&gt;
&lt;p&gt;&lt;img alt="Figure2" src="./attachment/figure2.png"/&gt;&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;&lt;em&gt;Figure2:&lt;/em&gt;&lt;/strong&gt; &lt;em&gt;Fundamental diamond dependency conflict, where the program P depends on two assemblies A, B that reference an external dependency in different versions x.y.z and a.b.c, respectively.&lt;/em&gt;&lt;/p&gt;
&lt;p&gt;In practical situations, the dependency graph can be more complex, with transitive assemblies in between or more than two assemblies requiring E, but the diamond structure as the central feature is commonly found.&lt;/p&gt;
&lt;p&gt;To understand the principal problem, consider the following scenario. Let A be the output of an internal project that is under your control, while B is an external assembly that you only get in compiled form. You compile A against E in version x.y.z, and at runtime you also only provide E in version x.y.z to fulfill both dependencies from A and B. Let the versions x.y.z and a.b.c be different.&lt;/p&gt;
&lt;p&gt;Now assembly B calls a particular API (E in version a.b.c) but gets a &lt;em&gt;different&lt;/em&gt; API (E in version x.y.z). In particular, symbols or the underlying behavior may have changed between the two versions. Whenever B calls one of the changed symbols in E, bad things may happen. In the best case, function signatures have changed; then the runtime cannot find the requested symbol and throws an exception. This can cripple your application, because some functionality is not reachable, or even crash it, but at least you can localize the error. In the worst case, E behaves differently from what B expects. Then you have a very obscure, hard-to-track-down bug.&lt;/p&gt;
&lt;p&gt;What is particularly insidious in this case is that all the code is perfectly sound. No amount of static code analysis and almost no review will highlight an error. However, your &lt;em&gt;total&lt;/em&gt; system, composed of P, A, B, E(x.y.z), is broken, and this shows up only at runtime. In the literature you will find such situations labelled with the term "hell", as in "dll hell" or "jar hell" or, as I will expand below, "assembly hell".&lt;/p&gt;
&lt;h5 id="what-to-do-with-a-diamond-dependency"&gt;What to do with a diamond dependency&lt;/h5&gt;
&lt;p&gt;There are basically three approaches:&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;
&lt;p&gt;&lt;em&gt;Pick &amp;amp; Pray&lt;/em&gt;&lt;br/&gt;
    The simplest solution uses a common version to satisfy both dependencies. However, as discussed above, this is largely programming by coincidence and not guaranteed to work.&lt;/p&gt;
&lt;p&gt;There is one exception, though. If and only if the assembly E is trustworthy not to change the public API without notice, and if no such notice has been given between x.y.z and a.b.c, the two versions should be interchangeable. For example, if E uses semantic versioning and the two versions differ only in the minor or micro version number, then you can pick the higher version to fulfill both dependencies.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;em&gt;Duplicate the assembly&lt;/em&gt;&lt;/p&gt;
&lt;p&gt;Another way to solve the diamond dependency is to resolve the diamond like in figure 3.&lt;/p&gt;
&lt;p&gt;&lt;img alt="Figure3" src="./attachment/figure3.png"/&gt;&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;&lt;em&gt;Figure 3:&lt;/em&gt;&lt;/strong&gt; &lt;em&gt;Possible solution to the dependency problem of figure 2, by supplying assembly E twice with two distinct versions.&lt;/em&gt;&lt;/p&gt;
&lt;p&gt;If you &lt;em&gt;depend&lt;/em&gt; on the assembly E in two different versions, well, you just &lt;em&gt;supply&lt;/em&gt; the assembly in two different versions. This generally works; however, there are a few caveats.&lt;/p&gt;
&lt;p&gt;If the assembly E has internal state, this state will be duplicated, which can be a problem. For example if E is a logging framework, duplicating E gives you &lt;em&gt;two&lt;/em&gt; logging frameworks that work independently. These can then interfere in arbitrary ways, for example by overwriting each other's files.&lt;/p&gt;
&lt;p&gt;More importantly, this solution is only available if the runtime supports it. Some runtimes that do &lt;em&gt;not&lt;/em&gt; support this setup are the ELF format (linux shared libraries) or Sun Java before version 9 (though you can work around both restrictions with modest extra effort). In both cases, the application holds a list of modules, either jar files or shared libraries, that can be searched for missing symbols (functions or variables). When a symbol is requested, the runtime or dynamic loader takes the symbol name essentially as a string, and goes through the list of modules until it finds a matching entry. If multiple entries exist, only the first is ever returned, hence duplication does not work.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;em&gt;Enforce a common assembly version&lt;/em&gt;&lt;/p&gt;
&lt;p&gt;You can try to ensure a.b.c == x.y.z, that is, that all code only ever requests a single version of the external assembly. This should be your preferred solution whenever possible.&lt;/p&gt;
&lt;p&gt;Of course, as usual, this approach is not always feasible. It only works if you have complete control over the affected assemblies A, B in figure 2. But often, at least one of the assemblies is itself an external assembly; then you would need to modify and compile foreign code, which is considerable extra work and complexity. Even if the assemblies A, B are &lt;em&gt;in principle&lt;/em&gt; under your control, they might be maintained by different teams. Or they may be used by several internal clients with different requirements. Requiring a consistent version of all dependencies can then add a lot of inertia to the development process. Instead of a "hey, let's quickly update this package" approach, you need to involve and synchronize multiple teams etc., which can also translate into considerable costs and effort.&lt;/p&gt;
&lt;/li&gt;
&lt;/ol&gt;
&lt;h3 id="2-dependency-management-in-dotnet-is-harder"&gt;2. Dependency management in DotNet is harder&lt;/h3&gt;
&lt;p&gt;Now how does the DotNet environment fare with regard to the dependency problem? Unfortunately not too well.&lt;/p&gt;
&lt;p&gt;The &lt;em&gt;runtime&lt;/em&gt; apparently supports the layout of figure 3: you can have the same symbol name referring to different code in different assemblies. However, the assembly &lt;em&gt;loader&lt;/em&gt; effectively does not support this layout. The details are listed &lt;a class="" href="https://docs.microsoft.com/en-us/dotnet/framework/deployment/how-the-runtime-locates-assemblies" rel="nofollow"&gt;here&lt;/a&gt;, but in essence every assembly has a name (and version, but this can be overridden), and assemblies are referred to by their name. The DotNet assembly loader can load an assembly with a given name only once, unless you use rather arcane features like &lt;a class="" href="https://docs.microsoft.com/en-us/dotnet/framework/deployment/best-practices-for-assembly-loading" rel="nofollow"&gt;loader contexts&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;In essence, unless you go quite some extra miles with renaming assemblies, rebuilding other assemblies to refer to the new names etc., the duplication of assemblies does not work. You may spend the extra effort for one or two crucial assemblies that have been shown to be problematic, but not for the dozens of dependencies that a serious DotNet application may depend on. So in general you can choose: Try to get common dependency versions or pray.&lt;/p&gt;
&lt;p&gt;While DotNet is not alone with this problem, its impact is exacerbated by the fact that dependency trees can quickly become deep and wide. In particular Nuget packages like the various Azure components have a tendency to drag in the whole world in arbitrary versions.&lt;/p&gt;
&lt;p&gt;To give some numbers from work: We have a simulation, probably 50-100k lines for the kernel, that allows simulation (and corresponding tracking for billing, authentication etc.) in Azure, which is why it depends on various Azure packages. The program has more than 200 external dependencies. In contrast, a multi-million-line, highly complex desktop simulation has fewer than 50 dependencies, many of them highly specialized and conflict-free.&lt;/p&gt;
&lt;p&gt;For the versioning problem, a notable example is Newtonsoft.Json, probably the most widely used DotNet Json parser.&lt;br/&gt;
Various Azure packages depend on Newtonsoft.Json in version 6.x, while other packages require it in a version like 12.x. That is a whopping difference of six in the major version! It seems safe to assume a priori that these dependencies are incompatible.&lt;/p&gt;
&lt;p&gt;A third problem comes from the way Nuget packages tend to deal with version constraints. The default version constraint, and what feels like the only one I have ever seen, is a minimum version. So if you have a Nuget package/assembly X that has been built against a package/assembly Y in version, say, 2.1.0, then the Nuget package for X declares a dependency on the package Y with any version &amp;gt;= 2.1.0. This dependency can be fulfilled by 2.1.1 (ok), 2.5.0 (probably ok), but also 10.0 (not ok). While this approach has its virtues, it makes dependency management harder because you lack warnings that would indicate potentially incompatible versions.&lt;/p&gt;
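&lt;p&gt;In a package's nuspec file, such a constraint looks like the following snippet; note that a bare version number is interpreted as a lower bound, not an exact match:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;&amp;lt;!-- dependency of package X; "2.1.0" means any version &amp;gt;= 2.1.0 --&amp;gt;
&amp;lt;dependency id="Y" version="2.1.0" /&amp;gt;
&lt;/code&gt;&lt;/pre&gt;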
&lt;h3 id="3-ditch-nuget-the-tool-choose-paket-instead"&gt;3. Ditch Nuget (the tool), choose Paket instead.&lt;/h3&gt;
&lt;p&gt;Given the state of the art of DotNet dependency management, there are a few features that a DotNet package manager should have:&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;
&lt;p&gt;When you have a solution, consisting of libraries, test runners, programs etc., you generally want a single version of each external assembly for everything.&lt;/p&gt;
&lt;p&gt;This is pretty important. Consider a test runner using a different version of some external assembly from your production program. You may have, for example, a dependency error in your program that is never caught by a test or different behavior in test and production. Good luck debugging that.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;You should be able to get a dependency graph for your solution.&lt;/p&gt;
&lt;p&gt;The graph shows all dependencies from your projects to Nuget packages and transitive dependencies between Nuget packages. It allows you to review the versions of all dependencies, so that you can identify potentially problematic dependencies. Once you have "signed off" the dependency graph, it should be the immutable reference for the packages/assemblies to use, and should only change if you add/remove or update dependencies.&lt;/p&gt;
&lt;/li&gt;
&lt;/ol&gt;
&lt;p&gt;Unfortunately, Nuget made a very fundamental design decision: it manages dependencies per project, not per solution. In theory, this offers additional flexibility, and is probably the only way to integrate with the build system. However, it adds a lot of overhead to the dependency management, because you need to manually ensure that all projects use the same version of a given package. I have fond memories of a coworker spending hours and an endless stream of curses on this task for a comparably trivial program (~50k lines), and of people fearing an upgrade because of the days of trouble it brings. Also, while Nuget nowadays offers something like a dependency graph, you get one for each project, which limits its use.&lt;/p&gt;
&lt;p&gt;Nuget has a few further downsides:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;No C++/CLI support.&lt;/p&gt;
&lt;p&gt;If you combine DotNet and native code, you might sometimes like to consume Nuget packages in the C++/CLI glue code. Unfortunately, Nuget has no C++/CLI support whatsoever. C++/CLI projects cannot have Nuget-managed dependencies, and when looking for potential incompatibilities between packages, Nuget never looks past a C++/CLI project. That is, to spell it out, you do not even get a warning if your dependencies are entirely broken.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Integration into the build system.&lt;/p&gt;
&lt;p&gt;By default, when you use Nuget, it integrates into MSBuild. Superficially, that sounds pretty cool and allows you, for example, to just specify the Nuget packages in your project files. In practice, it is not.&lt;/p&gt;
&lt;p&gt;MSBuild is simply a pretty complex monster, and bolting on yet another layer of functionality does not make it more robust. I would recommend turning on detailed logging for a project with some dependencies, and reading up on where and how MSBuild looks for potential assemblies to link your code against. The result should convince you that you want to override MSBuild, not integrate with it. There are also some exotic cases where MSBuild can "lose" assemblies in interesting ways, which suggests that you do not want to integrate.&lt;/p&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;Enter Paket, an alternative package manager. It is not perfect: the documentation took some time to get used to, and the way it manipulates project files feels a bit dirty. But Paket supports the workflow as it should be, and it has this down-to-earth feeling of Unix tools: doing one thing properly.&lt;/p&gt;
&lt;p&gt;To use Paket, you logically proceed in two steps. First, you define which external dependencies, with which version constraints, you need globally for your solution (see the example below), and run Paket on this input. The result is a dependency graph in a file called paket.lock that tells you every single package and version that is needed. This in itself is already pretty cool: Just by parsing this file, you can see which package requires what other package. If you are not happy with the dependency graph, you can fine-tune by adding more requirements or changing required versions.&lt;/p&gt;
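&lt;p&gt;For illustration, a minimal paket.dependencies file could look like this; the package names and version constraints are just examples:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;// paket.dependencies: solution-wide declaration of external dependencies
source https://api.nuget.org/v3/index.json

nuget Newtonsoft.Json ~&amp;gt; 12.0   // pessimistic constraint: any 12.x version
nuget NUnit
&lt;/code&gt;&lt;/pre&gt;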
&lt;p&gt;Once Paket has created the dependency graph, it avoids touching it. When you add or remove dependencies, it will build a new dependency graph using the existing one as a starting point, and you can also update packages, which naturally changes the graph. But as a general rule, once you have created a dependency graph and are happy with it, you never need to worry about breaking something by accident. Afterwards, you need to add a pre-build step that downloads all the packages in the dependency graph and puts them in some specific directory.&lt;/p&gt;
&lt;p&gt;The second logical step is defining for each project what dependencies are needed. Paket will take a look at these definitions and modify the VisualStudio project files. For every dependency, it adds a &amp;lt;Reference&amp;gt; tag to the project file with the location of the downloaded package as HintPath. This is about the strongest coupling you can have to an external assembly in MSBuild: MSBuild will always look in the HintPath first, copy the assembly to the output directory in any case, etc.&lt;/p&gt;
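&lt;p&gt;Such a forced reference looks roughly like the following snippet in the project file; the package path is illustrative:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;&amp;lt;Reference Include="Newtonsoft.Json"&amp;gt;
  &amp;lt;HintPath&amp;gt;..\packages\Newtonsoft.Json\lib\net45\Newtonsoft.Json.dll&amp;lt;/HintPath&amp;gt;
&amp;lt;/Reference&amp;gt;
&lt;/code&gt;&lt;/pre&gt;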
&lt;p&gt;That is all that Paket does. No complicated project manipulations, no complex interaction with the build system, just downloading packages and forcing dependencies to the downloads. And assembly references also work perfectly well with C++/CLI, so with Paket you can add Nuget packages to C++/CLI projects.&lt;/p&gt;&lt;/div&gt;</summary></entry><entry><title>Wavepacket 0.3.3 released</title><link href="https://sourceforge.net/p/wavepacket/cpp/blog/2021/08/wavepacket-033-released/" rel="alternate"/><published>2021-08-08T11:14:46.870000Z</published><updated>2021-08-08T11:14:46.870000Z</updated><author><name>Ulf Lorenz</name><uri>https://sourceforge.net/u/ulflor/</uri></author><id>https://sourceforge.net8c34baeac1c1f1128d1eeb773a42141487a08479</id><summary type="html">&lt;div class="markdown_content"&gt;&lt;p&gt;The version features two major improvements towards the goal of being more usable:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;The Python interface has a plotting observer.&lt;br/&gt;
There is now an Observer (written in Python and used in the C++ code) that utilizes Python's matplotlib to plot the wavefunction on the screen. This is currently not that impressive because it works only for one-dimensional wave functions. However, with the technological barriers now overcome, you can expect more plotting options for the Python interface in the future.&lt;br/&gt;
    See the DoubleWell Python demo for an example.&lt;/li&gt;
&lt;li&gt;The operator hierarchy has been rectified a little.&lt;br/&gt;
This may give a minor speed boost in certain situations, because real-valued and complex-valued operators are now disentangled and because real algebra is faster than complex algebra. The main gain is that it should be easier to reason about most operators in code, which is a prerequisite for things like better optimization of operators, or the split-operator scheme.&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;Besides these two major improvements there have been a couple of smaller usability or special-usage improvements. The most noticeable:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Removed the OdeRelax propagator, since it should always be slower than the Chebychev relaxation; the latter can also propagate in negative imaginary time now.&lt;/li&gt;
&lt;li&gt;Diagonalization always returns explicitly real-valued results.&lt;/li&gt;
&lt;li&gt;You no longer need to construct a projection operator to project() onto a coupled channel; you can now simply give the index of the coupled channel.&lt;/li&gt;
&lt;li&gt;Most Python functions no longer need casting between Numpy arrays and the C++ tensors.&lt;/li&gt;
&lt;li&gt;ProjectionND can take a state as input to project onto, rather than a tensor + grid (which together define a State).&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;Unfortunately, I failed once again at providing a Windows package, but there will be a much more convenient virtual machine with the next release.&lt;/p&gt;&lt;/div&gt;</summary></entry><entry><title>Convergence #2: time steps</title><link href="https://sourceforge.net/p/wavepacket/cpp/blog/2021/04/convergence-2/" rel="alternate"/><published>2021-04-19T21:15:32.585000Z</published><updated>2021-04-19T21:15:32.585000Z</updated><author><name>Ulf Lorenz</name><uri>https://sourceforge.net/u/ulflor/</uri></author><id>https://sourceforge.net95b29cda27f9e0ba21e50e5201932f4517ebf3f8</id><summary type="html">&lt;div class="markdown_content"&gt;&lt;p&gt;Note: This post should probably be merged with the Wiki, in particular &lt;a class="" href="https://sourceforge.net/p/wavepacket/wiki/Numerics.TDSE/"&gt;the page on the time-dependent Schrödinger equation&lt;/a&gt;. The second part should be easy, but the first part uses a pretty different angle, so I first post the whole thing here.&lt;/p&gt;
&lt;p&gt;This document describes how to propagate a wavefunction in time. The questions to be answered are:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;What time step should you use?&lt;/li&gt;
&lt;li&gt;How can you monitor during a propagation that your chosen time step is ok?&lt;/li&gt;
&lt;li&gt;What are advantages and disadvantages of different propagation schemes and when should you choose which?&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;To restrict the scope, I mostly skip convergence issues with respect to the grid. So either you have already converged the grid, or I just take the Hamiltonian plus the numerical errors from the finite grid as The System and disregard deviations from reality until they come back for revenge.&lt;/p&gt;
&lt;p&gt;Furthermore, I will only consider time-independent systems here. Basically, if you can converge a time-independent system, you can also converge a time-dependent system: Guess a slightly smaller time step and adjust until you are done. However, introducing the required formalities for proper reasoning (cavity-dressed states etc.) would bloat this already long description even more.&lt;/p&gt;
&lt;h2 id="some-theory"&gt;Some theory&lt;/h2&gt;
&lt;h4 id="model-system"&gt;Model system&lt;/h4&gt;
&lt;p&gt;As a model system throughout the text, I take one of the Wavepacket demos as blueprint. The demo, HarmOscillator/Gaussian_1D/1, describes a one-dimensional harmonic oscillator (omega = m = 1 a.u.), where the initial state is a displaced Gaussian wavepacket.&lt;br/&gt;
The native time scale is the classical oscillation period, which is &lt;img alt="2pi" src="./attachment/2pi.png"/&gt; a.u. for the given parameters. Apart from an offset of 1/2, the eigenenergies are spaced by the energy omega, hence the time scale of the n-th quantum eigenstate is approximately the n-th fraction of the classical oscillation period.&lt;/p&gt;
&lt;p&gt;I keep the total simulation time to about one classical period. Longer times would make the simulations slightly slower, but do not contribute qualitatively new features.&lt;/p&gt;
&lt;p&gt;&lt;img src="./attachment/figure1.png"/&gt;&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;&lt;em&gt;Figure 1&lt;/em&gt;&lt;/strong&gt;: &lt;em&gt;Projection of the initial state onto the energy eigenstates as a function of the eigenstate's energy. Red crosses: 128 grid points, interval [-10,10] a.u.. Blue dots: 512 points, interval [-20,20] a.u.&lt;/em&gt;&lt;/p&gt;
&lt;p&gt;The first figure shows the projection of the initial state onto the eigenstates of the harmonic oscillator, that is, their populations, as red crosses. The Python script that generated this plot can be found &lt;a class="" href="https://sourceforge.net/p/wavepacket/cpp/blog/2021/04/convergence-2/attachment/figure1.py"&gt;here&lt;/a&gt;. The populations decay roughly exponentially down to about 10^-9 at an energy of 50 a.u. and remain on this level. This latter feature is not what we would expect from an analytic solution; instead, the populations should decrease monotonically with energy. In fact, this feature is an artefact of the finite grid, which is particularly pronounced at high energies. To demonstrate this, we can use a brute-force approach and double the size and density of the grid (increasing the number of points fourfold). In this case (blue dots), the populations decrease further as expected.&lt;/p&gt;
&lt;p&gt;This observation highlights two issues that we will encounter again further below:&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;We have an unnaturally high population of high-energy states. This will cause trouble during propagation.&lt;/li&gt;
&lt;li&gt;We can safely truncate the energies of the Hamiltonian at 50 a.u. This will perturb all higher energy states, but they are garbage anyway.&lt;/li&gt;
&lt;/ol&gt;
&lt;p&gt;As a final remark, this system lacks some of the real-world complications for simplicity. The Hamiltonian is Hermitian and time-independent, and there are neither open-system terms nor absorbing boundary conditions. We will briefly touch on some of these complications later when discussing different solution schemes.&lt;/p&gt;
&lt;h4 id="finding-a-good-time-step-and-monitoring-convergence"&gt;Finding a good time step and monitoring convergence&lt;/h4&gt;
&lt;p&gt;A first intuitive approach to determine a good time step is based on a qualitative understanding of the system dynamics. If you plot the spectrum of your initial state, as in figure 1, the fastest dynamics can be expected to come from populated eigenstates with the highest energies. So if you choose a time step that is considerably smaller than the corresponding oscillation period, say one tenth of it, you would expect the simulation to converge. For the harmonic oscillator example, you could generously say that states above 50 a.u. have negligible population.&lt;br/&gt;
An energy of 50 a.u. corresponds to an oscillation time of &lt;img alt="2pi / 50 a.u.~= 0.1 a.u" src="./attachment/oscillation.png"/&gt;. Taking one tenth of that, you could reasonably guess that a time step of 0.01 a.u. should be good enough for reasonably accurate numerical schemes.&lt;/p&gt;
&lt;p&gt;This approach is certainly useful to get a good order-of-magnitude guess; however, it is not quite correct. We will go into the details when we examine our example system in the next section.&lt;/p&gt;
&lt;p&gt;Another standard approach is to try out different parameters, that is, time steps, and to check how much the results differ as a function of the time step. The differences should become smaller as we go to smaller time steps, i.e., more accurate solutions. Hence, we only need to set some qualitative target for when we are satisfied, and run a couple of calculations.&lt;/p&gt;
&lt;p&gt;As a simple measure for convergence, we can take the induced metric of the L2 scalar product. If v_1 and v_2 are two results with different time steps, then their distance is&lt;/p&gt;
&lt;p&gt;&lt;img alt=" ||v1 - v2|| = sqrt{ &amp;lt;v1-v2|v1-v2&amp;gt; } = sqrt{ &amp;lt;v1|v1&amp;gt; - &amp;lt;v1|v2&amp;gt; - &amp;lt;v2|v1&amp;gt; + &amp;lt;v2|v2&amp;gt; } = sqrt{2} sqrt{ 1 - Re &amp;lt;v1|v2&amp;gt; } approx sqrt{1 - Re &amp;lt;v1|v2&amp;gt;}" src="./attachment/distance.png"/&gt;&lt;/p&gt;
&lt;p&gt;Here, I assume that the norm of the states is approximately one. Since only relative quantities and order-of-magnitude estimates are important here, I also dropped the prefactor sqrt{2} for simplicity.&lt;/p&gt;
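&lt;p&gt;As a plain C++ sketch (independent of the Wavepacket API), this distance can be computed directly from the discretized states:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;#include &amp;lt;algorithm&amp;gt;
#include &amp;lt;cmath&amp;gt;
#include &amp;lt;complex&amp;gt;
#include &amp;lt;vector&amp;gt;

// Distance between two approximately normalized states v1, v2 sampled on
// the same grid with spacing dx, using the simplified expression
// sqrt(1 - Re&amp;lt;v1|v2&amp;gt;) from the text.
double distance(const std::vector&amp;lt;std::complex&amp;lt;double&amp;gt;&amp;gt;&amp;amp; v1,
                const std::vector&amp;lt;std::complex&amp;lt;double&amp;gt;&amp;gt;&amp;amp; v2,
                double dx)
{
    std::complex&amp;lt;double&amp;gt; overlap(0.0, 0.0);
    for (std::size_t i = 0; i &amp;lt; v1.size(); ++i)
        overlap += std::conj(v1[i]) * v2[i];
    overlap *= dx;   // discretized scalar product

    return std::sqrt(std::max(0.0, 1.0 - overlap.real()));
}
&lt;/code&gt;&lt;/pre&gt;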
&lt;p&gt;However, there is also a much simpler metric available. For this, we first note that in general the norm of a wavefunction is constant over time. This is a direct consequence of the time evolution operator &lt;img alt="exp(-iHt" src="./attachment/time_evolution.png"/&gt; being unitary. All numerical schemes formally produce approximations to the time evolution operator, and unless they are carefully crafted, these approximations are not unitary. Hence, most numerical schemes fail to preserve the norm, and we can easily find an ok time step by checking that the norm of our wavepacket does not change.&lt;/p&gt;
&lt;p&gt;However, like all good ideas, this one fails in certain cases. In particular, as soon as we use absorbing boundary conditions, the norm ceases to be a preserved quantity.&lt;/p&gt;
&lt;h4 id="application-to-model-system"&gt;Application to model system&lt;/h4&gt;
&lt;p&gt;Now let us see what happens when we play around with the harmonic oscillator model system. Figure 2 shows the norm of the wavefunction as a function of time for different time steps (the Python script can be found &lt;a class="" href="https://sourceforge.net/p/wavepacket/cpp/blog/2021/04/convergence-2/attachment/figure2.py"&gt;here&lt;/a&gt;). We used essentially a fifth-order Runge-Kutta solver to evolve the wave function in time.&lt;/p&gt;
&lt;p&gt;&lt;img src="./attachment/figure2.png"/&gt;&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;&lt;em&gt;Figure 2:&lt;/em&gt;&lt;/strong&gt; &lt;em&gt;Norm of the wavepacket as a function of time for different time steps.&lt;/em&gt;&lt;/p&gt;
&lt;p&gt;Recall that we estimated a good time step as 0.01 a.u. For a too large time step of 0.1 a.u., the norm diverges essentially immediately, showing the immediate use of the norm as a monitoring tool. However, even for our estimate of 0.01 a.u., the norm diverges rapidly at some point, while a slightly smaller time step of 0.005 a.u. shows no divergence.&lt;/p&gt;
&lt;p&gt;So, what has happened here?&lt;/p&gt;
&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;dt = 0.01 a.u.&lt;/th&gt;
&lt;th&gt;dt = 0.005 a.u.&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;&lt;img src="./attachment/figure3_0.01.png"/&gt;&lt;/td&gt;
&lt;td&gt;&lt;img src="./attachment/figure3_0.005.png"/&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;
&lt;p&gt;&lt;strong&gt;&lt;em&gt;Figure 3:&lt;/em&gt;&lt;/strong&gt; &lt;em&gt;Projection of the time-dependent state onto eigenstates of the Hamiltonian, similar to figure 1, for different times. Left plot: dt = 0.01. Right plot: dt = 0.005.&lt;/em&gt;&lt;/p&gt;
&lt;p&gt;To gain some further insight, we calculate spectra similar to figure 1, but for different times. This is done in figure 3 for the two relevant time steps of 0.01 a.u and 0.005 a.u. (Python script &lt;a class="" href="https://sourceforge.net/p/wavepacket/cpp/blog/2021/04/convergence-2/attachment/figure3.py"&gt;here&lt;/a&gt;).&lt;/p&gt;
&lt;p&gt;These spectra tell a story. Apparently, the numerical errors that lead to the norm deviation accumulate not in the energy range where most of the wavepacket resides (&amp;lt; 50 a.u.), but at the highest energies. Now if we have an eigenstate for a given energy, we try to approximate the correct time evolution, exp(-iEt), by discrete time steps, which gives an error. An example is shown below for the exceedingly bad but simple Euler method. Fifth-order Runge-Kutta methods are qualitatively similar, although their error is much smaller. In particular, you need to sample one oscillation with multiple steps, say 5 or so; otherwise, you cannot hope to describe the dynamics accurately.&lt;/p&gt;
&lt;p&gt;&lt;img src="./attachment/euler.png"/&gt;&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;&lt;em&gt;Figure 3b:&lt;/em&gt;&lt;/strong&gt; &lt;em&gt;Approximation of the correct time evolution (black circle) with the Euler method for different time steps as fractions of the oscillation time T.&lt;/em&gt;&lt;/p&gt;
&lt;p&gt;Now for the highest-energy states, we have E of about 200 a.u., and hence an oscillation period of about 0.03 a.u. Such an oscillation is poorly sampled by the time step of 0.01 a.u., leading to a rather large error in the norm. With the time step half as large, we sample an oscillation with about 6 discrete time steps, which is just enough for reasonably accurate results. Note that if you look closely, the highest energies also show a norm increase for dt = 0.005 a.u., but a much smaller one, maybe one order of magnitude every 10 a.u. Hence, if we simulated much longer, say, for 100 a.u., the norm would also diverge for the smaller time step.&lt;/p&gt;
&lt;h4 id="corollary"&gt;Corollary&lt;/h4&gt;
&lt;p&gt;The main result from trying out the theory can be summarized as follows:&lt;/p&gt;
&lt;p&gt;&lt;em&gt;It is not the energy / dynamics of your initial state that dictates the required time step, but the &lt;strong&gt;whole spectrum&lt;/strong&gt; of your system.&lt;/em&gt;&lt;/p&gt;
&lt;p&gt;This observation has an important consequence: If you are interested in performance, that is, large time steps, you should always truncate the spectrum of your Hamiltonian or its components! There are two reasons why truncating operators should not bother you too much:&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;On the purely numerical side, these high-energy states are often garbage anyway, as discussed in figure 1. Hence, there is little harm done if you screw them up completely by truncating the operators.&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;On the physics side, you need to recall that most systems are approximations. For our harmonic oscillator example, we observed severe numerical errors at energies larger than E = 50 a.u., which corresponds to the fiftieth harmonic oscillator eigenstate. Recall that harmonic oscillators are low-order Taylor expansions of the true potential. So whatever the real system is, we can safely assume that, say, the fiftieth excited state is not going to look like a harmonic oscillator eigenstate. &lt;/p&gt;
&lt;p&gt;No matter how accurately you model your system, you will always have such severe approximations. Maybe you use the Born-Oppenheimer approximation and ignore non-adiabatic couplings: This approximation fails for large energies. Maybe you use a model for your thermal bath: Rest assured that your model is not quantitatively correct at high energies.&lt;/p&gt;
&lt;/li&gt;
&lt;/ol&gt;
&lt;p&gt;So: always know and pick the energy range that you consider important. Then be brave and truncate the spectrum of your operators (typically kinetic and potential energy) outside of this range, knowing full well that you limit the validity of your calculations to sufficiently low energies. What you lose in correctness, you will definitely gain in computation speed and numerical robustness.&lt;/p&gt;
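&lt;p&gt;In practice, truncation is a one-line operation on the diagonal grid representations. A minimal sketch in plain NumPy (not the Wavepacket API; the harmonic potential here is just for illustration):&lt;/p&gt;
&lt;div class="codehilite"&gt;&lt;pre&gt;import numpy as np

# Hypothetical equally-spaced grid for the harmonic oscillator example.
N, L = 128, 20.0
x = np.linspace(-L / 2, L / 2, N, endpoint=False)   # DVR grid points
k = 2 * np.pi * np.fft.fftfreq(N, d=L / N)          # matching FBR grid

E_cut = 50.0                            # a.u.; the energy range we trust
V = np.minimum(0.5 * x**2, E_cut)       # potential, diagonal in the DVR
T = np.minimum(0.5 * k**2, E_cut)       # kinetic energy, diagonal in the FBR
&lt;/pre&gt;&lt;/div&gt;
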
&lt;p&gt;&lt;img src="./attachment/figure4.png"/&gt;&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;&lt;em&gt;Figure 4:&lt;/em&gt;&lt;/strong&gt; &lt;em&gt;Norm as a function of time for different time steps. The kinetic and potential energy have been truncated at 50 a.u. each, that is, values larger than 50 a.u. have been set to exactly 50 a.u. in the corresponding DVR/FBR grid representations.&lt;/em&gt;&lt;/p&gt;
&lt;p&gt;For the model system, figure 4 shows the effect of truncating the kinetic energy and potential to 50 a.u. each (Python script &lt;a class="" href="https://sourceforge.net/p/wavepacket/cpp/blog/2021/04/convergence-2/attachment/figure4.py"&gt;here&lt;/a&gt;).&lt;br/&gt;
We can clearly see that the norm no longer diverges for the time step of 0.01 a.u., giving us a factor of 2 decrease in computation cost.&lt;br/&gt;
In practice we might truncate at considerably lower energies (20 a.u. or so); here we chose 50 a.u. because that is where the obvious numerical artefacts start.&lt;/p&gt;
&lt;h2 id="solution-schemes"&gt;Solution schemes&lt;/h2&gt;
&lt;p&gt;To solve the time-dependent Schroedinger equation or Liouville-von-Neumann equation, different methods or method classes exist. In the following, we will present some important solvers and discuss their advantages and disadvantages.&lt;/p&gt;
&lt;h4 id="runge-kutta-and-other-ode-solvers"&gt;Runge-Kutta and other ODE solvers&lt;/h4&gt;
&lt;p&gt;The basic idea for a Runge-Kutta or similar scheme is to expand the time-dependent wavefunction in some orthonormal basis&lt;/p&gt;
&lt;p&gt;&lt;img alt="psi(t) = \sum_n c_n(t) \varphi_n" src="./attachment/expansion.png"/&gt;&lt;/p&gt;
&lt;p&gt;Note that the grid representation is such an expansion with the coefficients c_n(t) being the values of the wavefunction at the grid points psi(x_n, t). If we plug this expansion into the time-dependent Schroedinger equation, we get&lt;/p&gt;
&lt;p&gt;&lt;img alt="i \dot c_n(t) = \sum_m &amp;lt;\varphi_n| \hat H |\varphi_m&amp;gt; c_m(t)" src="./attachment/expanded_tdse.png"/&gt;&lt;/p&gt;
&lt;p&gt;The latter is a set of ordinary differential equations (ODEs) coupled by the matrix elements of the Hamiltonian.&lt;/p&gt;
&lt;p&gt;There is a long history of numerical approaches to solve ODEs, and many solvers exist. Prominent general-purpose solvers are the Runge-Kutta schemes, the Bulirsch-Stoer method or various multistep methods.&lt;/p&gt;
&lt;p&gt;As a particular case, there are also adaptive solvers, for example adaptive Runge-Kutta methods. Here, a Runge-Kutta method of order N gives you the solution for a given timestep, then you check with a related method of order N+1 whether the solution looks reasonably converged. If the errors are outside of a given bound, the solution is recalculated with a smaller time step. If, on the other hand, the solution seems extremely well converged, the time step can be increased instead.&lt;/p&gt;
&lt;p&gt;Typical parameters that you supply are the order N of the scheme, and the timestep \Delta t. Alternatively, adaptive methods require an error bound, usually consisting of a relative and an absolute error. Generally, for a (Runge-Kutta) scheme of order N, the error scales as &lt;img alt="O(\Delta t^{N+1})" src="./attachment/O_n1.png"/&gt;.&lt;/p&gt;
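&lt;p&gt;To illustrate how little code such a solver needs, here is a toy propagation of the expansion coefficients with SciPy's adaptive Runge-Kutta solver; the random Hermitian matrix stands in for the Hamiltonian matrix elements (a sketch, not Wavepacket code):&lt;/p&gt;
&lt;div class="codehilite"&gt;&lt;pre&gt;import numpy as np
from scipy.integrate import solve_ivp

# Small random Hermitian "Hamiltonian" in some orthonormal basis.
rng = np.random.default_rng(42)
A = rng.standard_normal((8, 8))
H = (A + A.T) / 2

c0 = np.zeros(8, dtype=complex)
c0[0] = 1.0                               # start in the first basis state

# i dc/dt = H c, i.e. dc/dt = -i H c; RK45 adapts the step to the error bounds.
sol = solve_ivp(lambda t, c: -1j * (H @ c), (0.0, 10.0), c0,
                method="RK45", rtol=1e-8, atol=1e-10)
print("norm after propagation:", np.linalg.norm(sol.y[:, -1]))
&lt;/pre&gt;&lt;/div&gt;
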
&lt;h6 id="advantages"&gt;Advantages&lt;/h6&gt;
&lt;p&gt;ODE solvers are incredibly robust workhorses. They work with time-independent problems, time-dependent problems, complex Hamiltonians (absorbing boundary conditions), open systems, ... Just throw a problem at them, and you get a reasonable solution.&lt;/p&gt;
&lt;p&gt;If you take an adaptive solver, you do not even need to bother with the timestep directly. Just supply an error bound, and the adaptive  solver spits out a reasonable result.&lt;/p&gt;
&lt;p&gt;In short: ODE solvers tend to always work, and they are very easy to use.&lt;/p&gt;
&lt;h6 id="disadvantages"&gt;Disadvantages&lt;/h6&gt;
&lt;p&gt;ODE solvers do not use any knowledge about your system or quantum mechanics in general. This is a boon because it makes them robust, but also a bane because it makes them the slowest of all solvers. There are also some cases where convergence is poor, which again translates into small time steps and low performance.&lt;/p&gt;
&lt;p&gt;In short: If computation time is an issue, you should generally avoid ODE solvers.&lt;/p&gt;
&lt;h4 id="split-operator-method"&gt;Split-operator method&lt;/h4&gt;
&lt;p&gt;The basic idea of the split-operator method is to separate the Hamiltonian into components that you can diagonalize easily,&lt;/p&gt;
&lt;p&gt;&lt;img alt="\hat H = \hat H_1 + \hat H_2 + ... + H_n" src="./attachment/hamilt_sum.png"/&gt;&lt;/p&gt;
&lt;p&gt;If you can diagonalize an operator, that is decompose it into a basis transformation T and a diagonal operator D in this basis, &lt;img alt="\hat H_i = \hat T_i^{-1} \hat D_i \hat T_i" src="./attachment/diagonal.png"/&gt;, you can also easily exponentiate it as&lt;/p&gt;
&lt;p&gt;&lt;img alt="exp(\hat H_i) = T_i^{-1} exp(\hat D_i) T_i" src="./attachment/diagonal_exp.png"/&gt;.&lt;/p&gt;
&lt;p&gt;This can be seen by writing out the Taylor series of the exponential function.&lt;/p&gt;
&lt;p&gt;The split operator then consists of writing the short-term propagator as a product of individual terms. For the common Strang splitting, we get&lt;/p&gt;
&lt;p&gt;&lt;img alt="exp(-i \hat H \Delta t) = exp(-i \hat H_1 \Delta t/2) ... exp(-i H_{n-1} \Delta t/2) exp(-i H_n \Delta t) exp(-i H_{n-1} \Delta t/2)
     ... exp(-i \hat H_1 \Delta t / 2) + O(\Delta t^3)" src="./attachment/split_operator.png"/&gt;&lt;/p&gt;
&lt;p&gt;Note that the right-hand side is by construction unitary. As a consequence, the split operator method preserves the norm of the wavefunction.&lt;/p&gt;
&lt;p&gt;Besides the time step, there are two further "parameters" to choose. One is the order of the splitting, though in practice everything beyond the Strang splitting tends to become complicated. The other is the choice of operators H_i; again, in practice, this is usually intuitive. You normally choose the potential as one component (diagonal in the grid representation / DVR), the kinetic energy as the second (diagonal in the underlying basis / FBR), and something like dipole couplings as the third one for time-dependent problems.&lt;/p&gt;
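&lt;p&gt;To make the scheme concrete, here is a minimal Strang-splitting step for H = T + V on an equally-spaced grid, using the FFT for the DVR &amp;lt;-&amp;gt; FBR transformations (a plain NumPy sketch, not the Wavepacket implementation):&lt;/p&gt;
&lt;div class="codehilite"&gt;&lt;pre&gt;import numpy as np

N, L, dt = 256, 20.0, 0.01
x = np.linspace(-L / 2, L / 2, N, endpoint=False)
k = 2 * np.pi * np.fft.fftfreq(N, d=L / N)

V = 0.5 * x**2                       # potential, diagonal in real space (DVR)
T = 0.5 * k**2                       # kinetic energy, diagonal in k-space (FBR)

exp_V = np.exp(-0.5j * V * dt)       # half-step potential propagator
exp_T = np.exp(-1j * T * dt)         # full-step kinetic propagator

def strang_step(psi):
    psi = exp_V * psi                            # exp(-i V dt/2)
    psi = np.fft.ifft(exp_T * np.fft.fft(psi))   # exp(-i T dt), applied in FBR
    return exp_V * psi                           # exp(-i V dt/2)

psi = np.exp(-0.5 * x**2).astype(complex)        # Gaussian initial state
psi /= np.linalg.norm(psi)
for _ in range(100):
    psi = strang_step(psi)
print("norm after propagation:", np.linalg.norm(psi))   # stays 1 by unitarity
&lt;/pre&gt;&lt;/div&gt;

&lt;p&gt;Note that the two half-step potential factors of consecutive steps can be merged into a single full step whenever the intermediate wavefunction is not needed; this is the optimization discussed under the advantages below.&lt;/p&gt;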
&lt;h6 id="advantages_1"&gt;Advantages&lt;/h6&gt;
&lt;p&gt;The split-operator method has a number of unique advantages.&lt;/p&gt;
&lt;p&gt;First of all, despite the rather low order of &lt;img alt="O(\Delta t^3)" src="./attachment/O_3.png"/&gt;, it is generally rather accurate. As a hand-waving argument, consider the artificial case where the Hamiltonian itself is taken as the only component and diagonalized; then the split-operator method reproduces the exact time propagation. As an alternative hand-waving argument, the conservation of the norm means that the approximate solution is searched only in a reduced-dimensional manifold of states with unit norm, which gives more accurate results.&lt;/p&gt;
&lt;p&gt;A second advantage is that the split-operator method with Strang splitting is rather cheap for simple systems. Consider a Hamiltonian decomposable into two terms, kinetic and potential energy. Consider further that you are generally not interested in the value of the wavefunction at the end of &lt;em&gt;each&lt;/em&gt; small timestep, but only every hundredth or so timestep. Then you can combine the last term of the n-th timestep and the first term of the (n+1)-th timestep into one exponential term. As the exponentiation can be performed beforehand, you need to apply only two diagonal operators and perform two FBR &amp;lt;-&amp;gt; DVR transformations to propagate a single timestep \Delta t. Contrast this with a second-order Runge-Kutta scheme with a similar error of &lt;img alt="O(\Delta t^3)" src="./attachment/O_3.png"/&gt;. There you would have to apply the Hamiltonian twice, for a total of four diagonal operators and four FBR &amp;lt;-&amp;gt; DVR transformations.&lt;/p&gt;
&lt;p&gt;The split-operator scheme can also be extended to time-dependent systems, but becomes less efficient there. In practice, you then have to exponentiate a diagonal operator for every small time step.&lt;/p&gt;
&lt;h6 id="disadvantages_1"&gt;Disadvantages&lt;/h6&gt;
&lt;p&gt;The split-operator method cannot be applied if your system has terms that cannot be easily diagonalized. This prevents its use with certain Hamiltonians, for example products of momentum and coordinate operators. Also, open-system terms tend not to have an accessible diagonal representation. Hence, although you could in principle generalize the split-operator method to density operators (replace Hamiltonians by Liouvillians), this is not very useful in practice.&lt;/p&gt;
&lt;p&gt;Another issue concerns the implementation. An efficient diagonalization of certain operators like dipole couplings is possible, but requires some fiddling. This is, for example, the reason that the method is not yet implemented in the C++ version of Wavepacket as of version 0.3.2.&lt;/p&gt;
&lt;h4 id="polynomial-methods"&gt;Polynomial methods&lt;/h4&gt;
&lt;p&gt;The basic idea of the various polynomial methods is an expansion of the short-time propagator into some polynomials. In the simplest form, this can be written as&lt;/p&gt;
&lt;p&gt;&lt;img alt="exp(-i H \Delta t) \approx sum_{n=0}^N c_n P_n(-i H \Delta t)" src="./attachment/polynomial.png"/&gt;,&lt;/p&gt;
&lt;p&gt;although in some schemes the coefficients also depend on \Delta t. In particular, for various classical polynomials, the P_n can be calculated through a recursion relation, and the coefficients c_n can be precomputed. In practice, you then apply the Hamiltonian N times with some simple algebra in-between.&lt;/p&gt;
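&lt;p&gt;The recursion itself is cheap: each additional order needs just one application of the Hamiltonian. A sketch in plain NumPy (assuming the spectrum has already been rescaled to [-1, 1], see the Chebychev discussion below; the full propagator additionally combines these terms with the precomputed coefficients c_n):&lt;/p&gt;
&lt;div class="codehilite"&gt;&lt;pre&gt;import numpy as np

# Random Hermitian matrix, rescaled so that its spectrum lies in [-1, 1].
rng = np.random.default_rng(0)
A = rng.standard_normal((16, 16))
H = (A + A.T) / 2
e = np.linalg.eigvalsh(H)
H_norm = (2 * H - (e[-1] + e[0]) * np.eye(16)) / (e[-1] - e[0])

psi = np.zeros(16)
psi[0] = 1.0                             # some initial state

# Chebychev recursion: phi_{n+1} = 2 H_norm phi_n - phi_{n-1};
# one Hamiltonian application per additional order.
phi_prev, phi = psi, H_norm @ psi        # phi_0 and phi_1
terms = [phi_prev, phi]
for n in range(2, 10):
    phi_prev, phi = phi, 2 * (H_norm @ phi) - phi_prev
    terms.append(phi)
&lt;/pre&gt;&lt;/div&gt;
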
&lt;p&gt;The free parameters are the choice of polynomials, the order of the expansion N, and the time step. The error of an N-th order expansion is generally &lt;img alt="O(\Delta t^{N+1})" src="./attachment/O_n1.png"/&gt;. Polynomials are typically chosen based on some additional properties.&lt;/p&gt;
&lt;p&gt;Particularly useful are Chebychev polynomials. These have the property that they converge &lt;em&gt;uniformly&lt;/em&gt; to the exponential function within a certain domain. The details are a bit involved; for example, you have to normalize the spectrum of your Hamiltonian to the range &lt;span&gt;[-1, 1]&lt;/span&gt;. However, as this is implemented in Wavepacket, you just need to specify a maximum error, from which the order of the expansion is calculated automatically. The time step for the Chebychev expansion is inversely proportional to the spectral range of your Hamiltonian, hence this method benefits a lot from aggressive truncation.&lt;/p&gt;
&lt;p&gt;Note that the order of expansion should generally be rather large (N &amp;gt; 20) for this method to be efficient. However, if you go to very large values (N &amp;gt; 100), you can experience numerical instabilities. Basically, the polynomials themselves attain huge function values that can even overflow the double-precision range. So you should stick to moderate orders, say N &amp;lt; 60.&lt;/p&gt;
&lt;p&gt;For these large orders, you will typically choose a timestep that is much larger than for, say, an ODE solver. Often, this is not a problem, as the usual numerical time steps are much smaller than the timescale of the dynamics anyway. However, if you do need the wavefunction at high time resolution, you can generally get there with additional interpolation steps. These require some work to set up, but are very efficient; in particular, you should not need to apply the Hamiltonian to a wavefunction for the interpolation to work.&lt;/p&gt;
&lt;h6 id="advantages_2"&gt;Advantages&lt;/h6&gt;
&lt;p&gt;The huge advantage of polynomial expansions is their efficiency.&lt;/p&gt;
&lt;p&gt;To get an estimate, let us consider the Runge-Kutta procedure with the truncated Hamiltonian of figure 4. At a timestep of 0.01 a.u., and with the fifth-order Runge-Kutta procedure, we need to apply the Hamiltonian 5 x 100 times to propagate the wavefunction for 1 a.u., with a corresponding number of vector additions, multiplications etc. In contrast, if we take an expansion in Chebychev polynomials with a spectral range of 50 a.u., a timestep of 1 a.u., and an accuracy of 10^-8 (about the order of the numerical errors, see figure 1), we need just 46 polynomials. That is, to propagate for 1 a.u., we need to apply the Hamiltonian about 50 times, requiring only one tenth of the computation time of the Runge-Kutta method.&lt;/p&gt;
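&lt;p&gt;The number of 46 polynomials can be checked quickly: for the Chebychev propagator, the expansion coefficients are essentially Bessel functions J_n(\Delta E \Delta t / 2), which decay superexponentially once the order n exceeds the argument. A small SciPy sketch, assuming this standard form of the coefficients:&lt;/p&gt;
&lt;div class="codehilite"&gt;&lt;pre&gt;import numpy as np
from scipy.special import jv

# Spectral range 50 a.u., time step 1 a.u., target accuracy 1e-8.
alpha = 50.0 * 1.0 / 2           # dE * dt / 2
n = np.arange(200)
coeff = np.abs(jv(n, alpha))     # magnitude of the expansion coefficients

# First order beyond alpha where the coefficients drop below the accuracy.
order = int(np.argmax((n &amp;gt; alpha) &amp;amp; (coeff &amp;lt; 1e-8)))
print("required expansion order:", order)   # close to the 46 quoted above
&lt;/pre&gt;&lt;/div&gt;
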
&lt;h6 id="disadvantages_2"&gt;Disadvantages&lt;/h6&gt;
&lt;p&gt;Polynomial methods in general, and the Chebychev expansion in particular, also have severe downsides.&lt;/p&gt;
&lt;p&gt;Most importantly, all polynomial methods fail for time-dependent Hamiltonians. The reason is that the simple Taylor series for the short-time propagator turns into a hideous expression with time-integrations of time-ordered operator products, which does not allow simple numerical schemes. There are so-called Krylov subspace methods, like the short iterative Lanczos method, which offer a way out here, but these are pretty complex on their own.&lt;/p&gt;
&lt;p&gt;The Chebychev method has some additional limitations. It works only if the Hamiltonian has either a real or a purely imaginary spectrum. This excludes important special cases like absorbing boundary conditions or open quantum systems. Also, the method requires the knowledge of exact lower and upper bounds to the spectrum, otherwise the solution may diverge.&lt;/p&gt;
&lt;h2 id="conclusion"&gt;Conclusion&lt;/h2&gt;
&lt;p&gt;As a general rule, you should always use a polynomial solver if your problem fits. In particular, when you need imaginary-time propagation to relax to the ground state or some thermal state, Chebychev polynomials are &lt;em&gt;the&lt;/em&gt; way to go. For the many cases where polynomial solvers cannot be used, you can try the split-operator method, which tends to be surprisingly efficient, or one of the ODE solvers, which are more convenient to use. For small systems in particular, the convenience of an adaptive Runge-Kutta method easily outweighs its rather poor performance.&lt;/p&gt;
&lt;p&gt;In any case, you should generally truncate your Hamiltonian. Truncation makes the propagation more stable and allows larger time steps. Plus, in many cases you only distort energy eigenstates that are poorly represented on the grid anyway.&lt;/p&gt;&lt;/div&gt;</summary></entry><entry><title>Convergence #1: Equally-spaced grids</title><link href="https://sourceforge.net/p/wavepacket/cpp/blog/2020/11/convergence-1-equally-spaced-grids/" rel="alternate"/><published>2020-11-24T21:35:39.361000Z</published><updated>2020-11-24T21:35:39.361000Z</updated><author><name>Ulf Lorenz</name><uri>https://sourceforge.net/u/ulflor/</uri></author><id>https://sourceforge.netbee095b98d8f6f0af9275aa33bb751ebd4a500ea</id><summary type="html">&lt;div class="markdown_content"&gt;&lt;p&gt;I recently got an email from a user that had tried out Wavepacket with some highly accurate data, and found that the results did not match the data. While writing my answer, it turned out to be a bit long, and I realized that I have accumulated quite some experience with respect to converging (wavepacket) calculations over the years. Also, these sorts of issues come up every now and again, and there have been similar questions in the past. So I wanted to shape this knowledge into some blog posts about convergence, and how to minimise errors in a wavepacket calculation.&lt;/p&gt;
&lt;p&gt;While writing this first blog post, it turned out that properly presenting the maths is rather involved. For this reason, in this post, I only discuss a very special problem: What happens if you have an equally-spaced grid and choose the grid parameters incorrectly? The advantage of equally-spaced grids is that there is a simple and intuitive answer to this question. In later posts I will consider other grids, where things become a lot less clear, and also time-step problems.&lt;/p&gt;
&lt;h3 id="equally-spaced-grids"&gt;Equally-spaced grids&lt;/h3&gt;
&lt;p&gt;An equally-spaced grid is determined by three parameters: The center of the grid x_0, the interval length L, and the number of points N.&lt;br/&gt;
The latter parameter can also be expressed by the grid spacing &lt;img alt="Delta x = L / N" src="./attachment/delta_x.png"/&gt;. The grid points are equally distributed in the interval &lt;img alt="(x_0 - L/2 , x_0 + L/2 - Delta x)" src="./attachment/real_range.png"/&gt;, including the two end points.&lt;/p&gt;
&lt;p&gt;If you sample any wave function on these grid points, you can do a discrete Fourier transformation to get Fourier components also on an equally-spaced grid in the momentum space. The wave vectors range from -k_max to +k_max with &lt;img alt="k_max \approx N / (2 Delta x)" src="./attachment/k_max.png"/&gt; (details depend on whether N is even or odd). For this reason, any wave function that is represented exactly at the grid points (the discrete variable representation DVR) can be equivalently written as a sum of discrete Fourier components (the finite basis representation FBR). For more details, see the &lt;a class="" href="https://sourceforge.net/p/wavepacket/wiki/Numerics.DVR/"&gt;wiki page on DVR methods&lt;/a&gt;.&lt;/p&gt;
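&lt;p&gt;In code, setting up such a pair of grids takes only a few lines (a plain NumPy sketch of the same construction, not the Wavepacket API):&lt;/p&gt;
&lt;div class="codehilite"&gt;&lt;pre&gt;import numpy as np

x0, L, N = 0.0, 20.0, 128
dx = L / N
x = x0 - L / 2 + dx * np.arange(N)        # points in (x0 - L/2, x0 + L/2 - dx)
k = 2 * np.pi * np.fft.fftfreq(N, d=dx)   # Fourier grid from -k_max to +k_max
&lt;/pre&gt;&lt;/div&gt;
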
&lt;p&gt;In particular, it is common nowadays to use the Fourier method in wavepacket calculations. Here, the momentum operator is applied by transforming into the Fourier space with a discrete Fourier transformation, multiplying each Fourier component with the appropriate momentum and transforming back. Using the Fourier method automatically implies periodic boundary conditions, since the underlying plane waves are periodic over the interval L.&lt;/p&gt;
&lt;p&gt;Now if we assume for a moment that you manage to choose the parameter x_0 wisely, we are left with the two remaining parameters L and N, which lead to different convergence problems.&lt;/p&gt;
&lt;h3 id="effect-of-too-small-interval"&gt;Effect of too small interval&lt;/h3&gt;
&lt;p&gt;If the interval L is chosen too narrow for the dynamics that take place, your wavepacket may travel outside of your grid. When this happens, the periodic boundary conditions make the wavepacket enter the grid again from the other side.&lt;/p&gt;
&lt;p&gt;For a simple example, let us take a free particle, give it some finite momentum and propagate. The Python code using Wavepacket can be found &lt;a class="" href="https://sourceforge.net/p/wavepacket/cpp/blog/2020/11/convergence-1-equally-spaced-grids/attachment/free.py"&gt;here&lt;/a&gt;, and the resulting video can be seen &lt;a class="" href="https://sourceforge.net/p/wavepacket/cpp/blog/2020/11/convergence-1-equally-spaced-grids/attachment/free.mpg"&gt;here&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;The initial state is a Gaussian with some positive momentum. As expected, it initially broadens and moves with positive velocity. However, as soon as it reaches the grid boundaries, we can see it entering the grid again thanks to the periodic boundary conditions. Also, towards the end of the video, the Gaussian has broadened so much that the right side (roughly corresponding to the components with the larger momentum) overtakes and interferes with the left side (roughly corresponding to components with smaller momentum).&lt;br/&gt;
The resulting oscillations are also typical artefacts if the grid is chosen too small.&lt;/p&gt;
&lt;p&gt;In practice, you often encounter another pattern. As an example, let us dissociate a molecule with a Morse potential parametrized for an OH bond. To have a clear wavepacket, the molecule gets a pretty strong kick initially. Again, you can find &lt;a class="" href="https://sourceforge.net/p/wavepacket/cpp/blog/2020/11/convergence-1-equally-spaced-grids/attachment/morse.py"&gt;the code&lt;/a&gt; and the resulting &lt;a class="" href="https://sourceforge.net/p/wavepacket/cpp/blog/2020/11/convergence-1-equally-spaced-grids/attachment/morse.mpg"&gt;video&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;What we see here is an initial state with enough energy to dissociate (i.e., a wavepacket travelling to infinity). However, as it reaches the end of the grid, due to the periodic boundary conditions, the wavepacket suddenly feels the influence of the Coulomb barrier at small internuclear distances. As a result, the wavepacket does not reenter the grid from the other side, but is reflected at the boundary of the grid. The barrier here is rather steep, a feature not well supported by an equally-spaced grid, as we discuss further below.&lt;/p&gt;
&lt;p&gt;A problem that you may sometimes encounter is that your grid is &lt;em&gt;always&lt;/em&gt; too small. Take the Morse oscillator example, and imagine a laser-driven dissociation of a molecule. The dissociation may take some time, so you may need to propagate for large times. But in these large times, the parts of your wavepacket that dissociated first may reach the boundary of your grid, no matter how large you choose it. What to do then?&lt;/p&gt;
&lt;h4 id="absorbing-boundary-conditions-to-the-rescue"&gt;Absorbing boundary conditions to the rescue&lt;/h4&gt;
&lt;p&gt;The solution to this problem is absorbing boundary conditions, typically taking the form of negative imaginary potentials (NIPs). These absorb the wavepacket at the locations where they are nonzero, before the wavepacket reaches the grid boundaries. There are, however, two major drawbacks:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;If the potential is too large, the NIP itself can modify the dynamics.&lt;br/&gt;
    Consider a case where the NIP at some grid point absorbs effectively the whole wavepacket in a small timestep. This effect is similar to an infinite potential wall, which would also lead to a density of zero at the grid point, and may thus reflect the wavepacket. A consequence is that you should choose the NIP to be as small and smooth as possible, and you may need to converge this as well.&lt;/li&gt;
&lt;li&gt;With the NIP term, the Hamiltonian is no longer self-adjoint.&lt;br/&gt;
    This has serious consequences, because some propagation schemes may no longer work. In particular, the expansion in Chebychev polynomials is only correct for a self-adjoint operator; otherwise you lose all the guarantees (error bounds) that this expansion offers. The workaround here is to move the NIP out of the Hamiltonian, and absorb the wavepacket after every propagation step. This is currently not implemented in the C++ version of Wavepacket, though, because it requires exponentiation of operators.&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;Let us take the Morse oscillator from the previous section as an example. We might say that an OH distance of 8 atomic units (4 Angstroem) is almost dissociated. Also, from studying the dynamics, it takes at least 100 atomic units (2.5 fs) for the shoulder to travel from 8 a.u. to about the end of the grid. For a constant potential, the absorption of the density is roughly &lt;img alt="A = exp(-2 * V_0 * t)" src="./attachment/absorption.png"/&gt; (the factor of two arises because the absorption acts on the wavefunction, but the density is the square of the wavefunction). If we want to absorb 99% of the density (A = 0.01), we get a value of V_0 of approximately 0.025 Hartree. In practice, we might want to use a smooth turn-on (e.g., a harmonic potential) that is zero at r = 8 a.u. and reaches about the intended value at r = 10 a.u., that is, in atomic units, &lt;img alt="V(x) = - j * 0.025 / 4.0 * (x - 8)^2 (x &amp;gt; 8 a.u.)" src="./attachment/nip.png"/&gt;.&lt;/p&gt;
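&lt;p&gt;The back-of-the-envelope numbers are easy to reproduce (a plain NumPy sketch; the grid and names are hypothetical):&lt;/p&gt;
&lt;div class="codehilite"&gt;&lt;pre&gt;import numpy as np

# Absorb 99% of the density (A = 0.01) over t = 100 a.u., with A = exp(-2 V0 t).
t, A = 100.0, 0.01
V0 = -np.log(A) / (2 * t)
print("V0 =", V0)                 # about 0.023, rounded to 0.025 in the text

# Smooth harmonic turn-on of the NIP between r = 8 a.u. and r = 10 a.u.
r = np.linspace(0.5, 12.0, 512)   # hypothetical radial grid
nip = -1j * V0 / 4.0 * np.where(r &amp;gt; 8.0, (r - 8.0) ** 2, 0.0)
&lt;/pre&gt;&lt;/div&gt;
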
&lt;p&gt;Applying this absorbing potential to the Morse oscillator demo gives the results shown in &lt;a class="" href="https://sourceforge.net/p/wavepacket/cpp/blog/2020/11/convergence-1-equally-spaced-grids/attachment/nip.mpg"&gt;this video&lt;/a&gt; (&lt;a class="" href="https://sourceforge.net/p/wavepacket/cpp/blog/2020/11/convergence-1-equally-spaced-grids/attachment/nip.py"&gt;the Python script&lt;/a&gt;). For clarity, the y-range has been reduced considerably. We see how the wavepacket is absorbed until only about two percent of the total density remains. You might want to increase the grid or play around with the NIP if this is not enough, but the principle works as the back-of-the-envelope calculation suggested.&lt;/p&gt;
&lt;h3 id="effect-of-too-few-grid-points"&gt;Effect of too few grid points&lt;/h3&gt;
&lt;p&gt;For a given interval length, the equally-spaced grid also describes an equally-spaced grid of points in the Fourier space. The grid in the Fourier space spans roughly the interval &lt;span&gt;[-k_max, k_max]&lt;/span&gt; with &lt;img alt="k_max \approx N / (2 Delta x)" src="./attachment/k_max.png"/&gt;. (Details differ for example between even and odd N.)&lt;/p&gt;
&lt;p&gt;If this grid is not sufficient to contain the wavepacket, you get artefacts known as aliasing. Roughly, the plane waves &lt;img alt="exp(ikx)" src="./attachment/exp_ikx.png"/&gt; and &lt;img alt="exp(i(k+2*k_max)x)" src="./attachment/exp_ik2x.png"/&gt; give exactly the same function values at the grid points in real space. But the discrete Fourier transformation always maps this oscillation to the lower wave number k. As a consequence, we have the same effect as the real-space periodic boundary conditions: If the wavepacket accumulates too large Fourier components k &amp;gt; k_max (k &amp;lt; -k_max), it "reenters" the grid at the other side of the Fourier plane.&lt;/p&gt;
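&lt;p&gt;You can verify this degeneracy directly on a small grid (a plain NumPy sketch):&lt;/p&gt;
&lt;div class="codehilite"&gt;&lt;pre&gt;import numpy as np

# Two plane waves whose wave numbers differ by the full Fourier width 2*k_max
# take exactly the same values on all grid points.
N, dx = 8, 1.0
x = dx * np.arange(N)
k = 2 * np.pi / (N * dx)            # a representable wave number
k_alias = k + 2 * np.pi / dx        # shifted by the full width of the k-grid
print(np.allclose(np.exp(1j * k * x), np.exp(1j * k_alias * x)))   # True
&lt;/pre&gt;&lt;/div&gt;
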
&lt;p&gt;As a simple example, let us consider a linear ramp. A Gaussian wavepacket on the linear ramp experiences some broadening, but also accelerates according to classical equations of motion. If the ramp is sufficiently steep, the wavepacket accumulates sufficient momentum to exceed the range in the Fourier plane.&lt;/p&gt;
&lt;p&gt;Let us look at an example, see the code &lt;a class="" href="https://sourceforge.net/p/wavepacket/cpp/blog/2020/11/convergence-1-equally-spaced-grids/attachment/ramp.py"&gt;here&lt;/a&gt;, and the real-space video &lt;a class="" href="https://sourceforge.net/p/wavepacket/cpp/blog/2020/11/convergence-1-equally-spaced-grids/attachment/ramp_real.mpg"&gt;here&lt;/a&gt;. We can see how the initial wavepacket accelerates towards the negative end of the grid. Shortly before reaching the end of the grid, however, something happens and the wavepacket travels back (upwards on the potential ramp!). The effect is easily understood if we look at the wavepacket &lt;a class="" href="https://sourceforge.net/p/wavepacket/cpp/blog/2020/11/convergence-1-equally-spaced-grids/attachment/ramp_fourier.mpg"&gt;in the Fourier space&lt;/a&gt;. We see how the wavepacket accumulates momentum until it reaches the end of the grid in Fourier space, and reenters on the side with the &lt;em&gt;positive&lt;/em&gt; momenta. After the whole Gaussian has traversed this boundary, we have a Gaussian that has a positive momentum and travels up the ramp. This slowly decreases the momentum, and the whole process starts anew.&lt;/p&gt;
&lt;p&gt;Hence, equally-spaced grids provide a very simple measure for convergence: eyeballing. Always make sure that your wavepacket stays sufficiently far away from the grid boundaries in Fourier space. Now what is "sufficiently far away"? In principle, the exact definition is a bit complicated and related to off-diagonal operator elements in Fourier space (see the &lt;a class="" href="https://sourceforge.net/p/wavepacket/wiki/Numerics.DVR"&gt;explanation of the DVR approximation&lt;/a&gt;). However, for most practical cases, "sufficiently far away" is the typical extension of the Fourier transform of the potential.&lt;/p&gt;
&lt;h3 id="takeaway-message"&gt;Takeaway message&lt;/h3&gt;
&lt;p&gt;Checking the quality of an equally-spaced grid requires observation of the dynamics both in real space and in Fourier space. This should always be done. In particular, if you find indications of parts of your wavepacket hitting the grid boundaries, you need to increase the interval or number of grid points. These boundary effects are, however, easy to understand and spot. For multi-dimensional problems, you should probably monitor the reduced density.&lt;/p&gt;
&lt;p&gt;If there are structural problems, and you cannot make the grid large enough, you can always try to use absorbing boundary conditions. In theory, you could also employ them in the Fourier space, although this seems rather uncommon.&lt;/p&gt;
&lt;p&gt;So what is my personal takeaway? &lt;br/&gt;
For one thing, writing up such an article is surprisingly hard work. On the other hand, it came back to me that the C++ version of Wavepacket has very poor support for exactly this use-case. The next version (that would be 0.3.3) should definitely feature some usability improvements for plotting and convergence monitoring.&lt;/p&gt;&lt;/div&gt;</summary></entry><entry><title>Building a Python interface to a CMake library</title><link href="https://sourceforge.net/p/wavepacket/cpp/blog/2020/10/building-a-python-interface-to-a-cmake-library/" rel="alternate"/><published>2020-10-08T20:42:23.007000Z</published><updated>2020-10-08T20:42:23.007000Z</updated><author><name>Ulf Lorenz</name><uri>https://sourceforge.net/u/ulflor/</uri></author><id>https://sourceforge.net798173005b659183c2459ee353499ff4899fd045</id><summary type="html">&lt;div class="markdown_content"&gt;&lt;p&gt;I have recently rewritten the way how the Python module for Wavepacket is built, and learned a lot about Python along the way. The learning curve was rather steep, and it felt a lot like learning the intricate details of Nuget and .Net assembly loading (this is not a compliment). However, while extension libraries for Python may not be ideally documented, the implementation is pretty robust and neat. So I thought, I'd write up what I tried and found out to assist others in a similar situation.&lt;/p&gt;
&lt;p&gt;Side note: I do appreciate feedback.&lt;/p&gt;
&lt;p&gt;To briefly summarize my initial situation: I have a C++ library, WavePacket, for which I want to provide an easy-to-install and easy-to-use Python interface using Pybind11.&lt;/p&gt;
&lt;h3 id="things-start-out-simple"&gt;Things start out simple ...&lt;/h3&gt;
&lt;p&gt;At the end of the day, Python's approach to loading modules, which may include libraries with native code, is pretty robust. Neglecting topics like shadowing where multiple modules have the same name, the algorithm works roughly like this:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Python has search paths where it looks for modules. You can extend these   paths either by setting the environment variable PYTHONPATH to a colon-separated list of directories, or within Python by adding a path string to the list in sys.path.&lt;/li&gt;
&lt;li&gt;When you import a module, Python looks in every search path directory. It searches for either a library with the module name ("lib&amp;lt;module&amp;gt;.so" with various variations under Unix, "&amp;lt;module&amp;gt;.pyd" under Windows) or a subdirectory with the module name.&lt;/li&gt;
&lt;li&gt;When it finds a library, it looks for an exported symbol "PyInit_&amp;lt;module&amp;gt;" with a certain signature, which then tells Python about the functions, classes etc. that the module exports. Such a library can be constructed relatively easily, for example with the help of pybind11.&lt;/li&gt;
&lt;li&gt;If a subdirectory with the name of the module exists, it has to contain a file __init__.py, which is evaluated on loading the module.&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;One interesting feature of __init__.py is that it can load more modules, also relative to the module's path.&lt;/p&gt;
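&lt;p&gt;In practice, making Python find such a module therefore only requires extending the search path; for example (the directory here is a hypothetical build path):&lt;/p&gt;
&lt;div class="codehilite"&gt;&lt;pre&gt;import sys

# Extend the module search path at run time; alternatively, set PYTHONPATH.
sys.path.append("/path/to/build/python")    # hypothetical build directory

import wavepacket    # resolved via wavepacket/__init__.py or a matching library
&lt;/pre&gt;&lt;/div&gt;
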
&lt;h3 id="but-they-can-get-pretty-complicated"&gt;...but they can get pretty complicated&lt;/h3&gt;
&lt;p&gt;On top of all this functionality, however, there is the Python package management system (pip), which provides a simple-to-use repository of common&lt;br/&gt;
packages and so on. This part I was actually much less interested in, but it was way easier to find documentation on how to create a Python package than on how the whole thing works. So my original approach was to bundle the Python interface in a package.&lt;/p&gt;
&lt;p&gt;Python used to have a big blob called setuptools that would magically do all the building, packaging, installation, distribution and so on for you. Using internet sources, I finally cobbled together the following solution:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;You build and install Wavepacket using CMake with whatever backend (e.g., make).&lt;/li&gt;
&lt;li&gt;As a side effect of the build, my CMake scripts output a setup.py file for the Python compilation in the build directory.&lt;/li&gt;
&lt;li&gt;You run this setup.py script to build and install the Python bindings.&lt;/li&gt;
&lt;li&gt;Afterwards, you can run the various Python integration tests in the build directory using CMake tools again (ctest).&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;The good thing was that this solution worked. However, it had some glaring deficits. On the one hand, if you wanted to test that your Python interface worked, you had to use CMake/make, then Python setuptools, then CMake again, which is ugly. What was worse, especially when compiling under Windows, there were two different build systems (CMake and setuptools) with different compiler flags, potentially different compilers, and downright different logic.&lt;/p&gt;
&lt;p&gt;So I decided to rewrite this part.&lt;/p&gt;
&lt;h3 id="scikit"&gt;Scikit&lt;/h3&gt;
&lt;p&gt;Fortunately, since my first try, the Python world has moved on. Basically, the moloch setuptools has withdrawn to provide only framework functionality for the packaging. Other tools have sprung up to provide the actual frontend that deals with, for example, the compilation of a Python extension library.&lt;/p&gt;
&lt;p&gt;An interesting approach is used by scikit-build. The basic idea here is that you already have a library that is built with CMake, and add the Python bindings there. Scikit-build then drives the CMake build, which includes the Python bindings, takes the artefacts that come out, and wraps them in a Python package. While this sounds already great, scikit-build goes further. It provides some CMake extensions (/functions) for building and linking against a given Python&lt;br/&gt;
installation, and, when calling CMake, sets the module path so that these extensions are easily found.&lt;/p&gt;
&lt;p&gt;With such nice tooling at my side, my idea was to remove the split between the Python bindings and the C++ library. If the CMake build was driven by scikit-build, I would directly compile the Python bindings together with the actual wavepacket code into a single library, and have it packaged and installed. So I would have a Python setup script on top of my CMake build that would drive the compilation if you want to get out a Python package.&lt;/p&gt;
&lt;p&gt;Alas, while trying, I discovered some stumbling blocks that eventually made me give up this approach:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;First of all, the documentation does not cover all the functionality, so when you have some edge cases, you are left with trying out things or reading the code.&lt;/li&gt;
&lt;li&gt;One of the issues I learned by trial and error is that scikit-build expects a very particular directory layout. When you want to install a module named "wavepacket", you must have certain data (such as an __init__.py file) in a directory called "wavepacket", otherwise things do not turn out well by default. In other words, follow the published examples to the letter.&lt;/li&gt;
&lt;li&gt;Another issue was the CMake extensions themselves. To use the full functionality, I had to compile the Wavepacket library as a module (CMake terminology for a library that is loaded at run-time). But CMake correctly prohibits linking against modules, so for example all my C++ unit tests would have to be disabled as well when compiling for Python. Also, CMake supports two different notations for declaring link dependencies, but you may only pick one per target. Scikit-build uses the legacy notation, but I prefer the modern one for boost dependencies, which then clashes.&lt;/li&gt;
&lt;li&gt;setuptools originally had some functionality to run unit tests, which is not present in scikit-build. While I fully understand and support the motivation (different concerns should be addressed by different tools), I need yet another tool (e.g., tox) besides CMake and scikit-build for driving my Python tests.&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;So while there was no real insurmountable obstacle, things became annoying enough that I started looking for alternatives. Mind you, if you take these points into account, scikit-build does look like a viable option.&lt;/p&gt;
&lt;p&gt;However, while playing around with scikit-build, I eventually understood enough of Python's internals to be able to build my own module by hand. Furthermore, pondering the problem, it occurred to me that I do not want to provide a Python package at all. Wavepacket has some pretty non-standard dependencies (namely the underlying tensor library) that are not easily included in a distribution. So it seems more honest &lt;em&gt;not&lt;/em&gt; to build a Python package, but only a module that can be loaded from Python.&lt;/p&gt;
&lt;h3 id="the-final-approach"&gt;The final approach&lt;/h3&gt;
&lt;p&gt;And that is where my final concept ended up: I use only a CMake build, which creates the Python bindings as a byproduct. The drawback is that this can&lt;br/&gt;
become tricky for edge cases (multiple Python installations and such), but CMake has facilities to help there.&lt;/p&gt;
&lt;p&gt;As a quick run-down: My top-level CMakeLists.txt offers an option to build the Python bindings and searches for Python&lt;/p&gt;
&lt;div class="codehilite"&gt;&lt;pre&gt;&lt;span&gt;&lt;/span&gt;&lt;span class="c"&gt;# ...&lt;/span&gt;
&lt;span class="nb"&gt;option&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s"&gt;WP_BUILD_PYTHON&lt;/span&gt; &lt;span class="s2"&gt;"Build the Python interface"&lt;/span&gt; &lt;span class="s"&gt;ON&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="c"&gt;# ...&lt;/span&gt;
&lt;span class="nb"&gt;if&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s"&gt;WP_BUILD_PYTHON&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
     &lt;span class="nb"&gt;find_package&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s"&gt;Python3&lt;/span&gt; &lt;span class="s"&gt;COMPONENTS&lt;/span&gt; &lt;span class="s"&gt;Interpreter&lt;/span&gt; &lt;span class="s"&gt;Development&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
     &lt;span class="nb"&gt;if&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s"&gt;NOT&lt;/span&gt; &lt;span class="s"&gt;Python3_FOUND&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
          &lt;span class="nb"&gt;message&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s"&gt;ERROR&lt;/span&gt; &lt;span class="s2"&gt;"Need Python3 interpreter and development environment to compile Python module"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
     &lt;span class="nb"&gt;endif&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;
&lt;span class="nb"&gt;endif&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;
&lt;span class="c"&gt;#...&lt;/span&gt;
&lt;span class="nb"&gt;add_subdirectory&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s"&gt;src&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="nb"&gt;if&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s"&gt;WP_BUILD_PYTHON&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
     &lt;span class="nb"&gt;add_subdirectory&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s"&gt;python&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="nb"&gt;endif&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;
&lt;/pre&gt;&lt;/div&gt;


&lt;p&gt;It then descends into two subdirectories: src, which builds the C++ library with the Wavepacket functionality, and python, which builds the Python bindings.&lt;/p&gt;
&lt;p&gt;The CMakeLists.txt under python/ does some general setup  (https://sourceforge.net/p/wavepacket/cpp/git/ci/master/tree/python/CMakeLists.txt):&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;It checks for the existence of certain Python modules. This can be done by importing the module with a Python command line and checking the return value&lt;br/&gt;
  of the interpreter.&lt;/li&gt;
&lt;li&gt;It queries the include directories for Pybind11. These are fortunately  accessible from Python.&lt;/li&gt;
&lt;li&gt;It adds a CMake test to run the various Python tests. This requires some fiddling, because we need to tell Python where to find our Python bindings once they are compiled, and I did not quite grasp the idea of Python's unittest package, so I execute it in the directory of the tests. Also, I copy the tests to the binary directory where everything is built, to avoid cluttering my source directory with residues.&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;Finally, under python/ I create another directory wavepacket/ that will hold my Python bindings. The directory contains the source code for the Python bindings, a CMakeLists.txt to drive the compilation and installation, and an __init__.py script.&lt;/p&gt;
&lt;p&gt;The compilation of the Python bindings is pretty standard now. The rest is a bit particular:&lt;/p&gt;
&lt;div class="codehilite"&gt;&lt;pre&gt;&lt;span&gt;&lt;/span&gt;&lt;span class="nb"&gt;configure_file&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;
     &lt;span class="s2"&gt;"${CMAKE_CURRENT_SOURCE_DIR}/__init__.py"&lt;/span&gt;
     &lt;span class="s2"&gt;"${CMAKE_CURRENT_BINARY_DIR}/__init__.py"&lt;/span&gt;
     &lt;span class="s"&gt;COPYONLY&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;

&lt;span class="c"&gt;# Installation rules&lt;/span&gt;
&lt;span class="c"&gt;# Note that we need to link against the wavepacket library, so we need to set an rpath&lt;/span&gt;
&lt;span class="nb"&gt;set&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s"&gt;pythonInstallDir&lt;/span&gt; &lt;span class="o"&gt;${&lt;/span&gt;&lt;span class="nv"&gt;CMAKE_INSTALL_LIBDIR&lt;/span&gt;&lt;span class="o"&gt;}&lt;/span&gt;&lt;span class="s"&gt;/wavepacket_python/wavepacket&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="nb"&gt;set_target_properties&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s"&gt;wavepacket_python&lt;/span&gt; &lt;span class="s"&gt;PROPERTIES&lt;/span&gt; &lt;span class="s"&gt;INSTALL_RPATH&lt;/span&gt; &lt;span class="err"&gt;$&lt;/span&gt;&lt;span class="s"&gt;ORIGIN/../..&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;

&lt;span class="nb"&gt;install&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s"&gt;TARGETS&lt;/span&gt; &lt;span class="s"&gt;wavepacket_python&lt;/span&gt;
     &lt;span class="s"&gt;LIBRARY&lt;/span&gt; &lt;span class="s"&gt;DESTINATION&lt;/span&gt; &lt;span class="o"&gt;${&lt;/span&gt;&lt;span class="nv"&gt;pythonInstallDir&lt;/span&gt;&lt;span class="o"&gt;}&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="nb"&gt;install&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s"&gt;FILES&lt;/span&gt; &lt;span class="s"&gt;__init__.py&lt;/span&gt;
     &lt;span class="s"&gt;DESTINATION&lt;/span&gt; &lt;span class="o"&gt;${&lt;/span&gt;&lt;span class="nv"&gt;pythonInstallDir&lt;/span&gt;&lt;span class="o"&gt;}&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;/pre&gt;&lt;/div&gt;


&lt;p&gt;The first configure_file copies the __init__.py to the binary directory. As a consequence, once I build the Python bindings, I only need to add&lt;br/&gt;
${CMAKE_BINARY_DIR}/python to PYTHONPATH and the Python installations can find my wavepacket package just as if it had been installed. This is useful for running the Python tests and demos directly in the binary directory before installation.&lt;/p&gt;
&lt;p&gt;When installing the libraries under ${Install}/lib, I decided to put the Python files under ${Install}/lib/wavepacket_python/wavepacket. This way, you always add ${Install}/lib/wavepacket_python to your Python path and can access the "wavepacket" module. Into this directory, I install my Python binding library&lt;br/&gt;
and the __init__.py file.&lt;/p&gt;
&lt;p&gt;One last thing to be taken into account is that my Python binding library needs to know the location of the C++ library. This I solved by adding a relative ("origin") rpath that tells the loader where to look for other libraries; this feature should be supported on most modern Unix variants. A Windows build would need some special processing, but building Wavepacket under Windows is nothing a normal user would want anyway.&lt;/p&gt;
&lt;p&gt;Now the __init__.py file is pretty straight-forward:&lt;/p&gt;
&lt;div class="codehilite"&gt;&lt;pre&gt;&lt;span&gt;&lt;/span&gt;&lt;span class="kn"&gt;from&lt;/span&gt; &lt;span class="nn"&gt;.wavepacket_python&lt;/span&gt; &lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="o"&gt;*&lt;/span&gt;
&lt;/pre&gt;&lt;/div&gt;


&lt;p&gt;It just directs Python to the wavepacket_python module (i.e., library) in the current directory (hence the dot) and imports all symbols from there. This nicely decouples the name of the library with the Python bindings (libwavepacket_python.so) from the name of the Python module (encoded in the directory name).&lt;/p&gt;
&lt;p&gt;While this solution is still quite some way from perfect, it solves my initial problems. I can now compile and install the C++ library and the Python bindings with a single CMake call, and also run all the tests using only CMake.&lt;/p&gt;&lt;/div&gt;</summary></entry><entry><title>WavePacket 0.2.4 released</title><link href="https://sourceforge.net/p/wavepacket/cpp/blog/2019/04/wavepacket-024-released/" rel="alternate"/><published>2019-04-28T15:33:49.943000Z</published><updated>2019-04-28T15:33:49.943000Z</updated><author><name>Ulf Lorenz</name><uri>https://sourceforge.net/u/ulflor/</uri></author><id>https://sourceforge.net83bc96291fafb978406671a2df4abb81cea6292e</id><summary type="html">&lt;div class="markdown_content"&gt;&lt;p&gt;I just released the next C++-version of WavePacket. Some excerpts from the &lt;a class="" href="https://sourceforge.net/p/wavepacket/cpp/git/ci/0.2.4/tree/NEWS"&gt;NEWS file&lt;/a&gt;:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;renamed some key classes to disentangle the concept "operator" from "expression/equation/Liouvillian". The new naming system is hopefully much more intuitive. See &lt;a class="" href="https://sourceforge.net/p/wavepacket/cpp/blog/2019/03/naming-the-components-of-equations/"&gt;this blog post&lt;/a&gt; for the explanations.&lt;/li&gt;
&lt;li&gt;The Python interface can be installed as a regular Python module. There are still some loose ends here and there, but it works.&lt;/li&gt;
&lt;li&gt;added a couple new demos&lt;ul&gt;
&lt;li&gt;highlighted demo that explains how to setup thermal states (propagation in imaginary time, random wave functions etc.). See &lt;a class="" href="http://wavepacket.sourceforge.net/cpp-doc/0.2.4/demo_thermal.xhtml" rel="nofollow"&gt;here&lt;/a&gt;.&lt;/li&gt;
&lt;li&gt;demo that uses multiple electronic states (MolElectronic/OH/1 and 2)&lt;/li&gt;
&lt;li&gt;unfinished demo that sets up a Lindblad relaxation (MolVibration/OH/3)&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;improved several PropagationObservers&lt;ul&gt;
&lt;li&gt;LoggingObserver deals correctly with multiple electronic states&lt;/li&gt;
&lt;li&gt;Plot1DObserver is way easier to setup, and can plot output for multiple electronic states&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;added a projection operator&lt;/li&gt;
&lt;li&gt;some other minor fixes and improvements&lt;/li&gt;
&lt;/ul&gt;&lt;/div&gt;</summary></entry></feed>