// physics · numerical-physics
Jan. 2026 · 10 min read

Efficiently solving the 2D Schrödinger equation with a split-step FFT method.

Split-Step Fourier for the Schrödinger Equation

Introduction

In the life of a young physicist, quantum mechanics, and especially the Schrödinger equation, often triggers a fascinating, almost masochistic curiosity during our teenage years, one from which we never truly recover...

During my studies, I had the opportunity to implement the Crank–Nicolson algorithm in 1D, which solves the Schrödinger equation using finite differences and a semi-implicit scheme (ref). 1D is cool, but not cool enough; unfortunately, I was limited by my computational resources when trying to extend it to 2D.

More recently, however, I discovered the Split-Step Fourier method, which saves a lot of resources and is surprisingly easy to implement. In just a few lines of code, we can visualize the famous double-slit experiment, and the younger Joseph (me), who once struggled to visualize QM, would probably be very proud!


Problem

If you are still interested in this article, I assume you are already familiar with quantum mechanics and the Schrödinger equation we aim to solve:

i\hbar \frac{\partial}{\partial t}\, |\psi(t)\rangle = H(t)\, |\psi(t)\rangle

Where, as usual, the Hamiltonian is the sum of kinetic and potential energy:

\begin{aligned}
H &= T + V \\
T &= \frac{P^2}{2m} = -\frac{\hbar^2}{2m}\, \Delta \\
P &= -i\hbar \nabla
\end{aligned}

In the continuum, the kinetic energy is a differential operator. A naive approach would be to solve this PDE using finite differences, but this is computationally expensive in 2D and requires careful treatment to preserve unitarity (e.g., Crank–Nicolson).

The Split-Step Fourier method provides an efficient alternative by working with the time evolution operator, which comes from the exact solution:

\begin{aligned}
|\psi(t)\rangle &= U(t, t')\, |\psi(t')\rangle \\
U(t, t') &= \mathcal{T} \exp\!\left(-\frac{i}{\hbar} \int_{t'}^{t} H(\tau)\, d\tau\right)
\end{aligned}

where $\mathcal{T}$ denotes time-ordering. For small time steps $\Delta t$, or when the potential varies slowly, we can approximate (this is even exact when the Hamiltonian does not depend on time):

U(t+\Delta t, t) \approx \exp\!\left(-\frac{i\, \Delta t}{\hbar} H(t)\right)

The Key Idea: Splitting the Exponential

Directly exponentiating $H = T + V$ is challenging because $T$ and $V$ are not simultaneously diagonal. However:

  • $V$ is diagonal in position space.
  • $T$ is diagonal in momentum space.

Mathematically:

\begin{aligned}
\langle x | f(X) | \psi \rangle &= f(x)\, \psi(x) \\
\langle p | f(P) | \psi \rangle &= f(p)\, \tilde{\psi}(p)
\end{aligned}

where $\tilde{\psi}$ is the Fourier transform of $\psi$.
This means we can efficiently compute exponentials in the basis where the operator is diagonal. Computationally, the FFT allows us to switch between these bases at low cost; this is the heart of the algorithm's efficiency.
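To make this concrete, here is a minimal numpy sketch (grid size and test function are arbitrary choices of ours) that applies a function of the momentum operator by multiplying by $ik$ in Fourier space, here recovering the spatial derivative of a Gaussian:

```python
import numpy as np

# Spectral derivative: go to momentum space, multiply by ik, come back.
N, L = 256, 20.0
dx = L / N
x = (np.arange(N) - N // 2) * dx
k = 2 * np.pi * np.fft.fftfreq(N, dx)      # momentum grid matching the FFT layout

psi = np.exp(-x**2)                        # well-resolved test wavefunction
dpsi_fft = np.fft.ifft(1j * k * np.fft.fft(psi)).real
dpsi_exact = -2 * x * np.exp(-x**2)        # analytic d/dx for comparison

err = np.max(np.abs(dpsi_fft - dpsi_exact))
print(err)                                 # near machine precision
```

The same pattern with $e^{-i\Delta t\, T/\hbar}$ in place of $ik$ is exactly the kinetic step of the algorithm below.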


Exponential Magic: Trotter and Strang Splitting

Using the Baker–Campbell–Hausdorff formula, we can approximate $e^{\epsilon(A+B)}$ by splitting:

  • First-order (Lie–Trotter)
e^{\epsilon(A+B)} = e^{\epsilon A} e^{\epsilon B} + \mathcal{O}(\epsilon^2)
  • Second-order symmetric (Strang)
e^{\epsilon(A+B)} = e^{\epsilon A/2} e^{\epsilon B} e^{\epsilon A/2} + \mathcal{O}(\epsilon^3)
  • Higher-order generalizations exist (Suzuki–Trotter splittings).
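These error orders are easy to verify numerically. In the sketch below (our toy setup: two random non-commuting $4\times 4$ symmetric matrices standing in for $T$ and $V$, with the matrix exponential done by eigendecomposition), halving $\epsilon$ should divide the Trotter error by about $4$ and the Strang error by about $8$:

```python
import numpy as np

def expm_sym(M, t):
    """exp(t*M) for a symmetric matrix M, via eigendecomposition."""
    w, U = np.linalg.eigh(M)
    return (U * np.exp(t * w)) @ U.T

rng = np.random.default_rng(0)
A = rng.normal(size=(4, 4)); A = (A + A.T) / 2
B = rng.normal(size=(4, 4)); B = (B + B.T) / 2

errs = {}
for eps in (0.1, 0.05):
    exact = expm_sym(A + B, eps)
    trotter = expm_sym(A, eps) @ expm_sym(B, eps)
    strang = expm_sym(A, eps / 2) @ expm_sym(B, eps) @ expm_sym(A, eps / 2)
    errs[eps] = (np.linalg.norm(trotter - exact),
                 np.linalg.norm(strang - exact))

# Error ratios when eps is halved: ~4 for Trotter, ~8 for Strang
print(errs[0.1][0] / errs[0.05][0], errs[0.1][1] / errs[0.05][1])
```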

Applying Strang splitting to the time evolution operator:

U(t+\Delta t, t) \approx e^{-i \Delta t V/(2\hbar)} \, e^{-i \Delta t T/\hbar} \, e^{-i \Delta t V/(2\hbar)}

Because $V$ and $T$ are diagonal in the position and momentum bases respectively:

\psi(t+\Delta t) = e^{-i \Delta t V/(2\hbar)} * \mathcal{F}^{-1}\Big( e^{-i \Delta t T/\hbar} * \mathcal{F}\big( e^{-i \Delta t V/(2\hbar)} * \psi(t) \big) \Big)

where $*$ denotes pointwise multiplication and $\mathcal{F}$ / $\mathcal{F}^{-1}$ are the spatial Fourier transform and its inverse.


Discretization

To implement on a computer:

  • Discretize space: a lattice with $N_x \times N_y$ points and spacing $\Delta x$.
  • Discretize time: $N_t$ steps of size $\Delta t$.
  • Represent the wavefunction as a finite 2D complex array and compute the FFT efficiently.

The FFT reduces the cost of the basis transformation from $\mathcal{O}(N^2)$ to $\mathcal{O}(N \log N)$, which is a huge speedup in 2D or 3D simulations.
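As a sanity check that the FFT is nothing more than this basis change computed fast, we can compare `np.fft.fft2` against the explicit (dense, $\mathcal{O}(N^2)$ per axis) DFT matrix on a small grid:

```python
import numpy as np

N = 8
n = np.arange(N)
F = np.exp(-2j * np.pi * np.outer(n, n) / N)   # dense DFT matrix
psi = np.random.default_rng(1).normal(size=(N, N))

slow = F @ psi @ F.T                           # transform both axes explicitly
fast = np.fft.fft2(psi)                        # same result in O(N^2 log N)
print(np.allclose(slow, fast))
```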


Summary of the Algorithm

  1. Discretize the spatial lattice $(x, y)$ and time.
  2. Apply half-step potential phase shift in position space (diagonal).
  3. Free propagation (kinetic step):
    • FFT to momentum space
    • Apply full-step kinetic phase shift (diagonal)
    • IFFT back to position space
  4. Reapply half-step potential phase shift in position space.

Python Implementation

Initialize position lattice and potential:

x = np.arange(0, dx*Nx, dx)
y = np.arange(0, dy*Ny, dy)
X, Y = np.meshgrid(x, y, indexing='ij')
V = potential(X, Y)

Initialize momentum lattice and kinetic energy:

kx = 2*np.pi*np.fft.fftfreq(Nx,dx)
ky = 2*np.pi*np.fft.fftfreq(Ny,dy)
Kx,Ky = np.meshgrid(kx, ky, indexing='ij')
K2 = Kx**2+Ky**2
T = hbar**2*K2/(2*m)

Evaluate evolution operators:

U_V = np.exp(-1j * V * dt/(2*hbar))
U_T = np.exp(-1j * T * dt/hbar)

Perform one time step:

psi = U_V * np.fft.ifft2( U_T * np.fft.fft2( U_V * psi) )
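Putting the pieces together, a minimal end-to-end run might look like this (all parameters are illustrative; $\hbar = m = 1$, free particle):

```python
import numpy as np

hbar = m = 1.0
Nx = Ny = 128
dx = dy = 0.1
dt = 1e-3
Nt = 200

x = dx * np.arange(Nx)                     # integer arange avoids float-stop issues
y = dy * np.arange(Ny)
X, Y = np.meshgrid(x, y, indexing='ij')

kx = 2 * np.pi * np.fft.fftfreq(Nx, dx)
ky = 2 * np.pi * np.fft.fftfreq(Ny, dy)
Kx, Ky = np.meshgrid(kx, ky, indexing='ij')
T = hbar**2 * (Kx**2 + Ky**2) / (2 * m)

V = np.zeros_like(X)                       # free particle for this demo
U_V = np.exp(-1j * V * dt / (2 * hbar))
U_T = np.exp(-1j * T * dt / hbar)

# Gaussian packet with a momentum kick, centred on the grid
x0, y0, sigma, k0 = x.mean(), y.mean(), 0.5, 5.0
psi = np.exp(-((X - x0)**2 + (Y - y0)**2) / (4 * sigma**2)) * np.exp(1j * k0 * X)
psi /= np.sqrt(np.sum(np.abs(psi)**2) * dx * dy)   # normalise

for _ in range(Nt):
    psi = U_V * np.fft.ifft2(U_T * np.fft.fft2(U_V * psi))

norm = np.sum(np.abs(psi)**2) * dx * dy
print(norm)                                # stays ~1: the propagator is unitary
```

Note that unlike finite-difference schemes, unitarity here comes for free: every factor has unit modulus and the FFT pair is an exact inverse.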

This is easy to generalise to 3D with the NumPy function:

numpy.fft.fftn

Visualization

The phase is represented using a $2\pi$-periodic colormap, while the density shows a scaled version of the probability distribution to enhance visibility. The potential can be drawn in the background.
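One way to realise this (a sketch of ours, not necessarily the exact rendering used for the figures): map the phase to a fully saturated hue and the gamma-scaled density to brightness, in pure numpy:

```python
import numpy as np

def psi_to_rgb(psi, gamma=0.5):
    """Phase -> 2π-periodic hue, |psi|^2 -> brightness (gamma boosts faint regions)."""
    hue = (np.angle(psi) + np.pi) / (2 * np.pi)            # phase mapped to [0, 1]
    value = (np.abs(psi)**2 / np.abs(psi).max()**2)**gamma
    # standard HSV wheel at full saturation, done by hand
    h6 = hue * 6.0
    r = np.clip(np.abs(h6 - 3) - 1, 0, 1)
    g = np.clip(2 - np.abs(h6 - 2), 0, 1)
    b = np.clip(2 - np.abs(h6 - 4), 0, 1)
    return value[..., None] * np.stack([r, g, b], axis=-1)

rgb = psi_to_rgb(np.exp(1j * np.linspace(-np.pi, np.pi, 16)).reshape(4, 4))
```

The resulting array can be passed directly to `plt.imshow`.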


Resolution, Stability, Error and Complexity

Spatial Resolution:

To resolve the wavefunction properly:

\Delta x < \frac{\sigma}{4}, \quad \Delta x < \frac{\pi \hbar}{p_{\max}}

where $\sigma$ is the typical width of the wavefunction and $p_{\max}$ is the maximum momentum.

Time Step / Stability:

To ensure stable evolution:

\Delta t < \frac{m\, \Delta x^2}{\hbar}, \quad \Delta t \ll \frac{\hbar}{V_{\max}}

The first condition controls phase rotation due to the kinetic term, while the second controls phase rotation due to the potential term.
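These bounds are cheap to check before launching a run. A small helper (the names and the factor-of-ten reading of "$\ll$" are our choices):

```python
import numpy as np

def check_resolution(dx, dt, sigma, p_max, V_max, m=1.0, hbar=1.0):
    """Return the four resolution/stability conditions as labelled booleans."""
    return {
        'dx resolves packet width':  dx < sigma / 4,
        'dx resolves max momentum':  dx < np.pi * hbar / p_max,
        'dt bounds kinetic phase':   dt < m * dx**2 / hbar,
        'dt bounds potential phase': dt < 0.1 * hbar / V_max,  # "<<" as x10 margin
    }

checks = check_resolution(dx=0.1, dt=1e-3, sigma=0.5, p_max=5.0, V_max=10.0)
print(checks)
```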

Error:

  • Local error per step: $\mathcal{O}(\Delta t^3)$
  • Global error over total time $T = N_t \Delta t$:
\mathcal{O}(N_t\, \Delta t^3) = \mathcal{O}(T\, \Delta t^2)

Computational Complexity:

For a lattice of size $N_x \times N_y$ and $N_t$ time steps:

\mathcal{O}(N_t\, N_x N_y \log(N_x N_y))

Results

To illustrate the method we build two interesting potentials: the double-slit experiment and proton diffusion. In the first, we simply create walls using a high potential energy; in the second, we use a $1/(|r|+\epsilon)$ potential. The initial state is a simple Gaussian wave packet with a given momentum.
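The two potentials can be sketched as follows (the wall geometry, heights, and soft-core parameters here are illustrative guesses, not the exact values used for the figures):

```python
import numpy as np

x = 0.1 * np.arange(128)
X, Y = np.meshgrid(x, x, indexing='ij')

def double_slit_potential(X, Y, V0=1e3, x_wall=6.0, width=0.2,
                          slit_sep=1.0, slit_w=0.3):
    """Wall of height V0 at x = x_wall, pierced by two slits around the centre."""
    V = np.zeros_like(X)
    in_wall = np.abs(X - x_wall) < width / 2
    yc = Y - Y.mean()
    in_slit = (np.abs(yc - slit_sep / 2) < slit_w / 2) | \
              (np.abs(yc + slit_sep / 2) < slit_w / 2)
    V[in_wall & ~in_slit] = V0
    return V

def coulomb_potential(X, Y, q=1.0, eps=0.1, x0=6.0, y0=6.0):
    """Soft-core 1/(|r| + eps) potential: eps regularises the r = 0 singularity."""
    r = np.sqrt((X - x0)**2 + (Y - y0)**2)
    return q / (r + eps)

V_slit = double_slit_potential(X, Y)
V_coul = coulomb_potential(X, Y)
```

The softening parameter $\epsilon$ keeps the potential finite on the grid, bounding the phase rotation per step as required by the $\Delta t \ll \hbar / V_{\max}$ condition.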

Double-Slit Simulation.
Proton Diffusion Simulation.

Outlook

  • Go to 3D, but this requires a good visualisation framework. Switch to Blender? Could be cool for a video.
  • Some fancy potentials to illustrate other phenomena
  • 2 electrons? Does it work? How to handle indistinguishability?
  • As always, OPTIMIZATIONS !
  • ...