Bachelor projects
Are you interested in theoretical physics and the fascinating realm of subatomic particles? Are you intrigued by the connection between the quantum world of the microcosm and the fate of our Universe? Then you should consider applying for a bachelor thesis project in our group.

We usually offer one or two bachelor thesis projects every year. To get the most out of the project, you should enter it with a genuine interest in theoretical physics and some confidence in the mathematical toolbox you have acquired. Most of our projects have a computational aspect and consequently offer an excellent opportunity to advance your skills in computational scientific methodology.

We always aim to construct projects that are directly related to ongoing research efforts. In fact, we take great pride in the fact that several projects have led all the way to a scientific publication. This is remarkable given that bachelor projects are performed by third-year undergraduate students. See examples below (bachelor student authors in bold face):

The most recent listing of bachelor projects is available at:
(information mainly in Swedish)

Examples of previous projects are described below:

Gaussian processes for emulating chiral effective field theory describing few-nucleon systems (spring 2017)

[Figure: Random functions sampled from a posterior predictive distribution with three observed points; also depicted are the mean and double variance of the Gaussian process.]

Gaussian processes (GPs) can be used for statistical regression, i.e. to predict new data given a set of observed data. In this context, we construct GPs to emulate the calculation of low-energy proton-neutron scattering cross sections and the binding energy of the helium-4 nucleus. The GP regression uses so-called kernel functions to approximate the covariance between observed and unknown data points. The emulation is done in an attempt to reduce the large computational cost associated with exact numerical simulation of the observables.

The underlying physical theory of the simulation is chiral effective field theory (chEFT). This theory enables a perturbative description of low-energy nuclear forces and is governed by a set of low-energy constants that define the terms in the effective Lagrangian. We use the research code nsopt to simulate selected observables with chEFT. The GPs used in this thesis are implemented using the Python framework GPy.

To measure the performance of a GP we define an error measure, called the model error, by comparing exact simulations to emulated predictions. We also study the time and memory consumption of GPs. Since the choice of input training data affects the predictive accuracy of the resulting GP, we examined different sampling methods with varying amounts of data.

We found that GPs can serve as an effective and versatile approach for emulating the examined observables. After the initial high computational cost of training, making predictions with GPs is quick. When trained using the right methods, they can also achieve high accuracy. We concluded that the Matérn 5/2 and RBF kernels perform best for the observables studied. When sampling input points in high dimensions, Latin hypercube sampling is shown to be a good method. In general, with a multidimensional input space, it is a good choice to use a kernel function with different sensitivities in different directions. When the data span many orders of magnitude, taking the logarithm of the data before training also improves the GP performance. GPs do not appear to be a suitable method for extrapolating from a given training set, but they perform well for interpolation.
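
The regression idea described above can be sketched in a few lines of plain numpy rather than GPy: a minimal zero-mean GP with an RBF kernel, trained on samples of a cheap stand-in function. The target function, number of training points, and hyperparameters below are illustrative assumptions, not the thesis setup:

```python
import numpy as np

def rbf_kernel(x1, x2, length_scale=1.0, variance=1.0):
    """Squared-exponential (RBF) covariance between two 1-D point sets."""
    d = x1[:, None] - x2[None, :]
    return variance * np.exp(-0.5 * (d / length_scale) ** 2)

def gp_predict(x_train, y_train, x_test, noise=1e-8):
    """Posterior mean and variance of a zero-mean GP with an RBF kernel."""
    K = rbf_kernel(x_train, x_train) + noise * np.eye(len(x_train))
    Ks = rbf_kernel(x_test, x_train)
    mean = Ks @ np.linalg.solve(K, y_train)          # posterior mean
    cov = rbf_kernel(x_test, x_test) - Ks @ np.linalg.solve(K, Ks.T)
    return mean, np.diag(cov)                        # pointwise variance

# Emulate an "expensive" observable (here just sin) from nine observed points
x_obs = np.linspace(0.0, 2.0 * np.pi, 9)
y_obs = np.sin(x_obs)
x_new = np.linspace(0.0, 2.0 * np.pi, 50)
mean, var = gp_predict(x_obs, y_obs, x_new)
```

Swapping in the Matérn 5/2 kernel mentioned above would only replace rbf_kernel. The training cost is the O(n³) linear solve; each subsequent prediction is cheap, which is exactly the emulation payoff described in the abstract.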

Supervisors: Christian Forssén and Andreas Ekström

by Martin Eriksson, Rikard Helgegren, Daniel Karlsson, Isak Larsén, Erik Wallin, 2017

Annihilation of self-interacting dark matter (spring 2017)

Supervisor: Riccardo Catena

by Magdalena Eriksson, Rikard Wadman, Susanna Larsson, Björn Eurenius, 2017

Annihilation of self-interacting dark matter (spring 2017)

Supervisor: Riccardo Catena

by Sebastian Bergström, Emelie Olsson, Andreas Unger, Michael Högberg, 2017

Chiral effective field theory with machine learning (spring 2016)

[Figure: Joint PDF for A=2,4 observables from the NLO interaction, obtained with machine learning.]

Machine learning is a method to develop computational algorithms for making predictions based on a limited set of observations or data. By training on a well-selected set of data points it is in principle possible to emulate the underlying processes and make reliable predictions. In this thesis we explore the possibility of replacing computationally expensive solutions of the Schrödinger equation for atomic nuclei with a so-called Gaussian process (GP) that we train on a selected set of exact solutions. A GP represents a continuous distribution of functions defined by a mean and a covariance function. These processes are often used in machine learning since they can be made to emulate a wide range of data by choosing a suitable covariance function.

This thesis presents a pilot study on how to use GPs to emulate the calculation of nuclear observables at low energies. The governing theory of the strong interaction, quantum chromodynamics, becomes non-perturbative at such energy scales. Therefore an effective field theory, called chiral effective field theory (chEFT), is used to describe the nucleon-nucleon interactions. The training points are selected using different sampling methods, and the exact solutions for these points are calculated using the research code nsopt. After training at these points, GPs are used to mimic the behavior of nsopt for a new set of points called prediction points. In this way, results are generated for various cross sections for two-nucleon scattering and bound-state observables for light nuclei.

We find that it is possible to reach a small relative error (sub-percent) between the simulator, i.e. nsopt, and the emulator, i.e. the GP, using relatively few training points. Although there seems to be no obvious obstacle to taking this method further, e.g. emulating heavier nuclei, we discuss some areas that need more critical attention. For example, some observables were difficult to emulate with the current choice of covariance function; a more thorough study of different covariance functions is therefore needed.
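
One common way to select training points of the kind discussed above is Latin hypercube sampling: each dimension is divided into as many bins as there are samples, and exactly one point lands in each bin per dimension. A minimal implementation (illustrative, not the thesis code):

```python
import numpy as np

def latin_hypercube(n_samples, n_dims, rng):
    """One design point per row; each dimension stratified into n_samples bins."""
    # One point in each of the n_samples equal-width bins, per dimension
    u = (rng.random((n_samples, n_dims)) + np.arange(n_samples)[:, None]) / n_samples
    # Independently shuffle the bin assignment in every dimension
    for d in range(n_dims):
        u[:, d] = u[rng.permutation(n_samples), d]
    return u

rng = np.random.default_rng(0)
pts = latin_hypercube(8, 2, rng)   # 8 design points in 2 dimensions, in [0, 1)^2
```

Compared with a plain uniform random design, every one-dimensional projection of the sample is guaranteed to cover its range evenly, which matters when the emulated observable depends strongly on only a few of the input dimensions.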

Full text: Chalmers, 2016 (CPL ID: 241791)
Supervisors: Christian Forssén and Andreas Ekström

by Johannes Aspman, Emil Ejbyfeldt, Anton Kollmats, Maximilian Leyman, 2016

Uncertainty Quantification in Chiral Effective Field Theory (spring 2014)

[Figure: One-pion exchange in chiral effective field theory.]

The nuclear force is a residual interaction between bound states of quarks and gluons. The most fundamental description of the underlying strong interaction is given by quantum chromodynamics (QCD), which becomes nonperturbative at low energies. A description of low-energy nuclear physics from QCD is currently not feasible. Instead, one can employ the inherent separation of scales between low- and high-energy phenomena and construct a chiral effective field theory (EFT). The chiral EFT contains unknown coupling coefficients that absorb unresolved short-distance physics and that can be constrained by a non-linear least-squares fit of theoretical observables to data from scattering experiments.

In this thesis the uncertainties of the coupling coefficients are calculated from the Hessian of the goodness-of-fit measure χ². The Hessian is computed by implementing automatic differentiation (AD) in an already existing computer model, with the help of the Rapsodia AD tool. Only neutron-proton interactions are investigated, and the chiral EFT is studied at leading order (LO) and next-to-leading order (NLO). In addition, the correlations between the coupling coefficients are calculated, and the statistical uncertainties are propagated to the ground-state energy of the deuteron.

At LO, the relative uncertainties of the coupling coefficients are 0.01%, whereas most of the corresponding uncertainties at NLO are 1%. For the deuteron, the relative uncertainties in the binding energies are 0.2% and 0.5% at LO and NLO, respectively. Moreover, there seem to be no obvious obstacles that prevent the extension of this method to the proton-proton interaction as well as higher chiral orders, e.g. NNLO. Finally, the propagation of uncertainties to heavier many-body systems is a possible further application.
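
The error-propagation chain described above (χ² fit → Hessian → covariance → observable uncertainty) can be sketched for a toy linear model. All data and numbers below are illustrative assumptions, and the Hessian is obtained by finite differences rather than the Rapsodia AD tool used in the thesis:

```python
import numpy as np

# Toy "theory" y = c0 + c1*x fitted to pseudo-data with known errors sigma
rng = np.random.default_rng(1)
x = np.linspace(0.0, 1.0, 20)
sigma = 0.05
y_data = 1.0 + 2.0 * x + sigma * rng.standard_normal(x.size)

def chi2(c):
    """Goodness-of-fit measure chi^2 for coefficients c = (c0, c1)."""
    return np.sum(((y_data - (c[0] + c[1] * x)) / sigma) ** 2)

# Best-fit coefficients from weighted linear least squares
A = np.vstack([np.ones_like(x), x]).T / sigma
c_best, *_ = np.linalg.lstsq(A, y_data / sigma, rcond=None)

def hessian(f, c, h=1e-5):
    """Central finite-difference Hessian of f at c."""
    n = len(c)
    H = np.zeros((n, n))
    for i in range(n):
        for j in range(n):
            def f_at(di, dj):
                cc = c.astype(float).copy()
                cc[i] += di
                cc[j] += dj
                return f(cc)
            H[i, j] = (f_at(h, h) - f_at(h, -h)
                       - f_at(-h, h) + f_at(-h, -h)) / (4.0 * h * h)
    return H

H = hessian(chi2, c_best)
cov = 2.0 * np.linalg.inv(H)      # chi2 ~ chi2_min + 0.5 dc^T H dc
errs = np.sqrt(np.diag(cov))      # 1-sigma coefficient uncertainties
# Propagate to a derived observable, here the model prediction at x = 0.5
g = np.array([1.0, 0.5])          # gradient of c0 + 0.5*c1 w.r.t. (c0, c1)
sigma_obs = float(np.sqrt(g @ cov @ g))
```

Because χ² is exactly quadratic here, the covariance from the Hessian coincides with the standard least-squares result; for the non-linear chEFT fit the Hessian is only a local approximation around the optimum.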

Full text: Chalmers, 2014 (CPL ID: 199193)
Publication: arXiv:1506.02466 [nucl-th]
Phys. Rev. X 6, 011019 (2016)
Supervisors: Christian Forssén and Andreas Ekström

by Dag Fahlin Strömberg, Oskar Lilja, Mattias Lindby, Björn Mattsson, 2014

Jacobi-Davidson Algorithm for Locating Resonances in Few-Body Tunneling Systems (spring 2014)

A recent theoretical study of quantum few-body tunneling implemented a model using a Berggren basis expansion. This approach leads to eigenvalue problems involving large, complex-symmetric Hamiltonian matrices. In addition, the eigenspectrum consists mainly of irrelevant scattering states. The physical resonance is usually hidden somewhere in the continuum of these scattering states, making diagonalization difficult.

[Figure: Artist's view of eigenvalue spectra and the path of convergence of the Jacobi-Davidson algorithm.]

This thesis describes the theory of the Jacobi-Davidson algorithm for calculating complex eigenvalues and thus identifying the resonance energies of interest. The underlying Davidson method is described and combined with Jacobi's orthogonal complement method to form the Jacobi-Davidson algorithm. The algorithm is implemented and applied to matrices from the theoretical study. Furthermore, a non-hermitian formulation of quantum mechanics is introduced and the Berggren basis expansion explained. The results show that the ability of the Jacobi-Davidson algorithm to locate a specific interior eigenvalue greatly reduces the computational time compared to previous diagonalization methods. However, the additional computational cost of implementing the Jacobi correction turns out to be unnecessary in this application; thus, the Davidson algorithm is sufficient for finding the resonance state of these matrices.
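
A minimal real-symmetric sketch of the Davidson iteration that the thesis found sufficient: expand a search subspace, diagonalize the projected matrix, and precondition the residual with the matrix diagonal. The thesis matrices are complex symmetric and far larger; the test matrix, tolerances, and preconditioner below are illustrative assumptions:

```python
import numpy as np

def davidson_lowest(A, tol=1e-8, max_iter=100):
    """Davidson iteration for the lowest eigenpair of a symmetric matrix A."""
    n = A.shape[0]
    V = np.eye(n, 1)                      # start from a single unit vector
    diag = np.diag(A)
    for _ in range(max_iter):
        V, _ = np.linalg.qr(V)            # orthonormalize the search subspace
        T = V.T @ A @ V                   # project A onto the subspace
        w, S = np.linalg.eigh(T)
        theta, u = w[0], V @ S[:, 0]      # lowest Ritz pair
        r = A @ u - theta * u             # residual vector
        if np.linalg.norm(r) < tol:
            return theta, u
        denom = diag - theta              # diagonal (Davidson) preconditioner
        denom[np.abs(denom) < 1e-10] = 1e-10
        V = np.hstack([V, (r / denom)[:, None]])  # expand the subspace
    return theta, u

# Diagonally dominant test matrix, the regime where the diagonal
# preconditioner works well
n = 100
A = np.diag(np.arange(1.0, n + 1)) + 1e-2 * np.ones((n, n))
theta, u = davidson_lowest(A)
```

Locating a specific interior eigenvalue, as in the thesis, amounts to picking the Ritz pair closest to a target energy instead of the lowest one.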

Full text: Chalmers, 2014 (CPL ID: 199190)
Publication: arXiv:1504.013034
Few-Body Syst (2015) 56: 837.
Supervisors: Christian Forssén and Jimmy Rotureau

by Gustav Hjelmare, Jonathan Larsson, David Lidberg, Sebastian Östnell, 2014

LHC, the Higgs particle and physics beyond the Standard Model - Simulation of the decay of an additional scalar particle a (spring 2014)

This thesis explores a possible addition of a scalar boson to the Standard Model. Apart from a quadratic coupling to the Higgs boson, it couples to the photon and the gluon. To fully explore this new boson, it is necessary to get acquainted with some of the vast background theory in the form of quantum field theory. This involves the most fundamental ideas of relativistic quantum mechanics, the Lagrangian formulation, cross sections, decay rates, scattering-amplitude calculations, Feynman diagrams, the Feynman rules and the Higgs mechanism. To analyse the particle, it was necessary to use computer aid in the form of FeynRules, a Mathematica package, to retrieve the Feynman rules for the particle, and MadGraph 5 for numerical calculations of decay rates and cross sections. These tools were used to find limits on the coupling constants in the Lagrangian that concur with experimental findings.

Full text: Chalmers, 2014 (CPL ID: 199189)
Supervisor: Gabriele Ferretti

by Tor Djärv, Andreas Olsson, Justin Salér-Ramberg, 2014

Quantum resonances in a complex-momentum basis (spring 2013)

[Figure: Completeness relation in the complex energy plane.]

Resonances are important features of open quantum systems. We study, in particular, unbound and loosely bound nuclear systems. We model helium-5 and helium-6 in a few-body picture, consisting of an alpha-particle core with one and two valence neutrons, respectively. Basis-expansion theory is briefly explained and then used to expand the nuclear system in the harmonic oscillator and momentum bases. We extend the momentum basis into the complex plane, obtaining the so-called Berggren basis. With the complex-momentum method we are able to reproduce the observed resonances in 5He. The 5He Berggren basis solutions are used as a single-particle basis to create many-body states in which we expand the 6He system. For the two-body interaction between the neutrons, we use two different phenomenological models: a Gaussian and a surface delta interaction (SDI). The strength of each interaction is fitted to reproduce the 6He ground-state energy. With the Gaussian interaction we do not obtain the 6He resonance, whereas with the SDI we do. The relevant parts of the second-quantization formalism are summarized, and we provide details for its implementation.

Full text: Chalmers, 2013
Supervisors: Christian Forssén and Jimmy Rotureau

by Jonathan Bengtsson, Ola Embréus, Vincent Ericsson, Pontus Granström, Nils Wireklint, 2013

Feasibility of FPGA-based Computations of Transition Densities in Quantum Many-Body Systems (spring 2013)

This thesis presents the results from a feasibility study of implementing calculations of transition densities for quantum many-body systems on FPGA hardware. Transition densities are of interest in the field of nuclear physics as a tool when calculating expectation values for different operators. Specifically, this report focuses on transition densities for bound states of neutrons. A computational approach is studied, in which FPGAs are used to identify valid connections for one-body operators. Other computational steps are performed on a CPU. Three different algorithms that find connections are presented. These are implemented on an FPGA and evaluated with respect to hardware cost and performance. The performance is also compared to that of an existing CPU-based code, Trdens.
[Figure: Basis dimensions for A-body systems as a function of model-space truncation.]

The FPGA used to implement the proposed designs was a Xilinx Virtex 6, built into Maxeler's MAX3 card. It was concluded that the FPGA was able to find the connections of a one-body operator in a fraction of the time used by Trdens running on a single CPU core. However, the CPU-based conversion of the connections to the form in which Trdens presents them was much more time-consuming. For FPGAs to be feasible, it is hence necessary to accelerate the CPU-based computations or include them in the FPGA implementation. Therefore, we recommend further investigations of calculating the final representation of transition densities on FPGAs, without the use of an off-FPGA computation.

Full text: Chalmers, 2013
Extra material: Code and documentation
Supervisors: Christian Forssén and Håkan Johansson

by Robert Anderzen, Magnus Rahm, Olof Sahlberger, Joakim Strandberg, Benjamin Svedung, Jonatan Wårdh, 2013

Higgsbosonen, standardmodellen och LHC (The Higgs boson, the Standard Model and the LHC) (spring 2013)

This report aims to provide an insight into the particle physics of today, and into the research that goes on within the field. The focus is partly on the recent discovery of the Higgs boson, and partly on how software can be used to simulate processes in particle accelerators. Basic concepts of particle physics and the search for the Higgs boson are discussed, and experimental results, including those from the Large Hadron Collider, are compared with simulations made in MadGraph 5. Furthermore, simple new models of particle physics are created in FeynRules, in order to make simulations based on the models. To support the presentations of these aspects, some of the underlying theory is built from the ground up. Additionally, instructions are given on the usage of the programs FeynRules, for creation of models; MadGraph 5, for simulating processes in particle accelerators; and MadAnalysis 5, for data processing of the results obtained. The most significant results are simulations of processes commonly used for Higgs boson searches, with results in qualitative agreement with predictions and experimental data. The results also include consistent analytical and numerical calculations in a simple model with one particle.

Full text: Chalmers, 2014 (CPL ID: 183527)
Supervisor: Gabriele Ferretti

by Anton Nilsson, Olof Norberg, Linus Nordgren, 2013

Configuration Interaction Methods and Large-scale Matrix Diagonalization (spring 2012)

When dealing with systems of many particles, the complexity of modelling the dynamics increases dramatically as the number of particles grows. One approach to this problem is the configuration interaction method, where the wavefunction representing the many-particle system is expressed as a linear combination of many-particle basis states.

[Figure: Non-zero matrix elements of a matrix from NCSMb.]

The ultimate goal of the project has been to develop guidelines and recommendations for a future implementation of an eigensolver for the matrices calculated by the code No-Core Shell Model for bosons (NCSMb), currently under development. The eigensolver is based on the Lanczos algorithm, a method particularly well suited to finding a few eigenvalues of large, sparse, symmetric matrices.
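
The core of such an eigensolver, a bare-bones Lanczos iteration with full reorthogonalization, can be sketched as follows (the toy matrix and parameters are illustrative assumptions, not the NCSMb setting):

```python
import numpy as np

def lanczos_extremal(A, m=40, seed=0):
    """m-step Lanczos for a symmetric A; returns the Ritz values."""
    n = A.shape[0]
    rng = np.random.default_rng(seed)
    q = rng.standard_normal(n)
    q /= np.linalg.norm(q)
    Q, alpha, beta = [q], [], []
    q_prev, b = np.zeros(n), 0.0
    for _ in range(m):
        w = A @ Q[-1] - b * q_prev        # three-term recurrence
        a = Q[-1] @ w
        w -= a * Q[-1]
        for qi in Q:                      # full reorthogonalization
            w -= (qi @ w) * qi
        b = np.linalg.norm(w)
        alpha.append(a)
        if b < 1e-12:                     # invariant subspace found
            break
        beta.append(b)
        q_prev = Q[-1]
        Q.append(w / b)
    # Eigenvalues of the small tridiagonal matrix approximate extremal
    # eigenvalues of A
    T = (np.diag(alpha)
         + np.diag(beta[:len(alpha) - 1], 1)
         + np.diag(beta[:len(alpha) - 1], -1))
    return np.linalg.eigvalsh(T)

# Sparse-like test matrix with one well-separated large eigenvalue
d = np.append(np.arange(1.0, 100.0), 150.0)
ritz = lanczos_extremal(np.diag(d))
```

The reduction in complexity comes from needing only matrix-vector products with A and the diagonalization of a small tridiagonal matrix, which is why the method suits large sparse matrices.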

Full text: Chalmers CPL 158696
Extra material: Code and documentation
Supervisors: Christian Forssén and Håkan Johansson

by Pontus Hansson, Joakim Löfgren, Karin Skoglund Keiding, Simon Vajedi, 2012

The Similarity Renormalization Group for Three-Body Interactions (spring 2011)

The properties of many-body systems are frequently not easily accessible due to the complicated structure of the interactions between the particles. A well-known example is neutron matter, whose properties determine those of neutron stars and are still not reliably known. Renormalization group methods have become a valuable tool in modern many-body physics.
[Figure: SRG evolution.]

The Similarity Renormalization Group equations are flow equations that change the properties of quantum-mechanical potentials such that certain properties of many-body systems can be computed more easily.

The specific aim of this project was the implementation of the Similarity Renormalization Group flow equations for a combination of one-dimensional two- and three-body interactions.
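
As an illustration of what such flow equations look like in practice, the sketch below evolves a small symmetric toy matrix with the generator η(s) = [T, H(s)], which suppresses off-diagonal matrix elements while preserving the eigenvalues. The matrix, generator choice, and flow range are illustrative assumptions, not the project's one-dimensional interactions:

```python
import numpy as np
from scipy.integrate import solve_ivp

def srg_rhs(s, h_flat, n, T):
    """SRG flow equation dH/ds = [eta, H] with generator eta = [T, H]."""
    H = h_flat.reshape(n, n)
    eta = T @ H - H @ T
    dH = eta @ H - H @ eta
    return dH.ravel()

n = 4
T = np.diag([1.0, 2.0, 3.0, 4.0])       # diagonal "kinetic" generator part
H0 = T + 0.5 * np.ones((n, n))          # symmetric starting "Hamiltonian"
sol = solve_ivp(srg_rhs, [0.0, 5.0], H0.ravel(), args=(n, T),
                rtol=1e-8, atol=1e-10)
Hs = sol.y[:, -1].reshape(n, n)         # evolved matrix at s = 5
```

The flow is a continuous unitary transformation, so the spectrum of Hs matches that of H0 while the off-diagonal couplings decay roughly like exp(-s(T_i - T_j)²).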

Full text: Chalmers CPL 144109
Publication: arXiv:1107.3064
  Eur. Phys. J. A (2011) 47: 122
Supervisors: Christian Forssén and Lucas Platter

by O. Åkerlund, E. J. Lindgren, J. Bergsten, B. Grevholm, P. Lerner, R. Linscott, 2011

GPU Implementation of the Feynman Path-Integral Method in Quantum Mechanics (spring 2011)

Classical and quantum mechanics are most naturally connected via the Feynman path-integral formalism: classical trajectories are replaced by a sum over an infinite number of paths to calculate probabilities (quantum amplitudes). In modern theoretical physics, the path-integral formalism in Euclidean time is an exceptionally valuable tool and is used, e.g., in lattice field theory to study properties of Quantum Chromodynamics.

[Figure: Six-particle probability distribution.]

The specific aim of this project was to use the Feynman path-integral formalism to study simple one-dimensional quantum-mechanical systems, with an extension to many-particle systems in one dimension. This was achieved through a computer implementation of the formalism on modern graphics cards, utilizing the computational capabilities of graphics processing units (GPUs).
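
The same idea can be sketched on a single CPU for one particle in a harmonic potential: discretize the Euclidean path on a periodic lattice, update the sites with the Metropolis algorithm, and measure ground-state expectation values. All lattice parameters below are illustrative assumptions; for the harmonic oscillator with m = ħ = ω = 1 the exact continuum ground-state value is ⟨x²⟩ = 0.5:

```python
import numpy as np

def pimc_harmonic(n_slices=64, a=0.2, n_sweeps=10000, delta=1.0, seed=2):
    """Metropolis sampling of discretized Euclidean paths, V(x) = x^2/2."""
    rng = np.random.default_rng(seed)
    x = np.zeros(n_slices)              # periodic path in Euclidean time
    x2_sum, n_meas = 0.0, 0
    for sweep in range(n_sweeps):
        for i in range(n_slices):
            ip, im = (i + 1) % n_slices, (i - 1) % n_slices
            x_new = x[i] + delta * (2.0 * rng.random() - 1.0)
            # Local change in the Euclidean action (m = hbar = omega = 1)
            dS = ((x[ip] - x_new) ** 2 + (x_new - x[im]) ** 2
                  - (x[ip] - x[i]) ** 2 - (x[i] - x[im]) ** 2) / (2.0 * a) \
                 + 0.5 * a * (x_new ** 2 - x[i] ** 2)
            if dS < 0.0 or rng.random() < np.exp(-dS):
                x[i] = x_new            # Metropolis accept
        if sweep >= n_sweeps // 2:      # measure after thermalization
            x2_sum += np.mean(x ** 2)
            n_meas += 1
    return x2_sum / n_meas

x2 = pimc_harmonic()
```

Every site update is independent of all but its two neighbors, which is what makes the method map so well onto the massively parallel GPU hardware used in the project.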

Full text: Chalmers CPL 144105
Source code available at: Sourceforge
Supervisor: Christian Forssén

by Olof Ahlén, Gustav Bohlin, Kristoffer Carlsson, Martin Gren, Patric Holmvall, Petter Säterskog, 2011

Big Bang Nucleosynthesis (spring 2010)

The term nucleosynthesis refers to the formation of heavier elements, atomic nuclei with many protons and neutrons, from the fusion of lighter elements. The Big Bang theory predicts that the early universe was a very hot place. One second after the Big Bang, the temperature of the universe was roughly 10 billion degrees, and the universe was filled with a sea of neutrons, protons, electrons, positrons, photons and neutrinos.
[Figure: Big Bang reaction network.]

As the universe cooled, the neutrons either decayed into protons and electrons or combined with protons to make deuterium. During the first three minutes of the universe, most of the deuterium combined to make helium. Trace amounts of lithium were also produced at this time. This process of light-element formation in the early universe is called "Big Bang nucleosynthesis" (BBN).

The specific aim of this project was to study the Big Bang model and, in particular, the features that are key to understand the synthesis of the light elements. An integral part of the project was to perform computer simulations of the primordial nucleosynthesis.
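
The key arithmetic behind the helium abundance can be illustrated with a standard back-of-the-envelope estimate (round textbook numbers, not the project's network simulation): the neutron-to-proton ratio freezes out at its Boltzmann value, free-neutron decay reduces it until deuterium can form, and essentially all surviving neutrons end up in helium-4.

```python
import math

# Assumed round textbook numbers for the estimate:
delta_m = 1.293     # neutron-proton mass difference [MeV]
T_freeze = 0.8      # weak-interaction freeze-out temperature [MeV]
tau_n = 879.0       # free-neutron lifetime [s]
t_bbn = 200.0       # time until the deuterium bottleneck breaks [s]

np_ratio = math.exp(-delta_m / T_freeze)   # n/p at freeze-out, roughly 1/5
np_ratio *= math.exp(-t_bbn / tau_n)       # neutrons decay until BBN starts
Y_p = 2.0 * np_ratio / (1.0 + np_ratio)    # mass fraction if all n end in 4He
```

The estimate lands near the observed primordial helium mass fraction of about 25%; the full reaction-network simulations performed in the project refine exactly these ingredients.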

Full text: Chalmers CPL
Supervisor: Christian Forssén

by Joakim Brorsson, Johan Jacobsson, and Anton Johansson, 2010