The main focus of our group is the development and application of numerical methods and simulations in various research fields, such as complex fluids, engineering materials, geomorphology, and general questions of statistical physics. In contrast to Newtonian fluids, complex fluids show peculiar phenomena such as shear thinning and shear thickening. Quicksand is an example of a natural colloidal system, and its simulation remains challenging today. We focus on the modeling of quicksand, but also on the more general topic of complex turbulent fluid flow, which is still not fully understood.
Understanding the micro-mechanics of engineering materials and their effect on macroscopic observations is the key to designing advanced engineering materials. We focus on computational material mechanics for predicting the constitutive behavior of complex, multi-scale materials under fatigue and quasi-static fracture, but also on more general physical aspects of dynamic fracture and fragmentation.
The texture of a landscape is the product of thousands of years of competing geological processes. Geomorphological models describe processes on large time and length scales, embedded in a complex environment. Our interest is in predicting the evolution of river deltas as well as sand dune formation and motion on Earth and Mars.
We apply concepts of statistical physics to a broad range of problems, ranging from morphogenesis in constrained spaces, such as membrane growth inside a sphere, over reaction-diffusion systems, to complex social networks.
To provide the computational power required for the simulations done by our group members, we decided to build a high-performance computing (HPC) cluster. Our first cluster (still in use) was purchased in 2006. As the group grew, the team invested in an IBM cluster in 2009. This purchase was followed by two extensions in 2010 and 2012.
This cluster consists of two master nodes as well as 55 compute nodes and 3 evaluation nodes. We have in total 656 physical cores with 11 Nvidia Tesla M2075 CUDA cards. The total RAM available for scientific simulations is ~1.8 TB.
The interconnection of the nodes is based on a QDR InfiniBand network with a peak bandwidth of 40 Gbit/s.
We use an IBM Spectrum Scale (GPFS) filesystem and storage system to store the research data produced on the cluster, providing approximately 24 TB of available space.
For the time being, the main purpose of the cluster is running self-written code, most of it in C++ and C. To analyse the computed data, Matlab (R2015b), R, and self-written evaluation code are the preferred tools on the system. Other important tools are the Intel compilers for C and Fortran. Compared to the GCC compilers, not only does compilation itself finish earlier, the produced binaries also run up to 30% faster than those from the standard Linux compilers.