


2025 Course Description


7 - 11 april

Coordinators:

  • Detlef Lohse
  • Olga Shishkina
Turbulence is one of the last unsolved problems in classical physics. It is omnipresent in nature and technology, and many of today’s societal problems are deeply connected with it: ocean flow and mixing, the melting of glaciers, pollution in the atmosphere and the ocean, climate, the processing of fluids in industry, and transport in pipelines. Turbulence has been approached from various sides: from statistical and theoretical physics, from mechanical and chemical engineering, from applied mathematics, and from the practitioner’s point of view in oceanography or geophysics. All of these approaches have their justification and their strengths, but also their limitations.
Going back to Kolmogorov, the paradigm in the more fundamental turbulence community has been that of one universal state of homogeneous isotropic turbulence. In the last 15 years or so it has become increasingly clear that this is not the case and that there can be several different states of turbulence, with transitions between them. These transitions are normally of subcritical nature, just as the transition from laminar to turbulent pipe or channel flow. There, the origin of the subcritical nature is the nonnormality of the operator, combined with the nonlinearity of the Navier-Stokes equation. The nonnormality is intimately related to the local shear of the flow. All this also holds for turbulent flow, where one locally also has strong shear.
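The nonnormality argument can be made tangible with a textbook toy model (a generic illustration, not material from the course): a linear system dx/dt = A x whose eigenvalues are both negative, so every solution eventually decays, yet whose non-normal coupling produces large transient growth, the seed that the Navier-Stokes nonlinearity can then amplify in a subcritical transition.

```python
# Toy non-normal system: A = [[-1, 100], [0, -2]] is stable (eigenvalues
# -1 and -2) but non-normal.  For x(0) = (0, 1) the exact solution is
#   x2(t) = exp(-2t),   x1(t) = 100 * (exp(-t) - exp(-2t)),
# whose norm transiently grows from 1 to about 25 before decaying.
import numpy as np

t = np.linspace(0.0, 5.0, 501)
x2 = np.exp(-2.0 * t)
x1 = 100.0 * (np.exp(-t) - np.exp(-2.0 * t))   # driven by the off-diagonal coupling
norm = np.hypot(x1, x2)

print(f"norm at t=0: {norm[0]:.1f}, peak norm: {norm.max():.1f}")
```

The peak occurs near t = ln 2; a normal matrix with the same eigenvalues would show monotone decay.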
One example of a transition between different turbulent states is the transition between classical Rayleigh-Bénard turbulence and ultimate Rayleigh-Bénard turbulence, which leads to considerably enhanced heat transport. Viewing this transition in the nonnormal-nonlinear context has resolved various controversies in the field and further intensified the interest in Rayleigh-Bénard convection and related thermally driven turbulent systems such as horizontal convection, vertical convection, internally heated convection, and buoyancy-driven systems such as stratified inclined duct flow. However, the fully developed turbulence community has traditionally been disconnected from the community focusing on the transition from laminar to turbulent flow. One objective of the workshop is to bring these two communities together for their mutual benefit, resulting in the transfer of experimental, numerical, and theoretical methods and the definition of new joint problems.
Other examples of transitions between different turbulent states have been observed in von Kármán flow and Taylor-Couette flow. Rayleigh-Bénard turbulence also exhibits further transitions, namely from zonal flow to flow with convection rolls, or from turbulent flow with n convection rolls to flow with n+1 or n-1 rolls. Questions to ask about all these systems with multiple turbulent states are: How is a transition triggered? What is the hysteretic character of the transition? What are the transport properties (heat, mass, momentum) of the different states? What are the lifetimes of the states, depending on the control parameters? Do local disturbances to one state grow, finally overwhelm the global flow structure, and lead to a transition, or do they die out?
The lecturers are all renowned scientists having worked on the transition to turbulence and on fully developed turbulence. All of them are Fellows of the American Physical Society. They have backgrounds in mathematics, physics, and mechanical engineering.

5 - 9 may

Coordinators:

  • Matthias Möller
  • Fabian Key
The organizers gratefully acknowledge the Jülich Supercomputing Centre for supporting the school by providing computing time through the Jülich UNified Infrastructure for Quantum computing (JUNIQ) on the D-Wave quantum annealer. JUNIQ has received funding from the German Federal Ministry of Education and Research and the Ministry of Culture and Science of the State of North Rhine-Westphalia.

Background:
Quantum Computing (QC) is an emerging technology that holds the potential to revolutionise the way computational mechanics problems will be solved in the future. The potential advantage of QC over classical high-performance computing, however, does not come for free: it requires the redesign of solution approaches from scratch, that is, in terms of quantum or hybrid quantum-classical algorithms that exploit the principles of quantum mechanics such as superposition of states, entanglement, and quantum parallelism. It also requires a rethinking of the overall problem formulation, as a potential computational advantage can easily be lost if the user aims to extract the full solution fields of, say, a quantum-CFD computation, which would require up to exponentially many measurements.
Course objectives:
This advanced course provides a gentle introduction to the basic principles of quantum computing and discusses a large spectrum of quantum and hybrid quantum-classical algorithms and their applications in computational mechanics. In particular, the course will address the commonalities of and differences between gate-based quantum computers (GBQC) and quantum annealers (QA) and discuss how various problem types from the mechanical sciences can be formulated as quantum circuits for the former and Ising models (IM) / quadratic unconstrained binary optimization problems (QUBO) for the latter, respectively. The types of applications range from structural design optimization and seismic imaging to fluid and power flow analysis.
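To make the QUBO formulation concrete, here is a minimal illustrative sketch (an example chosen for this summary, not material from the course): the classic number-partitioning problem encoded as a QUBO and minimized by brute force. On an actual annealer one would submit the matrix Q to the hardware sampler rather than enumerate all bitstrings.

```python
# Illustrative sketch (not course material): number partitioning as a QUBO.
# Split `numbers` into two subsets of equal sum by minimizing the Ising
# energy (sum_i a_i * s_i)^2 with s_i = 2*x_i - 1, x_i in {0, 1}.
# Expanding, and dropping the constant, gives the QUBO matrix below.
import itertools

numbers = [4, 2, 7, 1]                      # 4 + 2 + 1 = 7 balances 7
n, total = len(numbers), sum(numbers)

# Upper-triangular QUBO matrix: Q_ii = a_i * (a_i - T), Q_ij = 2 * a_i * a_j
Q = [[0] * n for _ in range(n)]
for i in range(n):
    Q[i][i] = numbers[i] * (numbers[i] - total)
    for j in range(i + 1, n):
        Q[i][j] = 2 * numbers[i] * numbers[j]

def qubo_energy(x):
    return sum(Q[i][j] * x[i] * x[j] for i in range(n) for j in range(i, n))

# Brute force stands in for the annealer on this toy instance.
best = min(itertools.product([0, 1], repeat=n), key=qubo_energy)
subset = [a for a, bit in zip(numbers, best) if bit]
print(subset, sum(subset))
```

When a perfect split exists, the minimum energy equals -(T/2)^2 (here -49), which provides a simple check on a sampler’s result.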
Course outline and target audience:
The course primarily addresses students and practitioners from industry and academia with backgrounds in the mechanical sciences who want to know more about the opportunities and challenges of quantum computing as an emerging computing technology to solve challenging problems in the (near) future.
The outline of the course is as follows:
- Introduction to quantum computing, with a discussion of the commonalities and differences of gate-based quantum computers, quantum annealers, and Ising machines.
- Formulation of QUBOs / Ising models using different encodings and approaches to implement constraints; tips and tricks for practical implementations on quantum annealers and Ising machines.
- Application of quantum annealers to solving problems in truss optimization, structural design analysis and optimization, seismic imaging, phase-field analysis, as well as fluid and power flow analysis.
- Advanced topics in quantum annealing: representation of real-valued variables, hybridization strategies, Ising machines beyond quantum annealers.
- Introduction to variational quantum algorithms (VQA) and quantum machine learning (QML) for gate-based quantum computers: quantum kernel methods (QKM), classical and quantum support vector machines (SVM).
- VQA and the Density Matrix Renormalization Group algorithm for solving the Stacking Sequence Retrieval problem with constraints.
- SVM for strength prediction of open hole composite panels.
- QKM for solving regression problems.
- QKM and physics-informed QML for solving differential equations.
- Quantum (lattice) Boltzmann methods.
- Quantum-PDE algorithms based on the finite volume/element method for CFD applications, radiation hydrodynamics, and numerical relativity.

19 - 23 may

Coordinators:

  • Franz Rammerstorfer
  • Valentina Balbi
This course is aimed at graduate students, PhD candidates, and postdoctoral researchers in electronics/biomedical/mechanical/civil engineering, materials science, biophysics and applied mathematics. It is also valuable for senior scientists and engineers in academia and industry interested in the fundamental theoretical aspects of wrinkling phenomena, their numerical simulation and experimental characterization.
Wrinkles appear almost everywhere in nature and during manufacturing or use of single- or multilayered thin structures. For instance, wrinkling is one of the major phenomena that control the morphogenesis of soft tissues (e.g. the brain) and the shape of plant leaves. In film-substrate systems, wrinkles can form due to mechanical loading, swelling of the thin layer or shrinking of the substrate. Wrinkling is the mechanism that renders desired or undesired surface patterns in stretchable electronic devices made of thin metallic films on polymeric substrates. Wrinkles are known to appear on the human skin, occurring naturally or as a result of a surgical procedure. In this context, the hierarchical structure of skin, its microstructure and material properties play a dominant role. Wrinkles can evolve into other patterns such as creases and folds, period-doubling/tripling and other secondary bifurcations as well as debonding between layers. Wrinkling (especially combined with delamination) is considered a typical failure mechanism in composites, flexible electronics, as well as in lightweight sandwich structures. Wrinkles must be avoided when draping during the production of doubly-curved textile-reinforced composites by proper lay-up formation and controlled mechanical and thermal loadings. Undesired wrinkles can also occur during metal forming, for example when rolling or straightening strips. Various wrinkling phenomena can be observed in thin plates and strips under tension or combined stretching and twisting. These few examples illustrate the wide variety of areas in which wrinkling plays an essential role.
Regardless of whether wrinkling appears in biological systems or in engineering structures, from a mechanical perspective this phenomenon can be studied with a broad array of advanced methods. This course focuses on presenting state-of-the-art modelling techniques used to predict the development of wrinkling in a wide range of applications. The following analytical, semi-analytical and computational approaches will be discussed: tension field theory, eigenvalue analysis for discretised models, unit cell analysis, and nonlinear computational analysis for studying the growth/disappearance or transitions of wrinkles; the theory of growth and remodelling coupled with nonlinear and incremental elasticity for studying soft tissue morphogenesis; exact linear and weakly nonlinear analyses in the framework of nonlinear elasticity, as well as kinetic approaches, complementing each other, for studying the wrinkling of thin films on substrates. Experimental studies and practical simulations of wrinkling defects during composite forming processes will complement the theoretical considerations. Practical work will be carried out on the participants’ laptops, using open-source software.

26 - 30 may

Coordinators:

  • Eckart Meiburg
  • Benjamin Kneller
IMPORTANT NOTICE: Lecture recordings for this course will be provided to online participants.
Gravity currents are a ubiquitous phenomenon in nature and technology. They constitute primarily horizontal flows that are driven by hydrostatic pressure gradients due to variations in temperature, chemical composition, or the presence of suspended particles. They are among the main drivers of heat and mass redistribution on the planet, via density-driven flows in the ocean and atmosphere, and are responsible for a range of natural and man-made hazards, such as snow avalanches, landslides, volcanic eruptions of various kinds, but also positively buoyant flows such as plumes and smoke from fires within buildings. Turbidity currents represent an important class of particle-driven gravity currents, as they represent a key mechanism for transporting sediment into deeper water. Their interaction with the seafloor via erosion and deposition is responsible for the formation of large-scale features such as submarine canyons, and their deposits form the largest sediment bodies on Earth. From an engineering point of view, submarine gravity currents pose a significant hazard to infrastructure such as oil pipelines and telecommunication cables.
That gravity currents can form under such a wide variety of conditions renders them a particularly fascinating research topic. They can be associated with opposite ends of the Reynolds number spectrum (magma flows versus atmospheric currents), so that they are governed by different balances between inertial, viscous, and gravitational forces. They can be nonconservative in that their excess density varies with time (for example, eroding or depositing turbidity currents), they can be Boussinesq or non-Boussinesq in nature (sea breezes versus powder snow avalanches), they can give rise to non-Newtonian dynamics (debris flows), and they can be linked to chemical reactions or to the preferential exclusion of salt during the formation of ice. Gravity currents can exist in ambient environments that exhibit velocity shear, such as in thunderstorm outflows, their dynamics can be affected by complex topography, and they can interact with a background density stratification, thereby triggering the formation of internal gravity waves; conversely, sediment resuspension due to breaking internal waves may itself generate gravity currents.
The proposed course will introduce the fundamental physical principles governing the dynamics of gravity and turbidity currents, along with a broad variety of examples in nature. We will highlight several conceptual models of high Reynolds number gravity currents, and introduce depth-resolving modeling approaches based on the full Navier-Stokes equations.
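As a back-of-envelope complement to such conceptual models (all numbers below are illustrative assumptions, not values prescribed by the course), the front speed of a high Reynolds number gravity current scales as u_f = Fr * sqrt(g' h), with reduced gravity g' = g (rho_c - rho_a) / rho_a and Fr an order-one front Froude number (roughly 0.7, based on the current depth, for a Benjamin-type half-depth current):

```python
# Hedged back-of-envelope estimate: gravity-current front speed from the
# classical buoyancy-velocity scaling.  All numbers are illustrative.
import math

g = 9.81                 # m/s^2
rho_ambient = 1000.0     # kg/m^3, e.g. fresh water
rho_current = 1020.0     # kg/m^3, e.g. a dilute turbidity current
h = 2.0                  # current depth (m), assumed
Fr = 0.7                 # assumed order-one front Froude number

g_prime = g * (rho_current - rho_ambient) / rho_ambient   # reduced gravity
u_front = Fr * math.sqrt(g_prime * h)
print(f"g' = {g_prime:.3f} m/s^2, front speed ~ {u_front:.2f} m/s")
```

Even a 2% density excess thus drives a front at a substantial fraction of a metre per second, which is why such currents are so effective at redistributing heat and sediment.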
The course will include a one-day field excursion to examine deposits of ancient submarine gravity flows that occurred 55 million years ago and are now preserved as sedimentary rocks. These include the deposits of turbidity currents and submarine landslides.
The course is intended primarily for graduate students, postdocs, and early career researchers, as well as for senior scientists from engineering and geosciences.

9 - 13 june

Coordinators:

  • Aurélien Doitrand
  • Vladislav Mantič
This course offers an in-depth exploration of the novel Finite Fracture Mechanics (FFM) technique for modeling crack and failure initiation, with practical applications to real-world problems. It is designed for post-graduate students, expert researchers, and engineers who wish to understand, apply, or develop this approach.
Various theories predict failure initiation in complex structures with stress concentrations or singularities, such as holes or V-notch tips. FFM makes it possible to identify the structure-specific intrinsic length scale and extends the concepts of traditional fracture mechanics to more general configurations with stress concentrations or singularities beyond just a crack tip with a square-root singularity.
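At the core of FFM is the coupled criterion: a crack increment of finite length l forms as soon as some l simultaneously satisfies a stress condition (the stress averaged over l reaches the strength sigma_c) and an energy condition (the incremental energy release rate reaches the toughness G_c). The sketch below illustrates this numerically; the stress profile, energy function, and parameter values are hypothetical choices made for this summary, not course material.

```python
# Numeric sketch of the FFM coupled criterion with made-up model functions.
import numpy as np

sigma_c = 50.0    # strength (MPa), assumed
G_c = 0.1         # toughness (N/mm), assumed
E = 3000.0        # Young's modulus (MPa), assumed

def avg_stress(t, l):
    # average of a hypothetical decaying profile sigma(x) = t / (1 + x)
    # over a candidate increment [0, l]; closed form of the integral
    return t * np.log1p(l) / l

def inc_energy_release(t, l):
    # hypothetical incremental energy release rate, growing with l
    return t**2 * l / E

lengths = np.linspace(1e-3, 2.0, 2000)        # candidate increment lengths (mm)
t_fail = l_init = None
for t in np.linspace(1.0, 200.0, 2000):       # slowly increasing load (MPa)
    both = (avg_stress(t, lengths) >= sigma_c) & \
           (inc_energy_release(t, lengths) >= G_c)
    if both.any():
        t_fail, l_init = t, lengths[both][0]  # first load where both hold
        break

print(f"failure load ~ {t_fail:.1f} MPa, initiation length ~ {l_init:.3f} mm")
```

The stress condition favors short increments while the energy condition favors long ones; failure occurs at the load where their admissible ranges first overlap, which is how FFM yields a structure-specific intrinsic length.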
FFM has been validated through numerous experimental observations, successfully predicting failure initiation in complex geometries. In recent years, it has been extended to 3D domains, geometrical and material nonlinearities, and dynamic aspects, including subsonic crack propagation. It has proven effective in assessing fractures at the micro- or nano-scale, in bio-inspired and 3D-printed materials and composites. Additionally, it has provided a physical explanation for the regularization parameter in phase-field models for fracture and established a link with traction-separation profiles of cohesive zone models. These extensions, applications, and interactions with other fracture models make FFM a cutting-edge approach in failure modeling, which will be thoroughly discussed. Practical applications and hands-on exercises will enable participants to master FFM techniques.
The proposed CISM course brings together six researchers who have extensively studied and applied FFM techniques. It will begin by addressing the framework and origin of FFM, including related experimental and theoretical aspects as well as numerical implementation. It will then focus on applications for a wide range of materials and configurations. Recent FFM extensions, including 3D applications, material nonlinearities such as plasticity or nonlinear elasticity, geometrical nonlinearities, dynamic and fatigue loadings, and FFM as an optimization problem, will be covered. The relationship of FFM to other fracture models will also be reviewed in detail.

23 - 27 june

Coordinators:

  • Tomasz Sadowski
  • Patrizia Trovalusci
Novel applications require recent advancements in the multiphysics and multiscale modelling of complex materials, that is, materials endowed with microstructure detectable at different scale levels (nano, micro, meso, macro) and characterised by complex material behaviour (plasticity, damage, fracture).
Advanced composites (ACs) consist of various components (metal, polymer, ceramic, etc.) with complicated internal architectures, including porosity, and reinforcement with fibres or particles of different properties, shapes, and sizes. The optimal distribution of (1) the reinforcing phase within the matrices or (2) the different phases in multiphase materials is the major task in designing complex composites so as to obtain the required material response to various kinds of loads. The macroscopic properties of ACs are subject to multi-degradation phenomena governed by multiphysics processes that occur at one to several scales below the level of observation, which suggests the application of multiscale approaches. A thorough understanding of how these processes influence macroscopic properties such as stiffness and strength is of key importance for the analysis of existing, and the design of performance-improved, composite materials (multiscale analysis).
Recent advancements in applied computer science and artificial intelligence in the multiphysics modelling of materials aim at modelling multi-damage and failure processes, validated through the experimental assessment of local mechanical properties and microstructures. For example, data-driven parametrically-upscaled constitutive models with machine learning and uncertainty quantification are a novel idea that proposes a parametric representation of lower-scale microstructural descriptors, expressed as functions of representative aggregate microstructure parameters including data on microstructural morphology and crystallography. Machine learning tools are utilized for the generation of the constitutive descriptors. The course will moreover present recently proposed non-local data-driven models for green ACs, with particular emphasis on the derivation of non-classical models for material continua and the description of the algorithms and procedures adopted to develop the proposed multiscale model.
Furthermore, innovative multiscale modelling strategies applied to the study of ACs under static and fatigue loading, from crack initiation on atomistic and microstructural length scales to macroscopic final failure using scale-appropriate methods, will also be of interest for the course; these approaches will be compared with experimental results for many practical cases. The course also covers recent developments in the modelling of complex materials as non-local continua obtained through the adoption of multiscale approaches.

30 june - 4 july

Coordinators:

  • Jean-François Ganghoffer
  • Catalin Picu
The advancement of additive manufacturing, which makes possible the creation of complex multiscale architectures controlled from the nano/micro level, has led to a new paradigm in the design of materials. Architected materials (AM) derive their properties not from those of their base material, but rather from the design and topology of their microstructure. Among architected materials, the category called metamaterials denotes materials with a pre-designed multiscale architecture that exhibit unusual static and dynamic properties associated with large local deformations, the presence of multiple metastable states, and instabilities.
For the effective mechanical and acoustic properties of AM to be determined, a link between the microstructural scale and the scale of an effective substitution medium needs to be established, relying on suitable homogenization schemes; generalized continuum models such as micromorphic or strain-gradient media are sometimes needed to account for the special microstructural deformation modes.
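As the simplest possible illustration of a homogenization estimate (the classical Voigt and Reuss bounds, with parameter values assumed for this summary rather than taken from the course), the effective Young's modulus of a two-phase material is bracketed as follows:

```python
# Minimal homogenization illustration: Voigt (uniform strain) and Reuss
# (uniform stress) bounds on the effective modulus of a two-phase material.
# Phase properties and volume fraction below are assumed for illustration.
E1, E2 = 70e3, 3e3   # phase Young's moduli (MPa), e.g. metal lattice + polymer
f = 0.2              # volume fraction of phase 1

E_voigt = f * E1 + (1 - f) * E2            # arithmetic mean: upper bound
E_reuss = 1.0 / (f / E1 + (1 - f) / E2)    # harmonic mean: lower bound

print(f"Voigt: {E_voigt:.0f} MPa, Reuss: {E_reuss:.0f} MPa")
```

The wide gap between the two bounds for contrasting phases is exactly why architected materials are interesting: the microstructural topology decides where in (or, with instabilities, outside) this quasi-static range the effective response lands, and generalized continua refine the description further.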

Creating architectures of controlled anisotropy, tuned to be ultra-soft or ultra-stiff and lightweight, has become increasingly important in biomechanical, civil, and mechanical engineering applications. For instance, specific biological structural members such as tendons and ligaments exhibit Poisson's ratio values well above the isotropic limits, thereby highlighting the need for biomimetic metamaterial microstructures appropriate for tendon or ligament restoration. Fibrous materials, collectively named network materials, are a class of AM that includes many biological and engineering examples. Connective tissue in human and animal bodies, paper, cellulose products, nonwovens, and textiles are all network materials. Their behavior is highly non-linear and is defined by the presence of metastable states, instabilities, and, ultimately, by the network architecture. Network materials are tough and damage-resistant; they may be designed and built to exploit unusual static and dynamic properties, and such systems belong to the class of metamaterials.
Topologically interlocked materials form a distinct class of AM. They are composed of periodic building blocks which are assembled to tessellate space. The contacts between blocks control the highly non-linear behavior of the ensemble on the macroscopic scale. These materials are damage-resistant and tough, and exhibit interesting behaviors in shear, indentation, and under dynamic loads.
The objective of the course is to bring together researchers from the modeling, computational, and experimental mechanics communities to present an overview, including recent research activities, of the rapidly expanding field of architected materials, with a special focus on mechanical metamaterials in both statics and dynamics. The proposed course targets both established researchers and a younger audience and will provide state-of-the-art information in this area.

14 - 18 july

Coordinators:

  • Marcus Granegger
  • Bernhard Semlitsch
The demand for donor organs exceeds the supply, necessitating the development of alternative, device-based artificial organs. These devices are essential not only for bridging patients to transplantation but also as potential permanent solutions to organ failure. To ensure these devices function safely and effectively at the intersection of biology, fluid dynamics, chemistry, mechanical engineering, and medicine, extensive testing under realistic conditions is crucial. However, current experimental models, such as mock circulations and animal experiments, are often costly and time-consuming, and fail to fully replicate human pathophysiology. These limitations contribute to high development costs, risks, and failure rates in the clinical translation of artificial organs.
In this context, in-silico methods are increasingly important in the design and approval process of complex medical devices. Computational simulations can significantly reduce risks and failure rates by allowing for the screening of numerous scenarios and predicting critical events in "virtual patients." These simulations can uncover issues that are not detectable in traditional laboratory setups, offering unprecedented insights into human-machine interactions, such as detailed flow field information across the entire domain. This is particularly valuable in blood-wetted artificial organs, where early-stage predictions of hemocompatibility issues, such as blood damage and thrombosis, can lead to improved designs. The flexibility of numerical models allows for the incorporation of realistic boundary conditions (such as moving geometries and a wide range of pressure and flow conditions) and complex physiological mechanisms, including non-Newtonian flow phenomena. However, the computational intensity of these simulations necessitates efficient and accurate modeling strategies to deliver results in time. As a result, numerical simulations often involve problem idealization and approximation. Patient-specific geometries and fluid properties introduce inherent uncertainties, making the validation of computational methodologies and uncertainty quantification essential for reliable predictions. Berlin Heart GmbH is among the market leaders in the sector, upholding the highest standards in reliability and precision for innovative VADs for mechanical circulatory support, and will present their approach to utilizing numerical simulations from a manufacturer’s perspective.
This course provides participants with the fundamental framework to understand and conduct computational simulations in bio-fluidic applications. Designed for graduate students, physicians and researchers in medical, applied, and engineering sciences with an interest in biomechanics, the course offers an overview of numerical simulation methods to guide the design and application of artificial organs. Topics range from an introduction to the basic physiology of the respiratory and cardiovascular systems to advanced numerical simulation methods that capture complex human-machine interactions in moving organs. Emphasis is placed on modeling boundary conditions, turbulent flow regimes, and non-Newtonian flow behavior. The lecture series is complemented by poster sessions, where participants can present their work and foster cross-disciplinary collaborations.
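As a concrete example of the non-Newtonian behavior mentioned above, the Carreau model is a commonly used shear-thinning viscosity law for blood in CFD. The sketch below uses typical literature parameter values purely for illustration; they are not prescriptions from the course.

```python
# Carreau shear-thinning viscosity law, often applied to blood in CFD.
# Parameter values are typical literature values, used here illustratively.
mu_0, mu_inf = 0.056, 0.00345   # zero- and infinite-shear viscosities (Pa*s)
lam, n = 3.313, 0.3568          # relaxation time (s) and power-law index

def carreau_viscosity(gamma_dot):
    # effective viscosity as a function of shear rate (1/s)
    return mu_inf + (mu_0 - mu_inf) * (1 + (lam * gamma_dot) ** 2) ** ((n - 1) / 2)

for gdot in (0.1, 1.0, 10.0, 100.0):
    print(f"shear rate {gdot:6.1f} 1/s -> viscosity {carreau_viscosity(gdot):.4f} Pa*s")
```

Viscosity falls by an order of magnitude between near-stagnant regions and the high-shear gaps of a blood pump, which is why a constant-viscosity assumption can misjudge both washout and shear-induced blood damage.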

21 - 25 july

Coordinators:

  • Cristian Marchioli
  • René Van Hout
Non-spherical particle-turbulence interactions are common in many environmental, technological, and biological applications. In some cases these particles can be modeled as spherical, but in many others, e.g. microplastics dispersion, ice crystals in the atmosphere, and composite material fabrication, the non-sphericity and the associated alignment govern the dispersion, light reflection, or material strength. Much progress has been made in our understanding of the interaction of non-spherical particles with turbulence. However, due to the non-trivial interaction of these particles with turbulent flow structures, which may be characterized by preferential sampling of flow regions (in the case of inertial particles) as well as preferential alignment with turbulent flow structures, many questions still remain unanswered, especially in non-homogeneous turbulent flows. Numerical simulations supported by theoretical and experimental results have been leading the way. However, both numerical and experimental methods have been advancing at a rapid pace, and experiments are currently able to catch up with numerical simulations or even surpass them, especially in high Reynolds number applications. As a consequence of these developments, it is now useful to look back and review the many studies on the subject, to survey the current state of research, and to put future research paths in perspective.
The course aims to provide a general and unified framework of state-of-the-art theoretical, numerical and experimental techniques for the study of the dynamical behavior of non-spherical particles in turbulent flows. Participants will be exposed to the different methodologies and approaches, their strengths and weaknesses, thus becoming more aware of the capabilities and limitations of the different approaches. Only by understanding the capabilities and shortcomings of the employed methodologies can one achieve synergy between the different approaches and, as a result, further advance our understanding of the complex interaction of non-spherical particles with turbulent flows. A comprehensive ensemble of applications, mainly extracted from the lecturers’ own research fields and covering several areas of applied physics and engineering, will also be provided. After the lectures, students should possess the necessary knowledge of the basic capabilities, potentials and limitations of the various addressed numerical and experimental methods and, hence, should be able to critically evaluate the reliability and accuracy of the information these methods can provide when applied to practical situations.
The course delivers a comprehensive overview of complex particle-laden turbulent flows, and hence will be particularly attractive to graduate students, PhD candidates, young researchers, and faculty members in applied physics, applied mathematics and chemical/mechanical engineering. The advanced topics and the presentation of current progress in this very active field will also be of considerable interest to many senior researchers, as well as industrial practitioners having a strong interest in understanding the multi-scale complex behavior of such multiphase flows, with particular emphasis on their modelling, simulation, and experimentation.

1 - 5 september

Coordinators:

  • Peter Fratzl
  • Eran Sharon
Matter is rarely completely static: often matter can morph. This is true for all living systems that grow, adapt and change shape. Indeed, cells divide, leaves and fungi grow, octopuses transform, and wings reshape to control flight. But it is also true that bread rises and that pasta swells. While morphing is omnipresent in the living, it is not confined to it. Harnessing morphing capacities has many potential applications, from machines and robots to architecture.
The goal of this course is to review the current and fast-growing knowledge about structural materials that change shape or develop spontaneous internal stresses that improve their properties. The emphasis will be on potential applications in the built environment, from houses to infrastructures.
Lecturers from physics, engineering, biomaterials science and architecture will cover this topic in an interdisciplinary way.
The mechanics of shape-change requires the physical understanding of how internal stresses in conjunction with the overall shape lead to macroscopic deformation. This raises a number of questions that will be addressed: To what extent can one predict the final shape? In which cases and in which way is it possible to solve the reverse problem? Which ways of inducing shape change, such as air inflation, water-based swelling and shrinking, thermal expansion, have been addressed theoretically and practically? How can shape-change be used for self-assembly and dis-assembly of morphing units? What is different at large scales that are relevant for architecture and where gravitational forces play a major role?
Nature is a major inspiration for shape-change due to its ability to grow and remodel. But natural systems, plants in particular, evolved to use “passive” morphing that does not involve an active metabolism. Major examples are seed dispersal systems, such as the pine cone and many others. The course will also address natural examples and review the current knowledge in the field in order to provide a basis for the inspiration of technical structural materials that are able to morph.
Examples from architecture will be discussed, including inflatables, assemblies of morphing and non-morphing particles, and composites with internal stresses. But architecture is not only an engineering field: it also requires design knowledge and approaches. One of the goals of this course is therefore to introduce the participants to research and development approaches that combine scientific and engineering methods with techniques from design.
In nature, morphing has always been critical: it is about growth and survival, a matter of life itself. Along with scientific observations, insights and theorems on natural morphing matter, shape-shifting has permeated the dreams and fantasies of mythology, folklore, fiction and the human imagination.
The vision that led to the development of this course is that morphing will also increasingly impact our built environment, perhaps encompassing more sustainable solutions than what is common practice today.

8 - 12 september

Coordinators:

  • Jacek Pozorski
  • Alfredo Soldati
Multiphase flows are common in nature and engineering. Atmospheric flows often involve the dispersion of droplets or solid particulate matter (dust, sand, ice crystals in clouds); marine systems are populated by plankton, sediments or microplastics. Industrial applications of disperse flows include particle separators, filtration systems, atomisers and combustion devices, spray dryers, bubble columns; interfacial and free-surface flows are widespread in chemical and process industry.
To address the complexity associated with such flows, diverse phenomena are combined: flow dynamics (including turbulence or microfluidics), transport of dispersed phase, heat transfer and phase change, chemical reactions, surface science (particle deposition, resuspension or agglomeration) or even biology of organic objects. These multiphysics processes may span a wide range of spatial and temporal scales (from nano through macro to geophysical ones).
The course will focus on Lagrangian approaches. They are often the methods of choice to treat particulate-phase transport and polydispersity; they may also be used, in the form of so-called particle-based methods, for the macroscopic description of fluid motion. Looked at from this perspective, the course should nicely complement a typical curriculum on fluid dynamics and CFD modelling and provide a broader view, next to (but not off) the beaten track, especially valuable for PhD candidates.
The course will cover a range of Lagrangian techniques in use, with a special emphasis on particle-laden turbulence. It will include lectures and hands-on sessions on Particle Image Velocimetry and Particle Tracking Velocimetry that are widely used to measure dispersed two-phase flows. On the other hand, non-spherical tracer particles can be used to quantify turbulence. The physics of particle dispersion, aggregate breakup, and anisotropic particle dynamics will be covered.
In terms of modelling and computation, the lectures will describe hybrid Euler-Lagrange approaches for multiphysics two-phase flows, in particular disperse turbulent ones. The course will address particle-resolved and particle-modelled Direct Numerical Simulations (PR-DNS, PM-DNS), Large-Eddy Simulations (LES) with particle tracking, which may include modelling of the subscale phenomena, as well as statistical (RANS and one-point PDF) approaches with stochastic Lagrangian models to account for the missing information on turbulence. Smoothed Particle Hydrodynamics (SPH) will be presented as a representative of a particle-based numerical approach, including multiphase SPH, along with some fundamental issues.
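To give a flavour of the stochastic Lagrangian models mentioned above, the sketch below (our illustration, with invented parameters) tracks a single point particle whose Stokes drag relaxes its velocity towards a fluid velocity that is itself an Ornstein–Uhlenbeck process standing in for unresolved turbulence, integrated with the Euler–Maruyama scheme.

```python
import numpy as np

# One-particle sketch of a stochastic Lagrangian (point-particle) model:
# Stokes drag towards a "seen" fluid velocity modelled as an
# Ornstein-Uhlenbeck process. All parameters are illustrative.
rng = np.random.default_rng(1)
dt, n_steps = 1e-3, 5000
tau_p = 0.05             # particle response time (Stokes drag)
T_L, sigma = 0.2, 0.5    # fluid Lagrangian timescale and velocity r.m.s.

u_f = 0.0                # fluid velocity fluctuation seen by the particle
v_p, x_p = 0.0, 0.0      # particle velocity and position
for _ in range(n_steps):
    # Euler-Maruyama update of the OU process for the seen fluid velocity
    u_f += (-u_f / T_L) * dt + sigma * np.sqrt(2 * dt / T_L) * rng.standard_normal()
    # Stokes drag relaxes the particle velocity towards the fluid velocity
    v_p += (u_f - v_p) / tau_p * dt
    x_p += v_p * dt
```

Real simulations of course track many particles in a resolved or modelled carrier flow; the point here is only the structure of the drag plus stochastic-forcing update.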
Through these various examples, similarities and differences between particle-based descriptions will be discussed, considering the multiphysics nature of two-phase flows. The lectures will shed light on experimental issues (including uncertainty assessment), modelling challenges (point-particle vs. particle-resolved models, unresolved scales handling), and computational approaches (including hybrid ones).

15 - 19 september

Coordinators:

  • Charalampos Baniotopoulos
  • Enzo Marino 
As climate change is nowadays one of the greatest global threats and challenges, recent global initiatives reflect the strong will of societies to synergistically and proactively apply robust strategies to moderate the climate crisis. This overarching set of policies aims at making Europe climate neutral by 2050, i.e. with zero emissions of greenhouse gases by that date. The strategy includes, for instance, the plan for a 25-fold increase in offshore wind capacity by 2050, which demands cutting-edge innovation in Wind Energy. Within this framework, AI in all its forms, including big-data management of the Aeolian potential, Digital Twins, Machine Learning and Artificial Neural Network approaches, is destined to play a major role in the coming years in boosting Wind Energy.
The present CISM Course aims to systematically cover all current trends related to the use of AI to advance Wind Energy. During the course, besides the fundamental knowledge regarding wind and wind energy yield, including offshore installations, new technologies such as LiDAR for the prognosis of the Aeolian potential will be presented, along with Machine Learning and Digital Twin techniques applied to the design of wind energy systems. Along these lines, current achievements related to the accuracy of the assessment of wind resources and the respective wind flow characteristics will be presented, as will advances in the design and optimized maintenance of wind energy converters by means of Machine Learning techniques. Nowadays, the key and the game-changer is the in-time prognosis of the response and performance of wind energy systems, based on high-performance monitoring and inspection data. To this end, the concept of Digital Twins has begun to be employed, aiming to bridge the gap between the numerical model and the physical asset by integrating measurement data, something that can hardly be realised via traditional tools. Emerging artificial intelligence offers a feasible solution from a novel perspective: Digital Twin prototypes are applied to optimise and control the performance of wind energy systems by integrating inspection and measurement data via Bayesian inference and Machine Learning. Indeed, Digital Twins have promising potential to provide innovative insights into the status of Wind Energy infrastructure, with the physical model, monitoring data and inspection results integrated.
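The Bayesian-inference step at the heart of such a digital-twin update can be illustrated in its simplest conjugate form: a Gaussian prior on a model parameter is sharpened by noisy monitoring data. The example below is our own toy (the parameter, the "true" value and all noise levels are invented), not a method taught in the course.

```python
import numpy as np

# Toy digital-twin style update: Gaussian prior on a model parameter
# (say, an effective damping ratio), updated with noisy monitoring data
# via the conjugate normal Bayesian formulas. Numbers are illustrative.
mu0, var0 = 0.05, 0.02**2        # prior mean and variance
noise_var = 0.01**2              # measurement noise variance

rng = np.random.default_rng(2)
true_value = 0.062
data = true_value + 0.01 * rng.standard_normal(20)   # synthetic measurements

# Conjugate normal update: posterior precision = prior precision + n / noise
n = len(data)
post_var = 1.0 / (1.0 / var0 + n / noise_var)
post_mu = post_var * (mu0 / var0 + data.sum() / noise_var)
```

Each batch of inspection data shrinks the posterior variance, which is exactly the mechanism by which monitoring narrows the gap between the numerical model and the physical asset.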
The course will also cover the trends of promising new technologies and valuable topics such as the concept of modular sustainable energy islands and the sustainability analysis of selected types of Wind Energy converters.
With these lectures we aim to attract doctoral students, post-doctoral researchers, and practicing engineers working on data-driven Wind Energy projects. The objective is to provide the audience with all the tools necessary to better understand the most recent developments in the subjects described above, and thus to facilitate the transfer of technology from research to applications.

22 - 26 september

Coordinators:

  • A. John Hart
  • Christoph Meier  
Additive manufacturing (AM) of metals offers the highest production flexibility, almost unlimited freedom of design and the potential for pointwise control of microstructure and mechanical properties. However, a sub-optimal choice of process parameters often leads to high residual stresses, dimensional warping, porosity, undesirable microstructures or even failure of the part during production. The main objective of this course is to convey the physical fundamentals of metal AM processes, the basics of process implementation and monitoring, material aspects, as well as modeling and simulation techniques on different length scales.
The course begins with an overview of existing metal AM processes comprising powder bed fusion (PBF), directed energy deposition (DED), binder jetting (BJ), and material jetting (MJ). After conveying the physical and technical fundamentals, key performance attributes, and potential fields of application, means of monitoring and process control are presented. Different types of defects in metal AM are categorized and strategies for defect detection via in-situ and ex-situ metrology (e.g., X-Ray CT, density inspection, geometry control) are discussed.
Moreover, the course will convey essential material aspects such as the principles, mechanisms, and kinetics of solidification as well as the fundamentals of equilibrium and non-equilibrium thermodynamics. Phase formation and microstructure control, alloy design, powder metallurgy and process-microstructure-property correlations will be discussed in the context of metal AM and compared to conventional casting.
A further focus of the course lies on modelling and simulation approaches in metal AM, covering the underlying modelling assumptions, governing equations, discretization strategies as well as numerical aspects (e.g., balance of computational efficiency and solution accuracy). Specifically, modelling strategies for the mechanics, radiation transfer, heat transfer and sintering kinetics in powder beds are discussed.
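As a deliberately minimal caricature of the thermal side of such simulations, the sketch below solves 1D transient heat conduction with a moving Gaussian heat source by explicit finite differences. It is our illustration only; the roughly steel-like material numbers and source parameters are invented, and real part-scale models are of course multi-dimensional and thermo-mechanically coupled.

```python
import numpy as np

# 1D transient heat conduction with a moving heat source, solved with an
# explicit finite-difference scheme. All numbers are illustrative (SI units).
L, nx = 0.01, 101
dx = L / (nx - 1)
alpha = 5e-6                     # thermal diffusivity [m^2/s]
dt = 0.4 * dx * dx / alpha       # below the explicit limit 0.5*dx^2/alpha
T = np.full(nx, 300.0)           # initial temperature [K]; ends held at 300 K
x = np.linspace(0, L, nx)
v_scan, q = 0.02, 1e5            # source speed [m/s] and strength [K/s]

t = 0.0
while t < 0.4:                   # scan across part of the domain
    lap = (T[2:] - 2 * T[1:-1] + T[:-2]) / dx**2   # discrete Laplacian
    src = q * np.exp(-((x - v_scan * t) / 5e-4) ** 2)  # moving Gaussian source
    T[1:-1] += dt * (alpha * lap + src[1:-1])
    t += dt
```

Even this toy reproduces the qualitative picture behind residual-stress build-up: a steep, localized temperature peak that trails off behind the moving source.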
Moreover, mesoscale thermo-hydrodynamics modelling and simulation approaches aiming at the prediction of melt flow instabilities and final part properties such as surface roughness, layer-to-layer adhesion and residual porosity are conveyed. Eventually, part-scale thermo-solid-mechanics modelling and simulation approaches aiming at the prediction of residual stresses, thermal strains, constitutive behavior, and dimensional warping at the length scale of entire design parts are presented.
Each set of lectures will start from the respective basics but will then quickly move on to cutting-edge research topics. The lectures are primarily designed for doctoral students of mechanics, engineering, material sciences and physics with a strong interest in the different research aspects of metal AM. However, they are equally suited for young and senior researchers, who would like to gain a comprehensive overview in an efficient compact course format. It might also be interesting for practicing engineers working on high-level industrial applications of metal AM.

29 september - 3 october

Coordinators:

  • Kaushik Bhattacharya
  • Antonio De Simone  
Recent progress in artificial intelligence has delivered tools for solving complex, very high dimensional optimization problems, making it possible to discover correlations in large data sets and hence to extract features and patterns from measurements or simulations of complex phenomena.
Applications of these machine learning techniques to solid mechanics range from the identification of constitutive equations from full-field, time-dependent measurements, to the regularization of ill-posed inverse problems in imaging and control, to the automatic discovery of reduced-order models that recapitulate the main features of observed patterns in complex multi-physical systems, thanks to the automatic discovery of hidden structures and symmetries in nonlinear dynamical systems.
Building upon introductory lectures on the mathematical foundations of machine learning and on the basic computational tools required, these recent and potentially revolutionary approaches to long-standing problems in nonlinear solid mechanics will be discussed in the context of specific case studies from engineering applications.
We plan to cover the fundamental mathematical background of machine learning techniques to explain and rationalize the reasons for their success, to illustrate the possibility of automated discovery of dimensionally-reduced hidden structures and symmetries in solid mechanics problems, and to illustrate the potential of machine learning techniques in the context of application to specific classes of material systems ranging from metals, to polymers, to granular materials.
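To make the basic ingredients of such techniques concrete (parameters, loss, gradients), here is a minimal one-hidden-layer network fitted by plain gradient descent to a smooth 1D curve. The architecture, data and hyperparameters are our own illustration and do not come from the lectures.

```python
import numpy as np

# Minimal one-hidden-layer tanh network trained by full-batch gradient
# descent on a synthetic 1D regression task. Everything is illustrative.
rng = np.random.default_rng(0)
x = np.linspace(-1, 1, 64)[:, None]
y = np.sin(np.pi * x)                      # target curve to be learned

W1 = rng.normal(0, 0.5, (1, 16)); b1 = np.zeros(16)
W2 = rng.normal(0, 0.5, (16, 1)); b2 = np.zeros(1)

lr, losses = 0.05, []
for _ in range(2000):
    h = np.tanh(x @ W1 + b1)               # hidden layer
    pred = h @ W2 + b2                     # linear output layer
    err = pred - y
    losses.append(float(np.mean(err ** 2)))
    # Backpropagation of the mean-squared-error loss
    g_pred = 2 * err / len(x)
    gW2 = h.T @ g_pred; gb2 = g_pred.sum(0)
    g_h = (g_pred @ W2.T) * (1 - h ** 2)   # tanh' = 1 - tanh^2
    gW1 = x.T @ g_h; gb1 = g_h.sum(0)
    W1 -= lr * gW1; b1 -= lr * gb1
    W2 -= lr * gW2; b2 -= lr * gb2
```

The same loop, with richer architectures and physics-informed or operator-valued losses, underlies the neural-network and neural-operator tools treated in the first block of lectures.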
The school will be structured according to the following plan:
  • Mathematical background to neural networks and neural operators. Lectures by Carola Schönlieb (Cambridge) and Nikola Kovachki (Nvidia/NYU).
  • Discovery of internal variables and invariant manifolds in history-dependent phenomena. Lectures by Antonio DeSimone (SISSA/Pisa) and Kaushik Bhattacharya (Caltech).
  • Learning from experimental data and connections to numerical methods. Lectures by Dirk Mohr (ETH) and Laurent Stainier (Nantes).

6 - 10 october

The proposed course aims to provide an overview of recent developments in the field of image-based mechanics, giving participants clues for understanding how images are acquired, used and analysed, and how they can serve understanding and validation purposes. Having a global vision of the whole pipeline, from the lab to the computing center, is essential for practitioners and researchers who want to make their work more effective. The course is divided into several blocks that answer the following questions: How are images acquired? What can we do with images? How is image analysis formulated and implemented? How robust are the results? To help participants in their future use of images, this comprehensive overview of full-field measurement deployment and quantitative image analysis is coupled with an in-depth exploration of best practices in formulation and implementation. The immense potential of these techniques is showcased through discussions of related inverse methods for material characterisation, multimodal setups, and in-situ/in-operando implementations, as well as the evaluation of their performance.
Among the possible uses of images in mechanical analysis, full-field displacement measurements and microstructure analysis are covered. Concerning full kinematic field measurements, an overview of different versions and their associated practical implementation in the lab is presented together with an introduction of the numerical implementation of digital image correlation (DIC), localized spectrum analysis (LSA) and optical flow (OF) methods. Applications and specific features of 2D, stereo and volume imaging will be discussed. Quantitative image analysis is discussed for microstructure characterization and crack detection using 3D images. Various image processing techniques, from classical mathematical morphology to machine learning-aided image segmentation, are introduced and discussed. Approaches for model-based quantitative image analysis will also be showcased. Requirements on image resolution and methods for correcting sampling biases will be addressed.
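The core idea behind correlation-based displacement measurement can be shown in one dimension: recover a shift by maximising the correlation between a reference and a deformed signal. The sketch below (ours, on synthetic data) shows only the integer-shift backbone; real DIC works on 2D/3D images with subset matching and subpixel interpolation.

```python
import numpy as np

# 1D analogue of DIC: estimate a rigid displacement as the argmax of the
# circular cross-correlation between reference and deformed signals,
# computed via the FFT. Data are synthetic.
rng = np.random.default_rng(3)
n, true_shift = 256, 7
ref = rng.standard_normal(n)                 # synthetic speckle pattern
deformed = np.roll(ref, true_shift)          # rigid (periodic) translation

# Circular cross-correlation via the FFT; its peak locates the displacement
corr = np.fft.ifft(np.fft.fft(deformed) * np.conj(np.fft.fft(ref))).real
est_shift = int(np.argmax(corr))
```

The same correlation peak, localized per subset and refined to subpixel accuracy, is what DIC, LSA and optical-flow methods exploit in their different formulations.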
From the results obtained using the previously mentioned techniques, it is now common to identify the mechanical behaviour of the studied materials at the chosen observation scale.
The course thoroughly explores advanced identification methods, offering participants a deep understanding of techniques such as Finite Element Model Updating (FEMU) and Virtual Fields Method (VFM). Through practical demonstrations and theoretical discussions, participants will gain proficiency in implementing these advanced methods to enhance material characterisation and simulation accuracy. Recently proposed non-parametric (without postulating a constitutive relation) identification techniques based on the data-driven paradigm will also be introduced theoretically and practically. These approaches require a computational model numerically twinning the sample. A short overview of the computational issues regarding geometrical description and computational efficiency is also proposed. Last, the question of robustness is covered. To this end, uncertainty quantification will be considered as an ongoing issue.
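Stripped to its essentials, FEMU updates model parameters so that simulated observations match measured ones in the least-squares sense. The 0D sketch below (our own toy, with a synthetic "measured" data set and linear elasticity as the model) shows that structure with a Gauss–Newton update; in practice the forward model is a full finite element simulation and the observations are full-field measurements.

```python
import numpy as np

# FEMU in a nutshell (0D): identify Young's modulus E so that the model
# stress E*strain matches noisy "measurements" in the least-squares sense.
# Data and the reference value E_true are synthetic.
rng = np.random.default_rng(4)
E_true = 210e9
strain = np.linspace(1e-4, 1e-3, 10)
stress_meas = E_true * strain + 1e5 * rng.standard_normal(10)

E = 100e9                                   # initial guess
for _ in range(20):
    residual = E * strain - stress_meas     # model minus measurement
    jac = strain                            # d(residual)/dE
    E -= (jac @ residual) / (jac @ jac)     # Gauss-Newton update
```

For a model that is linear in its parameter, the loop converges in a single step; the iterative structure matters once the forward model is nonlinear.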

13 - 17 october

Coordinators:

  • Gabor Orosz
  • Denes Takacs
Micromobility has become increasingly relevant in recent decades, as micromobility vehicles (electric scooters, bicycles, skateboards, unicycles, etc.) provide modern solutions to the last-mile problem of urban transportation. Although the invention of these vehicles dates back to the last century, and the analysis of their dynamics also began at that time, electrification changed the game. Higher speeds and greater agility characterize modern micromobility vehicles, while, at the same time, non-professional riders are using them on city roads in heavy traffic. Thus, the risk of serious accidents is high, and understanding the dynamics and control of e-bikes, scooters, skateboards, and unicycles is becoming more and more important.
The aim of this course is to give a perspective on the dynamics and control of micromobility vehicles. First, the classical mechanical modeling of the simplest nonholonomic systems will be presented. Starting from Newton's second law and introducing kinematic constraints, the intriguing, speed-dependent dynamics of nonholonomic systems will be demonstrated via the analysis of the uncontrolled skateboard. The effect of human control on stability is investigated via the implementation of linear state feedback with human reaction time. Although single-track vehicles are highly unstable at rest, with some forward speed the system becomes easy to stabilize and control. The course will showcase the key fundamental results on this important topic.
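The interplay of linear state feedback and human reaction time can be illustrated on the simplest unstable model: an inverted-pendulum-like lean equation with the rider's feedback acting on the state seen a fixed delay earlier. The gains, delay and timescales below are our own illustrative choices, not values from the course.

```python
# Linearised "rider on an unstable vehicle" toy model: theta'' = theta + u,
# stabilised by linear state feedback u acting with a reaction delay tau.
# All parameters are illustrative; integration is explicit Euler.
dt, tau = 0.01, 0.1                 # time step and reaction delay [s]
delay = int(tau / dt)
k1, k2 = 3.0, 2.0                   # feedback gains on lean angle and rate
theta, omega = 0.1, 0.0             # initial lean angle and angular rate

hist = [(theta, omega)] * (delay + 1)   # state history for the delayed feedback
for _ in range(3000):               # 30 s of simulated time
    th_d, om_d = hist[0]            # state the rider reacts to, tau seconds old
    u = -k1 * th_d - k2 * om_d      # delayed linear state feedback
    domega = theta + u              # unstable open-loop dynamics plus control
    theta += omega * dt
    omega += domega * dt
    hist.pop(0); hist.append((theta, omega))
```

With these gains the small delay leaves the closed loop stable; increasing the reaction time eventually destabilises it, which is precisely the kind of effect the lectures analyse systematically.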
The theoretical background of nonholonomic systems will be given via lessons on the Lagrangian approach extended for kinematic constraints. Then, the concept of pseudo-velocities will be introduced, and the Appellian approach will be presented. The differences between the aforementioned methods will be shown via the analysis of the spatial rolling problem of a rigid wheel. A simplified model of the electric unicycle will also be introduced. The analysis of nonholonomic articulated robotic vehicles will highlight how periodic excitations can be used for driving micromobility vehicles. Namely, the motion of the Twistcar (which is a very popular kids' cart) is analyzed, and forward and backward motions are identified depending on the amplitude, frequency, and phase of the excitation.
Rider modeling, path tracking, and stabilization of bicycles and e-scooters will also be presented. The modeling of the wobble mode of bicycles, including frame flexibility, transient tire-road contact forces, and the role of rider posture, will be investigated. The stability of e-scooters under braking will be analysed, and different braking control strategies (ABS, optimal brake-force distribution) will be highlighted. Kalman-filter-based estimation of the tire-road friction potential and of the vehicle-rider mass distribution will also be covered. Finally, the course will discuss the development of automated single-track vehicles: path-following control strategies will be analyzed, and experimental investigations of bicycle control will be showcased.
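The estimation idea can be shown in its scalar form: a Kalman filter tracking a slowly varying parameter from noisy indirect measurements. The sketch below is our illustration only; the "friction coefficient", the noise levels and the random-walk process model are all invented for the example.

```python
import numpy as np

# Scalar Kalman filter estimating a slowly varying parameter (standing in
# for the tire-road friction potential) from noisy measurements.
# Model, noise levels and the "true" value are synthetic.
rng = np.random.default_rng(5)
mu_true = 0.8                     # true (constant) friction coefficient
q_var, r_var = 1e-6, 0.05**2      # process and measurement noise variances

mu_hat, p = 0.5, 1.0              # initial estimate and its variance
for _ in range(200):
    z = mu_true + 0.05 * rng.standard_normal()   # noisy measurement
    p += q_var                                   # predict (random-walk model)
    k = p / (p + r_var)                          # Kalman gain
    mu_hat += k * (z - mu_hat)                   # measurement update
    p *= (1.0 - k)
```

In vehicle applications the same recursion runs on a multi-dimensional state, with the measurement extracted from wheel-slip and acceleration signals rather than observed directly.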

20 - 24 october

Coordinators:

  • Peter Wriggers
  • Jörg Schröder
This course provides an advanced exploration of numerical methods utilized in the analysis of solid mechanics problems. It includes sophisticated emerging techniques essential for addressing complex engineering challenges, focusing on both theoretical understanding and practical implementation. The techniques range from non-classical methods to machine learning.
Lectures will provide an in-depth exploration of lattice Boltzmann methods (LBMs) and their applications in solid mechanics. LBMs are a class of computational fluid dynamics techniques based on kinetic theory, but they have been extended to simulate a wide range of phenomena beyond fluid flow. Students will learn the fundamentals of LBMs and how they can be adapted to address various solid mechanics scenarios.
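The collide-and-stream structure shared by all LBM variants is easiest to see in the smallest possible setting: a D1Q2 lattice for pure diffusion. The sketch below is our own minimal illustration in lattice units (the relaxation time and initial condition are invented), not a scheme from the lectures.

```python
import numpy as np

# Minimal D1Q2 lattice Boltzmann scheme for pure diffusion: two populations
# moving left and right, BGK collision towards a diffusive equilibrium,
# then streaming. Lattice units; parameters are illustrative.
nx, tau = 100, 0.8                     # lattice size and relaxation time
rho = np.zeros(nx); rho[nx // 2] = 1.0 # initial density: a single peak
f_plus = 0.5 * rho                     # population moving right
f_minus = 0.5 * rho                    # population moving left

for _ in range(200):
    rho = f_plus + f_minus             # macroscopic density (zeroth moment)
    # BGK collision towards the equilibrium w_i * rho with w = 1/2
    f_plus += (0.5 * rho - f_plus) / tau
    f_minus += (0.5 * rho - f_minus) / tau
    f_plus = np.roll(f_plus, 1)        # stream one cell to the right
    f_minus = np.roll(f_minus, -1)     # stream one cell to the left

rho = f_plus + f_minus                 # final density field
```

Mass is conserved exactly while the initial peak spreads diffusively; solid-mechanics extensions of LBM replace the equilibrium and moments but keep this same two-step structure.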
Another set of lectures deals with the least squares finite element method (LSFEM), which provides a robust framework for solving a wide range of boundary value problems. The theoretical and algorithmic formulations are derived for applications in solid mechanics. In addition, mixed finite element methods considering linear and nonlinear formulations are discussed and applied to engineering analysis problems.
Virtual element methods (VEMs) have a broad range of applications. VEM represents a new modern numerical discretization technique capable of handling complex geometries and heterogeneous material properties. The lectures provide students with theoretical foundations and hands-on experience in utilizing VEMs for solving solid mechanics problems efficiently and accurately.
A comprehensive description of the field of electro-magneto-mechanics will be presented, starting from the Maxwell and Cauchy equations, moving on to sophisticated computational formulations, and finishing with practical finite element technologies. With a primary focus on large strains, the lectures will cover fundamental aspects such as well-posedness and specific spatial discretization techniques. For applications in soft robotics (i.e. electro-active polymers), the development of structure-preserving time integrators and data-driven, machine-learning-based constitutive models will demonstrate a broad range of realistic applications.
Another block of lectures explores, and critically questions, the use of machine learning (ML) in solid mechanics applications. ML has proven to be a helpful tool for reducing computational costs and enabling accurate representations of data ranging from experiments to high-fidelity simulations. The lectures cover the principles of ML techniques and discuss their potential to solve partial differential equations, facilitate multiscale problems, and describe constitutive behavior.
Further lectures explore automation of discretization techniques in the context of the numerical methods discussed in the course. It will be shown that automation helps to develop more complex emerging discretization techniques like the virtual element method.
The course will appeal to doctoral students and postdoctoral researchers from academia and industry. Participants will gain expertise in employing different numerical methods to accurately model and simulate a wide range of solid mechanics phenomena.