Conference Agenda

Overview and details of the sessions of this conference. Please select a date or location to show only sessions held on that day or at that location. Select a single session for a detailed view (with abstracts and downloads, if available).

 
 
 
Session Overview
Date: Wednesday, 13/Sept/2023
9:00am - 10:40am
MS09-1: Multi-scale shape optimization problems in continuum mechanics
Location: EI7
Session Chair: Jacques Zwar
Session Chair: Daniel Wolff
 
9:00am - 9:20am

Damage optimisation in forming processes using Abaqus as FE solver

F. Guhr, F.-J. Barthold

TU Dortmund University, Germany

One phenomenon to consider in modern metal forming is the accumulation of ductile damage during the forming process. Ductile damage, i.e. the nucleation, growth and subsequent accumulation of micro-defects such as voids, is inherently present in any formed part. It is therefore advisable to reduce these damage effects and in turn produce parts with reduced damage accumulation and thus higher safety factors. Here, optimisation is a useful tool to enhance already established processes with damage minimisation in mind. By defining process-dependent parameters as the design variables of the optimisation, improved tool sets can be generated to reduce the ductile damage of the formed part.

An important aspect to consider when simulating forming processes is the underlying necessity of contact algorithms. Approaches to handling these discontinuous problems are available in the literature, e.g. by utilising sub-gradients; however, they have mainly seen academic use. The problems targeted by the proposed optimisation are very complex in nature and therefore require a robust and efficient implementation. Consequently, the commercial finite element software Abaqus is used to simulate the processes and solve the necessary contact problems.

In this submission, a framework implemented in Matlab is presented, which utilises Abaqus as the finite element program to solve the stated optimisation problems. The framework is applied to different forming processes, such as rod extrusion and deep-drawing-like processes, in order to optimise them with regard to their accumulated damage. Different sets of geometric parameters are defined, which in turn result in optimal designs for the work tools of the analysed problems. Due to the nature of the framework, the optimisation is not limited to process optimisation, and further examples regarding curve fitting for experimental setups are also presented.
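The outer loop of such a solver-in-the-loop optimisation can be sketched as follows. The quadratic mock objective stands in for an actual Abaqus run (writing the input deck, submitting the job, post-processing the accumulated damage); `run_forming_simulation`, `optimise` and the target design are illustrative placeholders, not part of the authors' framework.

```python
import numpy as np

def run_forming_simulation(design):
    """Stand-in for an external FE run returning a scalar damage measure.
    A real driver would generate the input deck, run the solver and
    post-process the results; here a quadratic mock objective is used."""
    target = np.array([1.2, 0.4])          # hypothetical low-damage tool design
    return float(np.sum((design - target) ** 2))

def optimise(design, steps=200, lr=0.2, h=1e-5):
    """Gradient descent with central finite differences, since the external
    solver only returns objective values, not gradients."""
    for _ in range(steps):
        g = np.array([(run_forming_simulation(design + h * e) -
                       run_forming_simulation(design - h * e)) / (2.0 * h)
                      for e in np.eye(len(design))])
        design = design - lr * g
    return design

best = optimise(np.zeros(2))
```

Each gradient evaluation costs two solver runs per design variable, which is why such black-box loops favour few, carefully chosen geometric parameters.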



9:20am - 9:40am

Efficient cavity design for injection molding through spline-based methods

F. Zwicke, S. Elgeti

Technische Universität Wien, Austria

When molding processes, such as injection molding, are used to produce plastic parts, it can be difficult to achieve the correct product shape. As part of the process, the material must cool down and solidify. Since this can happen in an inhomogeneous way, residual stresses can remain in the material. These lead to warpage after the part is ejected from the machine.

There are several aspects of the process that could be adjusted to improve the resulting product shape. The focus of this work is on the shape of the mold cavity. If suitable adjustments are made to this cavity, the product shape can be improved although shrinkage and warpage still occur. In order to estimate the effects of certain cavity shape changes, a numerical simulation method for the process is required.

This cavity design problem can then be treated either as a shape optimization problem or as an inverse problem. In the former case, a suitable shape parameterization and objective function need to be found. Both options profit from the use of splines, since this allows the shape to be transferred back to a CAD format. The method of Isogeometric Analysis (IGA) offers a convenient way of using splines as a geometry representation in the Finite Element Method. We will discuss the different design approaches and explain the benefits and challenges involved with the spline representations.



9:40am - 10:00am

Adjoint sensitivity analysis for manufacturing constraints in shape optimization

G. Barrón Loeza1,2, S. Peter1,2, M. Hojjat2, K.-U. Bletzinger1

1Technical University of Munich, Germany; 2BMW Group Digital Campus Munich, Germany

In the typical product development process of an automotive part, multiple disciplinary teams collaborate to converge on a final design. Structural mechanics, design, crashworthiness and manufacturability are relevant disciplines that mutually influence one another. Sheet metal forming operations are the cornerstone of automotive part production, as a significant portion of the individual components of the Body-in-White (BiW) are fabricated through stamping and deep-drawing processes. Manufacturability assurance for sheet metal forming is commonly addressed by engineering experience and heuristic rules based on geometrical constraints. This work explores the idea of formulating analytical manufacturing constraints for stamped and deep-drawn parts and their inclusion into existing multidisciplinary shape optimization workflows to address formability and performance objectives simultaneously.

As discussed in [1], gradient methods based on adjoint sensitivity analysis, together with a filtering technique such as Vertex Morphing, are powerful tools for the typical large and very large optimization use cases in industry. In this contribution, we present the current progress in the formulation of a constraint for shape optimization that accounts for the manufacturing process, discuss the definition of a meaningful objective function, and present details regarding the calculation of adjoint-based sensitivities and their combination with Vertex Morphing. The formulations of the primal and adjoint problems are also presented, based on the simplified finite element analysis for sheet metal forming proposed in [2].

[1] K.-U. Bletzinger. A consistent frame for sensitivity filtering and the vertex assigned morphing of optimal shape. Structural and Multidisciplinary Optimization, 49, 2014.

[2] Y. Q. Guo, J. L. Batoz, J. M. Detraux, and P. Duroux. Finite element procedures for strain estimations of sheet metal forming parts. International Journal for Numerical Methods in Engineering, 30(8):1385–1401, 1990.



10:00am - 10:20am

Shape modes of dynamic structures

S. A. Ghasemi, J. Liedamann, F.-J. Barthold

TU Dortmund University

This work aims to gain a deeper understanding of sensitivity information through the use of principal component analysis (PCA). By decomposing sensitivity matrices, it is possible to explore and analyze the underlying relationships between variables and the impact of their changes on the overall structure; the approach for this analysis is discussed in [1]. PCA analyzes the eigenvectors of the covariance matrix, which are known as the principal components. The first principal component is the most significant mode of variation, as it indicates the direction of highest variance in the data. The second principal component again represents a direction of maximum variance, subject to being orthogonal to the first, and this process continues for the remaining principal components.

The work at hand makes use of gradient-based sensitivity analysis [2] for dynamic structures and compares two different methods for shape design: Isogeometric Analysis (IGA) [3] and the Finite Element Method (FEM). The main focus is on direct differentiation, but if analytical gradients are not available, numerical differentiation methods such as the complex-step method (CSM) can be used as alternatives. We utilize different types of basis functions, such as Bernstein polynomials, B-splines, and Non-Uniform Rational B-Splines (NURBS), to describe the shape of the structure. IGA has several advantages over traditional FEM-based approaches, including the ability to describe geometry accurately using fewer control points, high-order continuity, and increased flexibility due to control point weights. These characteristics have a significant impact on shape sensitivity analysis. IGA is used during the structural optimization process to avoid costly remeshing and design velocity field calculations, for which it is more efficient and effective than traditional FEM approaches.
In contrast to static analysis, the response of a structure to time-dependent loads is significantly affected by inertia and damping effects. The necessary computational characteristics for this type of problem are discussed and the full solution algorithm is presented.
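The decomposition step described above can be illustrated with a small sketch: PCA of a (here synthetic) sensitivity matrix via the SVD of its centered columns, where the right singular vectors are the principal components (shape modes) and the squared singular values give each mode's share of the variance. The matrix and its scaling are illustrative stand-ins for actual shape sensitivities.

```python
import numpy as np

# Hypothetical sensitivity matrix: rows = sampled response gradients,
# columns = design variables. Strong per-column scaling makes the
# leading mode clearly dominant.
rng = np.random.default_rng(1)
S = rng.standard_normal((50, 6)) * np.array([8.0, 3.0, 1.0, 0.5, 0.2, 0.1])

# PCA via SVD of the column-centered matrix.
Sc = S - S.mean(axis=0)
U, sigma, Vt = np.linalg.svd(Sc, full_matrices=False)

# Rows of Vt are the principal components (mutually orthonormal);
# the variance share of each mode follows from the singular values.
explained = sigma**2 / np.sum(sigma**2)
```

Truncating to the first few rows of `Vt` yields a reduced set of dominant shape modes for further analysis.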

References

[1] N. Gerzen and F.-J. Barthold, “Design space exploration based on variational sensitivity analysis,” PAMM, vol. 14, no. 1, pp. 783–784, Dec. 2014. DOI: 10.1002/pamm.201410374.

[2] F.-J. Barthold, N. Gerzen, W. Kijanski, and D. Materna, “Efficient variational design sensitivity analysis,” in Mathematical Modeling and Optimization of Complex Structures (Computational Methods in Applied Sciences), P. Neittaanmäki, S. Repin, and T. Tuovinen, Eds., Computational Methods in Applied Sciences. DOI: 10.1007/978-3-319-23564-6_14.

[3] T. Hughes, J. Cottrell, and Y. Bazilevs, “Isogeometric analysis: CAD, finite elements, NURBS, exact geometry and mesh refinement,” Computer Methods in Applied Mechanics and Engineering, vol. 194, no. 39-41, pp. 4135–4195, 2005. DOI: 10.1016/j.cma.2004.10.008.



10:20am - 10:40am

Unified shape and topological sensitivity analysis for level-set based topology optimization

M. Gfrerer1, P. Gangl2

1TU Graz, Austria; 2RICAM Linz, Austria

Topology optimization is an effective numerical tool to design high-performance, efficient and economical lightweight structures. In this talk the solution procedure for a two material topology optimization problem constrained by a scalar second order PDE is presented. The approach relies on a numerical topological-shape derivative as a main ingredient for the gradient-based solution algorithm.

We state the optimization problem in the continuous setting and subsequently discretize it. On the continuous level we review the classical shape derivative, where the perturbation is realized by the action of a vector field, and the classical topological derivative, where the perturbation is done by means of sets. In contrast to this, in the presented approach the geometry is represented by the zero level-set of a scalar function. Based on this representation, we suggest a topological-shape derivative unifying the concepts of shape derivative and topological derivative. In the next step we consider the discretization of the PDE as well as the level-set function by linear triangular Lagrange finite elements. In this numerical setting we can now realize the perturbation of the level-set function by perturbing its nodal values. Based on this, we give explicit formulas for the computation of the numerical topological-shape derivative. This derivative information is used in an algorithm to update the level-set function in which no distinction between shape changes and topological changes is made. The algorithm is tested in a numerical example, where the shape of two circles with different radii is recovered.
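The nodal update step can be illustrated in one dimension: the material interface is the sign change of the nodal level-set values, and one descent step moves those values against a derivative field, with no separate treatment of boundary motion versus topological change. The derivative field below is hand-picked for illustration, not computed from a PDE solve.

```python
import numpy as np

# Nodal level-set values on a 1D grid; the zero level set marks the interface.
x = np.linspace(0.0, 1.0, 101)
phi = x - 0.303                      # interface initially near x = 0.303
g = np.where(x > 0.5, -1.0, 1.0)     # illustrative nodal derivative field

# One update step on the nodal values (steepest descent with fixed step).
step = 0.05
phi_new = phi - step * g

# Interface = sign changes of phi_new; new sign changes would indicate
# a topological change, handled by the very same update rule.
crossings = np.nonzero(np.diff(np.sign(phi_new)))[0]
```

Here the single interface simply translates; had the derivative field pushed `phi` through zero elsewhere, a new inclusion would have nucleated by the same mechanism.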

 
9:00am - 10:40am
MS16-1: Modeling, simulation and quantification of polymorphic uncertainty in real-world engineering problems
Location: EI9
Session Chair: F. Niklas Schietzold
Session Chair: Selina Zschocke
 
9:00am - 9:20am

The consideration of aleatory and epistemic uncertainties in the data assimilation by using a multilayered uncertainty space

M. Drieschner, C. Herrmann, Y. Petryna

Technische Universität Berlin, Chair of Structural Mechanics, Gustav-Meyer-Allee 25, 13355 Berlin, Germany

This study has been performed within the research project MuScaBlaDes "Multi scale failure analysis with polymorphic uncertainties for optimal design of rotor blades", which is part of the DFG Priority Programme (SPP 1886) "Polymorphic Uncertainty Modelling for the Numerical Design of Structures" started in 2016.

The modeling of real engineering structures is a tough challenge and is always accompanied by uncertainties. Geometry, material and all boundary conditions should be quantified as accurately as possible. The quality of the numerical prediction of the system behavior and of desired system outcomes depends on the underlying model. Real measurements on the structure make it possible to assess and verify the numerics. In general, discrepancies exist between the predicted and the measured values. Within the data assimilation framework, it is possible to consider both for the estimation of the system state. Additionally, unknown parameters can be estimated at the same time in nonlinear problems by using the ensemble Kalman filter (EnKF).

In this contribution, the EnKF is extended by parameters, which influence the system state and which are subject to aleatory or epistemic uncertainty. These parameters have to be quantified by suitable uncertainty models first, and then integrated into the numerical simulation. Stochastic, interval and fuzzy variables are used leading to a multilayered uncertainty space and a nested numerical simulation in which the EnKF is embedded. Besides an academic example, the practical applicability is demonstrated on real engineering structures with synthetic and also with real measurement data.
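The EnKF building block at the core of this nested simulation can be sketched as a single stochastic analysis step; the scalar state, the linear observation operator and all numbers below are synthetic stand-ins for the structural model and the measurement data.

```python
import numpy as np

def enkf_update(X, y, H, R, rng):
    """One stochastic EnKF analysis step.
    X: (n, N) state ensemble, y: (m,) observation,
    H: (m, n) linear observation operator, R: (m, m) observation covariance."""
    N = X.shape[1]
    A = X - X.mean(axis=1, keepdims=True)        # state anomalies
    HX = H @ X
    HA = HX - HX.mean(axis=1, keepdims=True)     # observation-space anomalies
    P_yy = HA @ HA.T / (N - 1) + R               # innovation covariance
    P_xy = A @ HA.T / (N - 1)                    # state/observation cross-covariance
    K = P_xy @ np.linalg.inv(P_yy)               # ensemble Kalman gain
    # Perturbed observations keep the analysis spread statistically consistent.
    Y = y[:, None] + rng.multivariate_normal(np.zeros(len(y)), R, size=N).T
    return X + K @ (Y - HX)

rng = np.random.default_rng(0)
X0 = rng.normal(0.0, 1.0, size=(1, 500))         # prior: mean ~0, variance ~1
Xa = enkf_update(X0, np.array([1.0]), np.array([[1.0]]),
                 np.array([[0.25]]), rng)
```

In the multilayered setting described above, such an update would run inside outer loops over the interval and fuzzy parameter realizations, with the stochastic layer handled by the ensemble itself.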



9:20am - 9:40am

Surrogate assisted data-driven multiscale analysis considering polymorphic uncertain material properties

S. Zschocke, W. Graf, M. Kaliske

Institute for Structural Analysis, Technische Universität Dresden, Germany

Composite materials, such as (reinforced) concrete, which are designed by combining different constituents to obtain materials with beneficial properties for specific applications, are involved in many current research topics. The combination of different materials yields heterogeneities. These must be taken into account in the numerical simulation in order to obtain realistic results. Traditionally, the FE2 method based on the concept of numerical homogenization is used to obtain the macro-structural constitutive response at each integration point through a nested finite element analysis, whereby the meso-structural behavior is characterized by representative volume elements (RVE).

The main drawback of this method is the large computational effort, because the representative volume elements, which are usually very complex, need to be evaluated at every material point. An approach to reduce the computational effort is the concept of decoupled numerical homogenization. To this end, a database representing the macroscopic material behavior is derived by solving the boundary value problem of the considered RVE for different applied boundary conditions. Subsequently, the approach of data-driven computational mechanics is utilized to obtain an approximate solution of the boundary value problem on the macroscale with direct reference to stress-strain data obtained from mesoscale evaluations. In order to obtain accurate results by data-driven analyses, a sufficient data set density with respect to the present problem is essential.
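The data-driven step can be illustrated with a minimal 1D sketch: given a trial strain-stress state, the closest point of the RVE-generated data set is selected under the weighted distance commonly used in data-driven computational mechanics (in the sense of Kirchdoerfer and Ortiz). The linear data set and all values are illustrative, not taken from the contribution.

```python
import numpy as np

# Hypothetical material data set: strain-stress pairs sampled from
# mesoscale RVE evaluations (1D, linear response for illustration).
eps_data = np.linspace(-0.01, 0.01, 201)
sig_data = 30e3 * eps_data           # MPa, illustrative concrete-like stiffness
C = 30e3                             # numerical constant weighting the metric

def closest_data_state(eps, sig):
    """Return the data point minimizing the weighted strain-stress distance."""
    d = C * (eps_data - eps) ** 2 + (sig_data - sig) ** 2 / C
    k = int(np.argmin(d))
    return eps_data[k], sig_data[k]
```

The macroscale solver then alternates between enforcing equilibrium/compatibility and this nearest-point projection, which is why the density of the data set directly controls the accuracy of the result.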

With respect to the definition of the concrete mesostructure, aleatoric uncertainties are introduced by natural variability especially in the material behavior. Additional epistemic uncertainties are caused by manufacturing tolerances and an insufficient amount of measurement data. A combined consideration is realized by polymorphic uncertainty models. The acquisition of data sets consisting of uncertain macroscopic stress-strain states leads to a large number of required evaluations of the considered RVEs and correspondingly high computational effort, which is addressed by incorporating surrogate models for uncertainty quantification. The large number of uncertainty propagations that must be performed for data set generation is the main challenge in creating the surrogates. Accordingly, overhead and training time caused by surrogate creation need to be as low as possible in order to avoid impracticably high computational cost. In this contribution, a polynomial chaos assisted data set acquisition approach enabling the efficient consideration of polymorphic uncertainty is presented and applied in the context of data-driven computational homogenization.



9:40am - 10:00am

Sensitivity analysis in the presence of polymorphic uncertainties based on tensor surrogates

D. Moser

IGPM - RWTH

We will explore sensitivity analysis for mechanical engineering problems in the presence of polymorphic uncertainty. Polymorphic uncertainty quantification allows for the incorporation of different sources of uncertainty, such as epistemic and aleatory ones, which have varying levels of complexity and dependence.

A distance measure between the most common polymorphic uncertainty models will be at the core of the computation of the sensitivity indices.

We will discuss how sensitivity analysis can aid in understanding the effects of input uncertainties on system performance and inform further polymorphic uncertainty quantification analysis. Additionally, we will cover methods for efficiently computing sensitivity measures for high-dimensional systems based on tensor surrogates.



10:00am - 10:20am

A computational sensitivity analysis tool for investigations of structural analysis models of real-world engineering problems

M. Fußeder, K.-U. Bletzinger

Chair of Structural Analysis, Technical University of Munich, Germany

The method of influence functions is a well-known engineering tool in structural analysis to investigate the consequences of load variations on deflections and stress resultants. Based on its strong relationship with adjoint sensitivity analysis [1], the traditional method of influence functions can be generalized as an engineering tool for sensitivity analysis [2]. The aim of our contribution is to give insights into these methodical extensions and to demonstrate their added value.

The traditional influence function approach can be seen as a work balance based on Betti's theorem. In our contribution we show how that work expression can be extended for sensitivity analysis with respect to various parameters. We discuss the significance of the resulting mechanically interpretable sensitivity analysis and its limitations. In that regard, we also specify how the graphical analysis procedure, for which the traditional influence function technique is well known, can be generalized. The intention is to use those “sensitivity maps” to identify the positions of extreme influences as well as the individual contributions of the partitions to the final sensitivity and its spatial distribution. In this way, structural analysis models of real-world engineering problems can be systematically explored, and important model parameters to be considered in uncertainty quantification can be identified. Hence, our method has the potential to provide valuable support for preliminary investigations of structural models as a basis for polymorphic uncertainty analysis.
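The link between influence functions and adjoint sensitivities [1] can be shown on a small discrete model: for a linear system K u = f with response J = cᵀu, the adjoint solve Kᵀλ = c delivers the influence ordinates, i.e. dJ/df_i = λ_i. The spring-chain stiffness, load and response choice below are illustrative, not taken from the paper.

```python
import numpy as np

# Discrete structural model: K u = f (spring chain with fixed ends),
# response J = c^T u, here the deflection at the middle node.
n = 5
K = 2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)   # stiffness matrix
f = np.ones(n)                                           # reference load
c = np.zeros(n); c[2] = 1.0                              # J = midspan deflection

# Adjoint (influence) vector: its entries are the influence function
# sampled at the nodes, dJ/df_i = lam_i (Betti's theorem in discrete form).
lam = np.linalg.solve(K.T, c)

# Cross-check against direct perturbation of the load at node 0.
u = np.linalg.solve(K, f)
h = 1e-6
f2 = f.copy(); f2[0] += h
dJ_fd = (c @ np.linalg.solve(K, f2) - c @ u) / h
```

Plotting `lam` over the nodes reproduces the classical influence line; the generalization discussed above extends the same adjoint construction from load parameters to stiffness and geometry parameters.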

References

[1] A. Belegundu, Interpreting Adjoint Equations in Structural Optimization, Journal of Structural Engineering 112 (1986) 1971–1976. https://doi.org/10.1061/(ASCE)0733-9445(1986)112:8(1971).

[2] M. Fußeder, R. Wüchner, K.-U. Bletzinger, Towards a computational engineering tool for structural sensitivity analysis based on the method of influence functions, Engineering Structures 265 (2022) 114402. https://doi.org/10.1016/j.engstruct.2022.114402.

 
10:40am - 11:10am
Coffee Break
Location: Aula
11:10am - 12:40pm
PL3: Plenary Session
Location: EI7
Session Chair: Antonia Wagner
 
11:10am - 11:55am

In vitro, in vivo, in silico: use of computer modeling and simulation in skeletal pathologies and treatment

L. Geris1,2

1University of Liège, Belgium; 2KU Leuven, Belgium

The growing field of in silico medicine focuses mostly on the two largest classes of medicinal products: medical devices and pharmaceuticals. However, the in silico approach also has considerable benefits for advanced therapy medicinal products, which essentially combine medical devices with a viable cell or tissue part. In this talk an overview will be provided of the budding field of in silico regenerative medicine in general and computational bone tissue engineering (TE) in particular. As basic science advances, one of the major challenges in TE is the translation of the increasing biological knowledge on complex cell and tissue behavior into a predictive and robust engineering process. Mastering this complexity is an essential step towards clinical applications of TE. Computational modeling makes it possible to study the biological complexity in a more integrative and quantitative way. Specifically, computational tools can help not only in quantifying and optimizing the TE product and process but also in assessing the influence of the in vivo environment on the behavior of the TE product after implantation. Examples will be shown to demonstrate how computational modeling can contribute to all aspects of the TE product development cycle: from providing biological blueprints, over guiding cell culture and scaffold design, to understanding the etiology and optimal treatment strategies for large skeletal defects. Depending on the specific question that needs to be answered, the optimal model systems can vary from single-scale to multiscale. Furthermore, depending on the available information, model systems can be purely data-driven or more hypothesis-driven in nature. The talk aims to make the case for in silico models receiving proper recognition alongside the in vitro and in vivo work in the TE field.



11:55am - 12:40pm

Mixed-dimensional finite element formulations for beam-to-solid interaction

I. Steinbrecher

Universität der Bundeswehr München, Germany

The interaction between slender fiber- or rod-like components, where one spatial dimension is much larger than the other two, with three-dimensional structures (solids) is an essential mechanism of mechanical systems in numerous fields of science, engineering and bio-mechanics. Examples include reinforced concrete, supported concrete slabs, fiber-reinforced composite materials and the impact of a tennis ball on the string bed of a tennis racket. Applications can also be found in medicine, where stent grafts are a commonly used device for endovascular aneurysm repair, and in many biological systems such as arterial wall tissue with collagen fibers.
The different types of dimensionality of the interacting bodies, i.e., slender, almost one-dimensional fibers and general three-dimensional solids, pose a significant challenge for typical numerical simulation methods. The presented work focuses on developing novel computational approaches to simulate the interaction between these fiber-like structures and three-dimensional solids. The key idea is to model the slender components as one-dimensional Cosserat continua based on the geometrically exact beam theory, enabling an accurate and efficient description of the fibers. This results in a mixed-dimensional beam-to-solid interaction problem.
In a first step positional and rotational coupling between the beam centerline and the underlying solid in line-to-volume problems are addressed. Mortar-type methods, inspired by classical mortar methods from domain decomposition or surface-to-surface interface problems, are used to discretize the coupling constraints. A subsequent penalty regularization eliminates the Lagrange multipliers from the global system of equations, resulting in a robust coupling scheme that avoids locking effects. Furthermore, consistent spatial convergence behavior, well within the envisioned application range, is demonstrated.
In a second step, the previously developed algorithms for line-to-volume coupling are extended to line-to-surface coupling. This introduces the additional complexity of having to account for the surface normal vector in the coupling constraints. Consistent handling of the surface normal vector leads to physically accurate results and guarantees fundamental mechanical properties such as conservation of angular momentum.
Finally, a Gauss point-to-segment beam-to-solid surface contact scheme that allows for the modeling of unilateral contact between one-dimensional beams and two-dimensional solid surfaces is presented.
The previously mentioned building blocks constitute a novel mixed-dimensional beam-to-solid interaction framework, which is verified by theoretical discussions and numerical examples. Already in the present state, the presented framework is an efficient, robust, and accurate tool for beam-to-solid interaction problems and can become a valuable tool in science and engineering.

 
12:40pm - 1:40pm
Lunch Break
Location: Aula
1:40pm - 3:00pm
MS04-1: Digital twins and their enabling technologies
Location: EI10
Session Chair: Norbert Hosters
Session Chair: Alexander Popp
Digital Twins and High-Fidelity Models
 
1:40pm - 2:00pm

A virtual testbed infrastructure for thermal drilling: application to cryobots

L. Boledi, M. S. Boxberg, A. Simson, J. Kowalski

Chair of Methods for Model-based Development in Computational Engineering (MBD), RWTH Aachen University, Germany

Thermal drills have become an important tool for the exploration of the cryosphere. In particular, cryobots are employed to access icy environments and retrieve (geo-)physical data, e.g., in Antarctica. In view of future missions to the icy moons of our Solar System, we have to extrapolate the performance of melting probes to extreme conditions that cannot be tested for with experiments on Earth. Thus, digital twins and virtual testbeds will help to develop and improve cryobots for such future missions. In our contribution, we present a testbed that includes environmental data, physics-based forward models for the performance of the cryobots, as well as data-driven approaches based on experimental cryobot data.

First, we need to provide data that can be employed by simulation software. Cryosphere measurement data often lack simulation readiness, as their (meta)data are inconsistent and incomplete [1]. We developed a tool, named Ice Data Hub, that flexibly stores cryosphere data in a reusable manner and provides interfaces to simulation software. It comes with a GUI to enter, edit, analyze, interpret and export the data. The interface to simulation tools ensures consistent data supply and reproducible preprocessing.

From a modeling perspective, cryobots offer an extremely challenging problem. We have to consider, in fact, a physical object melting its way through a static environment, for which a high-fidelity mathematical model that reflects the probe’s dynamic response to the ambient conditions does not exist as of now. Instead, we build upon a model hierarchy of increasing complexity for different simulation purposes. Starting from the energy balance in the microscale melt film, the melting velocity can be derived and integrated into a global trajectory prediction [2]. Alternatively, we can examine the transient ramp-up of the melting process by neglecting equilibrium assumptions. Finally, the evolution of the melt channel around the probe can be modeled by considering the evolving phase-change interface. The aforementioned problems require advanced numerical techniques, such as mesh-update and level-set methods [3].

In this work, we present our simulation models and tools and their integration with the Ice Data Hub. Furthermore, we show test cases of increasing complexity in view of realistic physical scenarios and discuss future extensions.
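As a minimal illustration of the lowest rung of such a model hierarchy, a quasi-stationary global energy balance yields a closed-form melting velocity: all heating power goes into warming and melting the ice column swept by the probe's cross-section. The formula and material values below are a textbook-style simplification, not the authors' model.

```python
import math

# Illustrative material properties of ice near the melting point.
RHO_ICE = 917.0    # density [kg/m^3]
C_P = 2050.0       # specific heat capacity [J/(kg K)]
L_MELT = 334e3     # latent heat of fusion [J/kg]

def melt_velocity(power, radius, delta_T):
    """Quasi-stationary melting speed [m/s] for a cylindrical probe.
    power:   heating power [W]
    radius:  probe radius [m]
    delta_T: ice temperature below the melting point [K]"""
    area = math.pi * radius ** 2
    return power / (RHO_ICE * area * (C_P * delta_T + L_MELT))

v = melt_velocity(5e3, 0.06, 10.0)
```

Extrapolation to icy-moon conditions enters through `delta_T` (cryogenic ice is far below the melting point) and shows why colder ice sharply reduces the achievable speed; the higher-fidelity models mentioned above drop the equilibrium assumption and resolve the melt film and channel explicitly.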

[1] A. Simson et al., Enriched metadata for hybrid data compilations with applications to cryosphere research, Helmholtz Metadata Collaboration Conference, online, October 5-6, doi: 10.5281/ZENODO.7185422 (2022).

[2] M. S. Boxberg et al., Ice Transit and Performance Analysis for Cryorobotic Subglacial Access Missions on Earth and Europa, Astrobiology, doi: 10.1089/ast.2021.0071 (2023).

[3] L. Boledi et al., A level-set based space-time finite element approach to the modelling of solidification and melting processes, Journal of Computational Physics 457 (2022) 111047.



2:00pm - 2:20pm

Diverse time scales in multidisciplinary problems - challenges in coupling procedures and software design

M. Kelemen, R. Wüchner, S. Warnakulasuriya

Technical University of Braunschweig, Germany

The ever-increasing demand for the integration of high fidelity multidisciplinary simulations in other processes such as optimization and machine learning requires cutting computational costs of each task comprising an analysis, while also ensuring their desired accuracy. Exploiting the inherently different spatial scales that the physical phenomena act on is a proven approach [1] toward achieving this goal. However, such problems often evolve over vastly different time scales as well, but methods taking advantage of this fact [2, 3] are much less mature and lack generalization.

A prime example of diverse spatial and temporal scales is coupling meteorological analyses that have hour-long time steps with local fluid simulations focusing on specific regions, that require temporal resolutions on the scale of seconds, to better capture the influence of local flow effects. Another one is accurately predicting the damage evolution of coupled chemical-mechanical degradation processes, such as the interaction between physical salt attack and dynamic loading on concrete structures.

Partly due to the diversity of the involved phenomena and the wide range of temporal scales, existing multiscale time integration approaches lack a unified structure, greatly limiting their applicability to problems other than what they were designed for. We propose a generic framework for temporal multiscale analyses that incrementally introduce specializations to exact problems in order to help the interchangeability of methods between disciplines. Furthermore, we demonstrate specific applications focusing on meteorology, fluid-structure interaction, and chemical-physical degradation.

[1] E. Weinan, B. Engquist, X. Li, W. Ren, and E. Vanden-Eijnden. Heterogeneous multiscale methods: A review. Communications in Computational Physics, 2(3):367–450, 2007.

[2] M. Brun, A. Gravouil, A. Combescure, and A. Limam. Two FETI-based heterogeneous time step coupling methods for Newmark and alpha-schemes derived from the energy method. Computer Methods in Applied Mechanics and Engineering, 283:130–176, 2015.

[3] M. Pasetto, H. Waisman, and J. S. Chen. A waveform relaxation Newmark method for structural dynamics problems. Computational Mechanics, 63:1223–1242, 2019.



2:20pm - 2:40pm

A system identification approach for high fidelity parameter models of digital twins

F. Meister1, S. Warnakulasuriya1, R. Löhner2, R. Wüchner1

1TU Braunschweig, Germany; 2George Mason University, USA

Digital twins of structures have a wide range of useful applications in fields such as structural health monitoring or predictive maintenance. Furthermore, they enable new methods based on the precise digital representation of a building.

During the lifetime of a building, its condition and state changes. This could be due to a planned change of the structure, damages as a result of use, and/or the degradation of materials. One of the crucial factors for the digital twin concept is that those changes must be reflected in the digital model. Therefore, an automated and robust way to calibrate the structural analysis model so that it meets the necessary criteria to be considered a digital twin is of high importance.

To measure how accurate the digital model is, real-world measurements are compared with their corresponding simulation results. These simulations are often driven by high-fidelity models (typically based on FEM) with a large-dimensional parameter space. This research aims to operate directly in such high-dimensional parameter spaces without reducing their complexity, thus also keeping the potential richness of information to be adjusted. Using such high-fidelity parameter models allows for efficient workflows, a better capture of physical phenomena and a better representation of the real structure, even in complex scenarios.

The inverse problem resulting from such high-fidelity models is in most cases highly underdetermined. Hence, to solve such ill-posed problems efficiently, an approach based on adjoint sensitivity analysis is selected and different stabilization and regularization techniques are applied. The capabilities and limitations of this approach are demonstrated with illustrative examples.
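The role of regularization in such underdetermined problems can be sketched with a minimal Tikhonov example (an illustrative stand-in; the actual approach here couples adjoint sensitivities with the full FEM model): with fewer measurements than parameters, a small penalty on the parameter norm makes the normal equations solvable and selects a stable, small-norm solution.

```python
import numpy as np

# Tikhonov-regularized least squares for an underdetermined inverse problem:
# argmin_p ||J p - r||^2 + lam * ||p||^2, with J a (here hypothetical)
# sensitivity matrix, r the measurement residual, p the parameter update.
def regularized_lsq(J, r, lam):
    n = J.shape[1]
    # normal equations (J^T J + lam I) p = J^T r; lam > 0 makes them solvable
    return np.linalg.solve(J.T @ J + lam * np.eye(n), J.T @ r)

# toy setting: 2 measurements, 5 parameters -> infinitely many exact solutions
rng = np.random.default_rng(0)
J = rng.standard_normal((2, 5))
p_true = np.array([1.0, 0.0, 0.0, 0.0, 0.0])
r = J @ p_true

p = regularized_lsq(J, r, lam=1e-6)
print(np.linalg.norm(J @ p - r))                    # data reproduced almost exactly
print(np.linalg.norm(p) <= np.linalg.norm(p_true))  # regularization picks a small-norm solution
```

For small `lam`, the solution approaches the minimum-norm solution of the underdetermined system, which is one common way to stabilize such ill-posed calibration problems.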



2:40pm - 3:00pm

Hybrid digital twin: combining physics-based modelling with data-driven predictions for critical infrastructure

B. Maradni, M. von Danwitz, T. Sahin, A. Popp

UniBW M, Germany

Digital twins are models that map real physical objects and processes into the digital environment. As the world leans more towards digitalization, digital twins can potentially assist directly in monitoring and protecting critical infrastructure (e.g., bridges), where they can play an essential role in structural health monitoring (SHM) [1]. Hybrid digital twins (HDTs) combine physics-based simulations (virtual twin) with data-based analysis (digital twin), providing a simulation tool with predictive capabilities for damage detection, conditional simulations, and trend identification. In this work, we explore hybrid digital twinning of steel-reinforced concrete beams and analyse it with experimental data from a real-life structure.

Our virtual twin is based on finite element methods with a consistent beam-to-solid volume coupling approach [2]. A model for steel-reinforced concrete structures is created using embedded 1D beam finite elements that enable physics-based modeling to capture the interaction between the reinforcement components and the concrete matrix of the investigated structure. Our digital twin employs physics-informed neural networks (PINNs). The PINNs are trained by optimizing the network weights and biases to reduce the residuals of the partial differential equation, boundary, and initial conditions of a given initial boundary value problem. The network is additionally trained with sensor data to simulate reliable digital representations and provide predictions [3].
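The composite PINN loss (PDE residual + boundary conditions + sensor data) can be sketched on a toy problem. The example below is a hypothetical stand-in: it replaces the neural network with a model that is linear in its parameters (a polynomial basis), so the same weighted residual minimization reduces to linear least squares.

```python
import numpy as np

# PINN-style composite loss on a toy problem (illustration only, not the
# authors' model): u''(x) = -pi^2 sin(pi x) on [0,1] with u(0) = u(1) = 0.
deg = 7
x = np.linspace(0.0, 1.0, 50)                        # collocation points

def vander(pts):
    return np.vander(pts, deg + 1, increasing=True)  # u(x) = sum_k w_k x^k

# PDE-residual rows: u''(x) = sum_k w_k k(k-1) x^(k-2)
A_pde = np.zeros((x.size, deg + 1))
for j in range(2, deg + 1):
    A_pde[:, j] = j * (j - 1) * x ** (j - 2)
b_pde = -np.pi**2 * np.sin(np.pi * x)

# boundary rows, weighted to enforce u(0) = u(1) = 0 strongly
A_bc = 100.0 * vander(np.array([0.0, 1.0]))
b_bc = np.zeros(2)

# one "sensor" reading, mimicking data-augmented training
x_s = np.array([0.25])
A_data, b_data = vander(x_s), np.sin(np.pi * x_s)

# stacked residuals: PDE + boundary + data, minimized jointly
A = np.vstack([A_pde, A_bc, A_data])
b = np.concatenate([b_pde, b_bc, b_data])
w, *_ = np.linalg.lstsq(A, b, rcond=None)

u_mid = float(vander(np.array([0.5])) @ w)
print(u_mid)  # close to the exact value sin(pi/2) = 1
```

In an actual PINN the model is a neural network, the derivatives come from automatic differentiation, and the stacked residuals are minimized by gradient descent instead of one linear solve; the structure of the loss is the same.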

The digital and virtual twins can be combined in different approaches, including enriching the digital twin training with the physics-based model and using data-based analysis to enhance the virtual twin. Methods for combining data-based techniques with physics-based modeling and simulation are studied and contrasted. The model predictions are also compared to the results of physical experiments and sensor data in order to fully leverage the benefits of each twin.

REFERENCES

[1] Thomas Braml, Johannes Wimmer, Yauhen Varabei, Stefan Maack, Stefan Küttenbaum, Thomas Kuhn, Maximilian Reingruber, Alexander Gordt and Jürgen Hamm. Digitaler Zwilling: Verwaltungsschale BBox als Datenablage über den Lebenszyklus einer Brücke. Bautechnik, 99, 2021.

[2] Ivo Steinbrecher, Matthias Mayr, Maximilian J. Grill, Johannes Kremheller, Christoph Meier, Alexander Popp. A mortar-type finite element approach for embedding 1D beams into 3D solid volumes. Computational Mechanics, 66:1377–1398, 2020.

[3] Max von Danwitz, Thank Thank Kochmann, Tarik Sahin, Johannes Wimmer, Thomas Braml, and Alexander Popp. Hybrid Digital Twins: A Proof of Concept for Reinforced Concrete Beams. Accepted in Proceedings in Applied Mathematics and Mechanics, 2022.

 
1:40pm - 3:00pmMS09-2: Multi-scale shape optimization problems in continuum mechanics
Location: EI7
Session Chair: Daniel Wolff
Session Chair: Jacques Zwar
 
1:40pm - 2:00pm

Optimization of the specimen geometry for one-shot discovery of material models

S. Ghouli1, M. Flaschel2, S. Kumar3, L. De Lorenzis1

1Department of Mechanical and Process Engineering, ETH Zürich, 8092 Zürich, Switzerland; 2Weierstrass Institute for Applied Analysis and Stochastics, 10117 Berlin, Germany; 3Department of Materials Science and Engineering, Delft University of Technology, 2628 CD Delft, The Netherlands

We recently proposed an approach for Efficient Unsupervised Constitutive Law Identification and Discovery (EUCLID), which exploits machine learning tools such as sparse regression [1–3], Bayesian learning [4], or neural networks [5] to automatically discover material laws independent of stress data, but solely based on full-field displacement and global force data obtained from mechanical testing. The displacement field can be measured on the surface of a target specimen via digital image correlation (DIC).

An important feature of the approach is that, in principle, the discovery of the material law can be performed in a one-shot fashion, i.e., using only one experiment. However, this capability heavily relies upon the richness of the measured displacement data, i.e., their ability to probe the stress-strain space (where the stresses depend on the constitutive law being sought) to an extent sufficient for an accurate and robust discovery process. The richness of the displacement data is in turn directly governed by the specimen geometry.

In the present study, we aim to optimally design the geometry of the target specimen in order to maximise the richness of the deformation field obtained by the DIC method. We seek to excite various deformation modes (from tension/compression to shear) in a single optimised specimen, to maximise the performance of EUCLID. To this aim, we utilise density-based topology optimisation driven by an objective function deduced from EUCLID itself, which essentially aims at enhancing the identification robustness of material parameters (especially in noisy measurements). In this contribution, we shed light on the objective function, the topology optimisation framework, and the initial results.



2:00pm - 2:20pm

Optimization of fiber-reinforced materials to passively control strain-stress response

C. D. Fricke1, I. Steinbrecher2, D. Wolff3, A. Popp2, S. Elgeti1

1TU Wien, Austria; 2University of the Bundeswehr Munich, Germany; 3RWTH Aachen, Germany

The intricate and nonlinear nature of material behavior, characterized by a progressively changing stress-strain relationship, is a fundamental and indispensable property governing the behavior of numerous mechanical systems. Examples of these mechanical systems include rubber components in automobile suspensions and engine mounts, soft tissues and organs in bio-mechanics and medical engineering, as well as packaging materials such as foam, paper, and plastics. Components used for the construction of these mechanical systems often need to meet specific stiffness requirements which can be influenced by the composition of the employed material, see Steinbrecher et al. [Computational Mechanics, 69 (2022)].

On the macro or micro level, such materials can often be classified as fiber-reinforced materials, i.e., thin and long fibers embedded inside a matrix material. One way to control the stress-strain relationship of fiber-reinforced materials is to alter the geometry of the reinforcements, thus creating passive materials with a highly nonlinear stress-strain response. This can be a viable method for the development of optimized system components or meta-materials.

This method can be explored with a single beam embedded in a softer matrix. If the embedded beam is straight, the stress increases approximately linearly with increasing strain. Bending the beam inside the matrix lowers the initial stress rate. The stress rate then increases until the encased beam is straight, at which point it does not increase further. By manipulating the initial geometry of the beam, the evolution of the stress rate can be influenced.

Previously, Reinforcement Learning (RL) based shape optimization has been used to optimize structures in the context of fluid dynamics, see Fricke et al. [Advances in Computational Science and Engineering, 1 (2023)]. This approach differs from classical optimization methods in that it trains an agent to solve a specific task inside a defined problem set. While the training is computationally more expensive than a single optimization, the trained agent is able to optimize a problem inside the learned problem set with less effort.

Applying the RL-based shape optimization method to the beam geometry, an agent is trained to identify optimal beam geometries for a set of starting stress rates and ending stress rates.



2:20pm - 2:40pm

Cavity shape optimization in injection molding to compensate for shrinkage and warpage using Bayesian optimization

S. Tillmann1, S. Elgeti1,2

1RWTH Aachen University, Chair for Computational Analysis of Technical Systems; 2TU Wien, Institute of Lightweight Design and Structural Biomechanics

In injection molding, shrinkage and warpage lead to shape deviations of the produced parts with respect to the cavity. Caused by shrinkage and warpage, these deviations occur due to uneven cooling and internal stresses inside the part. One method to mitigate this effect is to adapt the cavity shape to the expected deformation. This deformation can be determined using appropriate simulation models, which then also serve as a basis for determining the optimal cavity shape.

Shape optimization usually requires a sequence of forward simulations, which can be computationally expensive. To reduce this computational cost, we use Gaussian Process Regression (GPR) as a surrogate model. The GPR learns the objective function, which measures the shape difference. This difference is computed as the average Euclidean distance between sample points on the surface of the deformed product and the desired shape. Additionally, GPR has the benefit of accounting for uncertainty in the model parameters and thus provides a means to investigate their influence on the optimization result. We present a GPR trained with samples from a finite-element solid-body model, which predicts the deformation of the product after solidification and thus allows for efficient cavity shape optimization. The optimization parameters are the positions of spline points representing the geometry.

To further improve the learning efficiency, we use Bayesian optimization. This approach selects the next training points by means of an acquisition function that balances exploration and exploitation: training points with low function values and high uncertainty are given priority. This achieves the goal of finding the optimal solution with the fewest training points. The Bayesian optimization framework was implemented in Python.
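Such a loop can be sketched in a few lines, assuming a hand-rolled GPR with an RBF kernel, a lower-confidence-bound acquisition, and a 1D stand-in objective (the authors' framework instead optimizes spline control points against a finite-element model):

```python
import numpy as np

# Bayesian optimization with a GPR surrogate (RBF kernel) and a
# lower-confidence-bound (LCB) acquisition. The 1D objective below is a
# hypothetical stand-in for the shape-difference measure.
def objective(x):
    return (x - 0.3) ** 2

def rbf(a, b, ell=0.1):
    return np.exp(-0.5 * (a[:, None] - b[None, :]) ** 2 / ell ** 2)

grid = np.linspace(0.0, 1.0, 201)        # candidate design parameters
X = [0.0, 0.5, 1.0]                      # initial training points
y = [objective(x) for x in X]
noise = 1e-6                             # jitter for numerical stability

for _ in range(10):
    Xa = np.array(X)
    K = rbf(Xa, Xa) + noise * np.eye(len(X))
    Ks = rbf(grid, Xa)
    mean = Ks @ np.linalg.solve(K, np.array(y))          # GP posterior mean
    var = 1.0 - np.einsum('ij,ji->i', Ks, np.linalg.solve(K, Ks.T))
    acq = mean - 2.0 * np.sqrt(np.maximum(var, 0.0))     # low mean OR high variance
    x_next = grid[np.argmin(acq)]        # most promising next sample
    X.append(float(x_next))
    y.append(objective(float(x_next)))

x_best = X[int(np.argmin(y))]
print(x_best)  # best design parameter found so far
```

The LCB term `mean - 2 * std` is what balances exploitation (low predicted objective) against exploration (high posterior uncertainty); each new sample refines the surrogate before the next pick.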

The material parameters can be subject to fluctuations because of different batches or the use of recycled material. To account for these uncertainties in the material parameters, a new formulation of the objective function is proposed to find the optimal shape resulting in low shrinkage and warpage as well as low variance with respect to the input parameters.



2:40pm - 3:00pm

Automated surgery planning for an obstructed nose

M. Rüttgers1,2,3, M. Waldmann2,3, K. Vogt4, W. Schröder2,3, A. Lintermann1

1Jülich Supercomputing Centre, Forschungszentrum Jülich, Germany; 2Institute of Aerodynamics and Chair of Fluid Mechanics, RWTH Aachen University; 3Jülich Aachen Research Alliance, Center for Simulation and Data Science; 4Faculty of Medicine, Center of Experimental Surgery, University of Latvia

The nasal cavity is one of the most important organs of the human body. Its various functionalities are essential for the well-being of the individual person. It is responsible for the sense of smell, supports degustation, and filters, tempers, and moistens the inhaled air to provide optimal conditions for the lung. Diseases of the nasal cavity like chronic rhinosinusitis, septal deviation, or nasal polyps may lead to restrictions or complete loss of these functionalities. A decreased respiratory capability, the development of irritations and inflammations, and lung diseases can be the consequences.

The shape of the nasal cavity varies from person to person, with stronger changes being present in pathological cases. A careful analysis on a per-patient basis is hence crucial to plan a surgery with a successful outcome. Nowadays, diagnostic methods rely on morphological analyses of the shape of the nasal cavity. They employ methods of medical imaging such as computed tomography (CT) or magnetic resonance imaging (MRI), and nasal endoscopy. Such methods, however, do not cover the fluid mechanics of respiration, which are essential to understand the impact of a pathology on the quality of respiration and to plan a surgery. Only a meaningful and physics-based diagnosis can help to adequately understand the functional efficiency of the nasal cavity, to quantify the impact of different pathologies on respiration, and to support surgeons in decision making.

Physics-based methods to diagnose pathologies in the human respiratory system have recently evolved to include results of computational fluid dynamics (CFD) simulations. In the current study, a reinforcement learning (RL) algorithm is developed that learns to optimize the nasal cavity shape based on feedback from CFD simulations. The final structure of the airway then serves as the surgical proposal. It is investigated how the algorithm finds the optimal structural modification based on various optimization criteria:

(i) the capability of inhaling and supplying the lung with air, expressed by the pressure difference between the nostrils and the pharynx;

(ii) the functionality of the nose for heating up incoming air, represented by the temperature difference between the nostrils and the pharynx;

(iii) the possibility for a balanced air supply between the left and right nasal passages, realized by equal mass flow rates through both nasal passages.

Furthermore, different types of RL algorithms are employed and their computational efficiency is analyzed. The aim is to further advance these techniques and to include them in clinical pathways, thereby allowing personalized analyses on a per-patient basis.

 
1:40pm - 3:00pmMS14-1: Mechanics of soft multifunctional materials: experiment, modeling and simulation
Location: EI8
Session Chair: Matthias Rambausek
Session Chair: Miguel Angel Moreno-Mateos
 
1:40pm - 2:00pm

Advanced constitutive modelling of polymers for tissue bioprinting applications

L. Zoboli1, D. Bianchi1, G. Vairo2, M. Marino2, A. Gizzi1

1Research Unit of non-linear Physics and mathematical modelling, Campus Bio-Medico University of Rome, Italy; 2Department of Civil Engineering and Computer Science, University of Rome Tor Vergata

Modern 3D bioprinting techniques aim at reproducing a specific tissue composition by extruding a bioink, which is a cluster of stem cells embedded into a hosting gel, into the desired pattern. If the extruded structure is fed suitable nutrients, cell differentiation and growth are initiated. However, prior to activating these processes, the gel must first be converted into a polymer construct to provide support and preferential directions to the successive cellular growth phase. There are many ways to accomplish this melt-to-solid transition, most notably photo-polymerisation: irradiation with light of suitable intensity and wavelength triggers chemical processes that induce cross-linking between polymer chains within the printed material, in a time-evolving scheme of structure formation. Controlling this process is of great importance, since cellular motility and nutrient diffusion are greatly affected by the disposition and orientation of the polymer network. As it currently stands, the 3D printing process briefly described above is well known, but in many instances it is not yet adequately optimised, and the influence of a variety of parameters hinders large-scale production. For example, there is no standard protocol yet for the intensity and direction of the UV light, so the definition of an optimal disposition of the light sources can prove essential in minimising the polymerisation times, and hence tissue formation times as a whole. This work intends to ground the choice of selected polymerisation parameters on a rational basis. To achieve this, the relevant physics of what happens after the melted bioink is deposited has been represented through multi-physics finite element simulations, where the kinetics of polymer cross-linking has been coupled with finite deformation formulations. Viscoelastic behaviour during polymerisation has also been accounted for.
To deal with the highly non-linear differential equations representing the problem, a parametrised custom Finite Element variational formulation has been implemented.



2:00pm - 2:20pm

Magneto-mechanical experiments on soft magneto-active polymers

A. Garai, K. Haldar

Indian Institute of Technology Bombay, India

Magneto-Active Polymers (MAPs) are composite materials that combine micron-sized magnetic particles with an elastomer matrix. These materials are notable for their softness and their ability to become stiffer in response to an external magnetic field. Here we present a study on the mechanical characterization of MAPs prepared by mixing micron-sized iron particles with a polydimethylsiloxane (PDMS, Ecoflex) matrix, a variety of silicone rubber. The stiffness of PDMS depends on the mixing ratio of its two components. Tensile and relaxation tests were conducted to characterize the mechanical properties of the MAP. The experimental data obtained from these tests were used to calibrate the material model and to determine the elastic and viscoelastic constants. The results of the study showed that the MAP exhibited desirable mechanical properties and that an external magnetic field can control its response. The calibrated model effectively predicted the mechanical behavior of the material under different loading conditions. The findings of this study have significant implications for the development of magneto-active polymers for various applications, such as in the field of soft robotics, where the material's mechanical properties play a crucial role in the design and operation of soft robots.



2:20pm - 2:40pm

On the magnetostrictive and fracture behavior of soft magnetorheological elastomers: influence of magnetic boundary conditions

M. A. Moreno-Mateos1,2, K. Danas3, M. Hossain4, P. Steinmann1, D. Garcia-Gonzalez2

1Institute of Applied Mechanics, Universität Erlangen–Nürnberg, Egerland Str. 5, 91058 Erlangen, Germany.; 2Department of Continuum Mechanics and Structural Analysis, Universidad Carlos III de Madrid, Avda. de la Universidad 30, 28911 Leganés, Madrid, Spain.; 3LMS, C.N.R.S, École Polytechnique, Institut Polytechnique de Paris, Palaiseau, 91128, France.; 4Zienkiewicz Centre for Computational Engineering, Faculty of Science and Engineering, Swansea University, SA1 8EN, Swansea, UK.

Magnetorheological elastomers (MREs) with soft matrices have paved the way for new advancements in the fields of soft robotics and bioengineering. The material response is governed by a complex magneto-mechanical coupling, which necessitates the use of computational tools to guide the design process. However, these computational models typically rely on finite element frameworks that oversimplify and idealize the magnetic source and magnetic boundary conditions (BCs), leading to discrepancies with the actual behavior even at a qualitative level. In this study, we comprehensively examine the impact of magnetic BCs and highlight their significance in the modeling process. We present a magneto-mechanical framework that models the response of soft-magnetic and hard-magnetic MREs under various magnetic fields generated by an idealized magnetic source, a permanent magnet, a coil system, and an electromagnet with two iron poles. Our results demonstrate noteworthy differences in magnetostriction depending on the magnetic source used. Furthermore, we implement a virtual testbed to explore the fracture performance of MREs with remanent magnetic fields. To this end, we prescribe remanent magnetization conditions on rectangular samples, and we add a damage phase-field to model crack propagation. In order to maintain the continuity of the magneto-mechanical fields, the damaged material is designed to exhibit the same behavior as the surrounding air. The results show that remanent magnetization enhances the fracture energy and arrests crack propagation.

Refs:

[1] Lucarini S, Moreno-Mateos MA, Danas K, Garcia-Gonzalez D. "Insights into the viscohyperelastic response of soft magnetorheological elastomers: Competition of macrostructural versus microstructural players". International Journal of Solids and Structures, Vol. 256, 2022.

[2] Moreno-Mateos MA, Hossain M, Steinmann P, Garcia-Gonzalez D. “Hard magnetics in ultra soft magnetorheological elastomers enhance fracture toughness and delay crack propagation”. Journal of the Mechanics and Physics of Solids, Vol. 173, 2023.



2:40pm - 3:00pm

Towards the simulation of multistable microstructures of extremely soft magnetorheological elastomers

M. Rambausek, J. Schöberl

Institute of Analysis and Scientific Computing, TU Wien, Austria

Two decades ago, new experiments accompanied by the modernization of magnetoelastic theory spawned a wealth of theoretical, numerical, but also experimental developments on magnetoelastic composites such as magnetorheological elastomers (MREs). Thanks to extensive research efforts, their coupled magnetoelastic response is well understood nowadays. However, this applies only to MREs and related materials based on sufficiently stiff matrix materials. Indeed, as the shear modulus of the matrix material is reduced further and further, magnetoelasticity turns out to be an insufficient theoretical framework at the macroscopic scale, as demonstrated in this contribution. Even when neglecting the dissipation in the constituents, one may observe significant dissipation.

In composites based on very soft matrix material that can only store rather small amounts of elastic energy, the magnetic energy may dominate the total energy of the system. Multiple (meta-)stable configurations are the consequence, which render the composite material "magneto-pseudoelastic" even when both the inclusions and the matrix material are practically non-dissipative. While such “magnetodeformal shape-memory” effects can be found in mainly experimental literature, we are not aware of quantitatively predictive simulations in this regard.

In this talk we present ongoing work pushing the limits of finite element and re-meshing technologies in order to render the complicated processes in extremely soft MREs accessible by computational means.

 
1:40pm - 3:00pmMS16-2: Modeling, simulation and quantification of polymorphic uncertainty in real word engineering problems
Location: EI9
Session Chair: Selina Zschocke
Session Chair: F. Niklas Schietzold
 
1:40pm - 2:00pm

Process steering of additive manufacturing processes under polymorphic uncertainty

A. Schmidt1, T. Lahmer2

1Materials Research and Testing Institute at the Bauhaus-Universität Weimar, Germany; 2Institute of Structural Mechanics, Bauhaus-Universität Weimar, Germany

During the last decade, additive manufacturing techniques have gained extensive attention. Especially extrusion-based techniques utilizing plastic, metal or even cement-based materials are widely used. Numerical simulation of additive manufacturing processes can be used to gain a more fundamental understanding of the relations between the process and material parameters on one hand and the properties of the printed product on the other hand.

Hence, the dependencies of the final structural properties on different influencing factors can be identified. Additionally, the uncertain nature of process and material parameters can be taken into account to reliably control and finally optimize the printing process. Therefore, numerical models of printing processes demand geometric flexibility while being computationally efficient.

An efficient numerical simulation of an extrusion-based printing process of concrete, applying a voxel-based finite element method, is used in this study. Along with the progressing printing process, a previously generated FE mesh is activated step-by-step using a pseudo-density approach. Additionally, all material parameters vary spatially and temporally due to the time dependency of the curing process. In order to estimate material and process parameters realistically, a polymorphic uncertainty approach is chosen, incorporating interval-probability-based random processes and fields.
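The pseudo-density activation idea can be sketched on a 1D bar (an illustrative stand-in for the voxel model, not the study's code): the full mesh is assembled once, inactive elements retain a tiny pseudo-density so the global system stays regular, and elements are switched to full stiffness as the print front passes them.

```python
import numpy as np

# Fixed mesh of 1D bar elements; elements not yet "printed" keep a tiny
# pseudo-density, and are activated step-by-step along the print path.
n_el, EA, L = 8, 1.0, 1.0
h = L / n_el
eps = 1e-6                                   # pseudo-density of inactive elements

def assemble(n_active):
    rho = np.full(n_el, eps)
    rho[:n_active] = 1.0                     # elements behind the print front
    K = np.zeros((n_el + 1, n_el + 1))
    ke = (EA / h) * np.array([[1.0, -1.0], [-1.0, 1.0]])
    for e in range(n_el):
        K[e:e + 2, e:e + 2] += rho[e] * ke   # pseudo-density scales stiffness
    return K

def tip_displacement(n_active):
    K = assemble(n_active)
    f = np.zeros(n_el + 1)
    f[1:n_active + 1] = 1.0 * h              # dead load on active nodes only
    u = np.linalg.solve(K[1:, 1:], f[1:])    # node 0 is clamped
    return u[n_active - 1]                   # displacement at the print front

print([round(tip_displacement(n), 4) for n in (2, 4, 8)])
# the front displacement grows as more layers are activated
```

Keeping the inactive elements in the system (rather than re-meshing at every step) is what makes the step-by-step activation cheap: only the pseudo-density vector changes as the process advances.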

By having a numerical model – at least at some level of abstraction – and an extensive description and possibility to consider uncertainties, the probabilities of the occurrence of the failure mechanisms (strength-based stability, geometric deviations, layer interface, and buckling) can be estimated. In an optimal steering of the process, failures should be minimized. However, reducing the failure probabilities of one mechanism may increase the ones of the other mechanisms, e.g., shape stability and layer interface might be in conflict.

In this study, process steering is rationalized using a reliability-based optimization approach taking into account the uncertain nature of the system’s material and process parameters. In the light of polymorphic uncertainty, tailored surrogate model strategies are investigated to boost efficiency for this numerically demanding task.



2:00pm - 2:20pm

Two propagation concepts for polymorphic uncertain processes – simulation- and uncertainty quantification-based

F. N. Schietzold, W. Graf, M. Kaliske

Institute for Structural Analysis, Technische Universität Dresden, Germany

The combination of both types of uncertainty – aleatoric and epistemic – in polymorphic uncertainty models is common as fundamental step for a realistic description of system parameters (geometric definitions, loads, boundary conditions and material properties) in structural safety assessment. Such polymorphic uncertainty models are defined by combination of basic uncertainty models, such as random variates, interval sets, fuzzy sets etc., where the two types of uncertainty are accounted for in different basic models. As combined models, p-boxes, fuzzy probability based random variates etc. are documented.

In addition to the consideration of the two types of uncertainty, functional dependencies of uncertain quantities are observed in real world problems. Functional dependencies are due to temporal variation, referred to as uncertain process, or due to spatial variation, referred to as uncertain field.

This contribution focuses on temporally dependent polymorphic uncertainty in safety assessment of – in this application case, structural – systems. Therefore, uncertainty quantification is required, which means estimating the uncertain system responses (uncertain output) of a structural analysis (basic solution), based on the uncertain structural parameters (uncertain input). When considering polymorphic uncertain processes, a key challenge arises from the coupling and propagation of the temporal dependency in the uncertain input and in the basic solution. For this propagation, two concepts are presented.

The first concept is propagation of temporal dependency by the uncertainty analysis. Here, each single basic solution is not necessarily time-dependent. Instead, the time dependency is obtained by sampling from a time-dependent uncertain input parameter in the uncertainty analysis, and each sample is applied for a single computation of the basic solution. Finally, the chaining of such basic solutions and the interdependence between them leads to time-dependent output of the total uncertainty quantification.

The second concept is propagation of time dependency in the basic solution. Here, each sample of the uncertainty analysis is a full realization of a time-dependent function, in particular a full deterministic process. The basic solution in this concept is required to be time-dependent, and the realization of the process is the deterministic input defining the function of the parameter in time.

In this contribution, both concepts are presented, and the challenges and advantages in their implementations are outlined. Moreover, general problems of polymorphic uncertainty models are pointed out based on the shown concepts and solutions for their unbiased modeling and re-sampling are introduced. As numerical examples, application cases in the simulation of the life-cycle (production process and structural operation) of compressed wood components are shown, where both concepts are applied in multiple simulation phases.



2:20pm - 2:40pm

Human-induced vibrations of footbridges: modeling with polymorphic uncertainties

M. Fina, M. Schweizer, W. Wagner, S. Freitag

Karlsruhe Institute of Technology, Germany

The development of new materials allows increasing the span length of footbridges constructed as lightweight structures. However, slender footbridges are more sensitive to human-induced vibrations, which can reduce the comfort for pedestrians significantly. In addition, the eigenfrequencies of slender footbridges are often in the range of the step frequency. A resonance case has to be avoided to ensure structural safety. The gait of a pedestrian, and thus the step frequency, is very difficult to quantify in a dynamic load model. It depends on many factors: e.g., body height and weight, gender, age, psychological aspects, and even the economic and social status of a person have an influence. Quantifying these factors in a load model involves many parameters subject to a lack of knowledge. Therefore, pedestrian load models are very simplified in current design guidelines. An adequate quantification of aleatoric and epistemic uncertainties is not yet sufficiently addressed in the modeling of human-induced vibrations of footbridges.

In this contribution, uncertain parameters for a pedestrian load model are quantified with polymorphic uncertainty models based on available data. Then, dynamic structural analyses are performed with human-induced vibrations, which are approximated by surrogate models. The results are fuzzy stochastic processes of the structural accelerations, velocities and displacements. In current design codes, the comfort levels are defined with respect to acceptable accelerations. Due to the subjective perception of structural accelerations, the comfort levels are also defined with uncertainty models. Associated results are presented for a real-world footbridge using a 3D finite element model.



2:40pm - 3:00pm

Flexibility and uncertainty quantification using the solution space method for crashworthiness

P. Ascia, F. Duddeck

Technische Universität München, Germany

In the present landscape, researchers quantify the natural variability or lack of knowledge of a system in order to counteract its effects. What if, instead of trying to reduce this uncertainty, we tried to exploit it during development? In this work we propose how to use knowledge of this uncertainty to increase the design flexibility of the sub-systems of a new product. Imagine a development process supported by the solution space method and its corridors on the performance of each sub-system. From a certain point of view, these corridors quantify an interval-type epistemic uncertainty of the development process. The method, however, allows changing the intervals while maintaining the same overall target performance. We exploit this flexibility to find which parts of the new product are worth investing in to reduce variability, and which ones can be allowed a larger interval. A larger interval yields greater flexibility in the design, hence less development effort. The method we propose balances reducing the variability of certain sub-systems against increasing the flexibility in the design of other sub-systems during the development process.

 
3:00pm - 3:30pmCoffee Break
Location: Aula
3:30pm - 4:30pmMS04-2: Digital twins and their enabling technologies
Location: EI10
Session Chair: Norbert Hosters
Session Chair: Alexander Popp
Physics-Informed Neural Networks
 
3:30pm - 3:50pm

Physics-informed neural networks for enabling digital twins of profile extrusion processes

D. Wolff1, S. Elgeti2

1Chair for Computational Analysis of Technical Systems, RWTH Aachen University, Germany; 2Institute of Lightweight Design and Structural Biomechanics, TU Wien, Austria

By now, simulations have become an essential tool in the engineering sciences. Especially in the field of production engineering, many production processes do not easily allow for measurements. Thus, digital twins, e.g., in the form of high-fidelity simulation models, are gaining increasing interest, as they enable insights into the underlying dynamics of the manufacturing process. However, simulating realistic applications with conventional full-order models is often very expensive due to the large number of degrees of freedom.

This motivates the interest in model-order reduction techniques, which approximate full-order models at much lower computational cost by drastically reducing the number of degrees of freedom. Here, the recent breakthroughs in deep learning have drawn attention toward data-based strategies for constructing reduced-order models as alternatives to the well-established full-order models. Many deep-learning-based approaches rely on an abundance of data, which is usually scarce in engineering applications. Additionally utilizing the underlying physics to guide the learning process has therefore become particularly attractive for constructing accurate but fast digital twins.

In our work, we are interested in the plastic manufacturing process of profile extrusion. Specifically, we are interested in modeling the shear-thinning flow of the highly viscous plastic melt inside profile extrusion dies. To construct a digital twin, we utilize Physics-Informed Neural Networks [1]. We will present comparisons with high-fidelity digital twins, i.e., finite element simulation results, and elaborate on training heuristics, which proved essential for our application to obtain reduced models with sufficient accuracy.

References

[1] Raissi, M., Perdikaris, P., & Karniadakis, G. E. (2019). Physics-informed neural networks: A deep learning framework for solving forward and inverse problems involving nonlinear partial differential equations. Journal of Computational Physics, 378, 686–707. https://doi.org/10.1016/j.jcp.2018.10.045
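To illustrate the physics-informed loss construction referenced above, the following is a minimal, generic sketch in PyTorch (a 1D Poisson toy problem, not the authors' extrusion model; the network size, collocation points, and training settings are arbitrary illustrative choices):

```python
import torch

# Minimal PINN sketch: solve u''(x) = -pi^2 sin(pi x) on (0, 1)
# with u(0) = u(1) = 0, whose exact solution is u(x) = sin(pi x).
torch.manual_seed(0)

net = torch.nn.Sequential(
    torch.nn.Linear(1, 32), torch.nn.Tanh(),
    torch.nn.Linear(32, 32), torch.nn.Tanh(),
    torch.nn.Linear(32, 1),
)
opt = torch.optim.Adam(net.parameters(), lr=1e-3)

x = torch.linspace(0.0, 1.0, 64).reshape(-1, 1).requires_grad_(True)
xb = torch.tensor([[0.0], [1.0]])  # boundary collocation points

for step in range(2000):
    opt.zero_grad()
    u = net(x)
    # first and second derivatives via automatic differentiation
    du = torch.autograd.grad(u, x, torch.ones_like(u), create_graph=True)[0]
    d2u = torch.autograd.grad(du, x, torch.ones_like(du), create_graph=True)[0]
    residual = d2u + torch.pi**2 * torch.sin(torch.pi * x)  # PDE residual
    loss = (residual**2).mean() + (net(xb)**2).mean()       # physics + BC loss
    loss.backward()
    opt.step()
```

The key ingredient is that the PDE residual, evaluated via automatic differentiation, enters the loss alongside the boundary (or data) terms, so the network is trained toward a function that satisfies the governing equation.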



3:50pm - 4:10pm

Investigation of network architecture and optimizer parameters of physics-informed neural networks

T. Sahin, M. v. Danwitz, A. Popp

University of the Bundeswehr Munich, Germany

Physics-Informed Neural Networks (PINNs) have been introduced as a promising method that can combine differential equations and measurement data in the loss function of a neural network [1]. PINNs are a meshless method, so they can handle high-dimensional domains. Furthermore, they are a good candidate for solving inverse problems due to the easy integration of data. Based on sensor data, PINNs can be used as fast-to-evaluate surrogate models in hybrid digital twins of civil engineering structures [2].

One of the main challenges is to find a suitable PINN configuration since the prediction accuracy and model efficiency depend on hyperparameters [3]. Commonly, hyperparameters have been determined by manual adjustment through trial and error. In this contribution, we investigate the network and optimizer parameters of PINNs in various examples aiming for a hyperparameter tuning guideline for computational mechanics problems.

The search space for the network hyperparameters contains the distribution of training points, the number of hidden layers with accompanying neurons, activation functions, and network parameter initializers. On the other hand, the investigated optimizer parameters consist of different optimization algorithms along with their combinations, learning rates and the number of iterations. The main targets of hyperparameter optimization are training performance, loss on collocation and boundary points, and prediction accuracy. Besides a systematic exploration of the search space, we attempt a sensitivity analysis of the optimal PINN configuration in dependence on varying material parameters.

Specific examples include a one-dimensional cantilever beam under a triangular distributed load, a two-dimensional Lamé problem and a Hertzian contact problem with the mixed-variable formulation, as well as two- and three-dimensional heat transfer problems and corresponding inverse problems.

[1] M. Raissi, P. Perdikaris, and G. E. Karniadakis, “Physics-informed neural networks: A deep learning framework for solving forward and inverse problems involving nonlinear partial differential equations,” Journal of Computational Physics, vol. 378, pp. 686–707, 2019.

[2] M. von Danwitz, T. T. Kochmann, T. Sahin, J. Wimmer, T. Braml, and A. Popp, “Hybrid digital twins: A proof of concept for reinforced concrete beams,” Accepted in Proceedings in Applied Mathematics and Mechanics, 2022.

[3] Y. Wang, X. Han, C.-Y. Chang, D. Zha, U. Braga-Neto, and X. Hu, “Auto-PINN: Understanding and optimizing physics-informed neural architecture,” arXiv preprint arXiv:2205.13748, 2022.
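A search space of the kind described in this abstract can be enumerated mechanically; the following sketch shows one way to do so (the specific values are illustrative placeholders, not the authors' actual grid):

```python
from itertools import product

# Hypothetical PINN hyperparameter search space mirroring the categories
# named in the abstract (network and optimizer parameters); the concrete
# values below are illustrative, not the authors' grid.
search_space = {
    "hidden_layers": [2, 4, 8],
    "neurons": [20, 50, 100],
    "activation": ["tanh", "sin", "swish"],
    "initializer": ["glorot", "he"],
    "optimizer": ["adam", "adam+lbfgs"],
    "learning_rate": [1e-2, 1e-3, 1e-4],
}

# Cartesian product of all options -> one dict per candidate configuration
configs = [dict(zip(search_space, values))
           for values in product(*search_space.values())]
print(len(configs))  # 3*3*3*2*2*3 = 324 candidate configurations
```

Each configuration would then be trained and scored on the targets named above (training performance, collocation/boundary loss, prediction accuracy); in practice, a dedicated tuner such as random search or Bayesian optimization is often preferred over an exhaustive grid.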



4:10pm - 4:30pm

Strategies for improving the performance of Physics-Informed Neural Networks as reduced simulation models for Stirred Tank Reactors

V. Travnikova1, E. von Lieres2, M. Behr1

1Chair of Computational Analysis of Technical Systems, RWTH Aachen University, Germany; 2Institute of Bio- and Geosciences, Forschungszentrum Jülich GmbH, Germany

Stirred Tank Reactors (STRs) play a central role in biotechnological process development and manufacturing. Digital twins of STRs can be used both to minimize the amount of supporting experimental studies required during process design and scale-up, and to deepen the understanding of conditions inside a reactor, where little information is available due to the lack of appropriate measurement techniques. For this purpose, Computational Fluid Dynamics (CFD) tools are already widely used in industry. However, the high computational cost of high-fidelity simulations, especially in scenarios where the same model must be solved repeatedly for different parameter values (such as the stirring rate), motivates the construction of less computationally intensive Reduced Order Models (ROMs) to approximate solutions. Physics-Informed Neural Networks (PINNs), originally proposed by Raissi et al. [1], are a promising candidate for ROMs in engineering problems, as they simultaneously exploit both the available data and knowledge of the underlying physics by embedding the governing equations in the loss function of the neural network.

This use case represents a particular challenge for PINNs due to the geometric complexity of the computational domain and the large variety of phenomena involved in the process (e.g., turbulence, mass transfer).

We build on the investigation of strategies to improve the predictive accuracy of the model, for example by imposing boundary constraints in a post-processing step using an interpolation spline as proposed in [2], or by leveraging additional knowledge of the problem, such as a domain decomposition based on the different character of the flow in different parts of the domain. Building on these strategies, we aim to apply the approaches tested in 2D to more realistic 3D models. The presented methods can be transferred to other complex problems to improve the overall performance of PINNs.

References

[1] M. Raissi, P. Perdikaris, and G. E. Karniadakis, “Physics-informed neural networks: A deep learning framework for solving forward and inverse problems involving nonlinear partial differential equations,” Journal of Computational Physics, vol. 378, pp. 686–707, Feb. 2019.

[2] H. Sheng and C. Yang, “PFNN: A penalty-free neural network method for solving a class of second-order boundary-value problems on complex geometries,” Journal of Computational Physics, vol. 428, p. 110085, 2021.

 
3:30pm - 4:30pmMS08: Numerical simulations of flows in porous media
Location: EI9
Session Chair: Marco De Paoli
 
3:30pm - 3:50pm

Towards a simulation of repeated wave-induced liquefaction processes

H. Keese1, J. Rothschink2, O. Stelzer2, T. Nagel1,3

1TU Bergakademie Freiberg, Germany; 2Federal Waterways Engineering and Research Institute, Germany; 3Freiberg Center for Water Research (ZeWaF), Germany

A riverbed is a porous medium consisting of a granular skeleton and the pore fluid, which itself comprises water and air. In quasi-saturated conditions, the degree of water saturation ranges from 85–99%. Hydrodynamic boundary conditions affected by, e.g., ship passage influence the hydro-mechanical state of the riverbed. When the intergranular contact forces disappear due to an increase in (excess) pore water pressure, liquefaction occurs; at this point the soil behaves like a fluid instead of a solid. This can lead, for example, to sediment movement, destabilization of bank protection measures, washed-out submarine pipelines, or damaged coastal structures. Modelling this process requires strong hydro-mechanical coupling and the ability to represent large deformations. The FEniCS framework was used to solve the underlying partial differential equations in a Lagrangian setting using the finite element method. In addition to the process description, the underlying material model for the soil particle phase must be able to correctly represent the transition from solid-like to fluid-like behaviour. To date, no approach available in geotechnical software can satisfactorily represent the entire process for all associated phases. First steps in this direction will be shown by coupling FEniCS with MFront, which offers the possibility to implement different material models and to incorporate them into different programs via a generic interface. Through this procedure, the use of different software packages and even different numerical methods becomes practically feasible. First steps towards the identification of a material model that can represent the bidirectional phase change during liquefaction will be shown. Experimental data sets generated in a soil column combined with an alternating-flow apparatus serve as a basis for comparison.



3:50pm - 4:10pm

A matrix-free discontinuous Galerkin solver for unsteady Darcy flow in anisotropic porous media

B. Z. Temür1, N. Fehn1, P. Munch2, M. Kronbichler2, W. A. Wall1

1Technical University of Munich, Garching, Germany; 2University of Augsburg, Augsburg, Germany

Flow in porous media can be described by the Darcy model in a wide range of applications, from soil mechanics to biomechanics. Many relevant applications involve large-scale problems that require transient simulations and finely resolved discretizations. With currently available algorithmic approaches, this can lead to impractically high computing costs or force certain effects to be excluded. For example, current poroelastic models of the human lungs generally solve the steady-state Darcy equation, leaving transient effects unstudied. To address this, we propose a new solver for the unsteady Darcy flow problem in anisotropic porous media with spatially and temporally variable porosity and permeability fields.

We use the discontinuous Galerkin method with L2-conforming tensor-product elements for the spatial discretization, and the BDF method for the temporal discretization of the Darcy flow equations. We solve the resulting coupled pressure-velocity system by matrix-free implementation techniques for operator evaluation in Krylov solvers as well as preconditioners. To ensure fast convergence of the solvers, we identify spectrally equivalent preconditioners based on the so-called block preconditioning technique with approximate inverses of the velocity-velocity block and of the Schur complement of the coupled system. For the velocity-velocity block, we design a matrix-free cellwise inverse mass operator with variable coefficients. To minimize arithmetic work, we exploit the tensor-product structure of shape functions using a technique known as sum-factorization. On the other hand, a hybrid multigrid preconditioner for the Poisson problem with variable coefficients approximates the inverse of the Schur complement. We expect these methods to lay a new foundation for high-performance numerical simulations of general Darcy flow.

All methods and applications are implemented in the open source software projects ExaDG and deal.II.
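The sum-factorization idea mentioned above can be illustrated with a small, self-contained NumPy sketch (a generic 2D tensor-product example, not ExaDG code): instead of applying the full Kronecker-product operator, the same result is obtained with two one-dimensional contractions, reducing the per-element cost in 2D from O(k⁴) to O(k³).

```python
import numpy as np

# Sum-factorization sketch: apply a 2D tensor-product operator
# A = kron(A1, A1) to element coefficients without forming A.
k = 5                                  # 1D dofs per element (illustrative)
rng = np.random.default_rng(0)
A1 = rng.standard_normal((k, k))       # generic 1D operator
u = rng.standard_normal((k, k))        # element coefficients, tensor layout

# two one-dimensional sweeps: first direction, then the other
v = A1 @ u @ A1.T

# reference: the full Kronecker operator applied to the flattened vector
v_ref = (np.kron(A1, A1) @ u.reshape(-1)).reshape(k, k)
assert np.allclose(v, v_ref)
```

The same factorization generalizes to 3D (three sweeps, O(k⁴) instead of O(k⁶) per element) and underlies the matrix-free operator evaluation in high-order codes such as deal.II and ExaDG.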



4:10pm - 4:30pm

Pore-scale simulation of convective mixing in confined media

M. De Paoli1,2, C. J. Howland1, R. Verzicco1,3,4, D. Lohse1,5

1Physics of Fluids Group, University of Twente (Enschede, the Netherlands); 2Institute of Fluid Mechanics and Heat Transfer, TU Wien (Vienna, Austria); 3Gran Sasso Science Institute (L’Aquila, Italy); 4Dipartimento di Ingegneria Industriale, University of Rome “Tor Vergata” (Rome, Italy); 5MPI for Dynamics and Self-Organization (Göttingen, Germany)

We use numerical simulations to investigate the mixing dynamics of a convection-driven porous media flow. We consider a fully saturated, homogeneous and isotropic porous medium in which the flow is driven by density differences induced by the presence of a solute. In particular, the fluid density is a linear function of the solute concentration. The configuration considered is representative of geological applications in which a solute is transported and dissolves as a result of a density-driven flow, such as carbon sequestration in saline formations or water contamination processes. The mixing mechanism is made complex by the presence of rocks (solid objects), which act as obstacles in the flow and cause the solute to spread further due to the continuous change of the fluid path. Making accurate predictions of the dynamics of this time-dependent system is crucial to provide reliable estimates of the evolution of subsurface flows and to determine the controlling parameters, e.g., the injection rate of a current of carbon dioxide or the spreading of a pollutant in underground formations. To model this process, we consider here an unstable, time-dependent configuration known as the Rayleigh–Taylor instability, in which a heavy fluid (saturated with solute) initially sits on top of a lighter one (without solute). The fluids are fully miscible, and the mixing process is characterised by the interplay of diffusion and advection: initially, diffusion controls the flow and is responsible for the initial mixing of solute. At a later stage, the action of gravity promotes the formation of instabilities, and efficient fluid mixing takes place over the entire domain. The competition between buoyancy and diffusion is measured by the Rayleigh–Darcy number (Ra), whose value controls the entire dynamics of the flow. We analyse the time-dependent evolution of this system at high Ra, and we quantify the effect of the Rayleigh–Darcy number on solute transport and mixing.
Simulations are performed with a highly parallelized finite difference (FD) code coupled with an immersed boundary method (IBM) to account for the presence of the solid obstacles. We compare the results against experimental measurements in bead packs. The results are analysed at two different flow scales: i) at the Darcy scale, where the buoyancy-driven plumes control the flow dynamics, and ii) at the pore scale, where diffusion promotes inter-pore solute mixing. Numerical and experimental measurements are used to design simple physical models describing the mixing state and the mixing length of the system.
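For reference, a commonly used definition of the Rayleigh–Darcy number in convective dissolution problems is the following (a standard form from the porous-media convection literature; the talk may employ a variant):

```latex
Ra = \frac{g\,\Delta\rho\,k\,H}{\phi\,\mu\,D}
```

where $g$ is the gravitational acceleration, $\Delta\rho$ the density difference between solute-saturated and solute-free fluid, $k$ the permeability, $H$ the domain height, $\phi$ the porosity, $\mu$ the dynamic viscosity, and $D$ the solute diffusivity. Large $Ra$ indicates that buoyancy-driven advection dominates over diffusion.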

 
3:30pm - 4:30pmMS09-3: Multi-scale shape optimization problems in continuum mechanics
Location: EI7
Session Chair: Jacques Zwar
Session Chair: Daniel Wolff
 
3:30pm - 3:50pm

Development of 3D printed adaptive structures for lower limb prostheses shafts

A. M. J. Ali1,2, M. Gfoehler2, F. Riemelmoser1, M. Kapl1, M. Brandstötter1

1ADMiRE Research Center, Carinthia University of Applied Sciences, Austria; 2Faculty of Mechanical and Industrial Engineering, TU Wien, Austria

With the aim of filling the gaps in current 3D-printing technology for digitally fabricating medical assistive devices with significant user benefit, well-being, and availability, we develop a design methodology that enables the optimization of lightweight multi-material lattice structures in order to enhance the design of prostheses and rehabilitation devices. This is done by, firstly, developing a suitable multi-variable mathematical model for the topology optimization of two-scale structures and, secondly, demonstrating it on an outer shaft of a prosthesis (lower-limb prosthesis shaft). We develop a two-scale, gradient-based optimization procedure over multiple design variables that generates functionally graded structures with excellent performance. Our design methodology employs three families of predefined micro-structures that share similar geometric features. The two additional families can thwart the convergence of a gradient-based algorithm to the global minimum, and we aim to present a computational framework that enhances multi-variable optimization by avoiding such unfavorable local minima.



3:50pm - 4:10pm

Integration of numerical homogenization and finite element analysis for production optimization of 3D printed flexible insoles

D. Bianchi1,2, L. Zoboli1, C. Falcinelli3, A. Gizzi1

1Università Campus Bio-Medico di Roma, Italy; 2Medere srl, Italy; 3G. D’Annunzio Chieti-Pescara University, Italy

Recently, innovative materials have been developed that imitate the strong, lightweight properties of natural structures such as bones, honeycombs, and sponges. These materials have a porous microstructure that alternates between solid and void, and are being used in various fields, especially healthcare, thanks to advanced manufacturing techniques like 3D printing. However, the production time of 3D-printed objects can vary depending on factors such as material rigidity, infill pattern, and printing parameters. To address this issue, a computational tool was developed, integrating numerical homogenization and topology optimization in ANSYS Mechanical. The study used computational homogenization to simulate the mechanical properties of the insoles' infill, investigating various infill patterns in terms of mechanical properties and printing performance. The calculated properties were assigned to the insoles' geometries, and different loading scenarios were analysed, considering therapeutic and usage frameworks. Using the results of these structural simulations, several topology optimization analyses were performed with the objective of reducing the compliance of the frontal part of the insole while staying within a specified mass threshold. The study aimed to find a distribution of mass that minimized material use and printing time while maintaining a satisfactory structural response during insertion of the insole into the shoe. Additionally, this computational approach can optimize the material distribution in various orthopaedic devices, making 3D printing production more effective and reducing printing time.



4:10pm - 4:30pm

Gradient-based shape optimization of microstructured geometries

J. Zwar1, L. Chamoin2, S. Elgeti1

1TU Wien, Austria; 2Université Paris-Saclay, ENS Paris-Saclay, CNRS, LMT, France

Through recent advances in modern production techniques, particularly in the field of additive manufacturing, new, previously unthinkable geometries have become feasible. This vast realm of new possibilities cannot be adequately addressed by classical methods in engineering, which is why numerical design techniques are becoming more and more valuable. In this context, this work aims to present concepts that exploit the emerging possibilities and facilitate numerical optimization.

The numerical optimization is built on a microstructured grid, where the geometry is constructed by means of functional composition between splines, resulting in a regular pattern of building blocks. Here, a macro-spline defines the outer contour, a micro-geometry sets the individual tiles, and a parameter-spline controls the local parametrization of the microstructure, e.g., acting on the thickness or material density in a specific region. This approach opens up a broad design space, where the adaptivity of the resulting microstructure can be easily extended by increasing the number of control variables in the parameter-spline via h- or p-refinement. The geometric representation uses volume splines, on the one hand providing full compatibility with CAD/CAM and on the other hand facilitating the use of Isogeometric Analysis (IGA). To fully utilize the potential of this type of geometry parameterization, gradient-based optimization algorithms are employed in combination with analytical derivatives of the geometry and adjoint methods.

We will present first results in two fields of application, namely passive heat regulation and an elasticity problem. Here, we demonstrate how optimized microstructures can compensate for irregular boundary conditions and how compliance can be minimized using these lattice-like structures for major weight reduction.

This research has been supported by the European Union's Horizon 2020 research and innovation programme under agreement No. 862025.

 
3:30pm - 4:30pmMS14-2: Mechanics of soft multifunctional materials: experiment, modeling and simulation
Location: EI8
Session Chair: Matthias Rambausek
 
3:30pm - 3:50pm

Swelling induced deformation of hydrogel

V. K. Singh, K. Haldar

Indian Institute of Technology Bombay, India

Hydrogels are three-dimensional networks of polymer chains linked together by chemical and physical crosslinks. They are highly swellable and capable of converting chemical energy to mechanical energy and vice versa. They have unique properties such as low elastic moduli and large deformability. The main constituents of hydrogels are polymer chains that are highly hydrophilic. When immersed in water, they absorb water molecules and increase in volume, resulting in swelling. This generally takes place in three steps: first, diffusion of water into the polymer network; second, relaxation of the network chains; and third, expansion of the polymer network. Hydrogels in the fully swollen state are normally viscoelastic and rubbery, similar to biological fluids. These properties make them biocompatible. Thus, hydrogels have found applications in biomedical fields, such as contact lenses, wound dressings, and tissue engineering. They are also used in fluid control and drug delivery systems. In this work, we focus on the free swelling of a hydrogel from the dry state to the fully swollen state. We consider a polyacrylamide (PAAm) hydrogel with degree of swelling Q = 42.5. We then use this state as the reference state and apply a uniaxial tensile load, assuming that the swelling is homogeneous. We focus on the non-linear theory of swelling and plot the stress versus stretch diagram under uniaxial loading conditions. The model is validated against available experimental results.



3:50pm - 4:10pm

A phase field model for crack propagation in electroactive polymers

A. Möglich1, R. Denzer1, M. Ristinmaa1, A. Menzel2,1

1Division of Solid Mechanics, Lund University, Sweden; 2Institute of Mechanics, TU Dortmund University, Germany

Electroactive polymers are a class of smart materials which change shape when stimulated by an electric field. Typical applications are in the areas of, e.g., robotics, artificial muscles and sensors. For such applications a reliable prediction of properties and performance, including loading and performance limits, is important. The occurrence of damage and fracture has a strong influence on the material behaviour. In this context, this work combines a material model for electroactive polymers with a fracture model.

The behaviour of electroactive polymers is modelled as a quasi-static, large-strain, electro-mechanical material. The model is derived from a potential. The mechanical part of the model is a Neo-Hookean material, and the electro-mechanical coupling is described by the relative permittivity. The material parameters are chosen such that the model mimics the behaviour of a soft electroactive polymer. The model is analysed with respect to physically reasonable response and numerical stability. A phase field model for crack propagation is applied as the fracture model. This method describes crack propagation by means of an additional scalar field, the phase field, which takes values between zero and one: zero represents undamaged material, while one corresponds to a fully damaged state, i.e., a crack, at the particular location. Since the model is used for polymers, the phase field model is adapted to large strains. The electro-mechanically coupled problem is solved within a monolithic scheme; the phase field problem, however, is solved within a staggered algorithm. Crack propagation turns out to differ between the purely mechanical case and the electro-mechanically coupled case.

The proposed model is implemented in a nonlinear finite element framework. Representative numerical examples are discussed in order to show the applicability of the model.
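For orientation, a standard small-strain phase field fracture energy of Ambrosio–Tortorelli (AT2) type reads as follows (a textbook form, not the authors' large-strain electro-mechanical potential):

```latex
E(\mathbf{u}, d) = \int_\Omega (1-d)^2\, \psi_e\big(\boldsymbol{\varepsilon}(\mathbf{u})\big)\,\mathrm{d}\Omega
  + G_c \int_\Omega \left( \frac{d^2}{2\ell} + \frac{\ell}{2}\,|\nabla d|^2 \right) \mathrm{d}\Omega
```

Here $\psi_e$ is the elastic energy density, $G_c$ the critical energy release rate, and $\ell$ the regularization length. The degradation factor $(1-d)^2$ leaves the stiffness intact at $d = 0$ and removes it entirely at $d = 1$, matching the interpretation of the phase field described above; the talk's model additionally couples in the electric contribution and is formulated at large strains.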



4:10pm - 4:30pm

Surface elasticity in soft solids

S. Basu

Indian Institute of Technology Kanpur, India

Soft solids such as silicone gels, with bulk shear moduli ranging from ∼10 to 1000 kPa, often exhibit strongly strain-dependent surface stresses. Moreover, unlike conventional stiffer materials, the effects of surface stress in these materials manifest at length scales of tens of micrometers rather than nanometers. The theoretical framework for modelling such problems envisages a soft hyperelastic bulk on which an infinitesimally thin surface acts as a 'wrapper', with its own constitutive equation. We will recall the essential features of this theoretical framework and its FE implementation in the first part of this talk.

In the second, we will highlight simple force-twist, torque-twist, and force-extension (force-compression) responses of a soft cylinder held between two inert, rigid plates to demonstrate the role that the parameters in the surface constitutive model play in modulating the overall response of the bulk-surface system.

Finally, we will, through finite element simulations, demonstrate the effect of surface elasticity on two problems. The first is a variation of the well-known problem of an axisymmetric liquid capillary bridge between two rigid surfaces, with the liquid replaced by a soft solid. When the associated length scales are small, the meniscus of a soft solid capillary with significant surface elasticity exhibits a much richer variety of shapes than a liquid one. With increasing stretch, however, the meniscus tends to behave like a liquid bridge.

In the second problem, we explore the recent, rather counter-intuitive experimental observation that soft solids, when reinforced with small liquid inclusions, can become stiffer than the matrix material. We perform computational homogenisation on liquid-inclusion-reinforced soft solids with a view to understanding the effect of surface stresses on their overall stiffness and on the manner in which cracks propagate in them.

 
4:30pm - 4:40pmShort Break
4:40pm - 5:00pmClosing
Location: EI7
6:30pm - 10:00pmConference Dinner
Location: Wiener Rathauskeller