Conference Agenda

Overview and details of the sessions of this conference. Please select a date or location to show only the sessions held on that day or at that location. Please select a single session for a detailed view (with abstracts and downloads, if available).

 
 
 
Session Overview
Location: EI9
Date: Monday, 11/Sept/2023
11:10am - 12:30pm MS05-1: Multi-scale modelling and computational approaches to continua with micro-structure
Location: EI9
Session Chair: Andreas Warkentin
Session Chair: Adam Sky
 
11:10am - 11:30am

A computational multiscale approach to account for material interfaces in electrical conductors

D. Güzel1, T. Kaiser1, A. Menzel1,2

1TU Dortmund University, Germany; 2Lund University, Sweden

Every material in nature exhibits heterogeneous behaviour at a certain scale. Defects such as pores, grain boundaries, phase boundaries, secondary phases and particles can be sources of heterogeneity in a system. The effective behaviour of a material is significantly influenced by its underlying microstructure. Interfaces, such as grain boundaries, can affect the overall response of the material under consideration. Experimental findings show that grain boundaries have a critical influence on electrical properties [1], and in order to model the macroscopic behaviour realistically, interfaces at the microscale should be taken into account.

Motivated by the change of effective electrical properties due to interfaces, e.g. microcracks or grain boundaries, a computational multiscale framework for continua with interfaces at the microscale is proposed in this contribution. More specifically, the computational multiscale formulation for electrical conductors [2] is extended to account for interfaces at the microscale. Cohesive-type interfaces are considered at the microscale, such that displacement and electrical potential jumps can be accounted for. The governing equations for materials with interfaces under mechanical and electrical loads are provided. Based on these, a computational multiscale formulation is established. In particular, averaging theorems for kinematic quantities and for their energetic duals are discussed, and their consistency with an extended Hill-Mandel condition is shown for suitable boundary conditions. The coupling between the electrical and the mechanical subproblem is established by the constitutive equations at the material interface. In order to investigate deformation-induced property changes at the microscale, the evolution of interface damage is elaborated.

To show the capabilities of the proposed framework, different representative simulations are selected. In particular, the calculation of effective macroscopic conductivity tensors for given two-dimensional microstructures is discussed and the fully coupled effective electro-mechanical material response due to the damage evolution is presented.

References

[1] H. Bishara, S. Lee, T. Brink, M. Ghidelli, and G. Dehm, “Understanding grain boundary electrical resistivity in Cu: The effect of boundary structure,” ACS Nano, vol. 15, no. 10, pp. 16607–16615, 2021.

[2] T. Kaiser and A. Menzel, “An electro-mechanically coupled computational multiscale formulation for electrical conductors,” Arch. Appl. Mech., vol. 91, pp. 1–18, 2021.
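Editor's note: schematically, an extended Hill-Mandel condition of the kind referred to in this abstract augments the bulk averages with interface terms on the cohesive surfaces Γ. The following is an independent sketch with assumed notation, not the formulation of the talk:

```latex
% Schematic Hill-Mandel conditions with cohesive interfaces (notation assumed):
% bulk power plus interface power equals the macroscopic power,
% for the mechanical and the electrical subproblem respectively.
\bar{\boldsymbol{\sigma}} : \delta\bar{\boldsymbol{\varepsilon}}
  = \frac{1}{V}\int_{\mathcal{B}} \boldsymbol{\sigma} : \delta\boldsymbol{\varepsilon}\,\mathrm{d}v
  + \frac{1}{V}\int_{\Gamma} \boldsymbol{t} \cdot \delta\llbracket \boldsymbol{u} \rrbracket\,\mathrm{d}a ,
\qquad
\bar{\boldsymbol{j}} \cdot \delta\bar{\boldsymbol{e}}
  = \frac{1}{V}\int_{\mathcal{B}} \boldsymbol{j} \cdot \delta\boldsymbol{e}\,\mathrm{d}v
  + \frac{1}{V}\int_{\Gamma} j_n\, \delta\llbracket \phi \rrbracket\,\mathrm{d}a
```

Here e = −∇φ denotes the microscale electric field, j the current density, t the cohesive traction and j_n the normal interface current; ⟦·⟧ denotes the jump across Γ.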



11:30am - 11:50am

On the continuum modeling of flexoelectricity in ferroelectric materials

F. Sutter, M. Kamlah

Karlsruhe Institute of Technology, Germany

The technical relevance of small-scale electromechanical systems is rapidly increasing today. For this reason, the flexoelectric effect, which occurs in all dielectrics, is increasingly becoming a focus of research. This size-dependent effect describes the linear coupling between the electric polarization in the material and a strain gradient arising, for example, in bent cantilever beams. There is also a converse flexoelectric effect, defined as a mechanical stress response under the action of an electric field gradient, which is especially noticeable at sharp electrode tips in microelectromechanical systems (MEMS). In order to make these coupling effects technically usable, suitable models are required to predict the resulting system response.

A continuum-based model approach that takes into account elastic, dielectric, piezoelectric and flexoelectric effects is presented. Different model variants are discussed, and suitable finite element formulations for solving the electromechanical boundary value problem are presented. A mixed variational formulation is used here in order to reduce the higher continuity requirements caused by the occurring gradient fields. When considering ferroelectric materials (e.g. PZT), microstructural domain switching processes must be taken into account in order to predict the behavior realistically. A microscopically motivated material model representing these dissipative processes is introduced and incorporated into the flexoelectric continuum approach. The influence of acting strain and electric field gradients on the domain switching processes in ferroelectrics when considering the flexoelectric effect is studied by numerical experiments.
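Editor's note: for orientation, the direct flexoelectric coupling mentioned in this abstract is commonly written as a strain-gradient contribution to the polarization. This is a standard textbook form, not the specific constitutive model of the talk:

```latex
% Polarization with dielectric, piezoelectric and flexoelectric contributions:
P_l = \chi_{lk}\, E_k
    + e_{lij}\, \varepsilon_{ij}
    + \mu_{lijk}\, \frac{\partial \varepsilon_{ij}}{\partial x_k}
```

where χ is the dielectric susceptibility, e the piezoelectric tensor and μ the flexoelectric tensor; the converse effect couples the stress to the electric field gradient through the same tensor μ.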



11:50am - 12:10pm

Phase-field optimization schemes for periodic micro-lattices with anisotropic properties

A. Krischok, B. Yaraguntappa, M.-A. Keip

University of Stuttgart, Germany

Inspired by lattice structures observed in nature, periodic unit cells and their mechanical properties have attracted ever-increasing interest in recent years due to the growing performance of additive manufacturing methods. In order to incorporate cells with optimal properties into printed high-performance structures and devices that respond to given macroscopic stress-strain states in an optimal manner, one has to provide anisotropic properties tailored to these individual loads.

We discuss the performance of a phase-field approach for optimizing periodic micro-structures based on triply periodic minimal surfaces (TPMS) to obtain unit cells with an optimal homogenized stiffness response in the direction of the maximal principal stress. We show that different TPMS types exhibit fundamental differences in the way they can respond to uni-axial or shear-dominated loads. An essential aspect in optimizing cells is, on the one hand, to maximize the compliance with external loads and, on the other hand, to limit the danger of failure due to local buckling, which is achieved by preserving the connectivity of the cell grid.

Further aspects that are discussed include numerical strategies to handle linear systems of such high-resolution optimization problems in an efficient manner as well as strategies to verify the gain of the homogenized stiffness experimentally.
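Editor's note: as a concrete instance of a TPMS-type cell, the gyroid admits a simple implicit (level-set) description, which is also the kind of representation phase-field optimization typically starts from. A minimal sketch (function names and the voxel estimate are ours, not the authors'):

```python
import numpy as np

def gyroid(x, y, z):
    """Implicit level-set function of the gyroid, a classical TPMS type;
    the minimal surface is the zero level set."""
    return (np.sin(x) * np.cos(y)
            + np.sin(y) * np.cos(z)
            + np.sin(z) * np.cos(x))

def solid_fraction(level=0.0, n=64):
    """Volume fraction of the phase { gyroid < level } in one periodic
    cell [0, 2*pi)^3, estimated on an n^3 voxel grid."""
    t = np.linspace(0.0, 2.0 * np.pi, n, endpoint=False)
    X, Y, Z = np.meshgrid(t, t, t, indexing="ij")
    return float(np.mean(gyroid(X, Y, Z) < level))
```

Shifting `level` away from zero thickens one phase at the expense of the other, which is the usual handle for tuning the relative density of such cells; at `level = 0` the two phases occupy half the cell each by the odd symmetry of the gyroid function.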



12:10pm - 12:30pm

Toughening mechanisms of the Bouligand structure from the perspective of peridynamics

J. Tian1, Z. Yang1,2

1Institute of Solid Mechanics, School of Aeronautic Science and Engineering, Beihang University (BUAA), Beijing 100083, China; 2Aircraft & Propulsion Laboratory, Ningbo Institute of Technology (NIT), Beihang University (BUAA), Ningbo 315832, P.R. China

The Bouligand structure comprises twisted parallel fibers arranged in a helical pattern, which enables greater energy dissipation and fracture toughness than regular fiber-reinforced composites, mainly through a large crack surface area and crack bridging. Considering the complex nature of this structure, numerical models that accurately capture the propagation of cracks through its twisted fiber arrangement are limited. This is because the most popular simulation approach, the finite element method (FEM), is based on classical continuum mechanics, which uses spatial differential equations to describe continuous material behavior. In contrast, peridynamics is a computational framework that has been developed to overcome the limitations of classical continuum mechanics in describing crack propagation. Unlike FEM, peridynamics is a non-local continuum theory that uses integral equations instead of spatial differential equations to simulate material behavior. This characteristic makes it highly suitable for modeling the complex crack propagation behavior of the Bouligand structure. In this study, we present a bond-based peridynamics model that accurately describes fiber-reinforced composites with a small angle mismatch between adjacent layers in Bouligand structures. To investigate the fracture mechanisms of such a structure, we conduct comprehensive numerical simulations, including 3-point bending and low-velocity impact tests, to obtain detailed information on its deformation and failure behavior. This information is difficult to obtain solely through experimental and theoretical studies. Based on our insights into the toughening mechanisms of the Bouligand structure, we propose a novel approach to further enhance the material's fracture toughness by combining the Bouligand structure with other toughening mechanisms.

Overall, the current study provides important insights into the fracture behavior of Bouligand structures and presents new avenues for designing advanced materials with superior mechanical properties.
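Editor's note: the kernel of a bond-based peridynamics model of the kind mentioned above is the pairwise bond force as a function of bond stretch, with the bond removed once a critical stretch is exceeded. A minimal sketch of the standard prototype microelastic brittle (PMB) form (function names and parameters are ours, not the authors'):

```python
import numpy as np

def bond_stretch(xi, eta):
    """Bond stretch s = (|xi + eta| - |xi|) / |xi| for a bond with
    reference vector xi and relative displacement eta."""
    L0 = np.linalg.norm(xi)
    return (np.linalg.norm(xi + eta) - L0) / L0

def bond_force_density(xi, eta, c, s0):
    """Prototype microelastic brittle pairwise force density:
    f = c * s along the deformed bond direction, zero once the
    critical stretch s0 is exceeded (irreversible bond breakage
    would be tracked with a history flag in a full code)."""
    s = bond_stretch(xi, eta)
    if s > s0:
        return np.zeros_like(np.asarray(xi, dtype=float))  # bond broken
    d = (xi + eta) / np.linalg.norm(xi + eta)              # deformed direction
    return c * s * d
```

In a fiber-reinforced variant, the micromodulus `c` would additionally depend on the angle between the bond and the local fiber direction, which rotates layer by layer in a Bouligand arrangement.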

 
1:40pm - 3:20pm MS05-2: Multi-scale modelling and computational approaches to continua with micro-structure
Location: EI9
Session Chair: Andreas Warkentin
Session Chair: Adam Sky
 
1:40pm - 2:00pm

A Finite Element approach based on an efficient scale bridging concept for ferroelectric continua

R. Wakili, S. Lange, A. Ricoeur

University of Kassel, Germany

Ferroelectric as well as ferromagnetic materials are widely used in smart structures and devices as actuators, sensors, etc. Regarding their nonlinear behavior, a variety of models have been established in the past decades. Investigations of hysteresis loops or electromechanical/magnetoelectric coupling effects usually consider only simple boundary value problems (BVP). In [1] a new scale-bridging approach, the so-called Condensed Method (CM), is introduced to investigate the polycrystalline ferroelectric behavior at a macroscopic material point (MMP) without any kind of discretization scheme. Besides classical ferroelectrics, other fields of application of the CM have been exploited, e.g. [2, 3, 4]. Since the CM represents only the behavior at an MMP, the method itself is unable to solve complex BVP, which is technically disadvantageous if a structure with, e.g., notches or cracks is to be investigated.

In this paper, a concept is presented which integrates the CM into a Finite Element (FE) environment. By considering the constitutive equations of a homogenized MMP in the weak formulation, the FE framework represents the polycrystalline behavior of the whole discretized structure, which finally enables the CM to handle arbitrary BVP. A more sophisticated approach completely decouples the constitutive evolution from the FE discretization by introducing an independent material grid. Furthermore, the energetic consistency of the scale transitions from grain to MMP and from MMP to macroscale is investigated. Numerical examples are finally presented in order to verify the approach.

References

[1] Lange, S. and Ricoeur, A., A condensed microelectromechanical approach for modeling tetragonal ferroelectrics, International Journal of Solids and Structures 54, 2015, pp. 100 – 110.

[2] Lange, S. and Ricoeur, A., High cycle fatigue damage and life time prediction for tetragonal ferroelectrics under electromechanical loading, International Journal of Solids and Structures 80, 2016, pp. 181 – 192.

[3] Ricoeur, A. and Lange, S., Constitutive modeling of polycrystalline and multiphase ferroic materials based on a condensed approach, Archive of Applied Mechanics 89, 2019, pp. 973 – 994.

[4] Warkentin, A. and Ricoeur, A., A semi-analytical scale bridging approach towards polycrystalline ferroelectrics with mutual nonlinear caloric–electromechanical couplings, International Journal of Solids and Structures 200 – 201, 2020, pp. 286 – 296.



2:00pm - 2:20pm

Modeling of polycrystalline materials using a two-scale FE-FFT-based simulation approach

A. Schmidt, C. Gierden, J. Waimann, S. Reese

RWTH Aachen University, Germany

Components used in the aerospace or automotive industries are often exposed to multi-physical loading conditions and thus may simultaneously be subjected to high stresses and strains as well as temperature changes. Therefore, high-strength and high-temperature resistant materials such as metals are commonly used for applications in this field. Since the overall material behavior is directly influenced by the distribution, size and morphology of the individual grains of the underlying polycrystalline microstructure, detailed knowledge of this microstructural behavior is required in order to accurately predict the macroscopic material response. Hence, multi-scale simulation approaches have been developed. Considering a two-scale finite element (FE) and fast Fourier transform (FFT)-based simulation approach [1, 2], the macroscopic and microscopic boundary value problems are first solved individually by assuming scale separation. In this context, the homogeneous macroscale is subdivided into a discrete number of finite elements. The microscopic boundary value problem is attached to each macroscopic integration point and solved using the FFT-based simulation approach. The scale transition is then performed by defining the macroscopic quantities as the average value over the corresponding local fields. This simulation approach is an efficient alternative to the classical FE² method for the simulation of periodic unit cells [3]. To illustrate the applicability of our model, we will present several numerical examples.

[1] J. Spahn, H. Andrä, M. Kabel, and R. Müller. A multiscale approach for modeling progressive damage of composite materials using fast Fourier transforms. Computer Methods in Applied Mechanics and Engineering, 268, 871–883, 2014

[2] J. Kochmann, S. Wulfinghoff, S. Reese, J. R. Mianroodi, and B. Svendsen. Two-scale FE–FFT- and phase-field-based computational modeling of bulk microstructural evolution and macroscopic material behavior. Computer Methods in Applied Mechanics and Engineering, 305, 89–110, 2016

[3] C. Gierden, J. Kochmann, J. Waimann, B. Svendsen, and S. Reese. A review of FE-FFT-based two-scale methods for computational modeling of microstructure evolution and macroscopic material behavior. Archives of Computational Methods in Engineering, 29(6), 4115-4135, 2022.
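Editor's note: the fixed-point ("basic scheme") iteration underlying such FFT-based microscale solvers can be illustrated in one dimension for thermal conduction. This is an independent sketch under our own naming and discretization, not the authors' code:

```python
import numpy as np

def fft_homogenize_1d(k, E=1.0, k0=None, tol=1e-10, maxit=500):
    """Moulinec-Suquet-type fixed-point scheme for 1D periodic thermal
    conduction: find the temperature gradient field e(x) with prescribed
    mean E such that the flux j = k(x) e(x) is divergence-free, then
    return the effective conductivity <j>/E."""
    k = np.asarray(k, dtype=float)
    if k0 is None:
        k0 = 0.5 * (k.min() + k.max())       # reference conductivity
    N = k.size
    e = np.full(N, E)                        # initial guess: uniform gradient
    for _ in range(maxit):
        j_hat = np.fft.fft(k * e)            # flux in Fourier space
        e_hat = np.fft.fft(e) - j_hat / k0   # Green operator: 1/k0 for xi != 0
        e_hat[0] = N * E                     # enforce the mean gradient
        e_new = np.fft.ifft(e_hat).real
        converged = np.max(np.abs(e_new - e)) < tol
        e = e_new
        if converged:
            break
    return np.mean(k * e) / E
```

For a two-phase laminate this recovers the harmonic mean of the phase conductivities, which is the exact 1D result; in the actual two-scale setting, one such microscale solve is attached to every macroscopic integration point.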



2:20pm - 2:40pm

Immersed isogeometric analysis with boundary-conformal quadrature for thermo-elastic microstructure homogenization

Y. T. Elbadry1, P. Antolin2, O. Weeger1

1Cyber-Physical Simulation Group & Graduate School of Computational Engineering, Technische Universität Darmstadt, Germany; 2Institute of Mathematics, École Polytechnique Fédérale de Lausanne 1015 Lausanne, Switzerland

Numerical simulation of complex geometries and microstructures can be costly and time-consuming, in particular due to the long process of preparing the geometry for meshing and the meshing process itself [1]. Several methods have been proposed to overcome this issue, such as the extended finite element, meshless, Fourier transform and immersed boundary methods. Immersed boundary methods rely on embedding the physical domain into a Cartesian grid of finite elements and resolving the geometry only by adaptive numerical integration schemes. For instance, the isogeometric finite cell method (FCM) exploits the accuracy of higher-order, smooth B-Spline basis functions for the discretization and employs an octree scheme in order to refine the quadrature rule in trimmed elements. FCM has been applied successfully to various problems in solid mechanics, including linear and nonlinear elasticity, elasto-plasticity, and thermo-elasticity [2]. However, FCM typically requires several levels of refinement of the quadrature rule in order to deliver accurate results, which may lead to high computation times, especially for nonlinear, internal-variable, and coupled multiphysics problems.

In this work, we adopt a novel algorithm for boundary-conformal quadrature based on a high-order reparameterization of trimmed elements [3] to solve small and large deformation thermo-elastic problems using spline-based immersed isogeometric analysis (IGA) without the need for a body conformal finite element mesh. In particular, the Gauss points on trimmed elements are obtained by a NURBS reparameterization of the physical subdomains of the cut elements of the Cartesian grid. This ensures an accurate integration with a minimal number of quadrature points. Furthermore, using periodic B-Spline discretizations, periodic boundary conditions for homogenization can be automatically fulfilled. Several numerical examples are presented to show the accuracy and efficacy of the boundary-conformal quadrature algorithm.

REFERENCES

[1] T. Hughes, J. Cottrell, and Y. Bazilevs. Isogeometric analysis: CAD, finite elements, NURBS, exact geometry and mesh refinement. Computer Methods in Applied Mechanics and Engineering, 194:4135– 4195, 2005.

[2] Schillinger, D. and Ruess, M., 2015. The Finite Cell Method: A review in the context of higher-order structural analysis of CAD and image-based geometric models. Archives of Computational Methods in Engineering, 22(3), pp.391-455.

[3] Wei, X., Marussig, B., Antolin, P. and Buffa, A., 2021. Immersed boundary-conformal isogeometric method for linear elliptic problems. Computational Mechanics, 68(6), pp.1385-1405.



2:40pm - 3:00pm

Aspects on the modeling of mechanical metamaterials via the relaxed micromorphic model

M. Sarhil1, L. Scheunemann2, J. Schröder1, P. Neff3

1Institut für Mechanik, Universität Duisburg-Essen, Germany; 2Lehrstuhl für Technische Mechanik, RPTU Kaiserslautern-Landau, Germany; 3Lehrstuhl für Nichtlineare Analysis und Modellierung, Universität Duisburg-Essen, Germany

Metamaterials are attracting growing attention in industry and academia due to their unique mechanical behaviour. However, when scale separation does not hold, they show size effects. Generalized continua can model such materials as a homogeneous continuum while capturing the size effects.

The relaxed micromorphic model [1] describes the kinematics of each material point via a displacement vector and a second-order micro-distortion field. It has demonstrated many advantages over other higher-order continua, such as fewer material parameters and a drastically simplified strain energy compared to the classical micromorphic theory. Moreover, the relaxed micromorphic model operates between two bounds: linear elasticity with the micro and with the macro elasticity tensor. The strain energy function in the relaxed micromorphic model employs the Curl of the micro-distortion field, and therefore an H(curl)-conforming FEM implementation is necessary [2-3].

In our talk, we will present our recent results in identifying the material parameters and boundary conditions in the relaxed micromorphic model [4].

REFERENCES

[1] P. Neff, I.D. Ghiba, A. Madeo, L. Placidi and G. Rosi. A unifying perspective: the relaxed linear micromorphic continuum. Continuum Mechanics and Thermodynamics 26, 639-681 (2014).

[2] J. Schröder, M. Sarhil, L. Scheunemann and P. Neff. Lagrange and H(curl,B) based Finite Element formulations for the relaxed micromorphic model, Computational Mechanics 70, pages 1309–1333 (2022).

[3] A. Sky, M. Neunteufel, I. Muench, J. Schöberl, and P. Neff. Primal and mixed finite element formulations for the relaxed micromorphic model. Computer Methods in Applied Mechanics and Engineering 399, p. 115298 (2022).

[4] M. Sarhil, L. Scheunemann, J. Schröder, P. Neff. Size-effects of metamaterial beams subjected to pure bending: on boundary conditions and parameter identification in the relaxed micromorphic model. https://arxiv.org/abs/2210.17117 (2022).
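Editor's note: for readers unfamiliar with the model, the isotropic strain energy of the relaxed micromorphic continuum takes, schematically, the following form (symbols as commonly used in this literature; a Cosserat-type coupling term μ_c‖skew(∇u − P)‖² may be added):

```latex
W(\nabla u, P, \operatorname{Curl} P)
 = \mu_e \,\lVert \operatorname{sym}(\nabla u - P) \rVert^2
 + \tfrac{\lambda_e}{2}\, \operatorname{tr}^2(\nabla u - P)
 + \mu_{\mathrm{micro}} \,\lVert \operatorname{sym} P \rVert^2
 + \tfrac{\lambda_{\mathrm{micro}}}{2}\, \operatorname{tr}^2 P
 + \tfrac{\mu\, L_c^2}{2}\, \lVert \operatorname{Curl} P \rVert^2
```

with displacement u, micro-distortion P and characteristic length L_c; the dependence on Curl P (rather than the full gradient of P) is what necessitates the H(curl)-conforming discretizations of [2-3].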



3:00pm - 3:20pm

On the second-order computational homogenization of fluid-saturated porous media

E. Polukhov, M.-A. Keip

Institute of Applied Mechanics, University of Stuttgart, Germany

In the present contribution, we deal with a second-order computational homogenization of fluid flow in porous materials. Similar to the first-order computational homogenization in [1], the microscopic problem is formulated employing a minimization-type variational formulation at small strains; see also [2]. While a first-order Darcy-Biot-type fluid transport is considered at the microscale [2], the macroscopic problem is characterized by a second-order material response [3]. Hence, the present formulation allows the relaxation of the scale-separation assumption and the incorporation of the macroscopic second-order terms associated with deformation and fluid-flux fields at the microscale. The macro- and microscale boundary value problems are then bridged via an extended form of the Hill-Mandel condition, which results in suitable boundary conditions at the microscale and a set of constraints [4,5]. Finally, we present numerical examples that provide further insights into the presented formulation.

References:

[1] E. Polukhov and M.-A. Keip. Computational homogenization of transient chemo-mechanical processes based on a variational minimization principle. Advanced Modeling and Simulation in Engineering Sciences, 7, 1-26 (2020).

[2] C. Miehe, S. Mauthe, and S. Teichtmeister. Minimization principles for the coupled problem of Darcy--Biot-type fluid transport in porous media linked to phase field modeling of fracture. Journal of the Mechanics and Physics of Solids, 82, 186-217 (2015).

[3] G. Sciarra, F. dell'Isola, and O. Coussy. Second gradient poromechanics. International Journal of Solids and Structures, 44, 6607-6629 (2007).

[4] V.G. Kouznetsova, M.G.D. Geers and W.A.M. Brekelmans. Multi-scale second-order computational homogenization of multi-phase materials: a nested finite element solution strategy. Computer Methods in Applied Mechanics and Engineering, 193, 5525-5550 (2004).

[5] I. A. Rodrigues Lopes and F. M. Andrade Pires. Unlocking the potential of second-order computational homogenisation: An overview of distinct formulations and a guide for their implementation. Archives of Computational Methods in Engineering, 1-55 (2021).

 
4:10pm - 5:10pm MS12-1: Modeling and simulation of heterogeneous materials: microstructure and properties
Location: EI9
Session Chair: Markus Sudmanns
 
4:10pm - 4:30pm

A gradient plasticity formulation to model intergranular damage in polycrystals

J. Lara, P. Steinmann

Friedrich-Alexander-Universität Erlangen-Nürnberg, Germany

The motion of dislocations is one of the main mechanisms leading to inelastic deformation in crystalline materials. This motion is affected by other crystal imperfections; e.g., at grain boundaries the advancement of dislocations is hindered by the misalignment between the crystals' slip systems. The pile-up that occurs at the boundaries can lead to yielding inside the adjacent grains or to intergranular fracture. The damage induced by the latter acts as a precursor to failure at the macroscopic scale. As such, a formulation capable of describing the interaction between the aforementioned crystal imperfections could provide a feasible tool to predict failure of components made from crystalline materials.

To this end, a gradient crystal plasticity formulation which accounts for grain misorientation is enhanced by considering the grain boundary as a cohesive interface and by introducing a damage variable influencing the interaction between adjacent grains. Numerical examples demonstrating the material response based on the proposed formulation are presented and discussed.
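Editor's note: the cohesive-interface-with-damage idea described above can be sketched with the standard bilinear traction-separation law, in which a scalar damage variable degrades the interface stiffness. This is an illustrative sketch (parameter values and names are ours, not the authors' model):

```python
def cohesive_traction(delta, K=1.0e3, delta0=0.01, deltaf=0.1):
    """Bilinear cohesive traction-separation law with a scalar damage
    variable d, so that t = (1 - d) * K * delta: elastic up to delta0,
    linear softening until complete failure at deltaf."""
    if delta <= delta0:
        d = 0.0                            # undamaged
    elif delta >= deltaf:
        d = 1.0                            # fully damaged, zero traction
    else:
        # d chosen so the traction decays linearly to zero at deltaf
        d = (deltaf * (delta - delta0)) / (delta * (deltaf - delta0))
    return (1.0 - d) * K * delta
```

In a gradient crystal plasticity setting, the same damage variable can additionally scale the grain-boundary resistance to plastic slip, coupling interface decohesion to the dislocation pile-up behavior described in the abstract.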



4:30pm - 4:50pm

Material modelling for efficient finite element simulation of steel quenching

M. Schewe1, P. Scherm1, A. Menzel1,2

1TU Dortmund, Germany; 2Lund University, Sweden

Heat treatment plays an essential role in the production of cold-work steel parts. While the material properties are adjusted by the heat treatment, side effects like distortion and residual stresses have to be controlled. A good prediction of the heat treatment is key to reducing the necessary grinding time in subsequent finishing operations, and optimising the heat treatment process has the potential to save energy in the furnaces. This presentation discusses the application of simplified material models for the finite element (FE) simulation of quenching. The formation of martensite is covered by a purely temperature-dependent Koistinen-Marburger model, whereas the diffusive formation of bainite is modelled with an incrementally isothermal Johnson-Mehl-Avrami-Kolmogorov relation [1]. Both models are used in rate format and solved monolithically. The thermo-mechanical-microstructural coupling implemented in the FE software Abaqus is presented alongside numerical examples.

[1] de Oliveira, W.P., Savi, M.A., Pacheco, P.M.C.L., 2013. Finite element method applied to the quenching of steel cylinders using a multi-phase constitutive model. Arch Appl Mech 83, 1013–1037. https://doi.org/10.1007/s00419-013-0733-x
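Editor's note: the two phase-fraction models named in this abstract have simple closed forms. A minimal sketch in their usual (non-rate) form, with illustrative parameter values that are ours, not the authors':

```python
import math

def martensite_fraction(T, Ms=210.0, k=0.011):
    """Koistinen-Marburger: athermal martensite fraction as a function
    of temperature T below the martensite start temperature Ms,
    f = 1 - exp(-k (Ms - T)). Ms and k here are illustrative."""
    if T >= Ms:
        return 0.0
    return 1.0 - math.exp(-k * (Ms - T))

def bainite_fraction(t, k=1e-3, n=2.0):
    """Johnson-Mehl-Avrami-Kolmogorov: isothermal diffusive
    transformation fraction after holding time t,
    X = 1 - exp(-k t^n), with rate constant k and Avrami exponent n."""
    return 1.0 - math.exp(-k * t**n)
```

In the presented FE setting, both relations are used in rate format, i.e. differentiated with respect to time (or temperature) and integrated incrementally alongside the thermal and mechanical fields.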



4:50pm - 5:10pm

Investigation of the role of the barrier parameter for the infeasible primal-dual interior point method for single crystal plasticity

F. Steinmetz, L. Scheunemann

RPTU Kaiserslautern-Landau, Germany

Modeling single crystal plasticity is essential for understanding the behavior of polycrystalline materials such as metals and alloys. The mechanical properties of such materials depend on the microstructure of individual grains and their interaction through grain boundaries. Single crystal plasticity aims to model the behavior of an individual grain based on the microscopic lattice structure. It can be expressed mathematically using the concept of multisurface plasticity. Applying the principle of maximum plastic dissipation leads to an optimization problem where the individual slip systems of the crystal, represented by yield criteria, define the constraints of the optimization problem.

In the framework of rate-independent crystal plasticity models, the set of active slip systems is possibly non-unique, which makes the algorithmic treatment challenging. Typical approaches are either based on an active set search using various regularization techniques [3] or on simplifying the problem in such a way that it becomes unique [1]. In computationally intensive simulations, the problem needs to be evaluated many times; therefore, a stable, robust, and efficient algorithm is required to obtain satisfactory results.

Recently, an alternative strategy based on the infeasible primal-dual interior point method (IPDIPM) [2] has been presented in [4], which handles the ill-posed problem without perturbation techniques. Through the introduction of slack variables, a stabilization of the conventional active set search approach is achieved. The introduction of barrier terms with associated barrier parameters continuously penalizes the violation of feasibility of the intermediate solutions. This talk focuses in particular on the treatment of the barrier parameter and the related speed of convergence.

[1] M. Arminjon. A Regular Form of the Schmid Law. Application to the Ambiguity Problem. Textures and Microstructures, 14:1121–1128, 1991.

[2] A. S. El-Bakry, R. A. Tapia, T. Tsuchiya, and Y. Zhang. On the formulation and theory of the Newton interior-point method for nonlinear programming. Journal of Optimization Theory and Applications, 89(3):507–541, 1996.

[3] C. Miehe and J. Schröder. A comparative study of stress update algorithms for rate-independent and rate-dependent crystal plasticity. International Journal for Numerical Methods in Engineering, 50:273–298, 2001.

[4] L. Scheunemann, P. Nigro, J. Schröder, and P. Pimenta. A novel algorithm for rate independent small strain crystal plasticity based on the infeasible primal-dual interior point method. International Journal of Plasticity, 124:1–19, 2020.
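Editor's note: the role of the barrier parameter can be illustrated on a toy constrained problem. For min (x−2)² subject to x ≤ 1, the log-barrier subproblem min (x−2)² − μ log(1−x) has a closed-form minimizer that traces the central path toward the constrained optimum x = 1 as μ → 0. This is our own illustration, unrelated to the crystal plasticity implementation of the talk:

```python
import math

def central_path_point(mu):
    """Minimizer of the log-barrier problem
        min (x - 2)^2 - mu * log(1 - x)   over x < 1,
    for the toy problem min (x - 2)^2 s.t. x <= 1.
    Stationarity 2(x - 2) + mu / (1 - x) = 0 reduces to the quadratic
    2 x^2 - 6 x + (4 - mu) = 0; the root below 1 is returned."""
    return (6.0 - math.sqrt(4.0 + 8.0 * mu)) / 4.0
```

Driving μ to zero too aggressively degrades the conditioning of the subproblems, while reducing it too slowly wastes iterations; balancing these effects is precisely the tuning question the talk addresses for the crystal plasticity setting.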

 

Date: Tuesday, 12/Sept/2023
9:00am - 10:40am MS12-2: Modeling and simulation of heterogeneous materials: microstructure and properties
Location: EI9
Session Chair: Markus Sudmanns
 
9:00am - 9:20am

Data-driven modeling of the plastic yield behaviour of nanoporous metals under multiaxial loading

L. Dyckhoff1, N. Huber1,2

1Helmholtz Centre Hereon, Germany; 2Hamburg University of Technology, Germany

Nanoporous metals, built out of complex ligament networks, can be produced with an additional level of hierarchy [S. Shi et al., Science 371, 1026-1033, 2021]. The resulting complexity of the structure makes modeling of the mechanical behaviour computationally highly expensive and time consuming. In addition, multiaxial stresses occur in the higher-hierarchy ligaments. Therefore, knowledge of the multiaxial material behaviour, including the 6D yield surface, is required. For finite element (FE) modeling, we separate the hierarchical nanoporous structure into the upper and the lower level of hierarchy. This allows independent adjustment of structural parameters on both hierarchy levels and therefore an efficient analysis of structure-property relationships. Furthermore, a promising approach to significantly reduce the computational cost is to use surrogate models and FE beam models to predict the mechanical behaviour of the lower level of hierarchy.

As a first step towards such a model, we studied the elastic behaviour and yield surfaces of idealized diamond and Kelvin beam models, representing the lower level of hierarchy, using FE simulations. The yield surfaces exhibit pronounced anisotropy, which could not be described properly by models like the Deshpande-Fleck model for isotropic solid foams. For this reason, we used data-driven and hybrid artificial neural networks, as well as data-driven support vector machines, and compared them regarding their potential for the prediction of these yield surfaces. All considered methods turned out to be well suited and resulted in relative errors < 4.5%. Of the considered methods, support vector machines exhibit the highest generalization and accuracy in 6D stress space and outside the range of the used training data.

Implementation of the trained support vector classifier (SVC) into Abaqus [A. Hartmaier, Materials 13, 1060, 2022] results in a promising agreement with the mechanical material response of the original FE beam model, provided that a non-associated flow rule is used. Furthermore, the evolution of the yield surface at higher plastic strains during radial loading was included and as such allows an implementation of the hardening behaviour into the UMAT.
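Editor's note: the Deshpande-Fleck criterion that the abstract uses as the isotropic baseline combines the von Mises and the mean stress into a single equivalent stress. A minimal sketch of its standard form (function name and signature are ours):

```python
import math

def deshpande_fleck_equiv(sigma_e, sigma_m, alpha):
    """Deshpande-Fleck equivalent stress for isotropic solid foams:
        sigma_hat^2 = (sigma_e^2 + alpha^2 sigma_m^2) / (1 + (alpha/3)^2),
    with sigma_e the von Mises stress, sigma_m the mean (hydrostatic)
    stress, and alpha a shape parameter controlling pressure sensitivity.
    Yield occurs when sigma_hat reaches the yield strength."""
    return math.sqrt((sigma_e**2 + alpha**2 * sigma_m**2)
                     / (1.0 + (alpha / 3.0)**2))
```

Because this surface is isotropic (it depends on the stress only through two invariants), it cannot reproduce the pronounced anisotropy of the diamond and Kelvin beam models, which is what motivates the data-driven surrogates in the abstract.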



9:20am - 9:40am

Mechanical properties of additively manufactured lattice structures

H. Kruse1, H. Mapari2, J. H. Schleifenbaum1

1RWTH Aachen University; 2Ansys Germany GmbH

In recent years, the application of lattice structures in additive manufacturing (AM) has gained a lot of attention due to their unique properties, such as high surface-to-volume ratio and self-supporting capabilities. They enable the production of complex parts that are difficult or even impossible to manufacture using conventional methods such as casting or machining. However, despite the advantages of 3D printing over conventional manufacturing technologies, its potential is limited by various phenomena such as warpage due to residual stresses and strains or porosity, leading to a lack of knowledge about the mechanical properties of lattice structures and hindering their commercial application.

To address this shortcoming, this study employs Finite Element Analysis (FEA) to examine the influence of residual stress and porosity defects on the mechanical properties of lattice structures, including Young's modulus, yield strength, and Specific Energy Absorption (SEA). The simulation results are validated through experimental data on the compressive behavior of lattice structures produced through Laser Powder Bed Fusion (L-PBF) with varying parameters. The sequentially coupled thermomechanical finite element model utilized in the simulation evaluates the thermal histories and residual stress evolution throughout the entire AM process. The findings of this study provide valuable insights into the mechanical properties of lattice structures, paving the way for their practical applications in diverse fields.
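Editor's note: of the three properties evaluated in this study, the specific energy absorption has the least standard definition in code, so a minimal sketch may help: SEA is the energy absorbed per unit mass, obtained by integrating the compressive stress-strain curve up to a densification cutoff. The function name, signature and cutoff handling are our assumptions:

```python
import numpy as np

def specific_energy_absorption(strain, stress, volume, mass, eps_d):
    """Specific energy absorption (SEA) of a lattice specimen:
    integrate the compressive stress-strain curve up to the
    densification strain eps_d (trapezoidal rule) to get the absorbed
    energy density [J/m^3], then convert to energy per unit mass."""
    m = strain <= eps_d
    s, t = strain[m], stress[m]
    w = np.sum(0.5 * (t[1:] + t[:-1]) * np.diff(s))  # energy density
    return w * volume / mass                          # [J/kg]
```

Porosity and residual stresses from the L-PBF process shift both the plateau stress and the densification strain, which is how the defects studied in the abstract feed through to the SEA.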



9:40am - 10:00am

Multiscale modeling of thermal conductivity of concrete at elevated temperatures

S. Peters

Ruhr University Bochum

Apart from experimentation, computational models are helpful to aid understanding and subsequently predict the damage processes of concrete under fire, considering physical effects such as chemical dehydration or aggregate-matrix mismatch. These temperature-driven multi-physical deterioration processes are mainly influenced by the macroscopic effective thermal conduction because it predominantly governs the macroscopic temperature distribution. To quantify the contribution of each degradation factor to the macroscopic effective thermal conductivity separately, a multiscale model for concrete is proposed.

Four scales of observation characterize the concrete, namely hydrates, cement paste, mortar, and concrete. Based on Eshelby-type homogenization techniques, such as Mori-Tanaka and Self-Consistent schemes, the effective thermal conductivity of different blended concretes is calculated at elevated temperatures, considering thermally induced chemical porosity increase of hydrates, initial microcrack density, aggregate degradation, and aggregate-matrix bonding via interfacial transition zones (ITZ).

A stoichiometric model based on an Arrhenius equation is used to predict the volume fraction of chemical dehydration products and porosity at the level of hydrates. The porosity increase and initial crack density lower the thermal conductivity on the cement paste level, which is calculated using the Mori-Tanaka homogenization framework by considering randomly distributed spherical pores and three orthogonally oriented penny-shaped inclusions, respectively, embedded in the matrix material. The effective thermal conductivity of mortar and concrete is determined within the same framework using an analytical expression based on the Kapitza resistance, which characterizes the ITZ morphology.
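For spherical inclusions, the Mori-Tanaka estimate of the effective thermal conductivity coincides with the Maxwell-Garnett formula. A minimal sketch of this building block follows; the conductivity and porosity values are hypothetical, not those of the study:

```python
import numpy as np

def mori_tanaka_spherical(k_matrix, k_inclusion, f):
    """Effective thermal conductivity of a matrix with a volume
    fraction f of spherical inclusions (Mori-Tanaka estimate)."""
    num = 2.0 * k_matrix + k_inclusion + 2.0 * f * (k_inclusion - k_matrix)
    den = 2.0 * k_matrix + k_inclusion - f * (k_inclusion - k_matrix)
    return k_matrix * num / den

# Pores (k_inclusion = 0) reduce to k_eff = k_m * 2(1 - f) / (2 + f)
k_eff = mori_tanaka_spherical(1.5, 0.0, 0.2)  # W/(m K), 20 % porosity
```

For the penny-shaped microcracks and the ITZ-coated aggregates mentioned above, the Eshelby concentration tensors take different forms, but the homogenization step is structurally the same.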

Concretes with different water-to-cement ratios, aggregate types, and cement paste conductivities are analyzed after the validation process in a sensitivity study comparing the influence on the effective thermal conductivities of concrete at elevated temperatures. Furthermore, the influence of the ITZ morphology and initial crack density is studied in detail. Based on the discussed analyses, it is demonstrated that the model predicts the thermal conductivity deterioration of different concretes or cement compositions from 20°C to 850°C with adequate accuracy.



10:00am - 10:20am

On the numerical analysis of macro- and microscopic residual stresses in 3D

S. Hellebrand, D. Brands, J. Schröder

University of Duisburg-Essen, Germany

Current research aims at the targeted introduction of residual stresses into components during their manufacturing process instead of minimizing them, for example, by subsequent heat treatments. Hot bulk forming processes offer a good opportunity to modify residual stresses in a specific way, since the interactions of thermal, mechanical and metallurgical kind can be exploited. In general, such a hot bulk forming process of a steel component can be divided into three steps: First, the component is heated to over 1000°C, which leads to full austenitization of the material and an initial configuration that is assumed to be stress-free. Subsequently, forming takes place at this high temperature before the component is cooled down to room temperature. This third step results in a diffusion-controlled or diffusionless phase transformation on the microscale depending on the cooling rate, see [1].

In this contribution, the focus is on the last process step, i.e., cooling. Different cooling media lead to different phase transformations, which in turn lead to different residual stress distributions in the component. Motivated by the definition of residual stresses, which are characterized by the scale they act on, multi-scale finite element simulations of this cooling process are performed. The comparison of two- and three-dimensional boundary value problems shows the importance of the third dimension to represent the temperature development in the component and to predict residual stress distributions well. For this reason, a three-dimensional FE^2 calculation is presented, see [2], in which the microscale is determined by a three-dimensional representative volume element. The resulting residual stresses on macro- and microscale are evaluated and discussed.

[1] B.-A. Behrens, J. Schröder, D. Brands, K. Brunotte, H. Wester, L. Scheunemann, S. Uebing, C. Kock. Numerische Prozessauslegung zur gezielten Eigenspannungseinstellung in warmmassivumgeformten Bauteilen unter Berücksichtigung von Makro- und Mikroskala, Forschung im Ingenieurwesen (Engineering Research), 10.1007/s10010-021-00482-x, 2021.

[2] J. Schröder. A numerical two-scale homogenization scheme: the FE2-method. In J. Schröder and K. Hackl (Eds.), Plasticity and Beyond - Microstructures, Crystal-Plasticity and Phase Transitions, Volume 550 of CISM Courses and Lectures, 1–64. Springer, (2014).



10:20am - 10:40am

Predicting yield stress in a nano-precipitate strengthened Austenitic steel using an ICME approach

C. A. Stewart1, E. A. Antillon1, M. Sudmanns2,3, J. A. El-Awady2, K. E. Knipling1, P. G. Callahan1

1U.S. Naval Research Laboratory, 4555 Overlook Ave SW, Washington, DC 20375; 2Department of Mechanical Engineering, Johns Hopkins University, Baltimore, MD 21218, USA; 3RWTH Aachen University, 52074 Aachen, Germany

A recent thrust in structural alloys research is the development of advanced Austenitic steels strengthened by nano-scale precipitates. Of the candidate precipitate phases, nanoscale dispersions of the ordered BCC (B2) NiAl phase have been demonstrated to provide significant increases in yield strength, while allowing reasonable ductility despite the intermetallic nature of this phase. The chemical complexity of the alloy, involving particle sizes on the order of a few nm, severely complicates the physically based prediction of the macroscale mechanical properties induced by the characteristics of the particles and their ensembles.

Therefore, we use an integrated computational materials engineering (ICME) approach towards materials design with the aim of predicting mechanical properties such as yield strength based on an input material microstructure. Given the small size and high density of precipitates in the current alloy, we develop a coarse-grained approach for predicting a representative critical resolved shear stress (CRSS) inside local volume elements following the percolation idea for flow-stress from Kocks and Mecking [1]. Using this approach, we model realistic nano-precipitate size distributions in large scale Discrete Dislocation Dynamics (DDD) simulations with the aim of predicting macroscale mechanical properties.

This work seeks to fill the gap in modeling plastic deformation phenomena in stainless steels incorporating chemical heterogeneities on the nanoscale and resulting mechanical properties. Informed by atomistic simulations (DFT/MD), discrete microstructural data extracted from atom probe tomography, and meso-scale modeling (DDD), we present a unique coarse-graining approach to predicting material yield strength for materials with nanoprecipitates.

[1] U.F. Kocks, H. Mecking, Progress in Materials Science 48 (2003) 171–273

 
1:40pm - 3:20pmMS01-1: ANN and data-driven approaches in material and structural mechanics
Location: EI9
Session Chair: Denny Thaler
Session Chair: Paul Seibert
 
1:40pm - 2:00pm

A novel approach to compressible hyperelastic material modeling using physics-augmented neural networks

L. Linden1, K. Kalina1, J. Brummund1, D. Klein2, O. Weeger2, M. Kästner1

1Institute of Solid Mechanics, Chair of Computational and Experimental Solid Mechanics, TU Dresden, Germany; 2Cyber-Physical Simulation Group & Graduate School of Computational Engineering, Department of Mechanical Engineering & Centre for Computational Engineering, TU Darmstadt, Germany

The long-standing challenge of simultaneously satisfying all physical requirements for hyperelastic constitutive models, which have been widely debated over the last few decades, could be regarded as "the main open problem of the theory of material behavior" [3].

This is particularly true for neural network (NN)-based constitutive modeling of hyperelastic materials, especially for the compressible case.

Therefore, a hyperelastic constitutive model based on physics-augmented neural networks (PANNs) is presented which fulfills all common physical requirements by construction, and in particular, is applicable for compressible material behavior.

This model combines established hyperelasticity theory with the latest machine learning advancements, using an input-convex neural network (ICNN) to express the hyperelastic potential.

The presented model satisfies common physical requirements, including compatibility with the balance of angular momentum, objectivity, material symmetry, polyconvexity, and thermodynamic consistency [1,2].

To ensure that the model produces physically sensible results, analytical growth terms and normalization terms are used. These terms, which have been developed for both isotropic and transversely isotropic materials, guarantee that the undeformed state is exactly stress-free and has zero energy [1].

The non-negativity of the hyperelastic potential is numerically verified by sampling the space of admissible deformation states.

Finally, the applicability of the model is demonstrated through various examples, such as calibrating the model on data generated with analytical potentials and by applying it to finite element (FE) simulations.

Its extrapolation capability is compared to models with reduced physical background, showing excellent and physically meaningful predictions with the proposed PANN.
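The input-convexity that underpins polyconvex ICNN potentials can be illustrated in a few lines. The architecture below is a hypothetical two-layer toy, not the authors' network: convexity in the input is guaranteed by non-negative internal weights combined with a convex, non-decreasing activation (softplus).

```python
import numpy as np

rng = np.random.default_rng(0)

W0 = rng.normal(size=(8, 3))          # first layer may be unconstrained
Wz = np.abs(rng.normal(size=(1, 8)))  # non-negative weights -> convexity

def softplus(x):
    return np.log1p(np.exp(x))

def icnn_potential(x):
    """Convex scalar potential of an invariant vector x (sketch)."""
    return (Wz @ softplus(W0 @ x)).item()

# Midpoint convexity check for two invariant vectors a, b:
a, b = np.array([1.0, 1.0, 1.0]), np.array([2.0, 0.5, 1.5])
mid = icnn_potential(0.5 * (a + b))
assert mid <= 0.5 * (icnn_potential(a) + icnn_potential(b)) + 1e-12
```

In a PANN, such a convex network of polyconvex invariants is complemented by the analytical growth and normalization terms mentioned above, so that the undeformed state is exactly stress- and energy-free.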

[1] Linden, L., Klein, D. K., Kalina, K. A., Brummund, J., Weeger, O. and Kästner, M., Neural networks meet hyperelasticity: A guide to enforcing physics, (submitted 2023).

[2] Kalina, K. A., Linden, L., Brummund, J. and Kästner, M., FEANN - An efficient data-driven multiscale approach based on physics-constrained neural networks and automated data mining, Comput. Mech. (2023).

[3] Truesdell, C. and Noll, W., The Non-Linear Field Theories of Mechanics. 3rd ed. Springer Berlin Heidelberg, 2004.



2:00pm - 2:20pm

Discrete data-adaptive approximation of hyperelastic energy functions

S. Wiesheier, J. Mergheim, P. Steinmann

Friedrich-Alexander-Universität Erlangen-Nürnberg, Germany

The prevailing paradigm to model the behavior of rubber-like materials is hyperelasticity. However, phenomenological constitutive modeling is prone to uncertainty and results in a loss of information, as data coming from experiments are not used directly in calculations. Besides, selecting an appropriate strain energy function for the problem under consideration is left to the engineer and is often based on experience.

Data-driven approaches are a promising alternative to constitutive modeling. We present a new data-adaptive approach to model hyperelastic rubber-like materials at finite strains. Our proposed modeling procedure combines the advantages of phenomenological hyperelasticity with the data-driven paradigm of directly including experimental data in calculations. Important constraints, such as thermodynamic consistency, material objectivity (frame indifference) and material symmetry, are satisfied a priori. In essence, we suggest formulating a finite-element-like approximation of the strain energy function as a sum of basis functions multiplied by parameters. The basis functions are expanded over the space of invariants which is, in the most generic form, formed by the principal invariants of the right Cauchy-Green tensor. Support points, at which the parameters are defined, are distributed in the space of invariants. In other words, the parameters are the values of the discrete strain energy function at the support points. We consider linear Lagrangian polynomials as basis functions, which boils down to (bi)linear interpolation of the parameters. The parameters are determined based on measured full-field displacements, e.g. obtained from digital image correlation, and reaction forces by solving a non-linear optimization problem. Within this optimization problem, the 2-norm of the residual vector, i.e. the difference between measured and computed displacements and reaction forces, is minimized by altering the parameters. The proposed discrete approximation of the strain energy function is flexible enough to discover any admissible form of strain energy function, and the fact that our approach does not rely on measured stresses is an advantage over many data-driven approaches presented to date.
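The finite-element-like approximation of the strain energy function can be sketched as a bilinear interpolation of nodal parameters over a grid of support points in invariant space; the grid and parameter values below are hypothetical toy numbers:

```python
import numpy as np

# Support points in the (I1, I2) invariant space and nodal parameters,
# i.e. the values of the discrete strain energy at the support points.
I1_grid = np.array([3.0, 4.0, 5.0])
I2_grid = np.array([3.0, 4.0, 5.0])
params = np.zeros((3, 3))   # hypothetical nodal energy values
params[1, 1] = 0.2

def energy(I1, I2):
    """Bilinear (Lagrangian) interpolation of the nodal parameters."""
    i = np.clip(np.searchsorted(I1_grid, I1) - 1, 0, len(I1_grid) - 2)
    j = np.clip(np.searchsorted(I2_grid, I2) - 1, 0, len(I2_grid) - 2)
    s = (I1 - I1_grid[i]) / (I1_grid[i + 1] - I1_grid[i])
    t = (I2 - I2_grid[j]) / (I2_grid[j + 1] - I2_grid[j])
    return ((1 - s) * (1 - t) * params[i, j] + s * (1 - t) * params[i + 1, j]
            + (1 - s) * t * params[i, j + 1] + s * t * params[i + 1, j + 1])
```

In the identification step described above, the `params` array would be the optimization variables, adjusted until computed displacements and reaction forces match the measurements.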

We verify our approach and show that computation times are similar compared to those of phenomenological models. By numerical examples, we illustrate that only a moderate number of parameters is required to approximate well-known smooth strain energy functions sufficiently well and demonstrate the ability of our approach to re-identify an extended number of parameters. We also show the robustness of our approach against noisy experimental data.



2:20pm - 2:40pm

Viscoelastic Constitutive Artificial Neural Networks (vCANNs) – a framework for data-driven anisotropic nonlinear finite viscoelasticity

K. P. Abdolazizi1, K. Linka1, C. J. Cyron1,2

1Hamburg University of Technology; 2Institute of Material Systems Modeling, Helmholtz-Zentrum Hereon

Finite linear viscoelastic (FLV) or quasi-linear viscoelastic (QLV) models are commonly used to model the constitutive behavior of polymeric materials. However, these models are limited in their ability to accurately represent the nonlinear viscoelastic behavior of materials, particularly in capturing their strain-dependent viscous behavior. To address this issue, we have developed viscoelastic Constitutive Artificial Neural Networks (vCANNs), a novel physics-informed machine learning framework. vCANNs rely on the concept of generalized Maxwell models with nonlinear strain (rate)-dependent properties represented by neural networks. With their flexibility, vCANNs can automatically identify accurate and sparse constitutive models for a wide range of materials. To test the effectiveness of vCANNs, we trained them using stress-strain data from various synthetic and biological materials under different loading conditions, e.g., relaxation tests, cyclic tension-compression tests, and blast loads. The results show that vCANNs can learn to accurately and efficiently represent the behavior of these materials without human guidance.
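vCANNs replace the constant moduli and relaxation times of the generalized Maxwell model with strain(-rate)-dependent neural networks. The classical single-branch building block, with a simple recursive (exponential) update of the internal stress, can be sketched as follows; the parameters and the particular update rule are hypothetical simplifications:

```python
import numpy as np

E_inf, E1, tau = 1.0, 0.5, 2.0   # hypothetical moduli and relaxation time

def stress_history(strain, dt):
    """1D equilibrium spring plus one Maxwell branch, integrated with a
    crude exponential recursive update (sketch, not the vCANN scheme)."""
    h = 0.0                       # internal (viscous branch) stress
    out = []
    for i in range(1, len(strain)):
        de = strain[i] - strain[i - 1]
        a = np.exp(-dt / tau)
        h = a * h + E1 * a * de   # branch stress relaxes, new strain loads it
        out.append(E_inf * strain[i] + h)
    return np.array(out)
```

Under a strain step held constant, the computed stress relaxes from its peak toward the equilibrium value `E_inf * strain`, reproducing the relaxation tests mentioned above in the simplest linear setting.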



2:40pm - 3:00pm

Physics-Informed Neural Networks (PINNs) for solving inverse problems: constitutive model calibration

H. Xu1,2, P. Markovic1,2, A. E. Ehret1,2, E. Mazza1,2, E. Hosseini1

1Empa, Swiss Federal Laboratories for Materials Science and Technology, Dübendorf, Switzerland; 2ETH Zurich, Institute for Mechanical Systems, Zürich, Switzerland

Ensuring the safe and reliable operation of critical load-bearing components requires maintaining their mechanical integrity. Constitutive material models play a crucial role in analyzing mechanical integrity, and their accuracy is essential for assessing the structural integrity of load-bearing components. Notably, mechanical integrity assessments of high temperature components require constitutive models representing the highly nonlinear deformation response of alloys under various loading scenarios and across a wide temperature range. The Chaboche viscoplastic model is among the most well-known constitutive models for representing the isotropic-kinematic hardening behavior of materials. This model employs a set of differential equations to define the viscoplastic strain rate tensor as a function of the stress tensor and several scalar and tensorial internal variables. Calibrating this model for different temperature and loading conditions however requires using experimental data from various mechanical tests and determining a large number of model parameters, which is typically achieved by performing a computationally expensive inverse analysis. To address this computational challenge, we propose a new method that leverages scientific machine learning to accelerate solving the inverse problem. Specifically, we use the Physics Informed Neural Networks (PINNs) framework to incorporate the Chaboche model formulation into neural networks. In this contribution, we illustrate the framework in application to Hastelloy X, by calibrating and determining >30 model parameters based on observations from various cyclic tests at different strain rates in the temperature range of 22-1000°C.



3:00pm - 3:20pm

Advancements in multiscale ML-based constitutive modeling of history-dependent materials

Y. Heider

RWTH Aachen University, Germany

Many materials exhibit history dependence in their response. This is evident in the inelastic response of solid materials or in the hysteretic retention curve of multiphase porous materials. Within the multiscale simulation of history-dependent materials, the present work focuses on testing and comparing different supervised machine learning (ML) approaches to generate suitable constitutive models. This includes the application of recurrent neural networks (RNN), of 1D convolutional neural networks (1D CNN), and of the eXtreme Gradient Boosting (XGBoost) library.

The database used in the supervised learning relies on lower-scale two-phase lattice Boltzmann simulations, applied to deformable and anisotropic representative volume elements (RVEs) of the porous materials as presented in [1,2]. In the training, the inputs include the capillary pressure and its history in addition to the porosity, whereas the output is the degree of saturation. The comparison among the different ML approaches covers the accuracy in predicting the correct saturation degree and the efficiency of the training.

REFERENCES

[1] Heider, Y; Suh H.S.; Sun W. (2021): An offline multi-scale unsaturated poromechanics model enabled by self-designed/self-improved neural networks. Int J Numer Anal Methods;1–26.

[2] Chaaban, M.; Heider, Y.; Markert, B. (2022): A multiscale LBM–TPM–PFM approach for modeling of multiphase fluid flow in fractured porous media. Int J Numer Anal Methods Geomech, 46, 2698-2724.

 
3:50pm - 5:50pmMS01-2: ANN and data-driven approaches in material and structural mechanics
Location: EI9
Session Chair: Yousef Heider
Session Chair: Lennart Linden
 
3:50pm - 4:10pm

Reconstructing orientation maps in MCRpy

P. Seibert1, A. Safi2, A. Raßloff1, K. Kalina1, B. Klusemann2,3, M. Kästner1,4

1Institute of Solid Mechanics, TU Dresden, Germany; 2Institute of Materials Mechanics, Helmholtz-Zentrum Hereon, Germany; 3Institute of Product and Process Innovation, Leuphana University of Lüneburg, Germany; 4Dresden Center for Computational Materials Science, TU Dresden, Germany

Many data-driven approaches in computational material engineering and mechanics rely on realistic volume elements for conducting numerical simulations. Examples include multiscale simulations based on neural networks or reduced-order models as well as the exploration and optimization of structure-property linkages. This motivates microstructure characterization and reconstruction (MCR). In previous contributions, MCRpy [1] has been introduced as a modular open-source tool for descriptor-based MCR, where any descriptors can be used for characterization and any loss function combining any descriptors can be minimized using any optimizer for reconstruction. A key feature of MCRpy is that differentiable descriptors are available and can be used in conjunction with gradient-based optimizers. This allows the underlying optimization problem to converge several orders of magnitude faster than with the previously used stochastic optimizers [2, 3]. While MCRpy and the gradient-based reconstruction have been presented in previous contributions for microstructures with multiple phases, the present contribution extends these concepts towards orientation maps.

After a brief introduction to MCRpy, the main difficulties of extending gradient- and descriptor-based microstructure reconstruction to orientation maps are discussed. Besides the symmetry of orientation itself after 360°, additional crystal symmetries need to be incorporated and singularities need to be avoided. For this reason, differentiable statistical descriptors are defined in terms of symmetrized harmonic basis functions defined on the 4D unit quaternion hypersphere [4]. Based on a generic combination of descriptors comprising two-point statistics of orientation information and the orientation variation, the optimization problem is defined in the fundamental region of a neo-Eulerian orientation space. These and other measures are motivated and discussed in detail. The capabilities of the method are demonstrated by exemplarily applying it to various microstructures. In this context, it is mentioned that all algorithms are made publicly available in MCRpy and it is demonstrated how to use them. Furthermore, it is shown how to extend MCRpy by defining a new microstructure descriptor in terms of any desired orientation representation or basis function and readily using it for reconstruction without additional implementation effort.
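For phase fields, the differentiable two-point statistics used in descriptor-based reconstruction can be sketched via an FFT autocorrelation (assuming a periodic microstructure); the orientation-aware descriptors of this talk build on symmetrized harmonic basis functions instead, but the structure of the descriptor is analogous. The toy microstructure below is hypothetical:

```python
import numpy as np

def two_point_statistics(phase):
    """Spatial two-point correlations of a phase indicator field,
    computed via FFT under periodic boundary conditions."""
    F = np.fft.fft2(phase)
    return np.real(np.fft.ifft2(F * np.conj(F))) / phase.size

ms = np.zeros((8, 8))
ms[:, :4] = 1.0                 # toy two-phase microstructure
S2 = two_point_statistics(ms)
# The zero-shift value equals the volume fraction of the phase.
```

Because every operation here is differentiable, such a descriptor can enter a gradient-based reconstruction loss directly, which is the key to the speed-up reported in [2, 3].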

[1] Seibert, Raßloff, Kalina, Ambati, Kästner, Microstructure Characterization and Reconstruction in Python: MCRpy, IMMJ, 2022

[2] Seibert, Ambati, Raßloff, Kästner, Reconstructing random heterogeneous media through differentiable optimization, COMMAT, 2021

[3] Seibert, Raßloff, Ambati, Kästner, Descriptor-based reconstruction of three-dimensional microstructures through gradient-based optimization, Acta Materialia, 2022

[4] Mason, Analysis of Crystallographic Texture Information by the Hyperspherical Harmonic Expansion, PhD Thesis, 2009



4:10pm - 4:30pm

Comparison of model-free and model-based data-driven methods in computational mechanics

A. A. Khedkar, J. Stöcker, S. Zschocke, M. Kaliske

Technische Universität Dresden, Germany

In the context of homogenization approaches, data-driven methods entail advantages due to the ability to capture complex behaviour without the assumption of a specific material model. Two classes are distinguished: constitutive-model-based data-driven methods, which approximate the constitutive relations by training artificial neural networks, and constitutive-model-free data-driven computational mechanics, which directly incorporates stress-strain data in the analysis. Neural network based constitutive descriptions are one of the most widely used data-driven approaches in computational mechanics. In contrast to this, the method of distance-minimizing data-driven computational mechanics makes it possible to bypass the material modelling step entirely by iteratively obtaining a physically consistent solution that is close to the material behaviour represented by the data. A generalization of this method providing increased robustness with respect to outliers in the underlying data set is the maximum entropy data-driven solver. Additionally, a tensor voting enhancement based on incorporating locally linear tangent spaces enables interpolation in regions of sparse sampling.

In this contribution, a comparison of artificial neural networks and data-driven computational mechanics is carried out based on nonlinear elasticity. General differences between machine learning, distance minimizing as well as entropy maximizing based data-driven methods concerning pre-processing, required computational effort and solution procedure are pointed out. In order to demonstrate the capabilities of the proposed methods, numerical examples with synthetically created datasets obtained by numerical material tests are executed.
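The distance-minimizing idea can be illustrated in a deliberately degenerate 1D setting: for a statically determinate material point, equilibrium fixes the stress, and the solver selects the (strain, stress) data pair closest to the constraint set in a weighted energy-like norm. Data set, modulus and loading below are hypothetical:

```python
import numpy as np

C = 2.0                                  # hypothetical scaling modulus
data = np.array([[0.0, 0.00],
                 [0.1, 0.25],
                 [0.2, 0.45]])           # measured (strain, stress) pairs
sig_eq = 0.4                             # stress dictated by equilibrium

def distance_to_constraint(pair):
    eps, sig = pair
    # The closest admissible state shares the strain, so only the
    # stress mismatch contributes in this degenerate case.
    return 0.5 / C * (sig - sig_eq) ** 2

best = min(data, key=distance_to_constraint)   # solution data pair
```

In statically indeterminate problems, the method instead alternates between a projection onto the data set and a compatible/equilibrated projection, which is the iterative scheme referred to above.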



4:30pm - 4:50pm

Achieving desired shapes through laser peen forming: a data-driven process planning approach

S. T. Sala1, F. E. Bock1, D. Pöltl2, B. Klusemann1,2, N. Huber1,3, N. Kashaev1

1Institute of Materials Mechanics, Helmholtz-Zentrum Hereon, Max-Planck Str. 1, 21502 Geesthacht, Germany.; 2Institute for Production Technology and Systems, Leuphana University of Lüneburg, Universitätsallee 1, 21335 Lüneburg, Germany.; 3Institute of Materials Physics and Technology, Hamburg University of Technology, Eißendorfer Straße 42, 21073 Hamburg, Germany.

The accurate bending of sheet metal structures is critical in a variety of industrial and scientific contexts, whether it is to modify existing components or achieve specific shapes. Laser peen forming (LPF) is an advanced process for sheet metal applications that involves using mechanical shock waves to deform a specific area to a desired radius of curvature. The degree of deformation achieved through LPF is affected by various experimental factors such as laser energy, the number of peening sequences, and specimen thickness. Therefore, it is important to understand the complex dependencies and select the appropriate LPF process parameters for forming or correction purposes. This study aims to develop a data-driven approach to predict the deformation obtained from LPF for different process parameters. The experimental data is used to train, validate, and test an artificial neural network (ANN). The trained ANN successfully predicted the deformation obtained from LPF. An innovative process planning approach is developed to demonstrate the usability of ANN predictions in achieving the desired deformation in a treated area. The effectiveness of this approach is demonstrated on three benchmark cases involving thin Ti-6Al-4V sheets: deformation in one direction, bi-directional deformation, and modification of an existing deformation in pre-bent specimens via LPF.



4:50pm - 5:10pm

Data-driven discovery of governing equations in Continuum Dislocation Dynamics

B. Heininger, G. Kar, T. Hochrainer

Technische Universität Graz, Austria

Crystal plasticity is the result of the motion of line-like crystal defects, the dislocations. While many traits of crystal plasticity may be described by phenomenological models, the description of the well-known patterning of dislocations, as well as the phenomenon of single-crystal work-hardening caused by dislocation multiplication during plastic deformation, calls for continuum models rooted more directly in the collective behavior of dislocations. A promising homogenization approach in this realm is the so-called Continuum Dislocation Dynamics (CDD) framework, which is based on conservation laws for tensorial dislocation density measures. In other words, the CDD theory can be considered as a continuum representation of dislocation networks through a hierarchy of tensorial dislocation variables [1].

In this work, we derive nonlinear expressions for the source terms required in CDD for modeling work-hardening, which is arguably the most salient feature of metal plasticity [2]. For that purpose we use modern data-driven discovery methods, like the Sparse Identification of Nonlinear Dynamics (SINDy), to describe the highly nonlinear dynamics of dislocation multiplication. The SINDy algorithm is capable of identifying the few predominant terms in the corresponding governing equations based on a model library of predefined, possibly high-dimensional spaces of nonlinear functions using sparse regression techniques [3].

The SINDy algorithm is applied on a large database of Discrete Dislocation Dynamics (DDD) simulations of the plastic deformation of FCC single crystalline copper under constant strain rate in 120 different loading directions with neglected cross-slip. The extraction of the underlying data of dynamic CDD tensor variables, consisting of density, curvature and velocity tensors of n-th order, from the DDD data is performed by a recently developed algorithm.
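The core of SINDy is sequentially thresholded least squares over a library of candidate functions. A minimal sketch on a toy system (not the CDD data of the talk) is:

```python
import numpy as np

def sindy(Theta, dXdt, threshold=0.1, n_iter=10):
    """Sequentially thresholded least squares: find sparse coefficients
    Xi such that dXdt ~ Theta @ Xi."""
    Xi = np.linalg.lstsq(Theta, dXdt, rcond=None)[0]
    for _ in range(n_iter):
        small = np.abs(Xi) < threshold
        Xi[small] = 0.0
        for k in range(Xi.shape[1]):          # refit each equation on the
            big = ~small[:, k]                # surviving library terms
            if big.any():
                Xi[big, k] = np.linalg.lstsq(Theta[:, big], dXdt[:, k],
                                             rcond=None)[0]
    return Xi

# Toy check: recover dx/dt = -2 x from the library [1, x, x^2]
x = np.linspace(-1.0, 1.0, 50)[:, None]
Theta = np.hstack([np.ones_like(x), x, x ** 2])
Xi = sindy(Theta, -2.0 * x)
```

In the application described above, the library would contain candidate nonlinear combinations of the CDD density, curvature and velocity variables, and `dXdt` would be their rates extracted from the DDD simulations.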

References

[1] Thomas Hochrainer, S. Sandfeld, M. Zaiser, and P. Gumbsch. Continuum dislocation dynamics: towards a physical theory of plasticity. Journal of the Mechanics and Physics of Solids, 63(1):167–178, 2014.

[2] Markus Sudmanns, Markus Stricker, Daniel Weygand, Thomas Hochrainer and Katrin Schulz. Dislocation multiplication by cross-slip and glissile reaction in a dislocation based continuum formulation of crystal plasticity. Journal of the Mechanics and Physics of Solids, 132:103695, 2019.

[3] Steven L Brunton, Joshua L Proctor, and J Nathan Kutz. Discovering governing equations from data by sparse identification of nonlinear dynamical systems. Proceedings of the national academy of sciences, 113(15):3932–3937, 2016.



5:10pm - 5:30pm

Hamiltonian Neural Network enhanced Markov-Chain Monte Carlo methods for subset simulations

D. Thaler1, F. Bamer1, S. L. N. Dhulipala2, M. D. Shields3

1Institute of General Mechanics, RWTH Aachen University, Aachen, Germany; 2Computational Mechanics and Materials, Idaho National Laboratory, Idaho Falls, USA; 3Department of Civil and Systems Engineering, Johns Hopkins University, Baltimore, USA

The crude Monte Carlo method delivers an unbiased estimate of the probability of failure. However, the accuracy of the approach, i.e., the variance of the estimate, depends on the number of evaluated samples. This number must be very large for estimations of a low probability of failure. If the evaluation of each sample is computationally expensive, the crude Monte Carlo simulation strategy is impracticable. To this end, subset simulations are used to reduce the required number of evaluations. Subset simulations require a Markov Chain Monte Carlo sampler, e.g., the random walk Metropolis-Hastings algorithm [1]. The algorithm, however, struggles with sampling in low-probability regions, especially if they are narrow. Therefore, advanced Markov Chain Monte Carlo simulations are preferred. In particular, the Hamiltonian Monte Carlo method explores the target distribution space rapidly. Driven by Hamiltonian dynamics, this sampler provides a non-random walk through the target distribution [2]. The combination of subset simulation and Hamiltonian Monte Carlo methods has shown promising results for reliability analysis [3]. However, gradient evaluations in the Hamiltonian Monte Carlo method are computationally expensive, especially when dealing with high-dimensional problems and evaluating long trajectories. Integrating Hamiltonian neural networks into Hamiltonian Monte Carlo simulations significantly speeds up the sampling [4]. The extension to latent Hamiltonian neural networks improves the expressivity by adding neurons to the last layer. Furthermore, enhancing the Hamiltonian Monte Carlo method with the No-U-Turn Sampler (NUTS) results in the efficient proposal of subsequent states [5]. During the exploration of low-probability regions, online error monitoring calls the standard NUTS sampler if the latent Hamiltonian neural network estimates are inaccurate.
Based on this recent enhancement, we provide an efficient sampling strategy for subset simulations using latent Hamiltonian neural networks to replace the gradient calculation and speed up the Hamiltonian Monte Carlo simulation.
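The gradient evaluations that dominate the cost of Hamiltonian Monte Carlo occur inside the leapfrog integrator, which is what a (latent) Hamiltonian neural network surrogate replaces. A sketch of a single leapfrog trajectory for a standard normal target (where grad U(q) = q; step size and path length are hypothetical) is:

```python
import numpy as np

def leapfrog(q, p, grad_U, eps=0.1, L=20):
    """One Hamiltonian Monte Carlo proposal: L leapfrog steps of size
    eps, followed by a momentum flip to keep the map reversible."""
    p = p - 0.5 * eps * grad_U(q)        # half step in momentum
    for _ in range(L - 1):
        q = q + eps * p                  # full step in position
        p = p - eps * grad_U(q)          # full step in momentum
    q = q + eps * p
    p = p - 0.5 * eps * grad_U(q)        # final half step
    return q, -p

grad_U = lambda q: q                     # U(q) = q^2 / 2 (standard normal)
q1, p1 = leapfrog(np.array([1.0]), np.array([0.5]), grad_U)
```

Because the leapfrog scheme is symplectic, the Hamiltonian H = U(q) + p^2/2 is nearly conserved along the trajectory, which keeps the Metropolis acceptance rate high.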

[1] W.K. Hastings. Biometrika 57 (1970) 97-109.

[2] M. Betancourt. arXiv preprint, arXiv:1701.02434 (2017).

[3] Z. Wang, M. Broccardo, J. Song. Struct. Saf. 76 (2019) 51-67.

[4] D. Thaler, S.L.N. Dhulipala, F. Bamer, B. Markert, M.D. Shields. Proc. Appl. Math. Mech. (2023);22:e202200188.

[5] S.L.N. Dhulipala, Y. Che, M.D. Shields. arXiv preprint, arXiv:2208.06120v1 (2022).



5:30pm - 5:50pm

Locking in physics informed neural network solutions of structural mechanics problems

L. Striefler, B. Oesterle

Hamburg University of Technology, Institute for Structural Analysis

Artificial intelligence (AI) applications have recently gained widespread attention due to their capabilities in the domains of speech and image recognition as well as natural language processing. This has drawn research attention towards AI, and artificial neural networks (ANNs) in particular, within numerous branches of applied mathematics and computational mechanics. The challenge of generating extensive training data for supervised learning of ANNs can be addressed by incorporating laws of physics into ANNs. Most so-called physics-informed neural network (PINN) frameworks [1] for structural mechanics applications incorporate the partial differential equations (PDEs) governing a specific problem within the loss function in the form of energy methods [2] or collocation methods [3].

Many structural mechanics problems are governed by stiff PDEs, resulting in locking effects which were already recognized in the early days of finite element analysis. Locking effects are present for all known discretization schemes, not only finite elements, independent of the polynomial order or smoothness of the shape functions. This applies both to Galerkin-type solution methods and to collocation methods based on the Euler-Lagrange equations of the specific boundary value problem [4].

In this contribution, we examine the impact of stiff PDEs or locking effects on the accuracy and efficiency of PINN-based numerical solutions of problems in structural mechanics. First investigations on the use of PINNs for solving shear deformable beam and plate problems are presented. Different types of beam and plate formulations, as well as different types of collocation-based loss functions are evaluated and compared with respect to accuracy and efficiency.
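The collocation idea behind such loss functions can be illustrated without a neural network: with a polynomial ansatz, minimizing the squared PDE and boundary residuals at collocation points reduces to a linear least-squares problem. The beam problem below (simply supported, uniform load, EI = q = 1) is purely illustrative and is not taken from the talk.

```python
import numpy as np
from math import factorial

def dmono(x, k, d):
    """d-th derivative of the monomial x**k, evaluated at scalar x."""
    if k < d:
        return 0.0
    return factorial(k) / factorial(k - d) * x ** (k - d)

# simply supported beam: EI * u'''' = q on (0,1), with u = u'' = 0 at both ends
EI, q, deg = 1.0, 1.0, 6
xc = np.linspace(0.05, 0.95, 19)            # interior collocation points
rows, rhs = [], []
for x in xc:                                 # PDE residual equations
    rows.append([EI * dmono(x, k, 4) for k in range(deg + 1)])
    rhs.append(q)
for xb in (0.0, 1.0):                        # boundary conditions on u and u''
    for d in (0, 2):
        rows.append([dmono(xb, k, d) for k in range(deg + 1)])
        rhs.append(0.0)

# least-squares minimization of the collocation residuals
A = np.array(rows, dtype=float)
c, *_ = np.linalg.lstsq(A, np.array(rhs), rcond=None)
u_mid = sum(c[k] * 0.5 ** k for k in range(deg + 1))
# exact midspan deflection of this beam is 5/384
```

A PINN replaces the polynomial ansatz by a network, which turns the same residual minimization into a nonconvex optimization; the locking issues discussed above concern the residuals themselves, not the ansatz.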

REFERENCES

[1] M. Raissi, P. Perdikaris, G.E. Karniadakis. Physics-informed neural networks: A deep learning framework for solving forward and inverse problems involving nonlinear partial differential equations. Journal of Computational Physics, Vol. 378, pp. 686-707. 2019

[2] E. Samaniego, C. Anitescu, S. Goswami, V.M. Nguyen-Thanh, H. Guo, K. Hamdia, X. Zhuang, T. Rabczuk. An energy approach to the solution of partial differential equations in computational mechanics via machine learning: Concepts, implementation and applications. Computer Methods in Applied Mechanics and Engineering, Vol. 362, 112790. 2020

[3] H. Guo, X. Zhuang, T. Rabczuk. A Deep Collocation Method for the Bending Analysis of Kirchhoff Plate. Computers Materials & Continua. Vol. 59(2), pp. 433-456. 2019

[4] B. Oesterle, S. Bieber, R. Sachse, E. Ramm, M. Bischoff. Intrinsically locking-free formulations for isogeometric beam, plate and shell analysis. Proc. Appl. Math. Mech., Vol. 18, e20180039. 2018

 

Date: Wednesday, 13/Sept/2023
9:00am - 10:40am MS16-1: Modeling, simulation and quantification of polymorphic uncertainty in real world engineering problems
Location: EI9
Session Chair: F. Niklas Schietzold
Session Chair: Selina Zschocke
 
9:00am - 9:20am

The consideration of aleatory and epistemic uncertainties in the data assimilation by using a multilayered uncertainty space

M. Drieschner, C. Herrmann, Y. Petryna

Technische Universität Berlin, Chair of Structural Mechanics, Gustav-Meyer-Allee 25, 13355 Berlin, Germany

This study has been performed within the research project MuScaBlaDes "Multi scale failure analysis with polymorphic uncertainties for optimal design of rotor blades", which is part of the DFG Priority Programme (SPP 1886) "Polymorphic Uncertainty Modelling for the Numerical Design of Structures" started in 2016.

The modeling of real engineering structures is a tough challenge and is always accompanied by uncertainties. Geometry, material and all boundary conditions should be quantified as accurately as possible. The quality of the numerical prediction of the system behavior and of desired system outcomes depends on the underlying model. Real measurements on the structure provide the possibility to assess and verify the numerics. In general, discrepancies exist between the predicted and the measured values. Within the data assimilation framework, it is possible to consider both for the estimation of the system state. Additionally, unknown parameters can be estimated at the same time in nonlinear problems by using the ensemble Kalman filter (EnKF).

In this contribution, the EnKF is extended by parameters which influence the system state and which are subject to aleatory or epistemic uncertainty. These parameters first have to be quantified by suitable uncertainty models and then integrated into the numerical simulation. Stochastic, interval and fuzzy variables are used, leading to a multilayered uncertainty space and a nested numerical simulation in which the EnKF is embedded. Besides an academic example, the practical applicability is demonstrated on real engineering structures with synthetic as well as real measurement data.
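A minimal sketch of the stochastic EnKF analysis step with perturbed observations may help to fix ideas. This is a generic textbook variant, not the authors' implementation; as described above, the state vector could be augmented with the uncertain parameters to estimate them jointly.

```python
import numpy as np

def enkf_update(ensemble, obs, obs_op, obs_cov, rng=None):
    """Stochastic EnKF analysis step with perturbed observations.

    ensemble: (n_members, n_state) array of prior states (possibly with
    appended parameters for joint state/parameter estimation).
    obs_op maps one state to observation space; obs_cov is the
    measurement-error covariance. All names here are illustrative.
    """
    rng = rng or np.random.default_rng()
    X = ensemble
    Y = np.array([obs_op(x) for x in X])         # predicted observations
    Xa, Ya = X - X.mean(0), Y - Y.mean(0)
    n = len(X)
    Pxy = Xa.T @ Ya / (n - 1)                    # state-obs cross-covariance
    Pyy = Ya.T @ Ya / (n - 1) + obs_cov          # innovation covariance
    K = Pxy @ np.linalg.inv(Pyy)                 # Kalman gain
    # perturbed observations, one draw per ensemble member
    D = obs + rng.multivariate_normal(np.zeros(len(obs)), obs_cov, size=n)
    return X + (D - Y) @ K.T

# usage: estimate a scalar quantity from repeated noisy direct observations
rng = np.random.default_rng(1)
prior = rng.normal(0.0, 2.0, size=(200, 1))
post = prior
for _ in range(5):
    post = enkf_update(post, np.array([1.0]), lambda x: x,
                       obs_cov=np.array([[0.1]]), rng=rng)
```

In the multilayered setting of the abstract, such an update would run inside the innermost loop, with interval and fuzzy layers wrapped around it.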



9:20am - 9:40am

Surrogate assisted data-driven multiscale analysis considering polymorphic uncertain material properties

S. Zschocke, W. Graf, M. Kaliske

Institute for Structural Analysis, Technische Universität Dresden, Germany

Composite materials, such as (reinforced) concrete, which are designed by combining different constituents to obtain materials with beneficial properties for specific applications, are involved in many current research topics. The combination of different materials yields heterogeneities, which must be taken into account in the numerical simulation in order to obtain realistic results. Traditionally, the FE2 method, based on the concept of numerical homogenization, is used to obtain the macro-structural constitutive response at each integration point through a nested finite element analysis, whereby the meso-structural behavior is characterized by representative volume elements (RVE).

The main drawback of this method is the large computational effort, because the representative volume elements, which are usually very complex, need to be evaluated at every material point. An approach to reduce the computational effort is the concept of decoupled numerical homogenization. To this end, a database representing the macroscopic material behavior is derived by solving the boundary value problem of the considered RVE for different applied boundary conditions. Subsequently, the approach of data-driven computational mechanics is utilized to obtain an approximate solution of the boundary value problem on the macroscale with direct reference to stress-strain data obtained from mesoscale evaluations. In order to obtain accurate results from data-driven analyses, a sufficient data set density with respect to the problem at hand is essential.

With respect to the definition of the concrete mesostructure, aleatoric uncertainties are introduced by natural variability especially in the material behavior. Additional epistemic uncertainties are caused by manufacturing tolerances and an insufficient amount of measurement data. A combined consideration is realized by polymorphic uncertainty models. The acquisition of data sets consisting of uncertain macroscopic stress-strain states leads to a large number of required evaluations of the considered RVEs and correspondingly high computational effort, which is addressed by incorporating surrogate models for uncertainty quantification. The large number of uncertainty propagations that must be performed for data set generation is the main challenge in creating the surrogates. Accordingly, overhead and training time caused by surrogate creation need to be as low as possible in order to avoid impracticably high computational cost. In this contribution, a polynomial chaos assisted data set acquisition approach enabling the efficient consideration of polymorphic uncertainty is presented and applied in the context of data-driven computational homogenization.
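The polynomial chaos surrogate idea can be sketched in a few lines: an expensive model response of a standard normal input is fitted with a probabilists' Hermite basis, and the cheap surrogate is then sampled massively. The model function below is a hypothetical stand-in for an RVE evaluation, not the authors' model.

```python
import numpy as np
from numpy.polynomial.hermite_e import hermevander

# Fit a polynomial chaos expansion (probabilists' Hermite basis, degree <= 5)
# to a response y = f(xi) with xi ~ N(0,1), then sample the cheap surrogate.
f = lambda xi: np.exp(0.3 * xi)          # hypothetical "expensive" model
rng = np.random.default_rng(2)
xi_train = rng.standard_normal(200)      # few expensive training evaluations
V = hermevander(xi_train, 5)             # Hermite basis evaluations
coef, *_ = np.linalg.lstsq(V, f(xi_train), rcond=None)

def surrogate(xi):
    """Cheap PCE evaluation replacing the expensive model."""
    return hermevander(np.atleast_1d(xi), 5) @ coef

xi_big = rng.standard_normal(100_000)    # mass sampling via the surrogate
mean_est = surrogate(xi_big).mean()
```

In the abstract's setting, each training evaluation would be an RVE solve, and the surrogate would feed the data-driven macroscale analysis; this sketch only shows the fit-then-sample pattern.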



9:40am - 10:00am

Sensitivity analysis in the presence of polymorphic uncertainties based on tensor surrogates

D. Moser

IGPM - RWTH

We will explore sensitivity analysis for mechanical engineering problems in the presence of polymorphic uncertainty. Polymorphic uncertainty quantification allows for the incorporation of different sources of uncertainty, such as epistemic and aleatory, which have varying levels of complexity and dependence.

A measure of the distance between the most common polymorphic uncertainty models will be at the core of the computation of sensitivity indices.

We will discuss how sensitivity analysis can aid in understanding the effects of input uncertainties on system performance and inform further polymorphic uncertainty quantification analysis. Additionally, we will cover methods for efficiently computing sensitivity measures for high-dimensional systems based on tensor surrogates.



10:00am - 10:20am

A computational sensitivity analysis tool for investigations of structural analysis models of real-world engineering problems

M. Fußeder, K.-U. Bletzinger

Chair of Structural Analysis, Technical University of Munich, Germany

The method of influence functions is a well-known engineering tool in structural analysis to investigate the consequences of load variations on deflections and stress resultants. Based on its strong relationship with adjoint sensitivity analysis [1], the traditional method of influence functions can be generalized into an engineering tool for sensitivity analysis [2]. The aim of our contribution is to give insights into these methodological extensions and to demonstrate their added value.

The traditional influence function approach can be seen as a work balance based on Betti’s theorem. In our contribution we show how that work expression can be extended for sensitivity analysis with respect to various parameters. We discuss the significance of the resulting mechanically interpretable sensitivity analysis and its limitations. In that regard, we also specify how the graphical analysis procedure, for which the traditional influence function technique is well known, can be generalized. The intention is to use these “sensitivity maps” to identify the positions of extreme influences and the individual contributions of the partitions to the final sensitivity and its spatial distribution. In this way, structural analysis models of real-world engineering problems can be systematically explored, and important model parameters to be considered in uncertainty quantification can be identified. Hence, our method has the potential to provide valuable support for preliminary investigations of structural models as a basis of polymorphic uncertainty analysis.
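The link between influence functions and adjoint sensitivity analysis can be shown on a minimal numeric sketch (a hypothetical 3-DOF linear system, not the authors' model): for a response J = g'u with K(p)u = f, a single adjoint solve K'lambda = g yields dJ/dp = -lambda' (dK/dp) u, and lambda plays the role of the influence function.

```python
import numpy as np

def adjoint_sensitivity(p):
    """dJ/dp for J = g @ u, K(p) u = f, via one adjoint solve."""
    K = np.array([[2 + p, -1, 0], [-1, 2, -1], [0, -1, 2]], float)
    f = np.array([1.0, 0.0, 0.0])
    g = np.array([0.0, 0.0, 1.0])       # response: third displacement
    u = np.linalg.solve(K, f)           # primal (state) solution
    lam = np.linalg.solve(K.T, g)       # adjoint (influence) solution
    dK = np.zeros((3, 3)); dK[0, 0] = 1.0  # dK/dp for this toy system
    return -lam @ dK @ u

# finite-difference verification of the adjoint result
def J(p):
    K = np.array([[2 + p, -1, 0], [-1, 2, -1], [0, -1, 2]], float)
    return np.linalg.solve(K, np.array([1.0, 0.0, 0.0]))[2]

p0, eps = 0.5, 1e-6
fd = (J(p0 + eps) - J(p0 - eps)) / (2 * eps)
adj = adjoint_sensitivity(p0)
```

The adjoint solution is computed once and reused for any parameter, which is exactly what makes the "sensitivity map" viewpoint attractive for exploring many parameters.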

References

[1] A. Belegundu, Interpreting Adjoint Equations in Structural Optimization, Journal of Structural Engineering 112 (1986) 1971–1976. https://doi.org/10.1061/(ASCE)0733-9445(1986)112:8(1971).

[2] M. Fußeder, R. Wüchner, K.-U. Bletzinger, Towards a computational engineering tool for structural sensitivity analysis based on the method of influence functions, Engineering Structures 265 (2022) 114402. https://doi.org/10.1016/j.engstruct.2022.114402.

 
1:40pm - 3:00pm MS16-2: Modeling, simulation and quantification of polymorphic uncertainty in real world engineering problems
Location: EI9
Session Chair: Selina Zschocke
Session Chair: F. Niklas Schietzold
 
1:40pm - 2:00pm

Process steering of additive manufacturing processes under polymorphic uncertainty

A. Schmidt1, T. Lahmer2

1Materials Research and Testing Institute at the Bauhaus-Universität Weimar, Germany; 2Institute of Structural Mechanics, Bauhaus-Universität Weimar, Germany

During the last decade, additive manufacturing techniques have gained extensive attention. Especially extrusion-based techniques utilizing plastic, metal or even cement-based materials are widely used. Numerical simulation of additive manufacturing processes can be used to gain a more fundamental understanding of the relations between the process and material parameters on the one hand and the properties of the printed product on the other.

Hence, the dependencies of the final structural properties on different influencing factors can be identified. Additionally, the uncertain nature of process and material parameters can be taken into account to reliably control and finally optimize the printing process. Therefore, numerical models of printing processes demand geometric flexibility while being computationally efficient.

An efficient numerical simulation of an extrusion-based printing process of concrete, applying a voxel-based finite element method, is used in this study. As the printing process progresses, a previously generated FE mesh is activated step by step using a pseudo-density approach. Additionally, all material parameters vary spatially and temporally due to the time dependency of the curing process. In order to estimate material and process parameters realistically, a polymorphic uncertainty approach is chosen, incorporating interval-probability-based random processes and fields.
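The pseudo-density activation can be sketched on a 1D stand-in: an axial bar "printed" one element per step, where not-yet-printed elements carry a tiny stiffness factor in a mesh assembled once up front. All names and values are illustrative, not the paper's voxel model.

```python
import numpy as np

def assemble(n_el, active, k=1.0, eps_rho=1e-6):
    """Assemble a 1D bar stiffness; inactive elements get factor eps_rho."""
    K = np.zeros((n_el + 1, n_el + 1))
    for e in range(n_el):
        rho = 1.0 if active[e] else eps_rho
        ke = rho * k * np.array([[1, -1], [-1, 1]], float)
        K[e:e + 2, e:e + 2] += ke
    return K

n_el = 10
active = np.zeros(n_el, bool)
tip = []
for step in range(n_el):                     # "print" one element per step
    active[step] = True
    K = assemble(n_el, active)
    f = np.zeros(n_el + 1)
    f[1:step + 2] -= 0.1                     # self-weight on printed nodes
    u = np.linalg.solve(K[1:, 1:], f[1:])    # node 0 is clamped at the bed
    tip.append(u[step])                      # displacement at current top node
```

The attraction of the pseudo-density trick is that the mesh and solver data structures never change during the print; only the element scaling factors (and, in the paper, the time-dependent curing parameters) evolve.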

By having a numerical model (at least at some level of abstraction) and a comprehensive description of the relevant uncertainties, the probabilities of occurrence of the failure mechanisms (strength-based stability, geometric deviations, layer interface, and buckling) can be estimated. In an optimal steering of the process, failures should be minimized. However, reducing the failure probability of one mechanism may increase those of the other mechanisms; e.g., shape stability and layer interface might be in conflict.

In this study, process steering is rationalized using a reliability-based optimization approach taking into account the uncertain nature of the system’s material and process parameters. In the light of polymorphic uncertainty, tailored surrogate model strategies are investigated to boost efficiency for this numerically demanding task.



2:00pm - 2:20pm

Two propagation concepts for polymorphic uncertain processes – simulation- and uncertainty quantification-based

F. N. Schietzold, W. Graf, M. Kaliske

Institute for Structural Analysis, Technische Universität Dresden, Germany

The combination of both types of uncertainty, aleatoric and epistemic, in polymorphic uncertainty models is a common and fundamental step towards a realistic description of system parameters (geometric definitions, loads, boundary conditions and material properties) in structural safety assessment. Such polymorphic uncertainty models are defined by combining basic uncertainty models, such as random variates, interval sets, fuzzy sets etc., where the two types of uncertainty are accounted for in different basic models. Documented combined models include p-boxes, fuzzy-probability-based random variates etc.

In addition to the consideration of the two types of uncertainty, functional dependencies of uncertain quantities are observed in real world problems. Functional dependencies are due to temporal variation, referred to as uncertain processes, or to spatial variation, referred to as uncertain fields.

This contribution focuses on temporally dependent polymorphic uncertainty in the safety assessment of systems, structural systems in this application case. This requires uncertainty quantification, i.e., estimating the uncertain system responses (uncertain output) of a structural analysis (basic solution) based on the uncertain structural parameters (uncertain input). When considering polymorphic uncertain processes, a key challenge arises from the coupling and propagation of the temporal dependency in the uncertain input and in the basic solution. For this propagation, two concepts are presented.

The first concept propagates temporal dependency through the uncertainty analysis. Here, each single basic solution is not necessarily time dependent. Instead, time dependency is achieved by sampling from a time-dependent uncertain input parameter in the uncertainty analysis, and each sample is applied in a single computation of the basic solution. Finally, the chaining of such basic solutions and the interdependence between them leads to time-dependent output of the total uncertainty quantification.

The second concept propagates time dependency within the basic solution. Here, each sample of the uncertainty analysis is a full realization of a time-dependent function, in particular a full deterministic process. The basic solution in this concept is required to be time dependent, and the realization of the process is the deterministic input defining the parameter’s evolution in time.

In this contribution, both concepts are presented, and the challenges and advantages of their implementations are outlined. Moreover, general problems of polymorphic uncertainty models are pointed out based on the concepts shown, and solutions for their unbiased modeling and re-sampling are introduced. As numerical examples, application cases in the simulation of the life-cycle (production process and structural operation) of compressed wood components are shown, where both concepts are applied in multiple simulation phases.



2:20pm - 2:40pm

Human-induced vibrations of footbridges: modeling with polymorphic uncertainties

M. Fina, M. Schweizer, W. Wagner, S. Freitag

Karlsruhe Institute of Technology, Germany

The development of new materials allows the span length of footbridges constructed as lightweight structures to be increased. However, slender footbridges are more sensitive to human-induced vibrations, which can reduce the comfort for pedestrians significantly. In addition, the eigenfrequencies of slender footbridges are often in the range of the step frequency, so resonance has to be avoided to ensure structural safety. The gait of a pedestrian, and thus the step frequency, is very difficult to quantify in a dynamic load model. It depends on many factors; e.g., body height and weight, gender, age, psychological aspects and even the economic and social status of a person have an influence. Many of these factors are subject to a lack of knowledge when quantifying them in a load model. Therefore, pedestrian load models are very simplified in current design guidelines. An adequate quantification of aleatoric and epistemic uncertainties is not yet sufficiently addressed in the modeling of human-induced vibrations of footbridges.

In this contribution, uncertain parameters for a pedestrian load model are quantified with polymorphic uncertainty models based on available data. Then, dynamic structural analyses are performed with human-induced vibrations, which are approximated by surrogate models. The results are fuzzy stochastic processes of the structural accelerations, velocities and displacements. In current design codes, the comfort levels are defined with respect to acceptable accelerations. Due to the subjective perception of structural accelerations, the comfort levels are also defined with uncertainty models. Associated results are presented for a real-world footbridge using a 3D finite element model.



2:40pm - 3:00pm

Flexibility and uncertainty quantification using the solution space method for crashworthiness

P. Ascia, F. Duddeck

Technische Universität München, Germany

In the present landscape, researchers quantify the natural variability or lack of knowledge of a system in order to counteract its effects. What if, instead of trying to reduce this uncertainty, we try to exploit it during development? In this work we propose how to use knowledge of this uncertainty to increase the design flexibility of the sub-systems of a new product. Imagine the development process being supported by the solution space method and its corridors on the performance of each sub-system. From a certain point of view, these corridors quantify an interval-type epistemic uncertainty of the development process. The method, however, allows the intervals to be changed while maintaining the same overall target performance. We exploit this flexibility to find out for which parts of the new product it is worth investing to reduce the variability, and for which ones a larger interval can be allowed. A larger interval yields greater flexibility in the design, and hence less development effort. The method we propose balances, within the development process, reducing the variability of certain sub-systems against increasing the design flexibility of others.

 
3:30pm - 4:30pm MS08: Numerical simulations of flows in porous media
Location: EI9
Session Chair: Marco De Paoli
 
3:30pm - 3:50pm

Towards a simulation of repeated wave-induced liquefaction processes

H. Keese1, J. Rothschink2, O. Stelzer2, T. Nagel1,3

1TU Bergakademie Freiberg, Germany; 2Federal Waterways Engineering and Research Institute, Germany; 3Freiberg Center for Water Research (ZeWaF), Germany

A riverbed is a porous medium consisting of a granular skeleton and the pore fluid, which itself comprises water and air. In quasi-saturated conditions, the degree of water saturation ranges from 85% to 99%. Hydrodynamic boundary conditions affected by, e.g., ship passage influence the hydro-mechanical state of the riverbed. When the intergranular contact forces disappear due to an increase in (excess) pore water pressure, liquefaction occurs; at this point the soil behaves like a fluid instead of a solid. This can lead, for example, to sediment movement, destabilization of bank protection measures, washed-out submarine pipelines or damaged coastal structures. Modeling of this process requires strong hydro-mechanical coupling and the ability to represent large deformations. The FEniCS framework was used to solve the underlying partial differential equations in a Lagrangian setting using the finite element method. In addition to the process description, the underlying material model for the soil particle phase must be able to correctly represent the transition from solid-like to fluid-like behavior. To date, no approach available in geotechnical software is able to satisfactorily represent the entire process for all associated phases. First steps in this direction will be shown by coupling FEniCS with MFront, which offers the possibility to implement different material models and to incorporate them into different programs via a generic interface. Through this procedure, the use of different software packages and even different numerical methods becomes practically feasible. First steps towards the identification of a material model which can represent the bidirectional phase change during liquefaction will be shown. Experimental data sets generated in a soil column in combination with an alternating flow apparatus serve as a basis for comparison.



3:50pm - 4:10pm

A matrix-free discontinuous Galerkin solver for unsteady Darcy flow in anisotropic porous media

B. Z. Temür1, N. Fehn1, P. Munch2, M. Kronbichler2, W. A. Wall1

1Technical University of Munich, Garching, Germany; 2University of Augsburg, Augsburg, Germany

Flow in porous media can be described by the Darcy model in a wide range of applications, from soil mechanics to biomechanics. Many relevant applications manifest large-scale problems that require transient simulations and finely resolved discretizations. With currently available algorithmic approaches, this can lead to impractically high computing costs or force certain effects to be excluded. For example, current poroelastic models of the human lungs generally solve the steady-state Darcy equation, leaving transient effects unstudied. To address this matter, we propose a new solver for the unsteady Darcy flow problem in anisotropic porous media with spatially and temporally variable porosity and permeability fields.

We use the discontinuous Galerkin method with L2-conforming tensor-product elements for the spatial discretization, and the BDF method for the temporal discretization of the Darcy flow equations. We solve the resulting coupled pressure-velocity system by matrix-free implementation techniques for operator evaluation in Krylov solvers as well as preconditioners. To ensure fast convergence of the solvers, we identify spectrally equivalent preconditioners based on the so-called block preconditioning technique with approximate inverses of the velocity-velocity block and of the Schur complement of the coupled system. For the velocity-velocity block, we design a matrix-free cellwise inverse mass operator with variable coefficients. To minimize arithmetic work, we exploit the tensor-product structure of shape functions using a technique known as sum-factorization. On the other hand, a hybrid multigrid preconditioner for the Poisson problem with variable coefficients approximates the inverse of the Schur complement. We expect these methods to lay a new foundation for high-performance numerical simulations of general Darcy flow.
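The sum-factorization technique mentioned above can be sketched in a few lines of illustrative NumPy (not the ExaDG implementation): a 2D tensor-product operator A kron B is applied through two small matrix products instead of ever forming the Kronecker matrix, reducing both memory and arithmetic cost.

```python
import numpy as np

def apply_tensor_product(A, B, x):
    """Apply (A kron B) to a vector x of length n*n without forming kron.

    With the row-major unrolling X[i, j] = x[i*n + j], the identity
    (A kron B) x = vec(A @ X @ B.T) holds; two n-by-n matmuls replace
    one (n^2)-by-(n^2) matvec.
    """
    n = A.shape[0]
    X = x.reshape(n, n)
    return (A @ X @ B.T).ravel()

rng = np.random.default_rng(3)
n = 5
A = rng.standard_normal((n, n))
B = rng.standard_normal((n, n))
x = rng.standard_normal(n * n)

y_fast = apply_tensor_product(A, B, x)
y_ref = np.kron(A, B) @ x        # reference: explicit Kronecker product
```

For hexahedral elements the same factorization is applied dimension by dimension, which is what makes matrix-free operator evaluation in high-order DG methods cheap.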

All methods and applications are implemented in the open source software projects ExaDG and deal.II.



4:10pm - 4:30pm

Pore-scale simulation of convective mixing in confined media

M. De Paoli1,2, C. J. Howland1, R. Verzicco1,3,4, D. Lohse1,5

1Physics of Fluids Group, University of Twente (Enschede, the Netherlands); 2Institute of Fluid Mechanics and Heat Transfer, TU Wien (Vienna, Austria); 3Gran Sasso Science Institute (L’Aquila, Italy); 4Dipartimento di Ingegneria Industriale, University of Rome “Tor Vergata” (Rome, Italy); 5MPI for Dynamics and Self-Organization (Göttingen, Germany)

We use numerical simulations to investigate the mixing dynamics of a convection-driven porous media flow. We consider a fully saturated, homogeneous and isotropic porous medium, in which the flow is driven by density differences induced by the presence of a solute. In particular, the fluid density is a linear function of the solute concentration. The configuration considered is representative of geological applications in which a solute is transported and dissolves as a result of a density-driven flow, such as carbon sequestration in saline formations or water contamination processes. The mixing mechanism is made complex by the presence of rocks (solid objects), which represent obstacles in the flow and cause the solute to spread further, due to the continual change of the fluid path. Making accurate predictions of the dynamics of this time-dependent system is crucial to provide reliable estimates of the evolution of subsurface flows, and to determine the controlling parameters, e.g., the injection rate of a current of carbon dioxide or the spreading of a pollutant in underground formations. To model this process, we consider here an unstable and time-dependent configuration known as the Rayleigh-Taylor instability, where a heavy fluid (saturated with solute) initially sits on top of a lighter one (without solute). The fluids are fully miscible, and the mixing process is characterised by the interplay of diffusion and advection: initially, diffusion controls the flow and is responsible for the initial mixing of solute. At a later stage, the action of gravity promotes the formation of instabilities, and efficient fluid mixing takes place over the entire domain. The competition between buoyancy and diffusion is measured by the Rayleigh-Darcy number (Ra), whose value controls the entire dynamics of the flow. We analyse the time-dependent evolution of this system at high Ra, and we quantify the effect of the Rayleigh-Darcy number on solute transport and mixing.
Simulations are performed with a highly parallelized finite difference (FD) code coupled with an immersed boundary method (IBM) to account for the presence of the solid obstacles. We compare the results against experimental measurements in bead packs. The results are analysed at two different flow scales: i) at the Darcy scale, where the buoyancy-driven plumes control the flow dynamics, and ii) at the pore scale, where diffusion promotes inter-pore solute mixing. Numerical and experimental measurements are used to design simple physical models describing the mixing state and the mixing length of the system.
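For orientation, the Rayleigh-Darcy number can be evaluated from physical parameters; one common form is Ra = g * drho * k * H / (phi * mu * D). The parameter values below are purely illustrative and are not taken from the talk.

```python
# Rayleigh-Darcy number: ratio of buoyant (convective) to diffusive
# transport in a Darcy medium. All values are illustrative placeholders.
g = 9.81          # gravitational acceleration, m/s^2
d_rho = 10.0      # density difference between the two fluids, kg/m^3
k = 1e-11         # permeability, m^2
H = 100.0         # domain height, m
phi = 0.3         # porosity
mu = 1e-3         # dynamic viscosity, Pa s
D = 1e-9          # solute diffusivity, m^2/s

Ra = g * d_rho * k * H / (phi * mu * D)
```

Values of Ra of this magnitude (10^5 and above) place the flow deep in the convection-dominated regime discussed in the abstract.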