COMPLAS 2021 is the 16th conference of the COMPLAS Series.
The COMPLAS conferences started in 1987 and have since become established events in the field of computational plasticity and related topics. The first fifteen conferences in the COMPLAS series were all held in Barcelona (Spain) and were very successful from the scientific, engineering and social points of view. We intend to make the 16th edition of the conference another successful COMPLAS meeting.
The objectives of COMPLAS 2021 are to address both the theoretical bases for the solution of nonlinear solid mechanics problems, involving plasticity and other material nonlinearities, and the numerical algorithms necessary for efficient and robust computer implementation. COMPLAS 2021 aims to act as a forum for practitioners in the nonlinear structural mechanics field to discuss recent advances and identify future research directions.
A. Montanino, C. Olivieri, D. Gregorio, A. Iannuzzo
eccomas2022.
Abstract
Nowadays, there is rising interest in the development of fast and robust tools to predict the consequences of settlements or loading changes in unreinforced masonry buildings, since such buildings constitute a large part of the world's architectural heritage. Current tools, based on the Finite Element Method or on the Discrete Element Method, are computationally cumbersome, on the one hand due to difficulties in dealing with unilateral materials, and on the other due to the need to formulate the problem as an explicit dynamics problem. The methods proposed here are based on the minimization of two different functionals, the Total Potential Energy and the Total Complementary Energy, which allow the stress and strain distributions developed under given loads and given boundary settlements to be determined through a minimization problem that requires a significantly lower computational cost and no material parameters, especially when the material is assumed rigid. After illustrating the main characteristics of the two methods, we apply them to a case study and describe and discuss the results.
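In schematic terms (our notation, not taken from the paper), the two minimization problems are dual to each other: the Total Potential Energy is minimised over kinematically admissible displacements, and the Total Complementary Energy over statically admissible, negative-semidefinite (no-tension) stress fields:

```latex
% Total Potential Energy over admissible displacements u
\min_{u \in \mathcal{K}} \; \Pi(u)
  = \int_\Omega \varphi\big(\varepsilon(u)\big)\,dx
  - \int_\Omega b \cdot u \,dx
  - \int_{\partial_N\Omega} t \cdot u \,ds

% Total Complementary Energy over admissible stresses \sigma
\min_{\sigma \in \mathcal{S}} \; \Pi^{c}(\sigma)
  = \int_\Omega \varphi^{*}(\sigma)\,dx
  - \int_{\partial_D\Omega} (\sigma n) \cdot \bar{u} \,ds,
\qquad
\mathcal{S} = \{\sigma :\ \operatorname{div}\sigma + b = 0,\ \sigma \preceq 0\}
```

Under the rigid no-tension assumption the internal-energy terms \(\varphi\) and \(\varphi^{*}\) vanish, which is why the resulting minimizations need no material parameters.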
This contribution is the proceedings paper of a presentation given in pairs, taking different viewpoints on the robustness of discretizations for poroelastic problems. These presentations are organised by the Young Researchers Committee to continue the tradition of fruitful interaction between applied mathematics and computational engineering. The engineering part of this contribution highlights key aspects of the theoretical framework and comments on the robustness of common discretizations. The mathematical part shows that accurate approximation of the total stress tensor as well as the Darcy velocity is crucial to obtain reliability and robustness.
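For context (standard notation, not taken from this contribution), the quasi-static Biot equations of poroelasticity in two-field form read

```latex
-\operatorname{div}\big(\underbrace{2\mu\,\varepsilon(u) + \lambda(\operatorname{div} u)\,I - \alpha p\,I}_{\text{total stress } \sigma}\big) = f,
\qquad
\partial_t\big(s_0\, p + \alpha \operatorname{div} u\big)
  + \operatorname{div}\big(\underbrace{-\kappa \nabla p}_{\text{Darcy velocity } q}\big) = g,
```

where \(u\) is the displacement, \(p\) the pore pressure, \(\mu,\lambda\) the Lamé parameters, \(\alpha\) the Biot coefficient, \(s_0\) the storage coefficient, and \(\kappa\) the permeability. The total stress \(\sigma\) and the Darcy velocity \(q\) marked above are exactly the quantities whose accurate approximation the mathematical part identifies as crucial for robustness.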
This work assesses the capability of the partially averaged Navier-Stokes (PANS) method to accurately reproduce self-sustained shock oscillations, also known as transonic buffet, occurring on supercritical aerofoils at high Reynolds numbers. Attention is paid to the comparison with unsteady Reynolds-averaged Navier-Stokes (URANS) results to show the benefits of PANS in resolving flow unsteadiness on affordable CFD grids. The role of the mesh metrics in the formulation of the PANS model is emphasized, as well as their relation to the spatiotemporal discretisation used for the numerical simulations. The aim is to extend the use of PANS to flow cases involving shock-wave boundary layer interactions, to obtain accurate predictions without the need for very expensive computations.
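In the standard PANS formulation (our summary, not this paper's derivation), the resolution-control parameters are the ratios of unresolved to total turbulent kinetic energy and dissipation, and the mesh metrics enter through a grid-based estimate of the achievable \(f_k\):

```latex
f_k = \frac{k_u}{k},
\qquad
f_\varepsilon = \frac{\varepsilon_u}{\varepsilon},
\qquad
f_k \gtrsim \frac{1}{\sqrt{c_\mu}}
  \left(\frac{\Delta}{\Lambda}\right)^{2/3},
\quad \Lambda = \frac{k^{3/2}}{\varepsilon},
```

with \(\Delta\) a local grid scale and \(\Lambda\) the turbulence length scale. This grid dependence is the link between the mesh metrics and the spatiotemporal discretisation that the abstract emphasizes.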
Additive manufacturing (AM) is an advanced method of manufacturing complex parts layer by layer until the required design is achieved. Laser powder bed fusion (L-PBF) is used to produce parts with high resolution because of its low layer thickness. L-PBF is based on laser beam and material interaction, in which the powder material is melted and then solidified. This occurs in a short time frame, of the order of 0.02 seconds, which makes the whole process challenging to study in real time. Studies have shown the development of numerical methods and the use of simulation software to understand the laser beam and material interaction. This phenomenon is key to understanding the material behavior under melting and the mechanical properties of the part produced by the L-PBF process, as it is directly linked with the solidification of the melted powder material. A detailed study of the laser beam and material interaction is needed at the microscale and mesoscale levels, as it provides a better understanding and helps in the development of a given material for the L-PBF process. This review provides a comprehensive understanding of the background for the use of simulation in AM and the different simulation scales of the features of interest. The main conclusion from this review is the need to develop a methodology to use simulation at the micro and mesoscale levels to understand the laser beam and material interaction and to improve the efficiency of the L-PBF process using this data.
Additive manufacturing (AM) has undergone different phases of technological change, from being a mere manufacturing method for consumer goods, prototyping, and tooling to industrial series production of functional end-use parts. The seven AM sub-categories allow the creation of unprecedented designs that are otherwise impossible using conventional manufacturing (CM) methods. The layer-by-layer approach to manufacturing enables the creation of metal components with hollows and overhangs, often requiring sacrificial support structures which are removed prior to or during the post-processing phase. Factors such as poor part quality, high investment cost, low material efficiency, and long manufacturing time hindered the widespread adoption of AM in the past. The adoption of laser-based powder bed fusion for metals was particularly hindered by the need for support structures, the demand for post-processing, the numerous processing parameters involved, and the limited understanding of the interaction between laser beam and material. Technological advances in AM have helped users reduce or remove some of these limitations to adoption, for example through optimized support structures for better material efficiency. Simulation-driven tools offer one route to time-efficient product development and superior structural components, alongside raw-material and cost reductions. This study elucidates how such benefits can be achieved using simulation tools. Simulation-driven optimization of the product design, process, and manufacturing is shown to change the design, support structures and post-processing required to bring parts to the required reliability. Virtual manufacturing planning also provides a prior understanding of how processing parameters such as laser scan velocity, laser power, scanning strategy, hatch distance and others can be controlled to achieve optimal interaction between laser beam and material for the required part quality.
Simulation-driven design for additive manufacturing (DfAM) allows for agile design optimization with design parameters and rules, boosting resource efficiency and productivity. This research proposes a life cycle cost (LCC)-driven DfAM tool, which can potentially improve service life and reduce life cycle cost. The results provide insight into the simulation-driven DfAM of laser-based PBF and demonstrate the potential for LCC-based approaches to enhance confidence in adopting PBF for metals.
B. Liu, C. Cantwell, D. Moxey, M. Green, S. Sherwin
eccomas2022.
Abstract
A highly efficient matrix-free Helmholtz operator with single-instruction multiple-data (SIMD) vectorisation is implemented in Nektar++ [1] and applied to the simulation of anisotropic heat transport in tokamak edge plasma. A tokamak is currently the leading candidate for a practical fusion reactor using the magnetic confinement approach to produce electricity through controlled thermonuclear fusion. Predicting the transport of heat in magnetized plasma is important for designing a safe tokamak. Due to the ionized nature of plasma, the heat conduction of magnetized plasma is highly anisotropic, following the magnetic field lines. In this study, a variational form is proposed to simulate the anisotropic heat transport in magnetized plasma, and the details of its mathematical derivation and implementation are presented. To accurately approximate the thermal load deposited by the plasma on the wall of the tokamak chamber, highly scalable and efficient algorithms are crucial. To achieve this, a matrix-free Helmholtz operator is implemented in the Nektar++ framework, utilising sum-factorisation to reduce the operation count and increase arithmetic intensity, and leveraging SIMD vectorisation to accelerate the computation on modern hardware. The performance of the implementation is assessed by measuring throughput and speed-up of the operators using deformed and regular quadrilateral and triangular elements.
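The operation-count saving from sum-factorisation can be sketched in a few lines of NumPy (an illustrative quadrilateral-element example with made-up sizes, not the Nektar++ implementation): instead of applying one large Kronecker-product matrix, the 2D evaluation is factorised into two 1D tensor contractions.

```python
import numpy as np

# Evaluate a (p x p)-coefficient field on one quad element at (q x q)
# quadrature points. Sizes p, q are illustrative.
p, q = 6, 8
rng = np.random.default_rng(0)
B = rng.standard_normal((q, p))   # 1D basis functions tabulated at quad points
U = rng.standard_normal((p, p))   # element coefficients

# Naive 2D operator: one big matrix-vector product, O(p^2 q^2) work per element
naive = (np.kron(B, B) @ U.reshape(-1)).reshape(q, q)

# Sum-factorised: two 1D contractions, O(p q (p + q)) work per element,
# with higher arithmetic intensity and SIMD-friendly inner loops
sumfac = B @ U @ B.T
```

Both paths produce the same values; the factorised form is what makes the matrix-free operator cheap enough to vectorise effectively.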
We investigate the scaling and efficiency of the deep neural network multigrid method (DNN-MG), a novel neural network-based technique for the simulation of the Navier-Stokes equations that combines an adaptive geometric multigrid solver with a recurrent neural network with memory. In DNN-MG, the neural network replaces one or more of the finest multigrid levels and provides a correction for the classical solve in the next time step. This leads to little degradation in the solution quality while substantially reducing the overall computational costs. At the same time, the use of the multigrid solver at the coarse scales allows for a compact network that is easy to train, generalizes well, and allows for the incorporation of physical constraints. In this work, we investigate how the network size affects training, solution quality, and the overall runtime of the computations.
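The structure of a DNN-MG-style step can be illustrated with a toy two-grid analogue for the 1D Poisson problem (entirely our construction: the recurrent network is stood in for by damped Jacobi sweeps, which play the same role of a cheap fine-level correction on top of a coarse-grid solve):

```python
import numpy as np

# 1D Poisson -u'' = f on (0,1), homogeneous Dirichlet BCs
def poisson(n):
    h = 1.0 / (n + 1)
    return (2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)) / h**2

n_f, n_c = 31, 15                       # fine and coarse grid sizes
A_f, A_c = poisson(n_f), poisson(n_c)
f = np.ones(n_f)

def prolong(uc):
    """Linear interpolation from the coarse to the fine grid."""
    uf = np.zeros(n_f)
    uf[1::2] = uc                        # coincident nodes
    uf[2:-1:2] = 0.5 * (uc[:-1] + uc[1:])
    uf[0], uf[-1] = 0.5 * uc[0], 0.5 * uc[-1]
    return uf

def correction(u, rhs, sweeps=3, omega=2.0 / 3.0):
    """Stand-in for the learned fine-level correction (damped Jacobi)."""
    D = np.diag(A_f)
    for _ in range(sweeps):
        u = u + omega * (rhs - A_f @ u) / D
    return u

# One step: classical solve on the coarse level, cheap correction on the fine
u0 = prolong(np.linalg.solve(A_c, np.ones(n_c)))
u1 = correction(u0, f)
res0 = np.linalg.norm(f - A_f @ u0) / np.linalg.norm(f)
res1 = np.linalg.norm(f - A_f @ u1) / np.linalg.norm(f)
```

The correction sharply reduces the fine-grid residual left by the coarse solve; in DNN-MG proper, a trained recurrent network takes this role and carries memory across time steps.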
J. GRATIEN, C. Chevalier, T. Guignon, X. Tunc, P. Have, S. De Chaisemartin
eccomas2022.
Abstract
Applications that solve large and complex partial differential equation systems nowadays often rely on frameworks like Arcane, Dune, or Feel++. Linear solver packages like PETSc or Trilinos are used to manage linear systems and provide access to a wide range of algorithms. With the evolution of High-Performance Computing, the variety of hardware features available in new architectures has increased considerably: ARM processors, AMD, Intel and Nvidia GP-GPUs, TPU and FPGA devices are now common. To handle the induced complexity, different strategies are adopted in each linear solver framework. One of them consists in introducing a new layer that provides abstractions to manage performance portability and to enable several parallel programming models. In this paper, we evaluate the performance of linear solver packages that rely on tools like SYCL [16], Kokkos [8] or HARTS [11] to handle runtime systems like OpenMP, TBB, and CUDA. A simulator to solve advection-diffusion problems has been developed with ALIEN, a C++ framework that provides a high-level and unified API to handle large distributed matrices and vectors. We have benchmarked different solver algorithms, evaluated the efficiency of their implementations, and assessed their capability to perform on different architectures, for instance large numbers of cores, GP-GPU accelerators, or processors with wide SIMD instructions.
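The kind of benchmark described here can be sketched at small scale with SciPy (an illustrative stand-in: the matrix construction, preconditioner choice, and problem size are ours, and this does not use the ALIEN API): assemble an upwind advection-diffusion operator and time a preconditioned Krylov solve.

```python
import time
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

# 2D advection-diffusion operator on an n x n interior grid:
# 5-point diffusion stencil plus first-order upwind advection
n = 32
h = 1.0 / (n + 1)
I = sp.identity(n, format="csr")
D1 = sp.diags([-1.0, 2.0, -1.0], [-1, 0, 1], shape=(n, n)) / h**2
C1 = sp.diags([-1.0, 1.0], [-1, 0], shape=(n, n)) / h
A = (sp.kron(I, D1) + sp.kron(D1, I)
     + sp.kron(I, C1) + sp.kron(C1, I)).tocsr()
b = np.ones(n * n)

# One entry of a solver benchmark loop: ILU-preconditioned BiCGStab
ilu = spla.spilu(A.tocsc(), drop_tol=1e-4)
M = spla.LinearOperator(A.shape, ilu.solve)
t0 = time.perf_counter()
x, info = spla.bicgstab(A, b, M=M)
elapsed = time.perf_counter() - t0
relres = np.linalg.norm(b - A @ x) / np.linalg.norm(b)
```

In a real benchmark the same system would be fed to several algorithms and backends and the timings compared across architectures.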
A new hybrid algorithm for the LDU-factorization of large sparse matrices, combining direct factorization with an iterative solver while keeping the same accuracy as the classical factorization, is proposed. The last Schur complement is generated by an iterative solver for multiple right-hand sides, using the block GCR method with a lower-precision factorization as a preconditioner, which achieves mixed-precision arithmetic; the Schur complement is then factorized in higher precision. The essential procedure in this algorithm is the decomposition of the matrix into a union of moderate and hard parts, realized by LDU-factorization in lower precision with symmetric pivoting and a threshold postponing technique.
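The mixed-precision principle behind using a lower-precision factorization as a preconditioner can be sketched with iterative refinement (a simplified illustration, not the paper's block-GCR Schur-complement algorithm; the dense test matrix is ours): a single-precision LU factorization is reused each iteration, while residuals are computed in double precision, so full accuracy is recovered.

```python
import numpy as np
from scipy.linalg import lu_factor, lu_solve

rng = np.random.default_rng(1)
n = 200
A = rng.standard_normal((n, n)) + n * np.eye(n)   # well-conditioned test matrix
b = rng.standard_normal(n)

# Factorize once in lower (single) precision: this is the preconditioner
lu32 = lu_factor(A.astype(np.float32))

# Iterative refinement in double precision
x = np.zeros(n)
for _ in range(10):
    r = b - A @ x                                  # double-precision residual
    if np.linalg.norm(r) < 1e-13 * np.linalg.norm(b):
        break
    x += lu_solve(lu32, r.astype(np.float32)).astype(np.float64)

relres = np.linalg.norm(b - A @ x) / np.linalg.norm(b)
```

For well-conditioned systems a handful of refinement steps suffices, so most of the arithmetic runs at the faster, lower precision.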
Graphics cards that are equipped with Tensor Core units designed for AI applications, for example the NVIDIA Ampere A100, promise very high peak rates concerning their computing power (156 TFLOP/s in single and 312 TFLOP/s in half precision in the case of the A100). This is only achieved when performing arithmetically intensive operations such as dense matrix multiplications in the aforementioned lower precision, which is an obstacle when trying to use this hardware for solving linear systems arising from PDEs discretized with the finite element method. In previous works, we delivered a proof of concept that the predecessor of the A100, the V100 and its Tensor Cores, can be exploited to a great extent when solving Poisson's equation on the unit square if a hardware-oriented direct solver based on prehandling via hierarchical finite elements and a Schur complement approach is used. In this work, using numerical results on an A100 graphics card, we show that the method also achieves a very high performance if Poisson's equation, which is discretized by linear finite elements, is solved on a more complex domain corresponding to a flow around a square configuration.
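The Schur complement reduction at the core of such a solver can be sketched with generic block elimination (our simplified illustration on a 1D Poisson matrix, not the paper's hierarchical-finite-element prehandling): eliminating one block of unknowns leaves a smaller, denser system, and it is dense operations of this kind that map well to Tensor Core matrix multiplications.

```python
import numpy as np

# 1D Poisson stiffness matrix, partitioned into two sets of dofs
n = 16
A = 2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
b = np.ones(n)

B_idx = np.arange(0, n, 4)                     # "coarse"/separator dofs
I_idx = np.setdiff1d(np.arange(n), B_idx)      # remaining interior dofs

A_II = A[np.ix_(I_idx, I_idx)]
A_IB = A[np.ix_(I_idx, B_idx)]
A_BI = A[np.ix_(B_idx, I_idx)]
A_BB = A[np.ix_(B_idx, B_idx)]

# Schur complement on the separator dofs: S = A_BB - A_BI A_II^{-1} A_IB
S = A_BB - A_BI @ np.linalg.solve(A_II, A_IB)
g = b[B_idx] - A_BI @ np.linalg.solve(A_II, b[I_idx])

x = np.empty(n)
x[B_idx] = np.linalg.solve(S, g)                            # small dense solve
x[I_idx] = np.linalg.solve(A_II, b[I_idx] - A_IB @ x[B_idx])  # back-substitution
```

The small dense solve on S (and the dense products forming it) is the arithmetically intensive part that lower-precision Tensor Core hardware can accelerate, provided the overall method controls the resulting rounding error.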