The PhD programme organises series of seminars, held both in person and online (webinars), given by researchers and academic colleagues from research centres and universities of the highest scientific standing.

The topics proposed favour methodological aspects over narrowly technological ones, so as to engage all doctoral students regardless of the subject of their individual research projects.

Attendance at the seminars is not formally compulsory, but it is strongly recommended for everyone.


Joseph M. Powers - Detailed and reduced modeling of reacting fluid mechanics
The simulation of a reacting mixture of ideal gases is considered. The competition between advection, reaction, and diffusion in an unsteady spatially inhomogeneous geometry often induces phenomena which evolve on multiple length and time scales. While the continuum model equations are generally well accepted, analytical solutions are generally not available, and computational solution is difficult due to the plethora of scales. It is demonstrated that a detailed solution of the model equations is able to capture some of the observable harmonies in nature; in particular, predictions of oscillating detonations in a hydrogen-air mixture are able to match well with experiment. Also discussed are reduction strategies for systematically removing stiffness from reacting systems in the absence of advection and diffusion.
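The stiffness mentioned above can be illustrated with a deliberately simple toy, not taken from the talk: a single fast-decaying species y' = -k y with a large (made-up) rate constant k. An explicit time integrator blows up at step sizes that an implicit one handles easily; detailed chemistry exhibits the same behaviour across many reactions at once.

```python
# Toy illustration of stiffness (hypothetical rate constant, one species):
# y' = -k*y with large k. Explicit Euler is stable only for dt < 2/k,
# while implicit Euler is unconditionally stable, which is why stiff
# chemistry solvers are implicit.

def explicit_euler(y0, k, dt, steps):
    y = y0
    for _ in range(steps):
        y = y + dt * (-k * y)
    return y

def implicit_euler(y0, k, dt, steps):
    y = y0
    for _ in range(steps):
        # solve y_new = y + dt*(-k*y_new)  =>  y_new = y / (1 + k*dt)
        y = y / (1.0 + k * dt)
    return y

k, dt = 1.0e4, 1.0e-3            # dt is far outside the explicit stability limit
y_exp = explicit_euler(1.0, k, dt, 50)
y_imp = implicit_euler(1.0, k, dt, 50)
print(abs(y_exp) > 1.0)          # explicit solution has blown up
print(0.0 <= y_imp < 1.0e-10)    # implicit solution decays toward equilibrium
```

Reduction strategies of the kind discussed in the seminar aim to remove such fast, quickly equilibrating modes from the system so that cheaper explicit or larger-step integration becomes viable.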


Joseph M. Powers - Verification in Scientific Computing: from Pristine to Practical to Perimeter-Extending
Methods of verification of predictions from scientific computing will be surveyed. Verification, sometimes characterized as “solving the equations right,” is often more amenable to mathematical methods than its equally important counterpart, validation, characterized as “solving the right equations.” First focus will be given to a variety of standard verification methods used in compressible and reactive fluid mechanics; these can be characterized as “pristine” and typically involve demonstration of how error norms converge as a function of discretization and comparison to asymptotic convergence rates. Prediction of such flows can be challenging to verify because of the vast disparity of length and time scales inherent in such flows. The presence of thin viscous and reaction layers, surfaces of discontinuity found in inviscid limits, and inherent instability each represent challenges to achieving verified solutions. Often, however, one is faced with more practical problems in verification, especially for cases where computational resources are scarce or the dynamics are such that it is difficult to define a normed error. For such cases, discussion will be presented of how computational scientists, as well as those that review the science, can practically address the topic of verification. Finally, discussion will be given as to how to expand the perimeter of what can be verified, especially with regard to problems that exhibit spatio-temporal instability, nonlinear dynamics, and transition to chaos.
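A grid-convergence check of the “pristine” kind described above can be sketched in a few lines. The error norms below are made up for a nominally second-order scheme; the observed order is compared against the asymptotic rate.

```python
# Sketch of a standard verification check: given error norms e_h on grids
# with spacing h and h/r, the observed order of accuracy is
# p = log(e_h / e_{h/r}) / log(r), to be compared with the scheme's
# asymptotic convergence rate.
import math

def observed_order(e_coarse, e_fine, refinement=2.0):
    """Observed convergence rate from errors on two successive grids."""
    return math.log(e_coarse / e_fine) / math.log(refinement)

# hypothetical error norms from a second-order scheme: e ~ C*h^2
errors = [4.0e-2, 1.0e-2, 2.5e-3]   # grids h, h/2, h/4
rates = [observed_order(errors[i], errors[i + 1]) for i in range(2)]
print(rates)                        # both observed rates are close to 2.0
```

When the observed rates settle near the design order as the grid is refined, the implementation is consistent with the discretization analysis; a mismatch signals a coding error or that the asymptotic regime has not been reached.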
Alessandro Balossino - Space exploration with small spacecraft: current status, limits, perspectives
Microsats and CubeSats have become very popular classes of spacecraft in the last few years, as they have shown their potential as cheap and flexible tools for educational purposes, in-orbit validation and even the provision of commercial services in the fields of Earth observation and IoT. Most of these satellites have been deployed in low Earth orbit; however, there is great interest in using these very compact systems for the exploration of the solar system, either as secondary payloads of bigger missions or as stand-alone ones. The prospect of using CubeSats or microsats for exploration is appealing because, if a mission is designed properly, it is possible to perform good science with very little risk and with lower costs and development times. However, several challenges need to be overcome: these platforms have intrinsic limits in terms of power, propulsion capabilities and redundancy, and the standard design approach, which foresees an extensive use of COTS components and reduced testing campaigns, is not robust enough for this type of mission. After the first missions beyond LEO with CubeSats, such as the Italian-led ArgoMoon and LICIACube or the secondary payloads launched in the frame of Artemis-1 and InSight, it is now possible to draw a first balance of the potential of this approach and of the gaps that still need to be filled. The goal of this symposium is to present the current status of this kind of mission and its perspectives in the near and long term.
Jonathan Lunine - Looking for Life across the Solar System
21 February 2023
In its 2019 Strategy for Astrobiology, the US National Academy wrote: "Are we alone in the universe? Sages and scientists, philosophers and poets have posed variants of this question since time immemorial. Today, we are formulating research programs that may someday provide an answer.” What are those research programs? Where and how do we search for life? Specifically in our solar system, there are several planets or moons that have environments which could support life, or at one time might have. I will discuss three of the best locales (Mars, Europa, and Enceladus) and describe concepts and plans for spacecraft missions to further explore the potential of these places for life, and to actually search there for signs of life.


A. Urbano - Surrogate models for coaxial injectors based on CFD and Deep Learning
2 March 2022
Facing the need to increase the accuracy of rocket engine design tools, this seminar describes an innovative methodology under development for the design and optimization of rocket engine combustion chambers using numerical simulations and deep learning. The New Space economy has brought about a paradigm shift in the space world. New private players have entered the market. They do not necessarily bring technological innovations, but rather innovations in production and development methodologies. The big challenge in this context is therefore to reduce the price of access to space, which means, among other things, reducing the cost of developing the propulsion system. Indeed, Liquid Rocket Engines (LRE), which are the best candidates for future space applications, are complex systems characterized by several interacting subsystems. They are usually designed in the preliminary phase using system tools that rely on semi-empirical correlations for the components. The high uncertainty associated with these models propagates to the whole system design. In particular, when dealing with combustion chambers and injectors, low-order models fail to be predictive: these components are governed by the highly nonlinear phenomena characterizing turbulent diffusion flames at high pressure. As a result, LRE development relies heavily on experimental tests, up to full-scale tests, which are very expensive. On the other hand, numerical simulations of rocket combustion chambers have become popular in recent years. Computational fluid dynamics simulations have demonstrated the ability to be predictive in terms of heat fluxes, flame dynamics and overall performance estimates. Depending on the required level of detail and the phenomena to be captured, both Reynolds-Averaged Navier-Stokes (RANS) equations and Large Eddy Simulations (LES) can be used to simulate rocket engine combustion chambers in real configurations.
However, the turnaround times are too long to be useful in the framework of a whole propulsive-system design (hours for RANS, thousands of hours for LES). The objective of the work presented in this seminar is to introduce a methodology that jointly uses CFD data and artificial intelligence (in particular deep learning) to extract surrogate models for injectors and combustion chambers of rocket engines. These surrogate models are meant to be used for the design and optimization of single components and within a multi-disciplinary analysis of the whole propulsive system. First, the experimental test case of a single coaxial injector, taken as a reference point to define the correct numerical setup, is presented. Then, a design of experiments is generated by varying nine parameters: seven geometrical (diameters, contraction area ratios) and two operating conditions (mass flow rate and O/F). RANS simulations are carried out with the commercial code Ansys Fluent to generate the dataset, comprising around 3600 simulations. The data are then used to train surrogate models of different fidelity, targeting global or averaged quantities (0D), the wall heat flux profile (1D) and the temperature field (2D). Attention is given to the selection of the proper machine learning technique. For low-dimensional outputs, results show that deep neural networks outperform other standard machine learning tools, namely Radial Basis Functions and Kriging. For high-dimensional outputs, convolutional neural networks with gradient-based loss functions prove effective at capturing the large, smooth temperature variations as well as the thin, sharp temperature gradients at the flame front. Eventually the models are used in the framework of an optimization problem. Results highlight the benefits of new design and optimization tools based on deep learning, capable of real-time predictions of complex flow fields.
The seminar ends with an overview of the challenges that will be faced in order to extend the methodology to LES data.
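As a much simplified stand-in for the surrogate idea (nothing here reproduces the seminar's RANS/deep-learning pipeline; the "expensive simulation" below is a made-up smooth response surface), one can fit a cheap interpolating model to a handful of expensive samples and then query it in microseconds instead of re-running the solver:

```python
# Toy surrogate: an interpolating quadratic (Lagrange form) through three
# samples of a hypothetical expensive model, queried at unseen inputs.

def quadratic_surrogate(samples):
    """Build an interpolating quadratic through three (x, y) samples."""
    (x0, y0), (x1, y1), (x2, y2) = samples
    def model(x):
        l0 = (x - x1) * (x - x2) / ((x0 - x1) * (x0 - x2))
        l1 = (x - x0) * (x - x2) / ((x1 - x0) * (x1 - x2))
        l2 = (x - x0) * (x - x1) / ((x2 - x0) * (x2 - x1))
        return y0 * l0 + y1 * l1 + y2 * l2
    return model

def expensive_simulation(x):
    # placeholder for a CFD run: here just a smooth, made-up response
    return 1.0 + 0.5 * x + 0.25 * x * x

train = [(x, expensive_simulation(x)) for x in (0.0, 1.0, 2.0)]
surrogate = quadratic_surrogate(train)
print(surrogate(1.5), expensive_simulation(1.5))  # surrogate is exact here
```

The seminar's 0D/1D/2D surrogates play the same role at vastly larger scale: neural networks replace the quadratic, and thousands of RANS runs replace the three samples, but the payoff is identical — near-instant evaluation inside a design-optimization loop.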


Introduction to PyTorch for ML
2 December 2021
Speaker: Prof. Fabrizio Silvestri (Department of Ingegneria informatica, automatica e gestionale)

Introduction to Machine Learning (ML)
1 December 2021
Speaker: Prof. Fabrizio Silvestri (Department of Ingegneria informatica, automatica e gestionale)

Introduction to Python programming
30 November 2021
Speaker: Ing. Gael Cascioli (Department of Ingegneria meccanica e aerospaziale)


Nonlocal models in computational science and engineering: treatment of interfaces in heterogeneous materials and media, image processing, and model learning
In this second talk I will address in detail some of the challenges and applications mentioned in the first talk. Specifically, I will describe two techniques to tackle the unresolved treatment of nonlocal interfaces in the simulation of heterogeneous materials and media. The first technique is based on the minimization of an energy principle and yields a well-posed and physically consistent nonlocal interface problem; the second is based on a new fractional model for anomalous diffusion with increased variability. Then, I will describe a technique for optimal image denoising using nonlocal operators as filters. The optimal imaging problem is formulated as a bilevel optimization problem where the control variables are the denoising parameters. Several numerical results on benchmark images illustrate the applicability and improved accuracy of our approach. If time allows, I will also present two recently developed machine-learning techniques for nonlocal model identification. These techniques are physics-informed, data-driven tools that allow us to reconstruct model parameters from sparse observations. I will also show one- and two-dimensional numerical tests that illustrate the robustness and accuracy of our approaches.
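A minimal sketch of the building block behind both the interface and the denoising problems above is a discrete nonlocal Laplacian-type operator, in which each point interacts with every neighbour within a horizon rather than only adjacent points. The uniform 1-D grid and constant kernel below are illustrative assumptions, not the models from the talks:

```python
# Discrete nonlocal difference operator on a 1-D grid:
# (L u)_i = sum over |j - i| <= horizon of weight * (u_j - u_i)

def nonlocal_laplacian(u, horizon, weight):
    """Apply a nonlocal operator with interaction radius `horizon`
    (in grid points) and a constant kernel `weight`."""
    n = len(u)
    out = [0.0] * n
    for i in range(n):
        lo, hi = max(0, i - horizon), min(n - 1, i + horizon)
        out[i] = sum(weight * (u[j] - u[i]) for j in range(lo, hi + 1))
    return out

u = [0.0, 0.0, 1.0, 0.0, 0.0]        # a spike to be smoothed
lap = nonlocal_laplacian(u, 1, 1.0)  # horizon 1 recovers the local stencil
print(lap)
```

With horizon 1 this reduces to the classical second-difference stencil; enlarging the horizon is what lets nonlocal models capture discontinuities and anomalous diffusion that local PDEs miss, at the computational cost the first talk emphasizes.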
New Large Constellations of Low Earth Orbit Satellites - Astronomy and Space Debris Challenges
Over the next decade, plans have been advanced for 100,000 new satellites in Low Earth Orbit (LEO, altitude less than 2000 km). This will increase the total number of objects in this orbital regime, including active satellites and debris larger than 10 cm, by at least a factor of 5. I'll review why these constellations of satellites are planned, why so many satellites are needed, and what the basic design parameters of a satellite constellation are. The first of these constellations have been launched: SpaceX's Starlink satellites and OneWeb's satellites. For the appearance of the night sky to the unaided eye, and for ground- and space-based optical astronomy, the night sky will never be the same. These new satellites could be brighter than most of the objects in orbit today, producing contamination by satellite streaks in astronomical images. The growing spatial density of objects in LEO leads to an increased risk of collision between objects and to growth of the space debris population.
Nonlocal models in computational science and engineering: theory and challenges
Nonlocal models such as peridynamics and fractional equations can capture effects that classical partial differential equations fail to capture. These effects include multiscale behavior, discontinuities in the solutions such as cracks, and anomalous behavior such as super- and sub-diffusion. For this reason, they provide an improved predictive capability for a large class of engineering and scientific applications including fracture mechanics, subsurface flow, turbulence, and image processing, to mention a few.  However, the improved accuracy of nonlocal formulations comes at the price of modeling and computational challenges that may hinder the usability of these models. Challenges include the nontrivial prescription of nonlocal boundary conditions, the unresolved treatment of nonlocal interfaces, the identification of model parameters, often sparse and subject to noise, and the incredibly high computational cost.  In this talk I will first introduce nonlocal models and describe a recently developed nonlocal calculus for their analysis. Then, I will discuss simulation challenges and describe in detail how we are addressing some of them at Sandia National Labs.
Multifidelity Strategies in Uncertainty Quantification: an overview on some recent trends in sampling based approaches
In the last decades, advancements in computer hardware/architectures and scientific computing algorithms have enabled engineers and scientists to study and design complex systems more rapidly by relying heavily on numerical simulations. The increased need for predictive numerical simulations has exacerbated the requirement for an accurate quantification of their errors, beyond the more classical algorithmic verification activities. As a consequence, Uncertainty Quantification (UQ) has been introduced as a task that allows for a formal characterization and propagation of physical and numerical uncertainty through computational codes in order to obtain statistics of the system's response. Despite recent efforts and successes in advancing the efficiency of UQ algorithms, the combination of a large number of uncertain parameters (which often correlates with the complexity of the numerical/physical assumptions) and a lack of regularity of the system's response still represents a formidable challenge for UQ. One possible way of circumventing these difficulties is to rely on sampling-based approaches, which are generally robust, easy to implement, and possess a rate of convergence independent of the number of parameters. However, for many years the extreme computational cost of these methods prevented their widespread use for UQ in the context of high-fidelity simulations. More recently, several multilevel/multifidelity Monte Carlo strategies have been proposed to decrease the Monte Carlo cost without penalizing its accuracy. Several different versions of multifidelity methods exist, but they all share the same main idea: whenever a set/cloud/sequence of system realizations with varying accuracy can be obtained, it is often more efficient to fuse data coming from all of them instead of relying on the highest-fidelity model only.
In this talk we summarize our recent efforts in investigating novel ways of increasing the efficiency of these multifidelity approaches. We will provide several theoretical and numerical results and we will discuss a collection of numerical examples ranging from simple analytical/verification test cases to more complex and realistic engineering systems. 
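The fusion idea above can be sketched with a two-fidelity control-variate estimator on made-up models (neither the models nor the sample sizes come from the talk): a few expensive high-fidelity samples are paired with the cheap model, and a large batch of cheap samples supplies the well-resolved mean.

```python
# Two-fidelity control-variate Monte Carlo estimate of E[f_HF(X)],
# X ~ U(0,1), with stand-in models and control coefficient fixed to 1.
import random

def high_fidelity(x):   # "expensive" model (stand-in)
    return x * x

def low_fidelity(x):    # cheap, strongly correlated surrogate (stand-in)
    return x * x + 0.1 * x

random.seed(0)
xs_paired = [random.random() for _ in range(200)]     # few HF/LF pairs
xs_cheap = [random.random() for _ in range(20000)]    # many LF-only samples

mean_hf_paired = sum(high_fidelity(x) for x in xs_paired) / len(xs_paired)
mean_lf_paired = sum(low_fidelity(x) for x in xs_paired) / len(xs_paired)
mean_lf_cheap = sum(low_fidelity(x) for x in xs_cheap) / len(xs_cheap)

# correct the scarce high-fidelity mean with the well-resolved cheap mean
estimate = mean_hf_paired + (mean_lf_cheap - mean_lf_paired)
print(estimate)   # close to the exact value E[x^2] = 1/3
```

Because the low-fidelity discrepancy is estimated on the same paired inputs, its error largely cancels, and the estimator inherits the low variance of the cheap model at a fraction of the high-fidelity cost — the mechanism the multilevel/multifidelity strategies above generalize across many fidelity levels.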
The COVID-19 epidemic in Italy --- a modeling perspective (or What I did in the 2 Months Quarantine)

Combustion Engines Are Not Dead Yet: Future of Power and Transportation
Despite the widespread perception of battery-electric and fuel-cell vehicles as the future of transportation, there are many reasons to believe that an “all-electric vehicles” scenario is not only unrealistic but also undesirable. The presentation will attempt an objective assessment of the future transportation portfolio and of the role of advanced internal combustion engines running on conventional and alternative fuels. In particular, an objective well-to-wheel life-cycle assessment of various competing vehicle technologies will be presented, through which it will become clear that advanced high-efficiency internal combustion engines running on carbon-neutral liquid fuels are the most feasible future direction for transportation at scale. Overviews will be given of relevant ongoing research activities, and their opportunities and challenges will be addressed.
Development of reduced-order models based on dimensionality reduction, classification and regression for reacting flow applications
In this second part, the reduced representations are used to derive reduced-order models, in combination with typical ML-based tasks such as classification and regression. Examples of applications of these ROMs are provided in the context of Large Eddy Simulations of turbulent reacting flows, as well as for the development of digital twins of combustion assets.
Simple Flows Using a Second Order Theory of Fluids

Dimensionality reduction and feature extraction from high-fidelity combustion data
The use of machine learning algorithms to predict the behavior of complex systems is booming. However, the key to an effective use of machine learning tools in multi-physics problems, including combustion, is to couple them with physical and computer models, to embody in them all the prior knowledge and physical constraints that can enhance their performance, and to improve them based on the feedback coming from validation experiments. In other words, we need to adapt the scientific method to bring machine learning into the picture and make the best use of the massive amount of data we have produced thanks to advances in numerical computing. The webinars review some of the open opportunities for the application of data-driven, reduced-order modelling to combustion systems. In particular, the first webinar focuses on dimensionality reduction in the context of reacting flow applications. Different approaches (based on modal decomposition, neural networks, kernel methods, ...) are presented and compared on the basis of their ability to identify low-dimensional manifolds and provide relevant features.
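The modal-decomposition (PCA-style) idea behind the webinar can be shown on a toy example with synthetic data: 2-D "state" samples that in fact lie on a 1-D manifold, whose dominant direction is recovered from the covariance matrix. The data and the 2-D restriction are purely illustrative.

```python
# Find the dominant direction (leading principal component) of 2-D
# samples lying on the line y = 2x, i.e. a 1-D manifold in 2-D.
import math

data = [(x, 2.0 * x) for x in (-2.0, -1.0, 0.0, 1.0, 2.0)]  # synthetic

n = len(data)
mx = sum(p[0] for p in data) / n
my = sum(p[1] for p in data) / n
# entries of the 2x2 covariance matrix
cxx = sum((p[0] - mx) ** 2 for p in data) / n
cyy = sum((p[1] - my) ** 2 for p in data) / n
cxy = sum((p[0] - mx) * (p[1] - my) for p in data) / n

# leading eigenpair of [[cxx, cxy], [cxy, cyy]] (closed form for 2x2)
lam = 0.5 * (cxx + cyy + math.sqrt((cxx - cyy) ** 2 + 4.0 * cxy * cxy))
vx, vy = cxy, lam - cxx              # unnormalized leading eigenvector
norm = math.hypot(vx, vy)
direction = (vx / norm, vy / norm)
print(direction)                     # aligned with (1, 2)/sqrt(5)
```

In combustion applications the state vector has hundreds of species and the manifold is curved, which is precisely why the webinar compares linear modal decomposition against neural-network and kernel methods.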
Physics-Informed Neural Networks for Optimal Control
Physics-Informed Neural Networks (PINNs) are a recently defined class of machine learning algorithms in which the learning process, for both regression and classification tasks, is constrained to satisfy differential equations derived from the straightforward application of known physical laws. Indeed, Deep Neural Networks (DNNs) have been successfully employed to solve a variety of ODEs and PDEs arising in fluid mechanics and quantum mechanics, just to mention a few fields. Optimal control problems, i.e. finding a feasible control that minimizes a cost functional while satisfying physical, state and control constraints, are generally difficult to solve, and one may need to resort to specialized numerical methods. The application of Pontryagin's minimum principle generates a complex two-point boundary value problem that is very sensitive to the initial guess (“curse of complexity”). The application of dynamic programming principles generates a high-dimensional PDE known as the Hamilton-Jacobi-Bellman equation (“curse of dimensionality”). In this talk we show that PINNs can be employed to solve optimal control problems by tackling their solution using deep and/or shallow NNs. We show that such methods can be coupled with the Theory of Functional Connections (TFC, by Mortari et al.) to create numerical frameworks that generate efficient and accurate solutions with potential for real-time applications.


Problems in Turbulence Research
Premixed Turbulent Flames at High Reynolds Number in Spatially Evolving Slot Bunsen Configuration
Methane/air premixed and partially premixed turbulent flames at high Reynolds number are characterized using Direct Numerical Simulations (DNS) with detailed chemistry in a spatially evolving slot Bunsen configuration. Four simulations are performed at increasing Reynolds number, up to 22400 (defined from the bulk velocity, the slot width and the reactants' properties), using 22 billion grid points, making this one of the largest simulations in turbulent combustion. The study covers different aspects of flame-turbulence interaction. It is found that the thickness of the reaction zone is similar to that of a laminar flame, while the preheat zone has a lower mean temperature gradient, indicating flame thickening. The characteristic length scales of turbulence are investigated and the effect of the Reynolds number on these quantities is assessed. The tangential rate of strain is responsible for the production of flame surface in the mean, while surface destruction is due to the curvature term. To perform these simulations, a few preliminary steps were required: (i) a skeletal mechanism was developed by reducing GRI-3.0; (ii) a convergence study was performed to select the proper spatial and temporal discretization; and (iii) simulations of fully developed turbulent channel flows were performed to generate the inlet conditions of the jet.
Theory of optimal dynamical systems, optimal spectrum and applications to fluid dynamics
Flow control with moving wavy wall
Introduction to Reinforcement Learning
Reinforcement Learning is an area of Machine Learning aiming at providing an agent with optimal decision-making abilities based on experience, expressed in terms of future rewards. News-breaking successes in applications such as Atari games, Go, Poker, etc., where software defeated the best human players, significantly increased public attention and scientific research around this topic. In this seminar, we will discuss the main ideas, challenges, algorithms and open problems related to (deep) reinforcement learning. Specifically, we will focus on real examples and provide practical hints and solutions for applying RL to multiple problems, ranging from video games to robot control.
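The core experience-and-reward loop can be made concrete with a minimal tabular Q-learning sketch on an invented toy MDP (a four-state corridor with a rewarding terminal state; none of this is from the seminar, and the hyperparameters are arbitrary):

```python
# Tabular Q-learning on a 4-state corridor: moving right eventually
# reaches a rewarding terminal state; the agent learns to prefer "right".
import random

N_STATES, ACTIONS = 4, (0, 1)            # action 0 = left, 1 = right
alpha, gamma, eps = 0.5, 0.9, 0.2        # learning rate, discount, exploration
Q = [[0.0, 0.0] for _ in range(N_STATES)]

def step(s, a):
    """Environment dynamics: deterministic move, reward 1 at the end."""
    s2 = min(s + 1, N_STATES - 1) if a == 1 else max(s - 1, 0)
    done = s2 == N_STATES - 1
    return s2, (1.0 if done else 0.0), done

random.seed(1)
for _ in range(500):                     # training episodes
    s = 0
    while True:
        # epsilon-greedy action selection
        a = random.choice(ACTIONS) if random.random() < eps else \
            max(ACTIONS, key=lambda act: Q[s][act])
        s2, r, done = step(s, a)
        # temporal-difference update toward r + gamma * max_a' Q(s', a')
        Q[s][a] += alpha * (r + gamma * max(Q[s2]) - Q[s][a])
        if done:
            break
        s = s2

greedy = [max(ACTIONS, key=lambda act: Q[s][act]) for s in range(N_STATES - 1)]
print(greedy)   # learned greedy policy: move right in every non-terminal state
```

Deep RL replaces the Q table with a neural network, but the update rule and the exploration/exploitation trade-off shown here carry over unchanged.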
Introduction to Pattern Recognition and Data Mining Techniques for Supervised and Unsupervised Learning Problems
This seminar aims at introducing theoretical, technical and practical aspects of the design and development of machine learning systems for the analysis of signals, measurements and, more generally, big data, based on Computational Intelligence techniques such as Bayesian learning, neural networks, fuzzy logic, evolutionary computation, etc. The focus will be on engineering applications for the solution of both supervised and unsupervised learning problems, in particular concerning optimization, approximation, regression, prediction, interpolation, filtering, clustering and classification. The main topics of the seminar will be: introduction to machine learning and data-driven modeling problems (inference, regularization, overfitting, structural optimization); data analysis and conditioning (feature extraction, denoising, detrending, normalization); supervised and unsupervised learning problems; fundamentals of clustering algorithms; fundamentals of classification algorithms; fundamentals of regression models.
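As a concrete taste of the clustering fundamentals listed above, here is a minimal 1-D k-means sketch; the data and the two initial centroids are made up for illustration (a real implementation would also handle empty clusters and arbitrary dimensions):

```python
# Minimal 1-D k-means with k = 2: alternate between assigning points to
# the nearest centroid and recomputing each centroid as its group mean.

def kmeans_1d(points, c0, c1, iters=10):
    for _ in range(iters):
        g0 = [p for p in points if abs(p - c0) <= abs(p - c1)]
        g1 = [p for p in points if abs(p - c0) > abs(p - c1)]
        # with this data neither group empties; production code must check
        c0 = sum(g0) / len(g0)
        c1 = sum(g1) / len(g1)
    return c0, c1

data = [1.0, 1.2, 0.8, 5.0, 5.3, 4.7]    # two obvious groups
c0, c1 = kmeans_1d(data, 0.0, 10.0)
print(c0, c1)                            # centroids near 1.0 and 5.0
```

The same assign/update alternation, generalized to vectors and k clusters, underlies the unsupervised-learning portion of the seminar.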
Secrets of fish swimming
First, the vortex dynamics principles of direction control, speed control and energy saving in fish schools are discussed in this lecture. A parallel Computational Fluid Dynamics (CFD) package for two- and three-dimensional moving boundary problems has been developed, combining an adaptive multi-grid finite volume method with the immersed boundary and VOF (Volume of Fluid) methods. With this CFD package, it is found that, due to the interactions of vortices in the wakes, a fish swimming in a school with a given flapping rule cannot, without proper control, keep the right position in the queue. In order to understand the secret of fish swimming, a new control strategy for fish motion is proposed for the first time: the locomotion speed is adjusted by the flapping frequency of the caudal fin, and the direction of swimming is controlled by the swinging of the fish's head. The vortex dynamics principle of direction control is to generate vortices in favor of turning while preventing harmful separation from the head. The vortex dynamics principle of speed control in a fish school is that the merger of vortices with the same sign can speed up the fish. The vortex dynamics principle of energy saving in the swimming of a fish school is that, when using the favorable vortical wake in front or to the side, the flapping frequency of the fish can be reduced by up to 18.35%, thereby saving energy. In addition, the new control strategy, which separates speed control and direction control, is important for the construction of biomimetic robot fish, as it greatly simplifies their control devices. Second, a topology optimization method is applied to a caudal fin for the first time, showing that with a topology-optimized caudal fin a fish can swim more easily, faster and more flexibly. Key words: self-propelled swimming; control strategy of fish swimming; direction control; locomotion speed control; topology optimization of a caudal fin.
Numerical simulations of moving boundaries problems
