The PhD course promotes a series of seminars, held both in person and online (webinars), given by researchers and academic colleagues from research centers and universities of the highest scientific standing.

We aim to propose topics that favor methodological aspects over narrowly technological ones, in order to engage all doctoral students, regardless of the topic of their individual research projects.

Participation in the seminars is not formally compulsory, but attendance is strongly recommended for all.


Joseph M. Powers - Detailed and reduced modeling of reacting fluid mechanics
The simulation of a reacting mixture of ideal gases is considered. The competition between advection, reaction, and diffusion in an unsteady spatially inhomogeneous geometry often induces phenomena which evolve on multiple length and time scales. While the continuum model equations are generally well accepted, analytical solutions are generally not available, and computational solution is difficult due to the plethora of scales. It is demonstrated that a detailed solution of the model equations is able to capture some of the observable harmonies in nature; in particular, predictions of oscillating detonations in a hydrogen-air mixture are able to match well with experiment. Also discussed are reduction strategies for systematically removing stiffness from reacting systems in the absence of advection and diffusion.


Joseph M. Powers - Verification in Scientific Computing: from Pristine to Practical to Perimeter-Extending
Methods of verification of predictions from scientific computing will be surveyed. Verification, sometimes characterized as “solving the equations right,” is often more amenable to mathematical methods than its equally important counterpart, validation, characterized as “solving the right equations.” First focus will be given to a variety of standard verification methods used in compressible and reactive fluid mechanics; these can be characterized as “pristine” and typically involve demonstration of how error norms converge as a function of discretization and comparison to asymptotic convergence rates. Prediction of such flows can be challenging to verify because of the vast disparity of length and time scales inherent in such flows. The presence of thin viscous and reaction layers, surfaces of discontinuity found in inviscid limits, and inherent instability each represent challenges to achieving verified solutions. Often, however, one is faced with more practical problems in verification, especially for cases where computational resources are scarce or the dynamics are such that it is difficult to define a normed error. For such cases, discussion will be presented of how computational scientists, as well as those that review the science, can practically address the topic of verification. Finally, discussion will be given as to how to expand the perimeter of what can be verified, especially with regard to problems that exhibit spatio-temporal instability, nonlinear dynamics, and transition to chaos.
Alessandro Balossino - Space exploration with small spacecraft: current status, limits, perspectives
Microsats and CubeSats have become very popular classes of spacecraft in the last few years, as they have shown their potential as cheap and flexible tools for educational purposes, in-orbit validation, and even the provision of commercial services in the fields of Earth observation and IoT. Most of these satellites have been deployed in low Earth orbit; however, there is great interest in using these very compact systems for the exploration of the solar system, either as secondary payloads of bigger missions or as stand-alone missions. The prospect of using CubeSats or microsats for exploration is appealing because, if a mission is designed properly, it is possible to perform good science with very little risk and lower costs and development times. However, there are several challenges to overcome: these platforms have intrinsic limits in terms of power, propulsion capabilities, and redundancy, and the standard design approach, which foresees an extensive use of COTS components and reduced testing campaigns, is not robust enough for this type of mission. After the first missions beyond LEO with CubeSats, such as the Italian-led ArgoMoon and LICIACube or the secondary payloads launched in the frame of Artemis-1 and InSight, it is now possible to draw a first balance of the potential of this approach and the gaps that still need to be filled. The goal of this symposium is to present the current status of these missions and their perspectives in the near and long term.
Jonathan Lunine - Looking for Life across the Solar System
February 21, 2023
In its 2019 Strategy for Astrobiology, the US National Academy wrote "Are we alone in the universe? Sages and scientists, philosophers and poets have posed variants of this question since time immemorial. Today, we are formulating research programs that may someday provide an answer.” What are those research programs? Where and how do we search for life? Specifically in our solar system, there are several planets or moons that have environments which could support life or at one time might have. I will discuss three of the best locales (Mars, Europa, and Enceladus) and describe concepts and plans for spacecraft missions to further explore the potential of these places for life, and to actually search there for signs of life.


A. Urbano - Surrogate models for coaxial injectors based on CFD and Deep Learning
March 2, 2022
Facing the need to increase the accuracy of rocket engine design tools, this seminar describes an innovative methodology under development for the design and optimization of rocket engine combustion chambers using numerical simulations and deep learning. The New Space era has brought about a paradigm shift in the space sector. New private players have entered the market. They do not necessarily bring technological innovations, but rather innovations in production and development methodologies. The big challenge in this context is therefore to reduce the price of access to space, which means, among other things, reducing the cost of developing the propulsion system. Indeed, Liquid Rocket Engines (LRE), which are the best candidates for future space applications, are complex systems characterized by several interacting subsystems. They are usually designed in the preliminary phase using system tools that rely on semi-empirical correlations for the components. The high uncertainty associated with these models propagates through the whole system design. In particular, when dealing with combustion chambers and injectors, low-order models fail to be predictive: these components are governed by the highly nonlinear phenomena characteristic of turbulent diffusion flames at high pressure. As a result, LRE development strongly relies on experimental campaigns, up to full-scale tests, which are very expensive. On the other hand, numerical simulations of rocket combustion chambers have become popular in recent years. Computational fluid dynamics simulations have demonstrated predictive capability in terms of heat fluxes, flame dynamics, and overall performance estimation. Depending on the required level of detail and the phenomena to be captured, both Reynolds-Averaged Navier-Stokes (RANS) equations and Large Eddy Simulations (LES) can be used to simulate rocket engine combustion chambers in real configurations.
However, the turnaround times are too long to be useful in the framework of a whole propulsive system design (hours for RANS, thousands of hours for LES). The objective of the work presented in this seminar is to introduce a methodology that jointly uses CFD data and artificial intelligence (in particular deep learning) to extract surrogate models for injectors and combustion chambers of rocket engines. These surrogate models are meant to be used for the design and optimization of single components and in the framework of a multi-disciplinary analysis of the whole propulsive system. First, the experimental test case of a single coaxial injector, taken as a reference to define the correct numerical setup, is presented. Then, a design of experiments is generated by varying nine parameters: seven geometrical (diameters, contraction area ratios) and two operating conditions (mass flow rate and O/F). RANS simulations are carried out with the commercial code Ansys Fluent to generate the dataset, around 3600 simulations in total. The data are then used to train surrogate models of different fidelity. These consist of global or averaged quantities (0D), the wall heat flux profile (1D), and the temperature field (2D). Attention is given to the selection of the proper machine learning technique. For low-dimensional outputs, results show that deep neural networks outperform other standard machine learning tools, namely Radial Basis Functions and Kriging. For high-dimensional outputs, convolutional neural networks with gradient-based loss functions are found to be effective at capturing the large and smooth temperature variations, as well as the thin and sharp temperature gradients at the flame front. Finally, the models are used in the framework of an optimization problem. Results highlight the benefits of new design and optimization tools based on deep learning, capable of real-time predictions of complex flow fields.
The seminar ends with an overview of the challenges to be faced in extending the methodology to LES data.
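To make the surrogate idea concrete, here is a minimal sketch of one of the baseline techniques mentioned in the abstract, a Gaussian radial-basis-function surrogate, fit to a toy 1-D function. The function, node spacing, and shape parameter are illustrative stand-ins, not a reproduction of the seminar's Ansys Fluent dataset.

```python
import math

EPS = 5.0  # shape parameter of the Gaussian kernel (illustrative choice)

def rbf_fit(xs, ys):
    """Fit weights w_j of s(x) = sum_j w_j exp(-(EPS*(x - x_j))^2) by solving
    the small dense interpolation system with Gaussian elimination."""
    n = len(xs)
    A = [[math.exp(-(EPS * (xs[i] - xs[j])) ** 2) for j in range(n)] for i in range(n)]
    b = list(ys)
    for k in range(n):                      # forward elimination with pivoting
        p = max(range(k, n), key=lambda r: abs(A[r][k]))
        A[k], A[p], b[k], b[p] = A[p], A[k], b[p], b[k]
        for i in range(k + 1, n):
            f = A[i][k] / A[k][k]
            for j in range(k, n):
                A[i][j] -= f * A[k][j]
            b[i] -= f * b[k]
    w = [0.0] * n
    for i in range(n - 1, -1, -1):          # back substitution
        w[i] = (b[i] - sum(A[i][j] * w[j] for j in range(i + 1, n))) / A[i][i]
    return w

def rbf_eval(x, xs, w):
    """Evaluate the fitted surrogate at a new point x."""
    return sum(wj * math.exp(-(EPS * (x - xj)) ** 2) for wj, xj in zip(w, xs))

# Toy "dataset": pretend each (x, y) pair is one expensive CFD run.
xs = [i / 10 for i in range(11)]
ys = [math.sin(2 * math.pi * x) for x in xs]
w = rbf_fit(xs, ys)
```

Once fitted, evaluating the surrogate is essentially free, which is what makes it usable inside a system-level optimization loop.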


Introduction to PyTorch for ML
December 2, 2021
Speaker: Prof. Fabrizio Silvestri
Department: Ingegneria informatica, automatica e gestionale
Introduction to Machine Learning (ML)
December 1, 2021
Speaker: Prof. Fabrizio Silvestri
Department: Ingegneria informatica, automatica e gestionale
Introduction to Python programming
November 30, 2021
Speaker: Ing. Gael Cascioli
Department: Ingegneria meccanica e aerospaziale


Nonlocal models in computational science and engineering: treatment of interfaces in heterogeneous materials and media, image processing, and model learning
In this second talk I will address in detail some of the challenges and applications mentioned in the first talk. Specifically, I will describe two techniques to tackle the unresolved treatment of nonlocal interfaces in the simulation of heterogeneous materials and media. The first technique is based on the minimization of an energy principle and yields a well-posed and physically consistent nonlocal interface problem; the second is based on a new fractional model for anomalous diffusion with increased variability. Then, I will describe a technique for optimal image denoising using nonlocal operators as filters. The optimal imaging problem is formulated as a bilevel optimization problem where the control variables are the denoising parameters. Several numerical results on benchmark images illustrate the applicability and improved accuracy of our approach. If time allows, I will also present two recently developed machine-learning techniques for nonlocal model identification. These techniques are physics-informed, data-driven tools that allow us to reconstruct model parameters from sparse observations. I will also show one- and two-dimensional numerical tests that illustrate the robustness and accuracy of our approaches.
New Large Constellations of Low Earth Orbit Satellites - Astronomy and Space Debris Challenges
Over the next decade, plans have been advanced for 100,000 new satellites in Low Earth Orbit (LEO, altitude less than 2000 km). This will increase the total number of objects in this orbital regime, including active satellites and debris larger than 10 cm, by at least a factor of 5. I'll review why these constellations of satellites are planned, why so many are needed, and what the basic design parameters of a satellite constellation are. The first of these constellations have been launched: SpaceX Starlink satellites and OneWeb satellites. For the appearance of the night sky to the unaided eye, and for ground- and space-based optical astronomy, the night sky will never be the same. These new satellites could be brighter than most of the objects in orbit today, producing contamination by satellite streaks in astronomical images. The growing spatial density of objects in LEO leads to an increased risk of collisions and to further growth of the space debris population.
Nonlocal models in computational science and engineering: theory and challenges
Nonlocal models such as peridynamics and fractional equations can capture effects that classical partial differential equations fail to capture. These effects include multiscale behavior, discontinuities in the solutions such as cracks, and anomalous behavior such as super- and sub-diffusion. For this reason, they provide an improved predictive capability for a large class of engineering and scientific applications including fracture mechanics, subsurface flow, turbulence, and image processing, to mention a few. However, the improved accuracy of nonlocal formulations comes at the price of modeling and computational challenges that may hinder the usability of these models. Challenges include the nontrivial prescription of nonlocal boundary conditions, the unresolved treatment of nonlocal interfaces, the identification of model parameters from data that are often sparse and subject to noise, and the very high computational cost. In this talk I will first introduce nonlocal models and describe a recently developed nonlocal calculus for their analysis. Then, I will discuss simulation challenges and describe in detail how we are addressing some of them at Sandia National Labs.
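As a minimal illustration of what a nonlocal operator looks like, the sketch below discretizes a 1-D nonlocal diffusion operator with a constant kernel over a finite horizon delta; the kernel scaling 3/delta^3 is the standard 1-D choice that recovers the classical Laplacian for smooth fields, and all numerical values are illustrative.

```python
def nonlocal_laplacian(u, x, delta, h):
    """Midpoint-rule approximation of the nonlocal operator
    L u(x) = integral over |z| <= delta of (u(x+z) - u(x)) * gamma(z) dz,
    with the constant 1-D kernel gamma = 3/delta^3, scaled so that
    L u -> u'' as the horizon delta -> 0 for smooth u."""
    gamma = 3.0 / delta ** 3
    m = int(round(delta / h))
    total = 0.0
    for j in range(-m, m + 1):
        if j == 0:
            continue
        z = j * h
        total += (u(x + z) - u(x)) * gamma * h
    return total

# For u(x) = x^2 the exact value is u'' = 2, for ANY horizon delta.
val = nonlocal_laplacian(lambda s: s * s, 0.7, delta=0.1, h=0.001)
```

Unlike a finite-difference Laplacian, every point within the horizon contributes, which is also why the operator remains well defined across discontinuities such as cracks.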
Multifidelity Strategies in Uncertainty Quantification: an overview on some recent trends in sampling based approaches
In recent decades, advancements in computer hardware/architectures and scientific computing algorithms have enabled engineers and scientists to study and design complex systems more rapidly by relying heavily on numerical simulations. The increased need for predictive numerical simulations has exacerbated the requirement for an accurate quantification of the errors of numerical simulations, beyond the more classical algorithmic verification activities. As a consequence, Uncertainty Quantification (UQ) has been introduced as a task that allows for a formal characterization and propagation of physical and numerical uncertainty through computational codes in order to obtain statistics of the system's response. Despite recent efforts and successes in advancing the efficiency of UQ algorithms, the simultaneous combination of a large number of uncertain parameters (which often correlates with the complexity of the numerical/physical assumptions) and the lack of regularity of the system's response still represents a formidable challenge for UQ. One possible way of circumventing these difficulties is to rely on sampling-based approaches, which are generally robust, easy to implement, and possess a rate of convergence independent of the number of parameters. However, for many years the extreme computational cost of these methods prevented their widespread use for UQ in the context of high-fidelity simulations. More recently, several multilevel/multifidelity Monte Carlo strategies have been proposed to decrease the Monte Carlo cost without penalizing its accuracy. Several different versions of multifidelity methods exist, but they all share the same main idea: whenever a set/cloud/sequence of system realizations with varying accuracy can be obtained, it is often more efficient to fuse data coming from all of them instead of relying on the highest-fidelity model only.
In this talk we summarize our recent efforts in investigating novel ways of increasing the efficiency of these multifidelity approaches. We will provide several theoretical and numerical results and we will discuss a collection of numerical examples ranging from simple analytical/verification test cases to more complex and realistic engineering systems. 
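The shared idea behind these methods can be sketched with a two-fidelity control-variate Monte Carlo estimator (a special case with control coefficient 1); the models f_hi and f_lo below are hypothetical stand-ins for an expensive simulation and a cheap correlated surrogate.

```python
import random

def mf_estimate(f_hi, f_lo, n_hi, n_lo, rng):
    """Two-fidelity control-variate estimate of E[f_hi(X)], X ~ U(0, 1):
    many cheap low-fidelity samples plus a small paired correction set."""
    # Cheap, low-variance estimate of E[f_lo] using many samples
    lo_mean = sum(f_lo(rng.random()) for _ in range(n_lo)) / n_lo
    # Few paired high/low evaluations estimate the discrepancy E[f_hi - f_lo]
    xs = [rng.random() for _ in range(n_hi)]
    corr = sum(f_hi(x) - f_lo(x) for x in xs) / n_hi
    return lo_mean + corr

rng = random.Random(0)
f_hi = lambda x: x * x                     # "expensive" model, E[f_hi] = 1/3
f_lo = lambda x: x * x + 0.05 * (x - 0.5)  # correlated cheap surrogate
est = mf_estimate(f_hi, f_lo, n_hi=50, n_lo=20000, rng=rng)
```

Because f_hi - f_lo has much smaller variance than f_hi itself, only 50 expensive evaluations are needed here, while the bulk of the sampling is done with the cheap model.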
The COVID-19 epidemic in Italy: a modeling perspective (or What I Did in the Two-Month Quarantine)

Combustion Engines Are Not Dead Yet: Future of Power and Transportation
Despite the widespread perception of battery-electric and fuel-cell vehicles as the future of transportation, there are many reasons to believe that an “all electric vehicles” scenario is not only unrealistic but also undesirable. The presentation will offer an objective assessment of the future transportation portfolio and of the role of advanced internal combustion engines running on conventional and alternative fuels. In particular, an objective well-to-wheel life cycle assessment for various competing vehicle technologies will be presented, through which it will become clear that advanced high-efficiency internal combustion engines running on carbon-neutral liquid fuels are the most feasible future direction for transportation at scale. Overviews of relevant ongoing research activities will be given, and their opportunities and challenges will be addressed.
Development of reduced-order models based on dimensionality reduction, classification and regression for reacting flow applications
In this second part, the reduced representations are used to derive reduced-order models, in combination with typical ML-based tasks such as classification and regression. Examples of applications of these ROMs are provided in the context of Large Eddy Simulations of turbulent reacting flows, as well as for the development of digital twins of combustion assets.
Simple Flows Using a Second Order Theory of Fluids

Dimensionality reduction and feature extraction from high-fidelity combustion data
The use of machine learning algorithms to predict the behavior of complex systems is booming. However, the key to an effective use of machine learning tools in multi-physics problems, including combustion, is to couple them with physical and computer models, to embody in them all the prior knowledge and physical constraints that can enhance their performance, and to improve them based on the feedback coming from validation experiments. In other words, we need to adapt the scientific method to bring machine learning into the picture and make the best use of the massive amount of data we have produced thanks to the advances in numerical computing. The webinars review some of the open opportunities for the application of data-driven, reduced-order modelling to combustion systems. In particular, the first webinar focuses on dimensionality reduction in the context of reacting flow applications. Different approaches (based on modal decomposition, neural networks, kernel methods, ...) are presented and compared on the basis of their ability to identify low-dimensional manifolds and provide relevant features.
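A minimal sketch of the modal-decomposition idea: the leading principal component of a toy 2-D dataset, extracted by power iteration on the sample covariance matrix. The data are synthetic; real PCA/POD on flow snapshots operates on far larger matrices, but the principle of finding the dominant low-dimensional direction is the same.

```python
import math
import random

def leading_pc(data, iters=200):
    """Leading principal component of 2-D samples via power iteration
    on the 2x2 sample covariance matrix."""
    n = len(data)
    mx = sum(p[0] for p in data) / n
    my = sum(p[1] for p in data) / n
    cxx = sum((p[0] - mx) ** 2 for p in data) / n
    cyy = sum((p[1] - my) ** 2 for p in data) / n
    cxy = sum((p[0] - mx) * (p[1] - my) for p in data) / n
    v = (1.0, 0.0)
    for _ in range(iters):
        # multiply by the covariance matrix, then renormalize
        w = (cxx * v[0] + cxy * v[1], cxy * v[0] + cyy * v[1])
        norm = math.hypot(w[0], w[1])
        v = (w[0] / norm, w[1] / norm)
    return v

rng = random.Random(1)
# Toy snapshots scattered along y = 2x: the dominant mode should align with it.
data = [(t, 2.0 * t + 0.01 * rng.gauss(0.0, 1.0))
        for t in [rng.uniform(-1.0, 1.0) for _ in range(500)]]
v = leading_pc(data)
```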
Physics-Informed Neural Networks for Optimal Control
Physics-Informed Neural Networks (PINN) refer to a recently defined class of machine learning algorithms where the learning process, for both regression and classification tasks, is constrained to satisfy differential equations derived by the straightforward application of known physical laws. Indeed, Deep Neural Networks (DNN) have been successfully employed to solve a variety of ODEs and PDEs arising in fluid mechanics and quantum mechanics, just to mention a few. Optimal control problems, i.e. finding a feasible control that minimizes a cost functional while satisfying physical, state and control constraints, are generally difficult to solve, and one may need to resort to specialized numerical methods. The application of the Pontryagin minimum principle generates a complex two-point boundary value problem that is very sensitive to the initial guess (the “curse of complexity”). The application of dynamic programming principles generates a high-dimensional PDE, the Hamilton-Jacobi-Bellman equation (the “curse of dimensionality”). In this talk we show that PINNs can be employed to solve optimal control problems by tackling their solution using deep and/or shallow NNs. We show that such methods can be coupled with the Theory of Functional Connections (TFC, by Mortari et al.) to create numerical frameworks that generate efficient and accurate solutions with potential for real-time applications.
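The core PINN idea, penalizing the residual of the differential equation at collocation points, can be sketched without automatic differentiation by using a model that is linear in its parameters, for which the residual minimization reduces to least squares. The ODE y' = -y with y(0) = 1, the polynomial basis, and the constraint weight below are illustrative assumptions, not the seminar's networks.

```python
import math

def solve(A, b):
    """Small dense linear solve (Gaussian elimination with partial pivoting)."""
    n = len(A)
    A = [row[:] for row in A]
    b = list(b)
    for k in range(n):
        p = max(range(k, n), key=lambda r: abs(A[r][k]))
        A[k], A[p], b[k], b[p] = A[p], A[k], b[p], b[k]
        for i in range(k + 1, n):
            f = A[i][k] / A[k][k]
            for j in range(k, n):
                A[i][j] -= f * A[k][j]
            b[i] -= f * b[k]
    x = [0.0] * n
    for i in range(n - 1, -1, -1):
        x[i] = (b[i] - sum(A[i][j] * x[j] for j in range(i + 1, n))) / A[i][i]
    return x

# Physics-informed collocation for y' = -y, y(0) = 1, with a degree-5
# polynomial y(x) = sum_k w_k x^k standing in for the network.  Because the
# model is linear in w, minimizing the squared ODE residual at collocation
# points is an ordinary least-squares problem.
deg = 5
xs = [i / 19 for i in range(20)]
rows, rhs = [], []
for x in xs:  # residual rows: y'(x) + y(x) = 0
    rows.append([(k * x ** (k - 1) + x ** k) if k > 0 else 1.0
                 for k in range(deg + 1)])
    rhs.append(0.0)
rows.append([10.0] + [0.0] * deg)  # initial condition y(0) = 1, weighted
rhs.append(10.0)

m = deg + 1  # normal equations A^T A w = A^T b
AtA = [[sum(r[i] * r[j] for r in rows) for j in range(m)] for i in range(m)]
Atb = [sum(r[i] * v for r, v in zip(rows, rhs)) for i in range(m)]
w = solve(AtA, Atb)
y = lambda x: sum(w[k] * x ** k for k in range(m))
```

A genuine PINN replaces the polynomial with a deep network and the least-squares solve with gradient descent on the same residual loss; the structure of the loss is identical.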
Lattice models arising from neural networks and their long term dynamics
Lattice systems arising from neural networks, referred to as neural lattice models, have attracted much attention recently. They can be broadly classified into two types: one developed as the discretization of continuous neural field models, namely neural field lattice systems, and the other as the limit of finite-dimensional discrete neural networks when their sizes become increasingly large. In the lecture we will introduce a few interesting neural lattice models and investigate their long term dynamics. These dynamics provide insight into the stability of large neural networks, as well as justification of finite-dimensional approximations for numerical simulations of such networks.
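A toy neural lattice, assuming a simple ring coupling and a tanh activation (an illustrative construction, not one of the lecture's models), shows the kind of long-term behavior in question: for weak coupling the dynamics contract to the rest state.

```python
import math

def simulate(n=20, steps=4000, dt=0.01, c=0.2):
    """Explicit Euler integration of the ring lattice
    u_i' = -u_i + (c/2) * (tanh(u_{i-1}) + tanh(u_{i+1})).
    Since |tanh(x)| <= |x|, for weak coupling c < 1 the zero state
    is globally attracting."""
    u = [math.sin(2.0 * math.pi * i / n) for i in range(n)]  # arbitrary start
    for _ in range(steps):
        f = [math.tanh(x) for x in u]
        u = [x + dt * (-x + 0.5 * c * (f[i - 1] + f[(i + 1) % n]))
             for i, x in enumerate(u)]
    return u

u_final = simulate()
```

With stronger coupling or external inputs, the same lattice can instead sustain nontrivial attractors, which is precisely the long-term dynamics the lecture analyzes rigorously.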
From Art to Science: the Flower Constellations
This year the Flower Constellations theory celebrates its 16th birthday. Many years were needed to fully understand its implications and to develop the theory. This new satellite constellation design tool is now ready for applications. The theory introduces a new class of space objects characterized by shape-preserving configurations, where the whole constellation behaves as a rigid object. By using a minimal parameterization (Hermite normal form), the 2D Lattice Flower Constellations include all spatially and temporally symmetric solutions, while the extension to the 3D Lattice allows designers to use any inclination when selecting elliptical orbits under J2 perturbation. Recently, the Necklace theory applied to 2D and 3D Flower Constellations exponentially increases the space of potential solutions while keeping the number of satellites and launches (costs) limited. The evolution of the mathematical theory is presented, showing some potential configurations to improve existing applications as well as configurations proposing new applications! The applications are many, including positioning, communication, radio occultation, interferometry, and surveillance systems. In particular, the Flower Constellations theory makes it possible to design conjunction-free constellations with many thousands of satellites, and a new class of orbits/constellations, called J2-propelled systems, where the Earth oblateness perturbation is exploited (rather than counteracted) to cover spatial volumes around the Earth to measure or monitor physical quantities.
Deep Learning Models for Space Guidance and Control
Autonomous exploration of small and large bodies of the solar system requires the development of a new class of intelligent systems capable of integrating streams of sensor data in real time and autonomously making optimal decisions. Over the past few years, there has been an explosion of machine learning techniques involving the use of deep neural networks to solve a variety of problems ranging from object detection to image recognition and natural language processing. The recent success of deep learning is due to the concurrent advancement of fundamental understanding of how to train deep architectures, the availability of large amounts of data, and critical advancements in computing power (use of GPUs). One can ask how such techniques can be employed to provide integrated and closed-loop solutions for space autonomy as well as Guidance, Navigation and Control (GNC). In this talk we discuss the fundamentals of deep reinforcement learning and meta-learning (“learning to learn”) and their application to GNC in a variety of scenarios relevant to space exploration.
One lecture on data assimilation, using actual data from the COVID-19 pandemic as an example.
The lecture introduces adjoint methods for fluid dynamics. The adjoint equations for the Navier-Stokes equations describe sensitivities of a certain quantity of interest (think of the drag of an airfoil, for example) to a certain input parameter (think of the profile of the airfoil). Classical sensitivity analysis would assume a profile and vary, for example, the camber thickness to calculate the sensitivity. One change results in one sensitivity. Instead of changing the thickness and probing the drag, the adjoint equations provide an equation to directly calculate the sensitivity for changing all surface points of the profile. The method is capable of calculating the sensitivities of millions of parameters in one step. This of course comes at a cost, but it immediately pays off when more than a handful of parameters are to be investigated. In this lecture, the method is introduced and an application to a particularly simple example, the optimisation of an epidemiological model for the COVID-19 pandemic, is presented to illustrate the method. Then applications to several more complicated fluid dynamical problems are discussed.
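The one-solve-for-all-sensitivities idea can be sketched on a linear toy problem: with state equation A u = b(p) and quantity of interest J = c·u, a single solve with the transposed operator yields dJ/dp_k for every parameter k at once. The 2x2 system and the parameter dependence below are invented purely for illustration.

```python
def solve2(A, b):
    """Solve a 2x2 linear system by Cramer's rule."""
    det = A[0][0] * A[1][1] - A[0][1] * A[1][0]
    return [(b[0] * A[1][1] - A[0][1] * b[1]) / det,
            (A[0][0] * b[1] - b[0] * A[1][0]) / det]

A = [[4.0, 1.0], [2.0, 3.0]]                          # "flow" operator
c = [1.0, 2.0]                                        # J(u) = c . u
b = lambda p: [p[0] + 2.0 * p[1], 3.0 * p[0] - p[1]]  # parameter-dependent source
p0 = [1.0, 2.0]

u = solve2(A, b(p0))                                  # one forward solve
J = c[0] * u[0] + c[1] * u[1]

At = [[A[0][0], A[1][0]], [A[0][1], A[1][1]]]         # transpose of A
lam = solve2(At, c)                                   # ONE adjoint solve
dbdp = [[1.0, 3.0], [2.0, -1.0]]                      # dbdp[k] = d b / d p_k
grad = [lam[0] * d[0] + lam[1] * d[1] for d in dbdp]  # dJ/dp_k = lam . db/dp_k
```

The cost of the full gradient is one extra (adjoint) solve, independent of the number of parameters; the finite-difference alternative would need one forward solve per parameter.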
Higher order numerical schemes and their convergence for random differential equations
Random ordinary differential equations (RODEs) are ordinary differential equations that include a stochastic process in their vector field. They seem to have had a shadow existence next to Itô stochastic ordinary differential equations (SODEs), but they have been around for as long, if not longer, and have many important applications. In the engineering and physics literature, a simpler kind of RODE is investigated, with the vector field being chosen randomly rather than depending on a stochastic process. Unlike SODEs, RODEs can be analyzed pathwise with deterministic calculus, but they require further treatment beyond that of classical ODE theory. The sample paths of RODEs may be just Hölder continuous and not even differentiable, and thus classical Taylor expansions do not apply. In this lecture we will introduce special Taylor expansions for RODEs and use them to derive higher order numerical schemes. A convergence analysis will also be provided.
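A minimal sketch of the pathwise viewpoint, with an illustrative scalar RODE driven by a sampled Wiener path. The classical Euler scheme shown here is exactly the kind of method whose convergence order suffers from the low regularity of the driving path, which the lecture's RODE-Taylor schemes are designed to improve.

```python
import math
import random

def euler_rode(T=1.0, n=1000, seed=42):
    """Pathwise explicit Euler for the RODE  x'(t) = -x(t) + sin(W_t),
    where W_t is a Wiener path sampled on the time grid.  For each fixed
    path, this is just a deterministic ODE solve."""
    rng = random.Random(seed)
    dt = T / n
    x, w = 1.0, 0.0
    for _ in range(n):
        x = x + dt * (-x + math.sin(w))     # Euler step with the noise frozen
        w += rng.gauss(0.0, math.sqrt(dt))  # advance the driving path
    return x

x_end = euler_rode()
```

Since |sin| <= 1, the scheme keeps |x| <= 1 for this dissipative right-hand side, and fixing the seed makes each sample path (and hence the numerical solution) reproducible.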
Direct Numerical Simulation of Engine Knock
Knocking in internal combustion engines is a commonly encountered phenomenon but fundamental understanding of the underlying physical mechanism is lacking. In the present study, direct numerical simulations (DNS) of reactive mixtures with temperature and composition fluctuations are conducted in order to provide insights into the auto-ignition and subsequent development of knock and detonation. High order discretization schemes with shock-capturing algorithms allow an accurate realization of the flame-acoustic interaction and the evolution of detonation accompanied by high pressure peaks. Parametric studies using one- and two-dimensional DNS at engine-like conditions allow a systematic characterization of the onset of knock in terms of key effects such as bulk mixture conditions and heat release rate. The original theory by Bradley and coworkers is revised to properly predict the onset of detonation and further validated by simulation and experimental data.
Applications of Functional Interpolation to Optimization
This lecture summarizes what the Theory of Functional Connections (TFC) is and presents its most important applications to date. The TFC performs analytical functional interpolation: it makes it possible to derive analytical expressions with embedded constraints, i.e. expressions describing all possible functions satisfying a set of constraints. These expressions are derived for a wide class of constraints, including point and derivative constraints, relative constraints, linear combinations of constraints, component constraints, and integral constraints. An immediate impact of TFC is on constrained optimization problems, as the whole search space is reduced to just the space of solutions fully satisfying the constraints. This way, a large set of constrained optimization problems is turned into unconstrained problems, allowing simpler, faster, and more accurate methods to solve them. For instance, TFC makes it possible to obtain fast solutions of linear and nonlinear ordinary differential equations that are accurate to machine precision. TFC has been extended to n dimensions (Multivariate TFC), which makes it possible to derive efficient methods to solve partial and stochastic differential equations. This lecture also provides some examples in aerospace applications, such as accurate perturbed orbit propagation, the perturbed Lambert problem, energy-efficient docking, and energy- (or fuel-) efficient optimal landing on large bodies.
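The heart of TFC, an expression that satisfies the constraints for any choice of free function, can be sketched in a few lines for two point constraints; the linear switching functions below are the simplest valid choice, and TFC generalizes them considerably.

```python
import math

def constrained_expression(g, x0, y0, x1, y1):
    """TFC-style constrained expression for the point constraints
    y(x0) = y0 and y(x1) = y1: for ANY free function g, the returned
    function satisfies both constraints exactly (analytically)."""
    def y(x):
        phi0 = (x1 - x) / (x1 - x0)  # switching function: 1 at x0, 0 at x1
        phi1 = (x - x0) / (x1 - x0)  # switching function: 0 at x0, 1 at x1
        return g(x) + phi0 * (y0 - g(x0)) + phi1 * (y1 - g(x1))
    return y

# Any free function works; the constraints are embedded analytically.
y = constrained_expression(math.sin, 0.0, 1.0, 2.0, 5.0)
```

An optimizer (or an ODE solver) can then search over the free function g without ever being able to violate the constraints, which is exactly how TFC turns constrained problems into unconstrained ones.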


Problems in Turbulence Research
Premixed Turbulent Flames at High Reynolds Number in Spatially Evolving Slot Bunsen Configuration
Methane/air premixed and partially premixed turbulent flames at high Reynolds number are characterized using Direct Numerical Simulations (DNS) with detailed chemistry in a spatially evolving slot Bunsen configuration. Four simulations are performed at increasing Reynolds number, up to 22400 (defined based on the bulk velocity, slot width, and the reactants' properties), using 22 billion grid points, making this one of the largest simulations in turbulent combustion. The study covers different aspects of flame-turbulence interaction. It is found that the thickness of the reaction zone is similar to that of a laminar flame, while the preheat zone has a lower mean temperature gradient, indicating flame thickening. The characteristic length scales of turbulence are investigated and the effect of the Reynolds number on these quantities is assessed. The tangential rate of strain is responsible for the production of flame surface in the mean, while surface destruction is due to the curvature term. To perform these simulations, a few preliminary steps were required: (i) a skeletal mechanism was developed by reducing GRI-3.0; (ii) a convergence study was performed to select the proper spatial and temporal discretization; and (iii) simulations of fully developed turbulent channel flows were performed to generate the inlet conditions of the jet.
Theory of optimal dynamical systems, optimal spectrum and applications to fluid dynamics
Flow control with moving wavy wall
Introduction to Reinforcement Learning
Reinforcement Learning is an area of Machine Learning that aims at providing an agent with optimal decision abilities based on experience, given in terms of future rewards. Headline-making successes in applications such as Atari games, Go, and Poker, where software defeated the best human players, significantly increased public attention and scientific research around this topic. In this seminar, we will discuss the main ideas, challenges, algorithms and open problems related to (deep) reinforcement learning. Specifically, we will focus on real examples and we will provide practical hints and solutions for applying RL to multiple problems, ranging from video games to robot control.
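As a concrete, minimal instance of these ideas, here is tabular Q-learning on a toy corridor environment; the environment, reward structure, and hyperparameters are invented for illustration and are far simpler than the deep-RL settings discussed in the seminar.

```python
import random

def q_learning(n_states=5, episodes=500, alpha=0.5, gamma=0.9, eps=0.2, seed=0):
    """Tabular Q-learning on a 1-D corridor: states 0..n-1, actions
    0 = left, 1 = right; reward 1 only on reaching the rightmost state."""
    rng = random.Random(seed)
    Q = [[0.0, 0.0] for _ in range(n_states)]
    for _ in range(episodes):
        s = 0
        while s != n_states - 1:
            # epsilon-greedy action selection
            a = rng.randrange(2) if rng.random() < eps else (0 if Q[s][0] > Q[s][1] else 1)
            s2 = max(0, s - 1) if a == 0 else s + 1
            r = 1.0 if s2 == n_states - 1 else 0.0
            # temporal-difference update toward r + gamma * max_a' Q(s', a')
            Q[s][a] += alpha * (r + gamma * max(Q[s2]) - Q[s][a])
            s = s2
    return Q

Q = q_learning()
policy = [0 if q[0] > q[1] else 1 for q in Q[:-1]]  # greedy policy per state
```

After training, the greedy policy moves right in every state, and the learned values approximate the discounted optimal returns gamma^(distance to goal - 1).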
Introduction to Pattern Recognition and Data Mining Techniques for Supervised and Unsupervised Learning Problems
This seminar aims at introducing theoretical, technical and practical aspects of the design and development of machine learning systems for the analysis of signals, measurements and, more generally, big data, based on Computational Intelligence techniques such as Bayesian learning, neural networks, fuzzy logic, evolutionary computation, etc. The focus will be on engineering applications for the solution of both supervised and unsupervised learning problems, in particular concerning optimization, approximation, regression, prediction, interpolation, filtering, clustering and classification. The main topics of the seminar will be: introduction to machine learning and data-driven modeling problems (inference, regularization, overfitting, structural optimization); data analysis and conditioning (feature extraction, denoising, detrending, normalization); supervised and unsupervised learning problems; fundamentals of clustering algorithms; fundamentals of classification algorithms; fundamentals of regression models.
Secrets of fish swimming
Firstly, the vortex dynamics principles of direction control, speed control and energy saving in fish schools are discussed in this lecture. A parallel Computational Fluid Dynamics (CFD) package for two- and three-dimensional moving boundary problems has been developed, which combines the adaptive multi-grid finite volume method with the methods of immersed boundary and VOF (Volume of Fluid). With the CFD package, it is found that, due to the interactions of vortices in the wakes, a fish swimming in a school with a fixed flapping rule cannot, without proper control, keep the right position in the queue. In order to understand the secret of fish swimming, a new control strategy of fish motion is proposed for the first time: the locomotion speed is adjusted by the flapping frequency of the caudal fin, and the direction of swimming is controlled by the swinging of the head of the fish. The vortex dynamics principle of direction control is to generate vortices in favor of turning while preventing harmful separation from the head. The vortex dynamics principle of speed control in a fish school is that the merger of vortices with the same sign can speed up the fish. The vortex dynamics principle of energy saving in the swimming of a fish school is that, when using the favorable vortical wake in front or to the side, the flapping frequency of the fish can be reduced by up to 18.35%, thereby saving energy. In addition, the new control strategy, which separates speed control from direction control, is important for the construction of biomimetic robot fish, as it will greatly simplify their control devices. Secondly, the topology optimization method is applied to a caudal fin for the first time, showing that with a topology-optimized caudal fin a fish can swim more easily, faster and more flexibly. Key words: self-propelled swimming; control strategy of fish swimming; direction control; locomotion speed control; topology optimization of a caudal fin.
Numerical simulations of moving boundaries problems

© Università degli Studi di Roma "La Sapienza" - Piazzale Aldo Moro 5, 00185 Roma