Daniel K. Sewell (University of Iowa) - An introduction to the statistical analysis of network data
9-10 September - Aula VII (ex Castellano), 10:00-16:00 (with break)

Simone Russo - Disability pensions: a study of the incidence of disability in the working-age population and an analysis of its determinants through register data
5 June - Aula Master (Viale Regina Elena 295), 10:00

Enrico Tucci - Emigration from Italy through the integration and analysis of statistical surveys and official sources
5 June - Aula Master (Viale Regina Elena 295), 10:00

Statistical modelling of extreme values
16-17 April - Sala 34, 10:00-14:00

Overdispersed-Poisson Model in Claims Reserving: Closed Tool for One-Year Volatility in GLM Framework
29 March - Aula V, 14:15

Asset-liability management for occupational pension funds under market and longevity risk: a case study and alternative modelling approaches
22 March - Aula V, 15:00
The modelling of institutional ALM problems has a long history in stochastic programming, starting in the late 1980s with the first industry developments, such as the well-known Yasuda Kasai model (Ziemba, Turner, Carino et al., 1994), specifically for pension fund management (PF ALM). Due to economic and demographic pressures in most OECD countries, and an increasing interest in PF ALM developments from industry and policy makers, we now witness a growing demand for R&D projects addressed to the scientific community.

Taking the view of a PF manager, the presentation will develop around the definition of a generic pension fund (PF) asset-liability management (ALM) problem and analyse the key underlying methodological implications of: (i) its evolution from early multistage stochastic programming (MSP) with recourse to the most recent MSP and distributionally robust optimization (DRO) formulations; (ii) a peculiar and rich risk spectrum including market risk as well as liability risk, such as longevity risk and demographic factors, leading to (iii) valuation or pricing approaches based on incomplete-market assumptions and, due to recent international regulation, (iv) risk-based capital allocation for long-term solvency. These represent fundamental stochastic and mathematical problems of modern financial optimisation.

Two possible approaches to DRO are considered: one based on a stochastic control framework, the other obtained by explicitly introducing an uncertainty set for probability measures and formulating the inner DRO problem as a probability-distance minimization over a given space of measures.

Keywords: asset-liability management, multistage stochastic programming, distributional uncertainty, distributionally robust optimization, solvency ratio, liability pricing, longevity risk, capital allocation.
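The inner DRO problem described in the abstract can be sketched in generic form; the notation below (decision x, loss ℓ, ambiguity radius ε, nominal measure P̂) is illustrative and not the speaker's own:

```latex
% Generic distributionally robust formulation: minimize the worst-case
% expected loss over all measures Q within distance \varepsilon of a
% nominal (estimated) measure \hat{P}.
\min_{x \in X} \; \sup_{Q \in \mathcal{U}_{\varepsilon}(\hat{P})}
  \mathbb{E}_{Q}\!\left[ \ell(x, \xi) \right],
\qquad
\mathcal{U}_{\varepsilon}(\hat{P})
  = \bigl\{\, Q \in \mathcal{P} : d\bigl(Q, \hat{P}\bigr) \le \varepsilon \,\bigr\}.
```

Here d may be any probability distance (e.g. Wasserstein or a φ-divergence); the abstract's second approach corresponds to solving the inner supremum explicitly over the set of measures.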
Uncertainty and reproducibility in biomedical research
22 February - Sala 34, 11:00

New formulation of the logistic-normal process to analyze trajectory tracking data
28 January - Sala 34, 10:30
Improved communication systems, shrinking battery sizes and the falling price of tracking devices have led to an increasing availability of trajectory tracking data. These data are often analyzed to understand animal behavior using mixture-type models. In this work, we propose a new model based on the logistic-normal process. Thanks to a new formalization and to the way we specify the coregionalization matrix of the associated multivariate Gaussian process, we show that our model, unlike other proposals, is invariant with respect to the choice of the reference element and of the ordering of the components of the probability vectors. We estimate the model in a Bayesian framework, using an approximation of the Gaussian process needed to avoid impractical computational times. We perform a simulation study to show the ability of the model to recover the parameters used to simulate the data. The model is then applied to real data in which a wolf is observed before and after procreation. Results are easy to interpret, showing differences between the two phases. Joint work with: Enrico Bibbona (Politecnico di Torino), Clara Grazian (Università di Pescara), Sara Mancinelli (Università "Sapienza" di Roma)
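As a rough illustration of the invariance issue at stake (not the authors' code, and all names below are assumptions for the sketch): the classical additive log-ratio (ALR) construction of a logistic-normal vector depends on which component is chosen as the reference, whereas a symmetric softmax construction permutes the probabilities along with the components.

```python
import numpy as np

def softmax(x):
    """Map a real vector onto the probability simplex."""
    e = np.exp(x - x.max())          # subtract the max for numerical stability
    return e / e.sum()

rng = np.random.default_rng(0)
z = rng.normal(size=3)               # latent Gaussian vector

# Symmetric construction: softmax of the full vector. Permuting the latent
# components simply permutes the probabilities (order equivariance).
p = softmax(z)
perm = np.array([2, 0, 1])
print(np.allclose(softmax(z[perm]), p[perm]))   # True

# Classical ALR construction: fix one component (the "reference") at zero.
# A different reference gives a different point on the simplex, which is the
# dependence the new formulation is designed to remove.
p_last_ref = softmax(np.append(z[:2], 0.0))     # reference = 3rd component
p_first_ref = softmax(np.append(0.0, z[:2]))    # reference = 1st component
```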
How to select a sample?
27 November - Sala 34, 14:30
The principles of sampling can be summarized as randomization, restriction and over-representation. Defining a sample design – choosing stratification, equal or unequal selection probabilities, etc. – means using prior information, and is equivalent to assuming a model on the population. Several well-known sampling designs are optimal with respect to models that maximize the entropy. In the cube method, the prior information is used to derive a sample that matches the totals or means of auxiliary variables; in this respect, the sample is called balanced. Furthermore, if distances between statistical units – based on geographical coordinates or defined via auxiliary variables – are available, it can be worthwhile to spread the sample over space in order to make the design more efficient. In this perspective, new spatial sampling methods, such as GRTS, the local pivotal method and the local cube, will be covered.
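To give a concrete feel for the spatial methods mentioned, here is a minimal sketch of a local pivotal variant (close in spirit to LPM2): pairs of nearby units repeatedly "compete" for inclusion probability until every unit's probability reaches 0 or 1, so that selected units repel each other in space. This is an illustrative implementation under assumed names, not the lecturer's code; in practice one would use dedicated software such as the R package BalancedSampling.

```python
import numpy as np

def local_pivotal(pi, coords, rng=None):
    """Spatially balanced sampling sketch (local pivotal, LPM2-style).

    pi     : vector of inclusion probabilities (ideally summing to an integer)
    coords : (n, d) array of unit coordinates
    Returns a 0/1 sample-indicator vector.
    """
    rng = np.random.default_rng(rng)
    pi = np.asarray(pi, dtype=float).copy()
    active = lambda: np.where((pi > 1e-9) & (pi < 1 - 1e-9))[0]
    idx = active()
    while len(idx) > 1:
        i = rng.choice(idx)                      # random active unit
        others = idx[idx != i]                   # its nearest active neighbour
        j = others[np.argmin(np.linalg.norm(coords[others] - coords[i], axis=1))]
        s = pi[i] + pi[j]
        if s < 1:
            # one unit takes all the mass, the other drops to 0
            if rng.random() < pi[j] / s:
                pi[i], pi[j] = 0.0, s
            else:
                pi[i], pi[j] = s, 0.0
        else:
            # one unit is selected (prob. 1), the other keeps the remainder
            if rng.random() < (1 - pi[j]) / (2 - s):
                pi[i], pi[j] = 1.0, s - 1.0
            else:
                pi[i], pi[j] = s - 1.0, 1.0
        idx = active()
    return np.round(pi).astype(int)

# Example: 10 random locations, equal inclusion probabilities 0.3
# (expected sample size 3).
coords = np.random.default_rng(1).random((10, 2))
sample = local_pivotal(np.full(10, 0.3), coords, rng=1)
```

Because the pairwise updates preserve the total inclusion probability, the realized sample size equals the sum of the initial probabilities whenever that sum is an integer.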

© Università degli Studi di Roma "La Sapienza" - Piazzale Aldo Moro 5, 00185 Roma