Remote participants can follow the talks via Zoom. The Zoom link will be shared with registered participants before the start of the workshop.

The full programme can be found here in PDF format, or see below:

Sunday 12-6


18:30 - 20:30 | Welcome Reception

Grand Cafe Soiron, Vrijthof 18, 6211 LD Maastricht; https://goo.gl/maps/ni4SQoL8CFbAqSe59

Monday 13-6


8:45 - 9:10 | Registration and Coffee

Ad Fundum

9:10 - 9:15 | Welcome and Opening

A1.22

9:15 - 11:00 | Session 1: Networks

Room: A1.22, Chair: Stephan Smeekes

Matteo Barigozzi (University of Bologna)

FNETS: Factor-adjusted network estimation and forecasting for high-dimensional time series

We propose fnets, a methodology for network estimation and forecasting of high-dimensional time series exhibiting strong serial- and cross-sectional correlations. We operate under a factor-adjusted vector autoregressive (VAR) model where, after controlling for common factors accounting for pervasive co-movements of the variables, the remaining idiosyncratic dependence between the variables is modelled by a sparse VAR process. Network estimation of fnets consists of three steps: (i) factor-adjustment via dynamic principal component analysis, (ii) estimation of the parameters of the latent VAR process by means of ℓ1-regularised Yule-Walker estimators, and (iii) estimation of partial correlation and long-run partial correlation matrices. In doing so, we learn three networks underpinning the latent VAR process, namely a directed network representing the Granger causal linkages between the variables, an undirected one embedding their contemporaneous relationships and finally, an undirected network that summarises both lead-lag and contemporaneous linkages. In addition, fnets provides a suite of methods for separately forecasting the factor-driven and the VAR processes. Under general conditions permitting heavy tails and weak factors, we derive the consistency of fnets in both network estimation and forecasting. Simulation studies and real data applications confirm the good performance of fnets.

Luca Margaritella (Lund University)

Inference in Non-stationary High-Dimensional VARs

We use the lag-augmentation idea of Toda and Yamamoto (1995) and build an inferential procedure to test for Granger causality in high-dimensional unit-root non-stationary vector autoregressive (VAR) models. We prove that we can restrict the augmentation to only the variables of interest for the testing, thereby reducing the loss of power coming from the misspecification of the model. By means of a post-double selection procedure where we use the lasso to reduce the parameter space, we are able to partial out the effect of nuisance parameters and establish uniform asymptotics. We apply our procedure to the untransformed FRED-MD dataset to investigate the main macroeconomic drivers of inflation.

Graziano Moramarco (University of Bologna)

A Factor-Augmented Autoregression For Multilayer Networks

We propose a vector autoregressive (VAR) model for time series with complex network structures. The coefficients of the VAR reflect many different types of connections between economic agents (“multilayer network”), which are summarized into a smaller number of network matrices (“network factors”) through a novel tensor-based principal component approach. We provide consistency results for the estimation of the factors and the coefficients of the factor-augmented multilayer network VAR. Our approach combines two different dimension-reduction techniques and can be applied to ultra-high dimensional datasets. In an empirical application, we use it to investigate the cross-country interdependence of GDP growth rates based on a variety of international trade and financial linkages. The model provides a rich characterization of macroeconomic network effects and exhibits good forecast performance compared to popular dimension-reduction methods.


11:00 - 11:15 | Coffee Break


11:15 - 12:15 | Session 2: Bayesian Analysis

Room: A1.22, Chair: Nalan Basturk

Barbara Guardabascio (University of Perugia)

The Time-Varying Multivariate Autoregressive Index Model

Many economic variables feature changes in their conditional mean and volatility, and Time-Varying Vector Autoregressive Models are often used to handle such complexity in the data. Unfortunately, when the number of series grows, they present increasing estimation and interpretation problems. This paper addresses this issue by proposing a new Multivariate Autoregressive Index model that features time-varying means and volatility. Technically, we develop a new estimation methodology that mixes switching algorithms with the forgetting-factor strategy of Koop and Korobilis (2012). This substantially reduces the computational burden and allows one to select or weight, in real time, the number of common components and other features of the data using Dynamic Model Selection or Dynamic Model Averaging without further computational cost. Using US macroeconomic data, we provide a structural analysis and a forecasting exercise that demonstrate the feasibility and usefulness of this new model.

Daniele Bianchi (Queen Mary, University of London)

Sparse multivariate modeling for stock returns predictability

We develop a new variational Bayes estimation method for large-scale multivariate predictive regressions. Our approach allows us to elicit permutation-invariant shrinkage priors directly on the regression coefficients matrix rather than on a Cholesky-based linear transformation, as typically implemented in existing MCMC and variational Bayes approaches. Both a simulation and an empirical study on the cross-industry predictability of equity risk premiums in the US show that by directly shrinking weak industry inter-dependencies one can substantially improve both the statistical and economic out-of-sample performance of multivariate regression models compared to a naive recursive mean forecast. This holds across alternative continuous shrinkage priors, such as the adaptive Bayesian lasso, the adaptive normal-gamma and the horseshoe.


12:15 - 13:15 | Lunch

Ad Fundum

13:15 - 14:45 | Session 3: Statistical Learning

Room: A1.22, Chair: Jakob Raymaekers

Marcelo Medeiros (Pontifical Catholic University of Rio de Janeiro)

Global Inflation: What do we learn from a large dataset and machine learning methods?

Forecasting inflation is an important and difficult task. Most of the papers usually focus on a single or a small set of countries. In this paper, we consider the problem of simultaneously forecasting inflation from a large panel of countries. Our strategy is to explore potential (nonlinear) links among countries, and we do not rely on any additional variables apart from inflation and deterministic components, such as seasonal dummies.

Takashi Yamagata (University of York)

Discovering the network Granger causality in large vector autoregressive models

We propose a new method of discovering the network Granger causality based on multiple tests in large-scale vector autoregressive (VAR) models. The procedures are designed to control the false discovery rate (FDR) of each element in the coefficient matrices in the large VAR models. The first framework is based on the multiple testing using the limiting normal distributions of t-statistics constructed by the debiased lasso estimator. The second procedure is based on the bootstrap. Their statistical properties for FDR control and power guarantees are thoroughly investigated. Through Monte Carlo simulations and real data analysis, we confirm that the bootstrap procedure works very well.


14:45 - 15:00 | Poster Pitches

A1.22

15:00 - 16:30 | Poster Session (with Coffee)

Ad Fundum

Robert Adamek (Maastricht University)

Local Projection Inference in High Dimensions

In this paper we consider the problem of estimating impulse responses by local projections in a high-dimensional setting. We propose to use the desparsified (de-biased) lasso to estimate these local projections, and introduce a modification to this method where the response parameter of interest is not penalized in the initial lasso step. We first derive the asymptotic distribution of this estimator in a general high-dimensional time series setting, and show how it can be used for uniformly valid inference on estimated impulse responses. We then perform a simulation study demonstrating the small sample performance of this modified estimator, and use it in empirical applications to estimate impulse responses to a shock in monetary policy, and a shock in government spending.

Mario Enrique Arrieta-Prieto (Universidad Nacional de Colombia)

Selection of a Linear Combination of Dynamic Common Factors as a Coincident Index: an Application to the Case of the Colombian Economy

Coincident indices are of vital importance for the macroeconomic short- and long-term planning strategies of a country or a region. The use of statistical techniques in their development, such as Dynamic Common Factors (DCFs), allows one to obtain efficient, yet reliable, estimation procedures. The main goal of this work is to propose a general methodology to create a coincident index based on linear combinations of DCFs, and to test it on simulated scenarios and on a case study for the Colombian economy. A complete methodological approach to produce point estimates, confidence regions, and to test hypotheses is presented. Moreover, the results show how promising this new proposal is with respect to previous achievements in the scientific community.

Giorgia De Nora (Queen Mary, University of London)

Factor Augmented Vector-Autoregression with narrative identification. An application to monetary policy in the US

I extend the Bayesian Factor-Augmented Vector Autoregressive model (FAVAR) to incorporate an identification scheme based on an exogenous variable approach. A Gibbs sampling algorithm is provided to estimate the posterior distributions of the model's parameters. I estimate the effects of a monetary policy shock in the United States using the proposed algorithm, and find that an increase in the Federal Funds Rate has contractionary effects on both the real and financial sides of the economy. Furthermore, the paper suggests that data-rich models play an important role in mitigating price and real economic puzzles in the estimated impulse responses as well as the discrepancies among the impulse responses obtained with different monetary policy instruments.

Leonardo N. Ferreira (Queen Mary, University of London)

Forecasting with VAR-teXt and DFM-teXt models: exploring the predictive power of central bank communication

This paper explores the complementarity between traditional econometrics and machine learning and applies the resulting model – the VAR-teXt – to central bank communication. The VAR-teXt is a vector autoregressive (VAR) model augmented with information retrieved from text, turned into quantitative data via a Latent Dirichlet Allocation (LDA) model, whereby the number of topics (or textual factors) is chosen based on their predictive performance. A Markov chain Monte Carlo (MCMC) sampling algorithm for the estimation of the VAR-teXt that takes into account the fact that the textual factors are estimates is also provided. The approach is then extended to dynamic factor models (DFM) generating the DFM-teXt. Results show that textual factors based on Federal Open Market Committee (FOMC) statements are indeed useful for forecasting.

Miguel Herculano (University of Nottingham)

Investor Sentiment and Global Economic Conditions

Investor Sentiment is measured at both global and local levels as the common component of pricing errors investors make when valuing stocks. Investor sentiment and macroeconomic factors are jointly modelled within a hierarchical dynamic factor model allowing for time-varying parameters and stochastic volatility. We extend existing methods to enable estimation of the model with the prescribed hierarchy which permits a cross-country analysis. Our approach allows us to control for macroeconomic conditions that may contaminate investor sentiment indices. We find that global investor sentiment is a key driving force behind domestic sentiment and global economic conditions.

Sebastian Kühnert

On Estimating Operators of Functional Time Series

In functional time series analysis one is often interested in estimating the parameters of an assumed model, where such parameters are often linear operators. This talk broadly outlines the estimation procedure for operators of well-known types of processes in arbitrary separable Hilbert spaces under mild weak dependence conditions. The estimators are obtained by using dimension reduction and certain Yule-Walker equations. Further, to derive the asymptotic behaviour of the estimation errors, and owing to the definitions of these equations, Tikhonov regularization comes into play, and certain lagged (cross-)covariance operators are also estimated.

Miguel Ángel Ruiz Reina (University of Malaga)

Reduction Dimensional Time Series: Clustering Entropy for Seasonal Data

In this research, a new dimension-reduction method for time series, built on Information Theory clusters, is developed for seasonal analysis. The new automatic cluster method classifies information based on Shannon entropy. The internal verification guarantees that the clusters present space-time similarities. The unsupervised cluster method automatically adjusts for the number of spatial and temporal observations under study. Our empirical approach allows us to measure accommodation decisions for multichoice tourists who visit Spain and decide their accommodations. The results show that the clusters are dynamic and fit the analysed data set.

Hugo Schyns (Maastricht University)

A Neural Network with Shared Dynamics for Multi-Step Prediction of Value-at-Risk and Volatility

We develop a Long Short-Term Memory (LSTM) neural network for the joint prediction of volatility, realized volatility and Value-at-Risk. Regularization by means of pooling the dynamic structure for the different outputs of the model is shown to be a powerful method for improving forecasts and smoothing Value-at-Risk (VaR) estimates. The method is applied to daily and high-frequency returns of the S&P500 index over a period of 25 years.

Li Sun (University of Liege)

Non-stationary variable selection in time-varying extreme value regression models

We introduce a time-varying peaks-over-threshold regression model that finds its use in the study of the severity distribution of extreme market crashes in changing economic conditions. The proposed model allows for stationary and local-to-zero unit-root predictors, as well as an autoregressive structure in the scale parameter. We prove the consistency and derive the asymptotic distribution of the maximum likelihood estimator (MLE) under this setting. To select relevant candidate explanatory variables that are non-stationary, we provide a modified adaptive L1-regularized MLE that achieves the oracle property. With this method, we perform model selection and estimation as if the true underlying process were given. We show in an extensive simulation study the good finite sample properties of the proposed method and its superiority over several alternatives.


16:30 - 18:00 | Session 4: Sparsity

Room: A1.22, Chair: Gianluca Cubadda

Weining Wang (University of York)

Learning Network with Focally Sparse Structure

This paper studies network connectedness with focally sparse structure. We uncover the network effect with a flexible sparse deviation from a predetermined adjacency matrix. More specifically, the sparse deviation structure can be regarded as latent or as misspecified linkages to be estimated. To obtain high-quality estimators for the parameters of interest, we propose using a double-regularized, high-dimensional generalized method of moments (GMM) framework. Moreover, this framework also enables us to conduct inference on the parameters. Theoretical results on consistency and asymptotic normality are provided, while accounting for general spatial and temporal dependency of the underlying data-generating processes. Simulations demonstrate good performance of our proposed procedure. Finally, we apply the methodology to study the spatial network effect of stock returns.

Sumanta Basu (Cornell University)

Frequency-domain graphical models for multivariate time series

Graphical models offer a powerful framework to capture intertemporal and contemporaneous relationships among the components of a multivariate time series. For stationary time series, these relationships are encoded in the multivariate spectral density matrix and its inverse. We will present adaptive thresholding and penalization methods for estimation of these objects under suitable sparsity assumptions. We will discuss new optimization algorithms and investigate consistency of estimation under a double-asymptotic regime where the dimension of the time series increases with sample size. If time permits, we will introduce a frequency-domain graphical modeling framework for multivariate nonstationary time series that captures a new property called conditional stationarity.


19:00 - 22:00 | Dinner

Restaurant Petit Bonheur, Achter de Molens 2, 6211 JC Maastricht; https://goo.gl/maps/9RSxe7wEEbnPTZfc6

Tuesday 14-6


9:00 - 10:30 | Session 5: Factor Models

Room: A1.22, Chair: Alain Hecq

Gianluca Cubadda (Università di Roma “Tor Vergata”)

The Vector Error Correction Index Model: Representation and Statistical Inference

This paper extends the multivariate index autoregressive model by Reinsel (1983, Biometrika) to the case of cointegrated time series of order (1,1). In this new modelling, which we call the Vector Error-Correction Index Model (VECIM), the first differences of cointegrated time series are driven by some linear combinations of the variables that are labelled as the indexes. When the number of indexes is small compared to the sample size, the VECIM achieves a significant dimension reduction w.r.t. the classical Vector Error Correction Model (VECM), thus allowing one to analyze cointegration even in medium-sized vector autoregressive models, a setting where maximum likelihood inference for the VECM does not work well. We show that the indexes follow a VECM of smaller dimension than the number of series, that the VECIM allows one to decompose the reduced form shocks into sets of common and uncommon shocks, and that the former can be further decomposed into permanent and transitory shocks. Moreover, we offer a switching algorithm to optimally estimate the parameters of the VECIM. Finally, we document the practical value of the proposed approach both by simulations and an empirical application.

Marco Avarucci (University of Glasgow)

The Main Business Cycle Shock(s): Frequency-Band Estimation of the Number of Dynamic Factors

We introduce a consistent estimator for the number of shocks driving dynamic macroeconomic models that can be expressed, in reduced form, as large dynamic factor models. The noticeable feature of our novel estimator is that it can be applied to single frequencies as well as to given frequency bands, making it perfectly suited to disentangle the shocks affecting the macroeconomy at the business cycle frequencies, in the long run, and at any frequency band of interest. Our estimator is able to estimate accurately the number of shocks driving DSGE models, exploiting an insight that tightly relates certain DSGE models to dynamic factor models. When applying our estimator to the FRED-QD US dataset, we demonstrate that the US macroeconomy is driven by two main shocks. One of them has the features of a demand shock and the other of a supply shock. The demand shock explains most of the cyclical fluctuations of the main macroeconomic aggregates, but is not disconnected from inflation.

Lorenzo Trapani (University of Nottingham)

High Dimensional Threshold Regression with Common Stochastic Trends

We study inference for threshold regression in the context of a large panel factor model with common stochastic trends. We develop a Least Squares estimator for the threshold level, deriving almost sure rates of convergence and proposing a novel, testing based, way of constructing confidence intervals. We also investigate the properties of the PC estimator for the loadings and common factors in both regimes, and develop a procedure to estimate the number of common trends in each regime. Our theoretical findings are corroborated through a comprehensive set of Monte Carlo experiments and an application to US mortality data.


10:30 - 10:45 | Coffee Break


10:45 - 12:15 | Session 6: Inference in High Dimensions and Panel Data

Room: A1.22, Chair: Ines Wilms

Anders Kock (University of Oxford)

Hypothesis Testing in High Dimensions

Siem Jan Koopman (VU Amsterdam)

Panel time series models with time-varying effects

We explore panel models where we allow the fixed effects to be time-varying. Dynamic panel data models are readily used for the modeling and analysis of panel data sets that are available over a sequence of different time periods. This and related approaches are appropriate when the time dimension is relatively small, that is for small T. However, they can become more challenging and less appropriate when the number of time periods increases. For such cases we explore standard panel models but with coefficients that are allowed to vary over time, in order to model the serial dependence structures in the panel data. In our proposed analysis, we treat the estimation of the cross-sectional and time series features simultaneously. These developments lead to a maximum likelihood procedure that incorporates the well-known fixed effects and random effects estimation methods. The proposed methodology relies on a specific linear state space formulation that allows for collapsed versions of the Kalman filter and smoother, rendering simple and computationally efficient approaches to estimation, signal extraction and forecasting. Illustrations are provided.


12:15 - 13:15 | Lunch

Ad Fundum
