Programme

The programme can be found here in PDF, or below:

Tuesday 14-6


12:15 - 13:25 | Registration and Lunch

Ad Fundum

13:25 - 13:30 | Opening and Welcome

A1.22

13:30 - 15:30 | Session 1: Statistical Analysis in the Ocean

Room: A1.22, Chair: Marina Friedrich

Erik van Sebille (Utrecht University)

Markov models for modelling patterns in the ocean

The ocean is in constant motion, with water circulating within and flowing between basins. As the water moves around, it carries heat and nutrients, as well as planktonic organisms and plastic litter around the globe. The most natural way to study the pathways of water and the connections between ocean basins is using trajectories, either from observations of drifters or computed by simulating virtual particles in fine-resolution ocean models. But interpreting this Lagrangian data and teasing out information on structures is not trivial and an active field of research. Here, I will show a few recent examples of how techniques from Set Oriented Numerics and Markov theory can be applied to dynamic ocean flows, and how these compare to techniques from e.g. machine learning. In particular I will raise the question of how to deal with non-stationary dynamics caused by climate change in these types of analysis.
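As a rough illustration of the set-oriented, Markov-chain view described above: partition the ocean into grid cells and estimate a transition (Ulam) matrix from particle positions observed a fixed time lag apart. The sketch below is not from the talk; the function name, grid layout and binning scheme are all illustrative assumptions.

```python
import numpy as np

def ulam_transition_matrix(lon0, lat0, lon1, lat1, lon_edges, lat_edges):
    """Estimate a Markov (Ulam) transition matrix from particle positions.

    lon0/lat0: particle positions at time t; lon1/lat1: positions at t + tau.
    The domain is discretized into grid cells; P[i, j] is the fraction of
    particles starting in cell i that end up in cell j after the lag tau.
    """
    nx, ny = len(lon_edges) - 1, len(lat_edges) - 1

    def cell_index(lon, lat):
        ix = np.digitize(lon, lon_edges) - 1
        iy = np.digitize(lat, lat_edges) - 1
        ok = (ix >= 0) & (ix < nx) & (iy >= 0) & (iy < ny)
        return np.where(ok, ix * ny + iy, -1)

    src, dst = cell_index(lon0, lat0), cell_index(lon1, lat1)
    keep = (src >= 0) & (dst >= 0)          # drop particles leaving the domain
    counts = np.zeros((nx * ny, nx * ny))
    np.add.at(counts, (src[keep], dst[keep]), 1.0)
    rows = counts.sum(axis=1, keepdims=True)
    return np.divide(counts, rows, out=np.zeros_like(counts), where=rows > 0)
```

Eigenvectors of the resulting matrix with eigenvalues close to one then point to almost-invariant regions of the flow, the standard set-oriented diagnostic.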

Bror Jonsson (Plymouth Marine Laboratory)

Dominant Timescales of Variability in Global Satellite Chl and SST revealed with a MOving Standard deviation Saturation (MOSS) approach

Marine ecosystems are defined not only by the general abundance of primary producers. In many regions of the ocean, primary production is punctuated by hotspots and blooms that exhibit high spatial and temporal variability in phytoplankton biomass. A classic example of how periodic changes in primary production have profound effects on the food web and the export of carbon is the annual North Atlantic spring bloom. The suggestion that temporal changes in phytoplankton biomass might be as important as the mean standing stock has led to interest in how best to evaluate the variability of phytoplankton on different temporal and spatial scales. Variability has been investigated on daily, intra-annual, annual, and inter-annual time scales, as well as on sub-kilometer spatial scales.

Satellite-derived proxies for biomass such as Chlorophyll (Chl) and Particulate Organic Carbon (POC) provide unprecedented coverage in time and space to better estimate the variability in phytoplankton biomass on different scales. One key challenge when working with satellite-derived data is, however, the presence of data gaps due to factors such as clouds, sun angle, and sun glint, which obfuscate the satellite’s view of the ocean. The erroneous data that are flagged are not evenly distributed, but show a patchiness that reflects the temporal and spatial scales of synoptic weather systems in different regions. Consequently, on average only 20% of the derived Chl fields are useful. Such sparse and unevenly distributed datasets create a major challenge for common time-series analysis tools, such as Fourier analysis or Empirical Orthogonal Functions (EOFs), thus hindering efforts to understand the frequency distribution of the data. A common and very successful approach is to aggregate the daily satellite fields to monthly averages for more or less full spatial coverage. The resulting analyses provide spatial distributions and insight about changes in phytoplankton biomass on seasonal or longer timescales, but truncate high-frequency variability.

To better meet the specific challenges of time series analysis of sparse satellite-derived properties, we suggest a new method to estimate dominating timescales of variability: MOving Standard deviation Saturation (MOSS). The approach is similar to semi-variograms and earlier analyses of spatial patchiness, but describes temporal variability rather than the spatial autocorrelation or patchiness in the satellite field. The technique is based on calculating the standard deviation (\(\sigma\)) of the time series data over moving windows of a set time interval, and repeating this for different window sizes. The average \(\sigma\) for each time-window size (\(\bar{\sigma}\)) increases from zero for a time window that includes just one data point, to the \(\sigma\) of the full time series; the largest possible time window is in effect the full time series. The shape of the resulting curve of \(\bar{\sigma}\) vs. the time-window size (\(w\)) is then analyzed to identify a dominating time scale, \(\tau_d\), of the time series based on the half-saturation constant (\(K_M\)).
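A minimal sketch of the windowed-standard-deviation construction just described, assuming a NumPy series with NaNs marking data gaps. The half-saturation reading of \(K_M\) is simplified here to the first window size at which \(\bar{\sigma}\) reaches half its saturation value; the paper's exact fitting and scaling procedures may differ.

```python
import numpy as np

def moss_curve(x, window_sizes):
    """Mean moving-window standard deviation (sigma-bar) per window size.

    x: 1-D time series with np.nan marking gaps; NaN-aware statistics
    are used so that sparse series can still be characterised.
    """
    x = np.asarray(x, dtype=float)
    sigma_bar = []
    for w in window_sizes:
        windows = np.lib.stride_tricks.sliding_window_view(x, w)
        sigma_bar.append(np.nanmean(np.nanstd(windows, axis=1)))
    return np.array(sigma_bar)

def half_saturation_timescale(window_sizes, sigma_bar):
    """Dominant timescale read off as the window size where sigma-bar
    first reaches half of its saturation value (a simple proxy for K_M)."""
    half = 0.5 * sigma_bar[-1]
    return window_sizes[np.argmax(sigma_bar >= half)]
```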

Our results show that the method is able to assess dominating timescales in time series where data coverage is sparse. Analysis of synthetic data sets suggests that the threshold where estimated timescales start to diverge from the actual ones lies at about 10% coverage. The main consequence of sparse data is MOSS curves with overly gentle slopes, which would exaggerate the dominant timescales. We compensate for this problem by scaling \(K_M\) according to the coverage of the original data sets. The scaling further allows us to interpret the resulting values as timescales of variability.

Jake Grainger (Lancaster University)

Parametric estimation of the frequency-direction spectrum for ocean waves

Understanding the behaviour of wind-generated ocean waves is important for many offshore and coastal engineering activities. The frequency-direction spectrum is important for characterising such waves, and plays a central role in understanding the impact of such waves on structures and vessels. Estimating the frequency-direction spectrum is challenging, as the physical process in question is spatio-temporal and continuous, but we usually only observe the sampled 3D displacement of a buoy floating on the surface (a multivariate time series). Existing parametric techniques for recovering the frequency-direction spectrum are good at estimating location parameters (e.g. the peak frequency of waves), but struggle to recover more intricate shape parameters (e.g. the spread of energy around the peak frequency). We demonstrate how, by transforming the model of interest into a model for the recorded series, we can use a multivariate pseudo-likelihood approach to recover the parameters of interest. Our novel method is statistically more powerful and resolves more parameters than the current state-of-the-art, thus providing a better characterisation of the ocean. We demonstrate our methodology on data recorded in the North Sea, focussing on storms and extreme events, which are of high significance to engineers and environmental scientists.

Addison Rice (Utrecht University)

Assessing the imperfect time series: a sediment trap case study

Processes governing the export of matter from the sea surface to the ocean floor are complex and poorly understood. Sediment traps are one of the few ways in which we can understand what happens to particulate matter as it sinks through the water column. These devices funnel sinking particles into a bottle and rotate to change the open bottle at pre-set times. Each deployment collects 12-40 samples, with each sample typically representing 10-30 days. Export processes remain poorly resolved in part due to the difficulty of obtaining long records: ships need to regularly access remote parts of the ocean to service a sediment trap, and these expensive devices are sometimes lost when attempting to recover samples. As a result, the few long-term sediment trap records often contain data gaps. Parameters such as the open time for each sample bottle and the water depth of the sediment trap can vary from one year to another, complicating time series analysis. Furthermore, the complexity of the natural system precludes a simple relationship with sea surface conditions. Particles take time to sink to the depth of the sediment trap, and this time delay could vary seasonally, inter-annually, or even in episodic events.

The sensitivity of sea surface temperature (SST) proxies to a warming ocean is of concern to scientists who study variations in SST from the geologic record. The biomarker-based UK’37 SST proxy was measured in sediment trap samples from Bannock Basin. This proxy should reflect variations in SST, but could incorporate a temperature from somewhat lower in the water column. The sediment trap record from Bannock Basin ran from 1991-2011, with gaps from 1993-1999 and 2006-2008 and smaller gaps throughout the record. Traps were set at multiple water depths for part of this record (approximately 500, 1500, and 2500 m). Each sample in this record represents 10-21 days of sinking particles, depending on the deployment.

Assessing the extent of anthropogenic warming in this sample set should be possible. The proxy values can be compared to satellite-based temperatures to determine whether the proxy shows the same magnitude of warming as the satellite-based data. However, seasonal variability hinders common trend analyses like the Mann-Kendall test, and variable sampling intervals complicate binning techniques. Furthermore, comparison between sediment trap data and sea surface temperatures is only possible after considering the (likely variable) time delay for particles to reach the sediment trap. Analytical questions to consider: How does the extent of warming in the surface ocean (as measured by satellites) compare with the extent of warming in the proxy? Is there a seasonal change in the depth of proxy production? Is there a seasonal or interannual change in the time it takes for the proxy to reach the sediment trap?


15:30 - 16:00 | Coffee Break


16:00 - 18:00 | Session 3: Policy and Climate Change Impacts 1

Room: A1.23, Chair: Ines Wilms

Dominik Hirschbuehl (Joint Research Centre of the European Commission)

A sustainability transition on the move? Evidence based on the disconnect from market fundamentals

In a context where European stock prices have been trending upwards for two years, one of the main concerns is that stocks perceived as more sustainable from an environmental, social and governance (ESG) perspective could be particularly exposed to exuberance. To shed some light on the magnitude of the deviation of stock prices from fundamentals, we apply a Markov-switching augmented version of the present-value model, estimated in a Bayesian framework. Using monthly data on the European stock market from 2005 to 2022, our model suggests that the non-fundamental component in the EU stock market is currently about one fourth of the total price. When looking at particular market segments, the model shows that green and ESG stocks behave broadly in line with the market. However, in recent years ESG stocks have shown a significant, though small, disconnect from the market. These findings suggest that investor preferences are indeed shifting towards sustainability, while not posing an immediate threat to market stability.

Simone Maxand (Europa-Universitat Viadrina)

A panel SVAR for European climate policy

Carbon emissions are priced in Europe in two ways. First, as a cap-and-trade system, the EU emissions trading system sets a price on carbon. Following prior literature, this price is determined by energy prices, macroeconomic developments, weather and renewables. We analyse these interactions with a monthly smooth transition SVAR model identified by non-Gaussianity and narrative sign restrictions, and find heterogeneous effects over two regimes of economic activity. Second, individually set carbon taxes cover a certain share of emissions in the separate member countries. We study these tax effects in a panel SVAR model which accounts for the joint market development. Joining the two models brings together the development and effects of the identified structural carbon price/tax shocks at the EU and country level. This allows us to study the interaction and separate development of the two carbon pricing tools over time by means of mixed frequency approaches. This model framework additionally enables us to set further country-specific environmental policies in relation to the European macroeconomy.

Viktoria Vidahazy (Graduate Institute of International and Development Studies)

The Effects of Climate Change on the Financial Market

Climate change represents great physical and transition risks to the economy in the form of disasters and unaccustomed changes. Although these risks are recognized in the literature, there is still no unequivocal consensus on how to react, or who should react, to climate change and shield the economy against these risks. However, looking at the financial side of the economy, changes in investors’ behavior can give us ideas on how climate change can disrupt the financial market. In my work, I examine financial markets’ reactions to climate change. Specifically, I focus on how capital flows respond to disastrous climate events.

Capital flows are key contributors to the global economy through financial markets: they can improve the competitiveness of a country’s financial sector, promote investment, and help to smooth consumption. However, their large size and volatility represent potential risks to the global and local financial systems. The literature has assessed potential drivers of capital flows, but we lack information about the effect of climate change. I examine how climate events can be new drivers of capital flows, especially of sudden changes. Disastrous climate events may drive capital flows via two channels. First, climate events can be country-specific pull factors, as investors might pull back from a country after a weather shock. Second, they might represent a global push factor: after an extreme climate event strikes outside the country, investors might still withdraw, even without suffering actual losses, if the country is prone to disasters.

To understand financial markets’ true reactions to climate change, I aim to identify its impact by examining local and regional climate effects. To quantify the local aspect, I proxy the severity of climate events through two methods. First, I evaluate the duration of disastrous events with respect to countries’ climate history; a longer flood event therefore counts more than a shorter one. Second, I control for the population’s exposure to climate events. Hence, I capture the extent to which the population has been affected by a disaster, rather than focusing on possibly large but isolated disasters in a country. This method can highlight that a storm might have a more significant effect in densely populated Chicago than in the Sahara. To capture whether climate change represents a regional push factor, I count regional disasters around a country quarterly. Both the local and regional methods use climate event indices that capture climate change by focusing on common disasters; specifically, the indices count droughts, extreme temperatures, floods, and storms. These indices measure climate change more accurately, as they incorporate more than one disaster type, in contrast to the common procedure in the literature of focusing solely on, for example, storms.

I find that both regional and local indices cause sudden increases in capital inflows. My research has clear policy implications: countries experiencing sudden changes in capital flows are more susceptible to macroeconomic instability and may suffer more easily from other financial risks. Therefore, by understanding whether climate change is a new driver of capital flows, domestic institutions can better prepare their countries for sudden changes.
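To make the index construction concrete, here is a hypothetical sketch in pandas of the two counts described above: a local (pull) count of disasters in the country itself and a regional (push) count of disasters in the same region but outside it. The column names and toy event list are invented for illustration.

```python
import pandas as pd

# Hypothetical event list: one row per disaster, with its country, region,
# type (drought, extreme temperature, flood, or storm) and date.
events = pd.DataFrame({
    "country": ["US", "MX", "US", "CA"],
    "region":  ["Americas"] * 4,
    "type":    ["flood", "storm", "drought", "extreme_temperature"],
    "date":    pd.to_datetime(["2010-03-01", "2010-04-15",
                               "2010-05-20", "2010-06-30"]),
})
events["quarter"] = events["date"].dt.to_period("Q")

# Local (pull) index: disasters striking the country itself, per quarter.
own = (events.groupby(["country", "region", "quarter"])
             .size().rename("local_index").reset_index())

# Regional (push) index: disasters in the same region, outside the country.
totals = (events.groupby(["region", "quarter"])
                .size().rename("region_total").reset_index())
idx = own.merge(totals, on=["region", "quarter"])
idx["regional_index"] = idx["region_total"] - idx["local_index"]
```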

Javier Ojea Ferreiro (Joint Research Centre of the European Commission)

The impact of climate transition risks on financial stability. A systemic risk approach.

Transitioning to a low-carbon economy involves risks for the value of financial assets, with potential ramifications for financial stability. We quantify the systemic impact on financial firms arising from changes in the value of financial assets under three climate transition scenarios that reflect different levels of vulnerability to the transition to a low-carbon economy, namely, orderly transition, disorderly transition, and no transition (hot house world). We describe three systemic risk metrics computed from a copula-based model of dependence between financial firm returns and financial asset market returns: climate transition expected returns, climate transition value-at-risk, and climate transition expected shortfall. Empirical evidence for European financial firms over the period 2013-2020 indicates that the climate transition risk varies across sectors and countries, with banks and real estate firms experiencing the highest and lowest systemic impacts from a disorderly transition, respectively. We find that default premium, yield slope and inflation are the main drivers of climate transition risk, and that, in terms of capital shortfall, the cost of rescuing more risk-exposed financial firms from climate transition losses is relatively manageable. Simulation of climate risks over a five-year period shows that disorderly transition can be expected to imply significant costs for banks, while financial services and real estate firms remain more sheltered.


19:00 - 22:00 | Dinner (Restaurant Bouchon d’en Face)

Restaurant Bouchon d’en Face, Wycker Brugstraat 54, 6221 ED Maastricht; https://goo.gl/maps/oXEBJtstUSNNaCWu9

Wednesday 15-6


9:00 - 9:15 | Coffee


9:15 - 12:00 | Session 4: Statistical Modelling of Climate Systems

Room: A1.22, Chair: Emmanuel Mahieu

Eric Hillebrand (Aarhus University)

A New Statistical Reduced Complexity Climate Model

In this paper, we propose a new, fully statistical, reduced complexity climate model. The starting point for our model is a number of physical equations for the global climate system, which we show how to cast in non-linear state-space form. We propose to estimate the model by maximum likelihood using the extended Kalman filter. By considering a range of different scenarios for greenhouse gas emissions, a simulation study uncovers substantial differences in the performance of the estimation procedure, depending on the precise scenario considered. These investigations can help decide what kind of data are best suited for estimating/calibrating the parameters of reduced complexity climate models. In an empirical exercise, we use a data set of historical observations from 1959-2020 to estimate the model. A likelihood ratio test sheds light on the most appropriate equation for converting the atmospheric concentration of carbon dioxide (GtC) into forcings (W/m2). We then use the estimated model and assumptions on future greenhouse gas emissions to project global mean surface temperature out to the year 2100. We propose a simulation-based approach to construct uncertainty bands around the projections, as well as to quantify how much of the uncertainty is aleatoric (uncertainty arising from the internal variability of the climate system) and how much is epistemic (uncertainty arising from unknown model parameters). We find that epistemic uncertainty is by far the most important contributor to the uncertainty on the projected future global temperature increase.
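For readers unfamiliar with the estimation step: below is a generic sketch of the prediction-error log-likelihood computed with an extended Kalman filter, which would be maximized over the model parameters. The transition and measurement functions f and h stand in for the paper's physical climate equations and are deliberately left abstract.

```python
import numpy as np

def ekf_loglik(y, f, h, F_jac, H_jac, Q, R, x0, P0):
    """Gaussian log-likelihood of a nonlinear state-space model via the
    extended Kalman filter (prediction-error decomposition).

    y: (T, m) observations; f/h: transition and measurement functions;
    F_jac/H_jac: their Jacobians; Q/R: state and measurement covariances.
    """
    x, P = x0.copy(), P0.copy()
    loglik = 0.0
    for yt in y:
        # Prediction step: propagate state and covariance
        F = F_jac(x)
        x = f(x)
        P = F @ P @ F.T + Q
        # Update step: linearize the measurement at the predicted state
        H = H_jac(x)
        v = yt - h(x)                   # innovation
        S = H @ P @ H.T + R             # innovation covariance
        K = P @ H.T @ np.linalg.inv(S)  # Kalman gain
        x = x + K @ v
        P = (np.eye(len(x)) - K @ H) @ P
        loglik += -0.5 * (len(yt) * np.log(2 * np.pi)
                          + np.linalg.slogdet(S)[1]
                          + v @ np.linalg.solve(S, v))
    return loglik
```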

Marc Gronwald (Xi’an Jiaotong-Liverpool University)

Long-run co-movement of global temperature anomalies and forcings from greenhouse gases

This paper analyses the long-run co-movement of two important climate time series: forcings from greenhouse gas emissions and global temperature anomalies. It applies a recently proposed measure for long-run covariability as well as the so-called Thick Pen measure of association. While the former method allows one to estimate correlation as well as linear regression coefficients for long-run projections of time series, the latter is capable of measuring co-movement over time, both at and across time scales. The paper finds, first, that the long-run correlation coefficient for components with periodicities exceeding 60 years is 0.86; this is roughly within the range of the correlations found for macroeconomic relationships analysed using the same method. The long-run linear regression coefficient is estimated to be 0.28. Both estimates are significantly different from zero. These estimates, second, are sensitive to the extent of smoothing-out of short-run fluctuations. Third, the application of the Thick Pen measure of association not only confirms this general pattern of results; there is also some evidence of a time-varying relationship. These findings have policy relevance, as a sufficient understanding of this empirical relationship is needed for the design of climate policies.
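A simplified sketch of one plausible reading of the long-run covariability computation, in the spirit of Müller and Watson: project each series on low-frequency cosines with periods above a cutoff (60 years above) and compute correlation and regression coefficients from the projection coefficients. The inference theory of the original method is omitted.

```python
import numpy as np

def lowfreq_projections(x, min_period):
    """Coefficients of the projection of a (demeaned) series on cosines
    with periods of at least min_period observations; cosine j has
    period 2T/j, so only j <= 2T/min_period terms are kept."""
    T = len(x)
    q = int(np.floor(2 * T / min_period))
    t = (np.arange(1, T + 1) - 0.5) / T
    Psi = np.sqrt(2) * np.cos(np.pi * np.outer(np.arange(1, q + 1), t))
    return Psi @ (x - x.mean()) / T

def longrun_corr_and_beta(x, y, min_period=60):
    """Long-run correlation and regression coefficient of y on x,
    computed from the low-frequency projection coefficients."""
    X = lowfreq_projections(np.asarray(x, float), min_period)
    Y = lowfreq_projections(np.asarray(y, float), min_period)
    corr = (X @ Y) / np.sqrt((X @ X) * (Y @ Y))
    beta = (X @ Y) / (X @ X)
    return corr, beta
```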

Jingying Zhou Lykke (Aarhus University)

Estimation of a two-component energy balance model using historical data records

This paper estimates the two-component energy balance model as a linear state space system (EBM-SS model) using historical data. It is a joint model for the temperature in the mixed layer, the temperature in the deep ocean layer, and radiative forcing. The EBM-SS model allows for the modeling of non-stationarity in forcing, the incorporation of multiple data sources for the latent processes, and the handling of missing observations. We estimate the EBM-SS model by maximum likelihood using observational datasets at the global level for the period 1955-2020. We show in the empirical estimation and in simulations that using multiple data sources for the latent processes reduces parameter estimation uncertainty. When fitting eight observational global mean surface temperature (GMST) anomaly series to the EBM-SS model, the physical parameter estimates and the GMST projection under Representative Concentration Pathway (RCP) scenarios are comparable to those from Coupled Model Intercomparison Project 5 (CMIP5) models and the climate emulator Model for the Assessment of Greenhouse Gas Induced Climate Change (MAGICC) 7.5. This provides evidence that utilizing a simple climate model and historical records alone can produce meaningful GMST projections.
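For orientation, the textbook two-box energy balance model underlying this kind of approach can be discretized into a linear state-space transition matrix as follows. The parameter values are illustrative only, and the forcing is modelled as a random walk to allow for non-stationarity, as in the abstract; the paper's own specification may differ in detail.

```python
import numpy as np

# Standard two-box energy balance model (annual Euler discretization):
#   C_m dT_m/dt = F - lam*T_m - gam*(T_m - T_d)   (mixed layer)
#   C_d dT_d/dt =               gam*(T_m - T_d)   (deep ocean)
# Illustrative parameter values, not estimates from the paper.
lam, gam, C_m, C_d, dt = 1.3, 0.7, 8.0, 100.0, 1.0

# State alpha_t = (T_m, T_d, F)'; the forcing F follows a random walk.
A = np.array([
    [1 - dt * (lam + gam) / C_m, dt * gam / C_m,     dt / C_m],
    [dt * gam / C_d,             1 - dt * gam / C_d, 0.0],
    [0.0,                        0.0,                1.0],
])

def simulate(T=200, state_noise=(0.15, 0.05, 0.1), seed=0):
    """Simulate the latent states; in the EBM-SS setting, multiple
    observed series would load on these states with measurement error."""
    rng = np.random.default_rng(seed)
    alpha = np.zeros(3)
    out = np.empty((T, 3))
    for t in range(T):
        alpha = A @ alpha + rng.normal(0.0, state_noise)
        out[t] = alpha
    return out
```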

10:45 - 11:00 | Mid-Session Break

Luca Margaritella (Lund University)

High-dimensional Granger causality for climatic attribution

We test for causality in high-dimensional vector autoregressive models (VARs) to disentangle and interpret the complex causal chains linking radiative forcings and global temperatures. We consider both direct predictive causality in the sense of Granger and direct-indirect causality in the sense of Sims, developing a framework of impulse response analysis in high dimensions via local projections. By allowing for high dimensionality in the model, we can enrich the information set with all relevant natural and anthropogenic forcing variables to obtain reliable causal relations. These variables have mostly been investigated in aggregated form or in separate models in the previous literature. Additionally, our framework allows us to ignore the order of integration of the variables and to estimate the VAR directly in levels, thus avoiding the accumulation of biases from unit-root and cointegration tests. This is of particular appeal for climate time series, which are well known to contain stochastic trends as well as to exhibit long memory. We are thus able to display the causal networks linking radiative forcings to global and hemispheric temperatures, but also to causally connect radiative forcings among themselves, therefore allowing for a careful reconstruction of a timeline of causal effects among forcings. The robustness of our proposed procedure makes it an important tool for policy evaluation in tackling global climate change.
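A stylized sketch of one ingredient, equation-by-equation lasso estimation of a high-dimensional VAR in levels, where a zero block of lag coefficients is the shrinkage analogue of Granger non-causality. Formal testing requires debiased (post-lasso) inference and time-series-appropriate tuning, both omitted here; the cross-validation below is a simplification.

```python
import numpy as np
from sklearn.linear_model import LassoCV

def lasso_var_coefs(X, p):
    """Equation-by-equation lasso estimation of a VAR(p).

    X: (T, k) data in levels. Returns coefs of shape (k, k, p), where
    coefs[i, j, l] is the effect of variable j at lag l+1 on variable i.
    A (near-)zero block coefs[i, j, :] suggests 'j does not Granger-cause i'.
    """
    T, k = X.shape
    # Stack lagged regressors: [X_{t-1}, X_{t-2}, ..., X_{t-p}]
    Z = np.hstack([X[p - l - 1:T - l - 1] for l in range(p)])
    Y = X[p:]
    coefs = np.empty((k, k, p))
    for i in range(k):
        fit = LassoCV(cv=5).fit(Z, Y[:, i])
        coefs[i] = fit.coef_.reshape(p, k).T
    return coefs
```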

Francesco Giancaterini (Maastricht University)

Is global warming (time) reversible?

This paper, exploiting the properties of mixed causal and noncausal models, proposes two new strategies to detect time reversibility in time series. Monte Carlo experiments show that our proposed approaches perform accurately in finite samples. Furthermore, the research aims to investigate whether time reversibility is a characteristic feature of the dynamic process of global warming. To this end, we investigate nine climate indicators.


9:45 - 12:00 | Session 5: Policy and Climate Change Impacts 2

Room: A1.23, Chair: Jakob Raymaekers

Vinzenz Peters (Maastricht University)

Resilience to Extreme Weather Events and Local Financial Structure of Prefecture-Level Cities in China

In recent decades, China has experienced extraordinary economic development, transforming itself from a poor, agrarian economy to an economic powerhouse that is the world’s largest exporter and that has seen its people’s living standards increase tremendously (Zünd and Bettencourt, 2019). At the same time, China is among the countries that experience the most frequent and most severe natural hazard-related disasters in the world (World Bank and GFDRR, 2020), putting the achieved economic development at risk. Progressing climate change emphasizes the need to thoroughly understand and control the impacts of such shocks, since extreme weather events are expected to increase in frequency and severity in the future (IPCC, 2022).

In this paper, we utilize weather data that is collected at 0.5° x 0.5° grid cells to investigate the local economic effects of extreme weather events in 284 Chinese prefecture-level cities between 2004 and 2013. Specifically, we estimate impulse response functions of GDP per capita and employment growth to capture the dynamic responses of cities affected by extreme wind and precipitation for up to five years after an event. We then use these results to obtain a benchmark against which we measure the relative resilience of cities to extreme weather events and to explore the importance of the financial structure of the local economy for resilience.

China lends itself particularly well to the purpose of our study for several reasons. It is frequently “treated”, generating a useful sample size while focusing on one country only, and it has a highly centralized disaster management system that coordinates mitigation efforts across the whole country (World Bank and GFDRR, 2020). This allows us to assume relative homogeneity in institutional factors driving this disaster preparedness across our sample, while controlling for prior experience with extreme weather events at the local level (Kahn, 2005; Hsiang and Jina, 2014). In addition, the direct damages from floods and typhoons in China are still considerably underinsured, with on average only 2% of economic losses covered by insurance in recent years (Munich RE, 2021). This suggests an important role for other actors in the economy, such as financial intermediaries, to insure against these adverse shocks, at least implicitly.

Against this background, we contribute to the literature in at least three distinct ways. First, economic studies on the indirect effects of extreme weather events frequently focus on high levels of spatial aggregation, such as the country or regional level, while deploying measures of event intensity that are constructed from damages and direct impacts. Natural hazards, however, are inherently local by nature (Botzen et al., 2019), and relying on damage measures to proxy for the intensity of an event may bias the results of such studies (Felbermayr and Gröschl, 2014). Therefore, research in this field has started to systematically look at a more local level, while incorporating credibly exogenous measures of physical intensity to account for event severity (e.g., Felbermayr et al., 2022). We contribute to this stream of research by presenting one of the first studies with a comprehensive focus on Chinese prefecture-level cities and by utilising physical intensities of extreme weather events to quantify and control for the size of the shocks.

Second, considering climate change, the academic and societal interest in resilience to climate-related disasters is increasing rapidly. Resilience, however, is hard to quantify and must be defined carefully to match the context in which it is discussed. To improve the understanding of resilience in this discussion, we adapt a method of constructing resilience indicators suggested by Martin et al. (2016) for recessionary and financial shocks and apply it to the study of extreme weather events. This allows us to measure the relative resilience of prefecture-level cities given the actual physical intensity of the shock that a city experienced in a particular year.

Third, we extend our analysis by utilizing these indicators in a simple empirical framework to study the relationship between local financial structure and city-level resilience, controlling for other factors known to affect economic resilience to natural hazards (Lazzaroni and van Bergeijk, 2014; Noy and Yonson, 2018). Historically, the Chinese financial system has been heavily dominated by its banking system, which in turn was dominated by large, state-owned banks. Since the turn of the millennium, more financial market liberalization has taken place and new financial intermediaries have gained market share (Sun, 2020). Accounting for the peculiar features of the Chinese financial system, we exploit this variation to identify the link between features of the local financial structure, such as the pre-event level of debt and the presence of different types of banks, and economic resilience. To the best of our knowledge, we are the first to investigate this specific determinant of resilience, certainly for the case of China.

Empirically, we employ the bias-corrected method of moments estimator for dynamic panel models that was recently suggested by Breitung et al. (2021) to account for the temporal constraints of our data in our impulse response models. We find that while extreme wind speed events exert negative effects on economic activity only in the year of their occurrence, extreme precipitation events depress the development of local economies for several years. While employment growth proves resilient in the short run, the effects of extreme precipitation events appear to also affect lagged labour market dynamics. On impact, the service sector seems to carry most of the burden for both types of events, while the longer-run consequences of extreme precipitation are primarily borne by the industrial sector.

With respect to the role of local financial structure, our results suggest that high levels of pre-event indebtedness reduce the overall resilience of local economies. Interestingly, and in contrast to the well-established finding that the Chinese state-owned banks are associated with depressed economic growth (e.g., Ferri, 2009; Lin et al., 2015), our findings also suggest that the “Big Four” state-owned banks are instrumental in post-event recovery. It seems that while market competition promotes efficiency and growth in normal times, it may leave an economy more vulnerable to extreme events such as extreme weather shocks. Since such events are expected to become more frequent and severe, our results can inform the academic, political, and societal debates about regional economic resilience to climate-related shocks.

Thales West (VU Amsterdam)

The effectiveness of policies to reduce deforestation in the Brazilian Amazon

10:45 - 11:00 | Mid-Session Break

Marco Quatrosi (University of Ferrara)

Emission Trading in a high-dimensional context: to what extent are carbon markets integrated with the broader system?

This work provides further insights into the influence European Emission Allowance (EUA) prices exert on carbon dioxide trends and on relevant variables of the economic-financial and climate-environmental system, considering a large set of time series. The methodological approach employs Hierarchical Vector Autoregression (Nicholson et al., 2020) to deal with the high-dimensional context. Despite its scarce application in macroeconomic analysis, this technique is well suited to the multiple dimensions (economic and environmental) of the research issue at hand. Results of the two specifications highlight that CO2 appears to be most influenced by commodity prices (e.g., natural gas) and climate variables (e.g., rainfall, temperatures), along with past industrial performance. Impulse-response functions on the standardized first differences of the series show that a shock to carbon prices could exert significant turbulence on the carbon dioxide series, fading in intensity over time. Forecast error variance decomposition (FEVD) analysis identifies the influence of carbon prices as rather weak for the variables considered; most of the variance is still explained by each variable’s own lagged terms. Overall, apart from some instances (e.g., CO2), there appears to be no clear effect of carbon prices on the system. As the EU ETS is the cornerstone of EU climate policy, this work sheds light on the influence it exerts on a system of multidimensional variables. These findings provide further insights to policymakers for better taking into account possible sources of carbon price shocks (e.g., overlapping policies) and for tailoring existing adjustment mechanisms (e.g., the Market Stability Reserve) to the stability of the European Emission Trading Scheme.
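As a small-scale stand-in for the hierarchical VAR used in the work above, a standard VAR in statsmodels already exposes the impulse-response and FEVD machinery referred to; the variable names and random data below are placeholders, and the paper's HVAR shrinkage is not replicated.

```python
import numpy as np
import pandas as pd
from statsmodels.tsa.api import VAR

# Placeholder for the standardized first differences of the series
# (EUA carbon price, CO2, gas price, temperature, industrial output, ...).
rng = np.random.default_rng(0)
df = pd.DataFrame(rng.standard_normal((200, 5)),
                  columns=["eua", "co2", "gas", "temp", "indprod"])

res = VAR(df).fit(maxlags=6, ic="aic")  # lag order chosen by AIC
irf = res.irf(12)                       # impulse responses, 12 periods ahead
res.fevd(12).summary()                  # FEVD: share of each variable's
                                        # forecast variance due to shocks
                                        # to the other variables
```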

Max Kotz (Potsdam Institute for Climate Impact Research)

The persistence of climate impacts: identification difficulties in the presence of high-dimensionality and autocorrelation

Recent work has demonstrated the sensitivity of macroeconomic productivity to novel aspects of the distribution of daily weather, emphasising the importance of a nuanced and high-dimensional approach to macroeconomic assessments of climate impacts. However, the persistence of impacts remains poorly quantified, due to conflicting results from different statistical approaches and the difficulties of identification in the presence of high-dimensionality and autocorrelation. These limitations are critical to our understanding of the magnitude of long-term climate damages. I will present different methods to identify and assess this persistence, drawing from the approaches of econometrics, climate science and causal discovery. I will discuss limitations, contradictions and opportunities, before opening the room to wider input on possible ways forward and alternative approaches.


12:00 - 13:00 | Lunch

Ad Fundum

13:00 - 14:30 | Session 6: Detecting Patterns in Weather and Climate Data 1

Room: A1.22, Chair: Eric Hillebrand

Siem Jan Koopman (VU Amsterdam)

Common Trends in Atmospheric Data

Wouter Mol (Wageningen University) and Chiel van Heerwaarden (Wageningen University)

Cloud shadows and peaks: power law distributions from event detection in time series

Clouds block and scatter sunlight, creating local areas of increased solar radiation next to shadows. This results in large fluctuations in incoming solar radiation at the surface in both time and space, on the scale of solar panel parks or neighbourhood rooftops, and from seconds to hours. This has negative effects on energy grid stability and makes solar energy forecasts difficult. Using 10 years of high-quality observed irradiance time series, we have enough data to characterise the fluctuations and extract useful scaling behaviour based on event detection in the time series. Shadow durations are distributed according to a power law with an exponent nearly identical to how turbulence scales in the inertial range (Kolmogorov’s 5/3 exponent). Whether a coincidence or not, these results can be applied to develop or constrain models for better solar irradiance variability forecasts.
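A minimal sketch of an event-detection-plus-power-law pipeline of this kind: extract shadow durations as runs of irradiance below a fraction of the clear-sky value (the 0.4 threshold is an arbitrary placeholder, not the talk's), then estimate the exponent by the standard continuous maximum-likelihood formula of Clauset et al. (2009).

```python
import numpy as np

def shadow_durations(irradiance, clear_sky, threshold=0.4, dt=1.0):
    """Durations of 'shadow' events: consecutive samples where measured
    irradiance falls below a fraction of the clear-sky value.
    dt is the sampling interval (e.g. in seconds)."""
    shaded = irradiance < threshold * clear_sky
    edges = np.diff(np.concatenate(([0], shaded.astype(int), [0])))
    starts = np.where(edges == 1)[0]   # run begins
    ends = np.where(edges == -1)[0]    # run ends
    return (ends - starts) * dt

def powerlaw_exponent(durations, xmin):
    """Maximum-likelihood exponent of a continuous power law
    p(x) ~ x^(-alpha) for x >= xmin (Clauset et al., 2009)."""
    x = durations[durations >= xmin]
    return 1.0 + len(x) / np.sum(np.log(x / xmin))
```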

Jonas Lembrechts (University of Antwerp)

The SoilTemp database, an opportunity for modelling microclimate change and microclimate now-casting?

Current analyses and predictions of spatially explicit patterns and processes in ecology most often rely on climate data interpolated from standardized weather stations. This interpolated climate data represents long-term average thermal conditions at coarse spatial resolutions only. Hence, many climate-forcing factors that operate at fine spatiotemporal resolutions are overlooked. Since organisms living close to the ground relate more strongly to these microclimatic conditions than to free-air temperatures, microclimatic ground and near-surface data are needed to provide realistic forecasts of the fate of such organisms under anthropogenic climate change, as well as of the functioning of the ecosystems they live in. To fill this critical gap, we created SoilTemp, a global database of currently over 35,000 microclimate time series from over 70 countries. Additionally, we pioneered the establishment of large-scale microclimate networks monitoring the impact of extreme weather events on local microclimate conditions in the citizen science project ‘CurieuzeNeuzen in de Tuin’, in which 5,000 citizens from across Flanders installed a smart mini weather station in their own garden, agricultural field or nature reserve. These unique datasets allow us to tackle two critical research gaps, for which environmental time-series analyses would be the ideal tool: 1) modelling the rate at which the microclimate has changed and will further change as the macroclimate warms, and 2) nowcasting the impact of extreme weather events on local microclimatic conditions. Both of these applications would have major implications for ecology, and far beyond, as they would provide us with the true weather and climate conditions that nature is exposed to.


14:30 - 15:00 | Coffee Break


15:00 - 17:00 | Session 7: Detecting Patterns in Weather and Climate Data 2

Room: A1.22, Chair: Stephan Smeekes

Robinson Kruse-Becher (FernUni Hagen)

Adaptive nowcasting and forecasting of temperature trends under structural breaks

It is widely accepted that global warming is defined as an increasing trend in the global temperature mean. Gadea and Gonzalo (2020, JoE) propose a simple linear trend test which is powerful against many types of deterministic and stochastic trends. They find strong evidence in favor of positive trends, not only in temperature means, but also in various quantiles of global temperature distributions. We follow their important contribution and analyze global warming by modelling and short-term forecasting trends in distributional characteristics of global temperatures beyond the mean.

Our contributions are twofold: First, we apply a dynamic stochastic coefficient process as proposed by Giraitis, Kapetanios and Yates (2014, JoE) to model temperature anomalies. Thereby, we decompose the climate series into a random persistent attractor and a dynamic part with a time-varying autoregressive coefficient. Such a model has been applied previously to e.g. inflation and real exchange rates. Both time-varying quantities can be estimated consistently by nonparametric estimation methods and point-wise confidence intervals are provided. As a result, the estimated random attractor is informative about the time-varying underlying trend. The dynamic autoregressive parameter provides a time-varying measure for the persistence in the dynamic component.

Second, we apply adaptive now- and forecasting methods for climate time series of distributional characteristics. In the literature, historical structural changes in temperature trends are clearly documented. Furthermore, they also become evident when applying the dynamic stochastic coefficient processes. When forecasting under recent and ongoing structural changes, robust adaptive methods can be quite useful for enhancing forecast accuracy in the short run, see Giraitis, Kapetanios and Price (2013, JoE). Therein, recent and past observations are weighted in various ways (e.g. exponential smoothing, rolling windows and nonparametric weighting). Most schemes are quite sensitive to the unknown underlying trend, potential structural breaks and persistence. The authors prove the validity and usefulness of selecting tuning parameters (e.g. bandwidth and weighting coefficients) via cross-validation techniques for a wide class of processes. Especially during times of high uncertainty and ongoing structural changes, adaptive forecasts can be helpful in assessing the situation in the near future. Due to the adoption and (effective) implementation of strengthened climate policies, it may well be that temperature anomalies undergo structural changes dampening their general upward trend. In addition to annual data, we also exploit monthly data to provide monthly updates of nowcasts for yearly temperature anomalies. This approach is similar to the macroeconometric concept of “aggregating the (monthly) forecasts” versus “forecasting the (yearly) aggregate”. Next, we also provide monthly forecasts and reconcile them with annual predictions according to the temporal hierarchy, using methods provided in Athanasopoulos, Hyndman, Kourentzes and Petropoulos (2017, EJOR).
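To illustrate the adaptive idea in its simplest form: a one-step-ahead forecast as an exponentially down-weighted average of past observations, with the discount factor tuned by pseudo out-of-sample evaluation, loosely in the spirit of the cross-validation tuning in Giraitis, Kapetanios and Price (2013). This is a caricature of the methods in the talk, not their implementation.

```python
import numpy as np

def ewma_forecast(x, rho):
    """One-step-ahead forecast as an exponentially down-weighted average
    of past observations (the most recent observation gets weight 1)."""
    w = rho ** np.arange(len(x))[::-1]
    return np.sum(w * x) / np.sum(w)

def cross_validated_rho(x, grid=np.linspace(0.5, 0.99, 50), burn=20):
    """Choose the discount factor by minimizing pseudo out-of-sample MSE
    over a grid, using the first `burn` observations as a training buffer."""
    def mse(rho):
        errs = [x[t] - ewma_forecast(x[:t], rho) for t in range(burn, len(x))]
        return np.mean(np.square(errs))
    return grid[np.argmin([mse(r) for r in grid])]
```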

As regards data availability, we study a novel update of the data set used in Gadea and Gonzalo (2020). We generate an unbalanced annual raw station panel data set collected from more than 10,000 weather stations around the globe (1850-2020, provided by the Climatic Research Unit, CRU). This large data set enables us to investigate how successfully adaptive forecasting devices work around historical structural changes in global temperature distributions at several breakpoints over the past two centuries. Moreover, we study recent developments and consider the case of rapidly changing trends in lower temperature quantiles relative to upper quantiles and mean averages. Gadea and Gonzalo (2020) have shown in their analysis that distributional characteristics are crucial, since an increase in lower quantiles of temperature distributions would give important indications for future climate analyses and for local and international efforts to reduce and bound global warming.

Marina Friedrich (VU Amsterdam) and Emmanuel Mahieu (University of Liège)

Trend and Break Estimation in Atmospheric Ethane

Susana Barbosa (INESC TEC)

Detection of abrupt changes and early warning signs of tipping points in climate time series

Climate time series of the past exhibit clear evidence of abrupt changes in the Earth’s climate. Among the most striking examples of abrupt transitions are the so-called Dansgaard-Oeschger events identified in paleoclimatic records, particularly in ice core proxies such as the Greenland NGRIP record of oxygen isotopic composition. These events correspond to large variations in temperature (~8-16 ºC) occurring very abruptly (within a few decades). The physical processes and the internal variability within the Earth system leading to such abrupt climate transitions are still poorly understood. From a time series perspective, the identification of abrupt changes in climate records in an objective and consistent way is of paramount importance for increasing understanding of these climate transitions. The task is hindered by the typically low resolution and short span of paleoclimate records, and by the inherent dating uncertainties in paleoclimate time series. Of particular interest are data-driven approaches that can be applied to multiple records (ice cores, speleothems, marine and lake sediments) in a consistent way. In addition to the identification of abrupt changes, the temporal variability of a time series before the occurrence of an abrupt transition is of particular interest, as statistical early warning signals (typically changes in variance and/or autocorrelation) have been identified in both theoretical and observational studies. The identification of statistical early warning signals is particularly relevant not only for the study of past climate, but also for precursory signals to be identified in current climate records, anticipating tipping points in the near future. Here we address the challenges and opportunities in the study of abrupt transitions, using as illustration the NGRIP time series of oxygen isotope ratios and paleoclimate time series of ion and aerosol concentrations from Greenland.
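The two classic early-warning indicators mentioned above, rising variance and rising lag-1 autocorrelation under critical slowing down, reduce to rolling-window statistics. A minimal sketch follows; in practice the series would be detrended first, and window choice matters.

```python
import numpy as np

def early_warning_indicators(x, window):
    """Rolling variance and lag-1 autocorrelation, the two standard
    statistical early-warning indicators of an approaching tipping point
    (both tend to increase as a transition nears)."""
    x = np.asarray(x, dtype=float)
    n = len(x) - window + 1
    var = np.empty(n)
    ac1 = np.empty(n)
    for i in range(n):
        seg = x[i:i + window]
        seg = seg - seg.mean()
        var[i] = seg.var()
        ac1[i] = (seg[:-1] @ seg[1:]) / (seg @ seg)  # lag-1 autocorrelation
    return var, ac1
```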

Etienne Wijler (VU Amsterdam)

Spatiotemporal Modelling in Large Dynamic Systems with Applications to Atmospheric Pollution

We consider sparse estimation of a class of high-dimensional spatio-temporal models. Unlike classical spatial autoregressive models, we do not rely on a predetermined spatial interaction matrix. Instead, under the assumption of sparsity, we estimate the relationships governing both the spatial and temporal dependence in a fully data-driven way by penalizing a set of Yule-Walker equations. While this regularization can be left unstructured, we also propose a customized form of shrinkage to further exploit diagonally structured forms of sparsity that follow intuitively when observations originate from spatial grids such as satellite images. We derive finite sample error bounds for this estimator, as well as estimation consistency in an asymptotic framework wherein the sample size and the number of spatial units diverge jointly. A simulation exercise shows strong finite sample performance compared to competing procedures. As an empirical application, we model satellite-measured NO2 concentrations in London. Our approach delivers forecast improvements over a competitive benchmark and we discover evidence for strong spatial interactions between sub-regions.
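A bare-bones sketch of the penalized Yule-Walker idea for a VAR(1): with \(\Gamma_0 = \mathrm{Cov}(X_t)\) and \(\Gamma_1 = \mathrm{Cov}(X_t, X_{t-1})\), the relation \(\Gamma_1 = A\Gamma_0\) is solved row by row under an \(\ell_1\) penalty. The structured shrinkage and the asymptotic theory of the paper are beyond this illustration.

```python
import numpy as np
from sklearn.linear_model import Lasso

def sparse_yule_walker(X, lam=0.1):
    """Sparse VAR(1) coefficient matrix from penalized Yule-Walker equations.

    X: (T, N) observations on N spatial units. Using sample versions of
    Gamma0 = Cov(X_t) and Gamma1 = Cov(X_t, X_{t-1}), each row a_i of A
    solves min ||Gamma1[i] - Gamma0 a||^2 + lam * |a|_1.
    """
    Xc = X - X.mean(axis=0)
    T = len(Xc)
    G0 = Xc.T @ Xc / T                   # contemporaneous covariance
    G1 = Xc[1:].T @ Xc[:-1] / (T - 1)    # lag-1 cross-covariance
    A = np.empty_like(G0)
    for i in range(G0.shape[0]):
        A[i] = Lasso(alpha=lam, fit_intercept=False).fit(G0, G1[i]).coef_
    return A
```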
