2016

Abstracts, see below.
21/1, Håvard Rue, NTNU, Penalising model component complexity: A principled practical approach to constructing priors
26/1, Stephen Senn, Luxembourg Institute of Health, P-values: The Problem is not What You Think
28/1, Olga Izyumtseva, Kiev State University, Ukraine, Self-intersection local time of Gaussian processes. New approach
2/2, Erik-Jan Wagenmakers, University of Amsterdam, A Predictive Perspective on Bayesian Inference
11/2, Olle Häggström, Here Be Dragons: Science, Technology and the Future of Humanity
18/2, Vitali Wachtel, Augsburg, Germany, Recurrence times and conditional limits for Markov chains with asymptotically zero drift
25/2, Marcin Lis, Planar Ising model as a gas of cycles
3/3, Krzysztof Podgorski, Lund University, Event based statistics for dynamical random fields
10/3, Bengt Johannesson, Volvo, Design Strategies of Test Codes for Durability Requirement of Disk Brakes in Truck Application
22/3, Hermann Þórisson, University of Iceland, Palm Theory and Shift-Coupling
14/4, Omiros Papaspiliopoulos, Universitat Pompeu Fabra, Stochastic processes for learning and uncertainty quantification
21/4, Alexandre Antonelli, Dept of Biological and Environmental Sciences, GU, Hundreds of millions of DNA sequences and species observations: Challenges for synthesizing biological knowledge
28/4, Peter J. Diggle, CHICAS, Lancaster Medical School, Lancaster University, Model-Based Geostatistics for Prevalence Mapping in Low-Resource Settings
12/5, Ute Hahn, Aarhus University, Monte Carlo envelope tests for high dimensional data or curves
26/5, Martin Foster, University of York, A Bayesian decision-theoretic model of sequential experimentation with delayed response
2/6, John Einmahl, Tilburg University, Refining empirical data depth using statistics of extremes
18/8, Tony Johansson, Carnegie Mellon, Cycles and matchings in k-out random graphs
25/8, Paavo Salminen, Åbo Akademi University, On the moments of exponential integral functionals of additive processes
1/9, Fima Klebaner, Monash University, On the use of Sobolev spaces in limit theorems for the age of population
15/9, Joseba Dalmau, Université Paris-Sud, The distribution of the quasispecies
22/9, Sophie Hautphenne, EPFL, A pathwise iterative approach to the extinction of branching processes with countably many types
29/9, Umberto Picchini, Lund University, A likelihood-free version of the stochastic approximation EM algorithm (SAEM) for inference in complex models
6/10, Jakob Björnberg, Random loop models on trees
20/10, Georg Lindgren, Lund University, Stochastic properties of optical black holes
27/10, Daniel Ahlberg, IMPA and Uppsala University, Random coalescing geodesics in first-passage percolation
3/11, KaYin Leung, Stockholm University, Dangerous connections: the spread of infectious diseases on dynamic networks
24/11, Yi He, Tilburg University, Asymptotics for Extreme Depth-based Quantile Region Estimation
1/12, Marianne Månsson, Statistical methodology and challenges in the area of prostate cancer screening
8/12, Holger Rootzén, Human life is unbounded -- but short
15/12, Martin Schlather, University of Mannheim, Simulation of Max-Stable Random Fields
 

21/1, Håvard Rue, NTNU: Penalising model component complexity: A principled practical approach to constructing priors
Abstract: Setting prior distributions on model parameters is the act of characterising the nature of our uncertainty and has proven a critical issue in applied Bayesian statistics. Although the prior distribution should ideally encode the users' uncertainty about the parameters, this level of knowledge transfer seems to be unattainable in practice and applied statisticians are forced to search for a "default" prior. Despite the development of objective priors, which are only available explicitly for a small number of highly restricted model classes, the applied statistician has few practical guidelines to follow when choosing the priors. An easy way out of this dilemma is to re-use prior choices of others, with an appropriate reference.
In this talk, I will introduce a new concept for constructing prior distributions. We exploit the natural nested structure inherent to many model components, which defines the model component to be a flexible extension of a base model. Proper priors are defined to penalise the complexity induced by deviating from the simpler base model and are formulated after the input of a user-defined scaling parameter for that model component, both in the univariate and the multivariate case. These priors are invariant to reparameterisations, have a natural connection to Jeffreys' priors, are designed to support Occam's razor and seem to have excellent robustness properties, all of which are highly desirable and allow us to use this approach to define default prior distributions. Through examples and theoretical results, we demonstrate the appropriateness of this approach and how it can be applied in various situations, like random effect models, spline smoothing, disease mapping, Cox proportional hazard models with time-varying frailty, spatial Gaussian fields and multivariate probit models. Further, we show how to control the overall variance arising from many model components in hierarchical models.
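As a rough sketch of the construction (the notation here is illustrative rather than that of the talk): if a model component has density f(x | \xi), with the base model recovered at \xi = 0, its complexity is measured by the distance d(\xi) = \sqrt{2 \, KLD( f(\cdot | \xi) \, \| \, f(\cdot | 0) )} from the base model, and the penalised complexity prior is obtained by placing an exponential prior \pi(d) = \lambda \exp(-\lambda d) on this distance and transforming back to \xi, with \lambda calibrated through the user-defined scaling parameter mentioned above.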
 
26/1, Stephen Senn, Luxembourg Institute of Health: P-values: The Problem is not What You Think
Abstract: There is a modern folk-history of statistical inference that goes like this. Following the work of Bayes and Laplace, scientists had been treading a path of inferential virtue for a century and a half. Along came RA Fisher and seduced them to folly. Chief among the sins he corrupted them to indulge in were significance tests and P-values. These encourage scientists to ‘find’ effects more easily than they should. This, combined with the earthly success that positive results can bring, has caused scientists to become addicted to a dangerous inferential drug that is poisoning science. Now, thanks to the power of computing, the omens are good that they can be weaned from their addiction and brought back onto the path of inferential virtue.
I claim, however, that the history is false. P-values did not develop as an alternative to Bayesian significance tests which, in any case, are themselves an inferential device which many who claim to be Bayesian do not use, but as an alternative interpretation of a standard Bayesian result in common use. A key, but rather neglected, figure in 20th century statistics is CD Broad who pointed to a severe problem with the Bayes-Laplace application of inverse probability: reasonable support, let alone proof, of the truth of a scientific law cannot be obtained from standard priors. The Bayesian significance test was developed by Harold Jeffreys in response to Broad’s analysis but, at least in medical statistics, it has remained a little-used approach to practical Bayesian inference.
The real issue is not one of conflict between P-values and Bayesian inference but one between two different forms of Bayes. This conflict is sharp but eradicating P-values will do nothing to resolve it.
 
28/1, Olga Izyumtseva, Kiev State University: Self-intersection local time of Gaussian processes. New approach
Abstract: The talk is devoted to local times and self-intersection local times for Gaussian processes. Local times in one dimension and self-intersection local times in two dimensions are important geometric characteristics related to a large number of properties of the process. A well-known example illustrating this relationship is Le Gall's asymptotic expansion for the area of the Wiener sausage, which contains renormalized self-intersection local times. In contrast to local times of a one-dimensional process, self-intersection local times of a two-dimensional process require renormalization for a proper definition. Until recently, such a renormalization had been constructed by S. Varadhan, E.B. Dynkin and J. Rosen only for the Wiener process. These results rely essentially on the Markov property, self-similarity and other nice properties of the Wiener process. We introduce a new approach based on investigating the geometry of the Hilbert-space-valued function generating the Gaussian process, and apply it to establish the existence of local times and renormalized self-intersection local times. Our approach is applicable to a wide class of non-Markov Gaussian processes. The application of self-intersection local times to the modelling of random polymers will also be discussed. (In collaboration with A.A. Dorogovtsev.)
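Informally, and in notation not taken from the talk, the self-intersection local time of a two-dimensional process X over a set B of pairs of times is the formal expression T(B) = \int\int_B \delta_0(X_t - X_s) \, ds \, dt; it is made rigorous by replacing \delta_0 with an approximate identity \delta_\varepsilon, and the renormalized version subtracts the expectation of the approximating integral before letting \varepsilon tend to zero.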
 
2/2, Erik-Jan Wagenmakers, University of Amsterdam, A Predictive Perspective on Bayesian Inference
Abstract: In mathematical psychology, Bayesian model selection is often used to adjudicate between competing accounts of cognition and behaviour. One of the attractions of Bayesian model selection is that it embodies an automatic Occam's razor – a reward for parsimony that is the result of an averaging process over the prior distribution.
Here I provide a predictive interpretation of Bayesian inference, encompassing not only Bayesian model selection but also Bayesian parameter estimation. This predictive interpretation supports a range of insights about the fundamental properties of learning and rational updating of knowledge.
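One way to make the predictive reading concrete is the standard chain-rule decomposition of the marginal likelihood of a model M, p(y_1, \ldots, y_n | M) = \prod_{i=1}^n p(y_i | y_1, \ldots, y_{i-1}, M), so that the Bayes factor between two models compares their accumulated one-step-ahead predictive performance; parameter estimation can be read in the same sequential-prediction spirit. (This identity is standard and is given here for orientation, not as the talk's own formulation.)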
 
25/2, Marcin Lis, Planar Ising model as a gas of cycles
Abstract: I will talk about a new representation of the planar Ising model in terms of a Poisson point process of cycles in the graph. This is a direct analog of the continuous result of Werner which represents the Conformal Loop Ensemble (a random collection of continuous simple curves) via a Poisson point process of Brownian loops in the plane. Surprisingly, the new correspondence on the discrete level is valid for all values of the temperature parameter (unlike the continuum one, which only holds at criticality). I will discuss implications of this fact and outline possible applications to solve some of the remaining open questions about the Ising model.
 
3/3, Krzysztof Podgorski, Lund University, Event based statistics for dynamical random fields
Abstract: The sea surface is a classical example of a stochastic field evolving in time.
Extreme events occurring on such a surface are random and of interest for practitioners: ocean engineers are interested in large waves and the damage they may cause to an oil platform or a ship. Thus data on ocean surface elevation are constantly collected by systems of buoys, ship- or air-borne devices, and satellites all around the globe. These vast data require statistical analysis to answer important questions about random events of interest. For example, one can ask about the statistical distribution of wave sizes, in particular how large waves are distributed or how steep they are. Waves often travel in groups, and a group of waves typically causes more damage to a structure or a ship than an individual wave, even if the latter is bigger than each one in the group. So one can be interested in how many waves there are per group or how fast groups travel in comparison to individual waves.
In the talk, a methodology is presented that analyses statistical distributions at random events defined on a random process. It is based on a classical result of Rice and allows for computation of statistical distributions of events sampled from the sea surface.
The methodology was initially applied to Gaussian models but is in fact also valid for quite general dynamically evolving stochastic surfaces. In particular, it is discussed how sampling distributions for non-Gaussian processes can be obtained through Slepian models that describe the distributional form of a stochastic process observed at level crossings of a random process. This is used for efficient simulation of the behaviour of a random process sampled at crossings of a non-Gaussian moving average process. It is observed that the behaviour of the process at high level crossings is fundamentally different from that in the Gaussian case, which is in line with some recent theoretical results on the subject.
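The classical result of Rice referred to above states, in one standard form, that the expected number of upcrossings of a level u by a smooth stationary process X on [0, T] is E[N_u^+(0, T)] = T \int_0^\infty z \, f_{X(0), X'(0)}(u, z) \, dz; distributions of quantities observed at such crossings are obtained from similar level-crossing intensities. (The form given here is the generic one, for orientation.)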
 
10/3, Bengt Johannesson, Volvo, Design Strategies of Test Codes for Durability Requirement of Disk Brakes in Truck Application
Abstract: Durability requirements for disk brakes in truck applications are improved based on actual customer usage. A second moment reliability index is used to design test codes for the assessment of variable amplitude disk brake fatigue life. The approach is based on the mean estimates of logarithms of equivalent strength of the brake and customer load variables. The index makes it possible to take all uncertainties in the fatigue life assessment into account, including scatter in material, production and usage, but also systematic errors such as model errors in the test set-up, stress calculations and damage hypothesis, as well as statistical uncertainties.
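In its simplest (Cornell-type) form, such a second moment reliability index for the log variables would read \beta = (E[\log S] - E[\log L]) / \sqrt{Var[\log S] + Var[\log L]}, with S the equivalent strength and L the customer load; this is an illustrative textbook version, and the index used in the application may include further uncertainty and model-error terms.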
 
22/3, Hermann Thorisson, University of Iceland, Palm Theory and Shift-Coupling
Abstract: Palm versions w.r.t. stationary random measures are mass-stationary, that is, the origin is at a typical location in the mass of the random measure. For a simple example, consider the stationary Poisson process on the line conditioned on having a point at the origin. The origin is then at a typical point (at a typical location in the mass) because shifting the origin to the n-th point on the right (or on the left) does not alter the fact that the inter-point distances are i.i.d. exponential. Another (less obvious) example is the local time at zero of a two-sided standard Brownian motion.
In this talk we shall first consider mass-stationarity on the line and the shift-coupling problem of how to shift the origin from a typical location in the mass of one random measure to a typical location in the mass of another random measure. We shall then extend the view beyond the line, moving through the Poisson process in the plane and d-dimensional space towards general random measures on groups.
 
14/4, Omiros Papaspiliopoulos, Universitat Pompeu Fabra: Stochastic processes for learning and uncertainty quantification
Abstract: In the talk I will focus on the use of stochastic processes for building algorithms for probing high-dimensional distributions, as they appear in statistical learning problems, in particular Bayesian learning, and for uncertainty quantification. More specifically, I will give an accessible overview of how stochastic differential equations are used to build Markov chain Monte Carlo algorithms, and will then move on to some current work I am involved with that combines these ideas with auxiliary variables in order to obtain a good tradeoff between mixing and computational efficiency.
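As a small illustration of how a stochastic differential equation gives rise to an MCMC algorithm, here is a generic Metropolis-adjusted Langevin sketch; it is not the specific algorithm of the talk, and all names are illustrative.

import numpy as np

def mala(log_target, grad_log_target, x0, step, n_iter, rng=None):
    # Metropolis-adjusted Langevin algorithm: an Euler step of the Langevin
    # SDE dX = 0.5 * grad log pi(X) dt + dW serves as proposal, and the
    # discretisation error is corrected by a Metropolis-Hastings accept/reject.
    rng = np.random.default_rng(rng)
    x = np.asarray(x0, dtype=float)
    lp_x = log_target(x)
    samples = []

    def log_q(y, z):
        # log density (up to a constant) of the Langevin proposal y given z
        mean = z + 0.5 * step**2 * grad_log_target(z)
        return -np.sum((y - mean) ** 2) / (2 * step**2)

    for _ in range(n_iter):
        prop = x + 0.5 * step**2 * grad_log_target(x) + step * rng.standard_normal(x.shape)
        lp_prop = log_target(prop)
        log_alpha = lp_prop + log_q(x, prop) - lp_x - log_q(prop, x)
        if np.log(rng.uniform()) < log_alpha:
            x, lp_x = prop, lp_prop
        samples.append(x.copy())
    return np.array(samples)

# Example: sample a standard bivariate Gaussian target
draws = mala(lambda x: -0.5 * np.sum(x**2), lambda x: -x, x0=np.zeros(2), step=0.5, n_iter=5000)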
 
21/4, Alexandre Antonelli, Professor in Systematics and Biodiversity, Dept of Biological and Environmental Sciences, University of Gothenburg: Hundreds of millions of DNA sequences and species observations: Challenges for synthesizing biological knowledge
The loss of biodiversity is one of the most serious threats to society, yet we still lack a basic understanding of how many species there are, where they occur, how they evolved, and how they may be affected by global warming and land use. In this talk I will present some of the major prospects and challenges faced by biologists in the 21st century, focusing on the methodological tools that our group is developing to make sense out of rapidly increasing data volumes. I will focus on i) building the Tree of Life that unites all living organisms and their timing of origination, ii) mapping the distribution of all life on Earth, and iii) understanding how the world’s various ecosystems originated and responded to changes in the landscape and climate. Biodiversity research is now entering a new and exciting phase, where mathematical and statistical sciences hold the potential to make a huge contribution by increased collaboration and innovative solutions.
 
28/4, Peter J Diggle, CHICAS, Lancaster Medical School, Lancaster University: Model-Based Geostatistics for Prevalence Mapping in Low-Resource Settings
Abstract: In low-resource settings, prevalence mapping relies on empirical prevalence data from a finite, often spatially sparse, set of surveys of communities within the region of interest, possibly supplemented by remotely sensed images that can act as proxies for environmental risk factors. A standard geostatistical model for data of this kind is a generalized linear mixed model with logistic link, binomial error distribution and a Gaussian spatial process as a stochastic component of the linear predictor.
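Written out, the standard model referred to here is (in generic notation) a binomial logistic GLMM of the form Y_i | S(x_i) ~ Binomial(n_i, p(x_i)), with logit p(x_i) = d(x_i)'\beta + S(x_i), where Y_i is the number of positives among the n_i people tested at location x_i, d(x_i) are covariates derived for instance from remotely sensed images, and S is a Gaussian spatial process.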
In this talk, I will first review statistical methods and software associated with this standard model, then consider several methodological extensions whose development has been motivated by the requirements of specific applications including river-blindness mapping Africa-wide.
Diggle, P.J. and Giorgi, E. (2016). Model-based geostatistics for prevalence mapping in low-resource settings (with discussion). Journal of the American Statistical Association (to appear).
 
12/5, Ute Hahn, Aarhus University: Monte Carlo envelope tests for high dimensional data or curves
Abstract: Monte Carlo envelopes compare an observed curve with simulated counterparts to test the hypothesis that the observation is drawn from the same distribution as the simulated curves. The simulated curves are used to construct an acceptance band, the envelope. If the observed curve leaves the envelope, the test rejects the hypothesis. These methods are popular, for example, to test goodness of fit for spatial point processes, where the information contained in point patterns is boiled down to a summary function. The summary function is estimated both on the observation and on simulated realizations of the null model. However, the usual practice of drawing an acceptance band from pointwise empirical quantiles bears an inherent multiple testing problem, and yields liberal tests in most cases.
This talk introduces a graphical envelope test that has proper size. We will also see how the test principle can be extended to groupwise comparison by permutation, or to high dimensional data that are not represented as curves, such as images. Based on joint work with Mari Myllymäki (Natural Resources Institute Finland), Tomáš Mrkvička (University of South Bohemia) and Pavel Grabarnik (Laboratory of Ecosystems Modeling, the Russian Academy of Sciences).
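As a toy illustration of the global testing idea, here is a simplified extreme-rank sketch in the spirit of, but not identical to, the tests discussed in the talk; the function name is invented and ties are broken arbitrarily.

import numpy as np

def extreme_rank_test(obs_curve, sim_curves):
    # obs_curve:  (m,) observed summary function evaluated at m arguments
    # sim_curves: (s, m) summary functions from s simulations of the null model
    # Each curve is judged by its most extreme pointwise rank (two-sided),
    # which avoids the multiple-testing problem of pointwise envelopes.
    curves = np.vstack([obs_curve, sim_curves])          # observed curve is row 0
    n = curves.shape[0]
    r_low = curves.argsort(axis=0).argsort(axis=0) + 1   # 1 = smallest value
    r_high = n + 1 - r_low                               # 1 = largest value
    extreme = np.minimum(r_low, r_high).min(axis=1)      # extreme rank per curve
    return np.mean(extreme <= extreme[0])                # Monte Carlo p-value

# Example with 999 simulated curves from the null model
rng = np.random.default_rng(1)
p = extreme_rank_test(rng.standard_normal(50), rng.standard_normal((999, 50)))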
 
26/5, Martin Foster, University of York: A Bayesian decision-theoretic model of sequential experimentation with delayed response
Abstract: We propose a Bayesian decision-theoretic model of a fully sequential experiment in which the real-valued primary end point is observed with delay. The goal is to identify the sequential experiment which maximises the expected benefits of a technology adoption decision, minus sampling costs. The solution yields a unified policy defining the optimal `do not experiment'/`fixed sample size experiment'/`sequential experiment' regions and optimal stopping boundaries for sequential sampling, as a function of the prior mean benefit and the size of the delay. We apply the model to the field of medical statistics, using data from published clinical trials.
 
1/9, Fima Klebaner, Monash University: On the use of Sobolev spaces in limit theorems for the age of population
Abstract: We consider a family of general branching processes with reproduction parameters depending on the age of the individual as well as the population age structure and a parameter K, which may represent the carrying capacity. These processes are Markovian in the age structure. We give the Law of Large Numbers as a measure-valued process and the Central Limit Theorem as a distribution-valued process. While the LLN recovers a known PDE, the CLT yields a new SPDE.
This is joint work with Peter Jagers (Chalmers), Jie Yen Fan and Kais Hamza (Monash).
 
15/9, Joseba Dalmau, Université Paris-Sud: The distribution of the quasispecies
In 1971, Manfred Eigen proposed a deterministic model in order to describe the evolution of an infinite population of macromolecules subject to mutation and selection forces. As a consequence of the study of Eigen's model, two important phenomena arise: the error threshold and the quasispecies. In order to obtain a counterpart of these results for a finite population, we study a Moran model with mutation and selection, and we recover, in a certain asymptotic regime, the error threshold and the quasispecies phenomena. Furthermore, we obtain an explicit formula for the distribution of the quasispecies.
 
22/9, Sophie Hautphenne, EPFL: A pathwise iterative approach to the extinction of branching processes with countably many types
Abstract: We consider the extinction events of Galton-Watson processes with countably infinitely many types. In particular, we construct truncated and augmented Galton-Watson processes with finite but increasing sets of types. A pathwise approach is then used to show that, under some sufficient conditions, the corresponding sequence of extinction probability vectors converges to the global extinction probability vector of the Galton-Watson processes with countably infinitely many types. Besides giving rise to a number of novel iterative methods for computing the global extinction probability vector, our approach paves the way to new global extinction criteria for branching processes with countably infinitely many types.
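For any fixed finite truncation, the corresponding extinction probability vector can be computed by the classical fixed-point iteration; the following generic sketch (names invented, and without the truncation and augmentation scheme of the talk) illustrates that step.

import numpy as np

def extinction_probability(offspring_pgf, num_types, tol=1e-12, max_iter=100_000):
    # offspring_pgf(q) returns the vector F(q) with components
    # F_i(q) = E[ prod_j q_j ** N_ij ] for a parent of type i, where N_ij is
    # the number of type-j children. Iterating q <- F(q) from q = 0 converges
    # monotonically to the minimal fixed point, the extinction probability vector.
    q = np.zeros(num_types)
    for _ in range(max_iter):
        q_new = offspring_pgf(q)
        if np.max(np.abs(q_new - q)) < tol:
            break
        q = q_new
    return q_new

# Example: two types, each type-i individual has a Poisson(m[i, j]) number of
# type-j children, independently, so F_i(q) = exp( sum_j m[i, j] * (q_j - 1) ).
m = np.array([[1.2, 0.5], [0.4, 0.8]])
q = extinction_probability(lambda q: np.exp(m @ (q - 1.0)), num_types=2)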
 
29/9, Umberto Picchini, Lund University: A likelihood-free version of the stochastic approximation EM algorithm (SAEM) for inference in complex models
Abstract: We present an approximate maximum likelihood methodology for the parameters of incomplete data models. A likelihood-free version of the stochastic approximation expectation-maximization (SAEM) algorithm is constructed to maximize the likelihood function of model parameters. While SAEM is best suited for models having a tractable "complete likelihood" function, its application to moderately complex models is difficult, and becomes impossible for models having so-called intractable likelihoods. The latter are typically treated using approximate Bayesian computation (ABC) algorithms or synthetic likelihoods, where information from the data is carried by a set of summary statistics. While ABC is considered the state-of-the-art methodology for intractable likelihoods, its algorithms are often difficult to tune. On the other hand, synthetic likelihood (SL) is a more recent methodology which is less general than ABC: it requires stronger assumptions but also less tuning. By exploiting the Gaussian assumption set by SL on data summaries, we can construct a likelihood-free version of SAEM. Our method is completely plug-and-play and available for both static and dynamic models, the ability to simulate realizations from the model being the only requirement. We present simulation studies from our ongoing work: preliminary results are encouraging and are compared with state-of-the-art methods for maximum likelihood inference (iterated filtering), Bayesian inference (particle marginal methods) and ABC.
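The synthetic likelihood ingredient can be sketched as follows: simulate summaries at a given parameter value, fit a Gaussian to them, and evaluate the observed summaries under that Gaussian. This is a generic Wood-style sketch, not the authors' implementation, and the function names are placeholders.

import numpy as np
from scipy.stats import multivariate_normal

def synthetic_loglik(theta, simulate_summaries, s_obs, n_sim=500, rng=None):
    # Gaussian synthetic log-likelihood of the observed summary statistics s_obs:
    # simulate_summaries(theta, rng) returns the summary vector of one dataset
    # simulated from the model at parameter theta (the only model access needed).
    rng = np.random.default_rng(rng)
    S = np.array([simulate_summaries(theta, rng) for _ in range(n_sim)])
    mu = S.mean(axis=0)
    Sigma = np.cov(S, rowvar=False)
    return multivariate_normal(mean=mu, cov=Sigma, allow_singular=True).logpdf(s_obs)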
 
6/10, Jakob Björnberg: Random loop models on trees
Abstract: Certain models in statistical physics may be studied via ensembles of random loops. Of particular interest is the question of whether or not infinite loops occur, since this is of relevance to phase transition in the physics model. One may consider such models defined on various graphs, and in this talk we focus on regular trees of large degree. We sketch the main ideas of joint work with D. Ueltschi in which we estimate the critical point for the occurrence of infinite loops.

20/10, Georg Lindgren, Lund University: Stochastic properties of optical black holes
Abstract: "Light can be twisted like a corkscrew around its axis of travel. Because of the twisting, the light waves at the axis itself cancel each other out. When projected onto a flat surface, an optical vortex looks like a ring of light, with a dark hole in the center. This corkscrew of light, with darkness at the center, is called an optical vortex." Wikipedia

The statistical properties near phase singularities in a complex wavefield are described as the conditional distributions of the real and imaginary Gaussian components, given a common zero crossing point. The exact distribution is expressed as a Slepian model, where a regression term provides the main structure, with parameters given by the gradients of the Gaussian components at the singularity, and Gaussian non-stationary residuals that provide local variability. This technique differs from the linearization (Taylor expansion) technique commonly used.

The empirically and theoretically verified elliptic eccentricity of the intensity contours in the vortex core is a property of the regression term, but with different normalization compared to the classical theory. The residual term models the statistical variability around these ellipses. The radii of the circular contours of the current magnitude are similarly modified by the new regression expansion and also here the random deviations are modelled by the residual field.

27/10, Daniel Ahlberg, IMPA and Uppsala University: Random coalescing geodesics in first-passage percolation
Abstract: A random metric on \mathbb{Z}^2 is obtained by assigning non-negative i.i.d. weights to the edges of the nearest neighbour lattice. We shall discuss properties of geodesics in this metric. We develop an ergodic theory for infinite geodesics via the study of what we shall call 'random coalescing geodesics'. Random coalescing geodesics have a range of nice properties. By showing that they are (in some sense) dense in the space of geodesics, we may extrapolate these properties to all infinite geodesics. As an application of this theory we answer a question posed by Benjamini, Kalai and Schramm in 2003, that has come to be known as the 'midpoint problem'. This is joint work with Chris Hoffman.
 
3/11, KaYin Leung, Stockholm University, Dangerous connections: the spread of infectious diseases on dynamic networks
In this talk we formulate models for the spread of infection on dynamic networks that are amenable to analysis in the large population limit. We distinguish three different levels: (1) binding sites, (2) individuals, and (3) the population. In the tradition of physiologically structured population models, the model formulation starts at the individual level. Influences from the 'outside world' on an individual are captured by environmental variables which are population-level quantities. A key characteristic of the network models is that individuals can be decomposed into a number of (conditionally) independent components: each individual has a fixed number of 'binding sites' for partners. Moreover, individual-level probabilities are obtained from binding-site-level probabilities by combinatorics, while population-level quantities are obtained by averaging over individuals with respect to age. The Markov chain dynamics of binding sites are described by only a few equations. Yet we are able to characterize population-level epidemiological quantities such as R_0, which is a threshold parameter for the stability of the trivial steady state of the population-level system. In this talk we show how probabilistic arguments can be used to derive an explicit R_0 for an, in principle, high-dimensional system of ODEs.

This talk is based on joint work with Odo Diekmann and Mirjam Kretzschmar (Utrecht, The Netherlands).
 
24/11, Yi He, Tilburg University: Asymptotics for Extreme Depth-based Quantile Region Estimation
Abstract: A data depth function provides a probability-based ordering from the center (the point with maximal depth value) outwards. Consider the small-probability quantile region in arbitrary dimensions consisting of extremely outlying points with nearly zero data depth value. Since its estimation involves extrapolation outside the data cloud, an entirely nonparametric method often fails. Using extreme value statistics, we extend the semiparametric estimation procedures proposed in Cai, Einmahl and de Haan (AoS, 2011) and He and Einmahl (JRSSb, 2016) to incorporate various depth functions. Under weak regular variation conditions, a general consistency result is derived. To construct confidence sets that asymptotically cover the extreme quantile region and/or its complement with a pre-specified probability, we introduce new notions of distance between our estimated and true quantile region and prove their asymptotic normality via an approximation using the extreme value index only. Refined asymptotics are derived particularly for the half-space depth to include the shape estimation uncertainty. The finite-sample coverage probabilities of our asymptotic confidence sets are evaluated in a simulation study for the half-space depth and the projection depth.
 
1/12, Marianne Månsson: Statistical methodology and challenges in the area of prostate cancer screening
To screen or not to screen for prostate cancer in a population-based manner has been an ongoing discussion for decades in Sweden and many other countries. No country has found it justifiable to introduce a general screening program so far. Meanwhile so-called opportunistic screening has become more and more common. At the Department of Urology at Sahlgrenska two randomized screening studies are ongoing: the first one started in 1994 and is in its end phase (~20,000 men), while the new one started in 2015 (~40,000-60,000 men). With these studies as a starting point, statistical methodology and challenges in this kind of study will be discussed. In particular, the issue of overdiagnosis will be considered.
 
8/12, Holger Rootzén: Human life is unbounded -- but short
The longest-living known person, Jeanne Calment, died August 4, 1997 at the age of 122 years and 164 days. Is there a sharp upper limit for the length of human life, close to this age? A Nature letter (doi:10.1038/nature19793) a month and a half ago claimed that this is the case. We did not find the arguments compelling and have hence used extreme value statistics to try to understand what the two existing databases on supercentenarians, humans who have lived 110 years or more, really say. Results include that there is no difference in survival between women and men after 110 years, but that around 10 times as many women as men survive to 110; that data from western countries seems homogeneous, but that Japanese data is different; that there is little evidence of a time trend (though this is less clear); and that the overall picture is that human life is unbounded -- but short: the yearly survival rate after 110 is about 50%. As one consequence, it is not unlikely that during the next 20 years, and at the present stage of medicine and technology, someone (probably a woman) will live to be 120, but quite unlikely that anyone will live more than 130 years. Of course, dramatic progress in medicine might change this picture completely. The results come from ongoing and preliminary work, and we very much hope for input from the audience to help improve our understanding. This is joint work with Dmitrii Zholud.
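To put the 50% figure in perspective with a back-of-the-envelope calculation: with a roughly constant yearly survival probability of one half beyond 110, the chance of reaching 130 given survival to 110 is about 0.5^20, roughly one in a million, whereas reaching 120 corresponds to about 0.5^10, roughly one chance in a thousand per supercentenarian, which is why someone among them doing so in the coming decades is not unlikely.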
 
15/12, Martin Schlather, University of Mannheim, Simulation of Max-Stable Random Fields
Abstract: The simulation algorithm for max-stable random fields I had suggested in 2002 has several drawbacks. Among others, it is inexact and it is slow for larger areas. One improvement is an efficient simulation algorithm that relies on theoretical results on the choice of the spectral representation (Oesting, Schlather, Zhou, submitted). Another improvement is on the computational side, especially for large areas, by generalising an algorithm of Döge (2001) on the simulation of a model in stochastic geometry, the random sequential adsorption model. The latter is work in progress.
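For orientation (in generic notation, not necessarily that of the works cited): a max-stable field Z with unit Fréchet margins admits a spectral representation Z(x) = \max_{i \ge 1} \zeta_i W_i(x), where \{\zeta_i\} are the points of a Poisson process on (0, \infty) with intensity \zeta^{-2} d\zeta and the W_i are independent copies of a non-negative process with E[W(x)] = 1; simulation algorithms differ mainly in how this spectral representation is chosen and truncated.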
