PSLmodels/COVID-MCS

Uses methodology proposed in Ganz (2020) to test for sustained declines in positive COVID-19 cases


COVID-MCS

Introduction

COVID-MCS is a web application that allows users to apply the testing framework developed in Ganz (2020) to data on positive COVID-19 test rates. See Ganz (2020) for more details on the implementation of the test, and the GitHub repository for all underlying code.

Why COVID-MCS?

Policymakers and public health researchers have had difficulty determining whether gating indicators tied to trends in the intensity of the COVID-19 pandemic have been satisfied. One source of confusion is that commonly used hypothesis-testing methods, e.g., t-tests for mean comparisons or linear regression models, are poorly suited to questions about the shapes of trends in data, e.g., a 14-day sustained decrease.

The hypothesis testing framework implemented here is suitable for asking questions like "has a region experienced a specified number of days of declining COVID-19 cases?" by applying the model confidence set (MCS) framework to a series of shape-constrained models. Based on the output of the test, the analyst can then determine whether the data are consistent with criteria for phased reopening.

Using this web app

The user must supply data on positive test counts and total test counts, as well as specify the shapes to test and a series of other assumptions.

An unrestricted model, in which the daily average is allowed to take any value, is always tested. The additional models used in Ganz (2020) to test for sustained 14-day declines are "con", "dec", and "ius".

The following describes the possible shapes the model can test:

  • "cei": A ceiling shape constraint. All proportions of positive observations are constrained to be less than or equal to a specified level.
  • "con": A constant shape constraint. The proportion of positive observations is the same every day.
  • "con_cei": A constant shape constraint in which the level is also constrained to be less than or equal to a ceiling.
  • "dec": A monotonically decreasing shape constraint. The model requires that the proportion of positive observations is weakly decreasing across every pair of days.
  • "dec_cei": A monotonically decreasing shape constraint in which the level is also constrained to be less than or equal to a ceiling.
  • "inc": A monotonically increasing shape constraint. The model requires that the proportion of positive observations is weakly increasing across every pair of days.
  • "inc_cei": A monotonically increasing shape constraint in which the level is also constrained to be less than or equal to a ceiling.
  • "ius": An inverted u-shaped constraint. The model requires that the proportion of positive observations increase monotonically to a peak day and decrease monotonically thereafter.
  • "ius_cei": An inverted u-shaped constraint in which the level is also constrained to be less than or equal to a ceiling.
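To make the monotone constraints concrete, here is a minimal sketch (not the app's implementation) of fitting the "dec" shape with the pool-adjacent-violators algorithm: the fitted daily proportions must be weakly decreasing, and violating days are pooled into their mean.

```python
def fit_decreasing(props):
    """Least-squares fit of a weakly decreasing sequence to `props`."""
    # Each block holds [sum, count]; merge adjacent blocks whenever a
    # later block's mean exceeds an earlier one's (a "dec" violation).
    blocks = []
    for p in props:
        blocks.append([p, 1])
        while len(blocks) > 1 and \
                blocks[-2][0] / blocks[-2][1] < blocks[-1][0] / blocks[-1][1]:
            s, n = blocks.pop()
            blocks[-1][0] += s
            blocks[-1][1] += n
    # Expand each pooled block back to one value per day.
    fitted = []
    for s, n in blocks:
        fitted.extend([s / n] * n)
    return fitted

# Hypothetical daily positive-test proportions.
observed = [0.25, 0.22, 0.24, 0.18, 0.19, 0.15]
print(fit_decreasing(observed))
```

Days 2-3 (0.22, 0.24) violate the decreasing constraint and are pooled to their mean, 0.23; the same happens for days 4-5.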

Additional assumptions required include:

  • Nesting: If the candidate models are nested, set "models are nested" to true; otherwise, set it to false. The method assumes that nested models are listed from most restrictive to least restrictive.
  • Alpha: Determines the significance level for the test. Higher levels of alpha make it easier to reject the null hypothesis of equal fit, so models are more likely to be dropped from the model confidence set. Lower levels of alpha make the test more conservative, so more models are retained.
  • Ceiling: Defines the ceiling if a ceiling shape constraint is implemented.
  • Lag: Defines how many days elapse between inter-day comparisons. (The default is one day.)
  • Seed: Permits the analyst to pass a seed for the purposes of replication. (The default, "0", indicates a random seed.)
  • Number of bootstrap samples: Number of bootstrap samples used in the execution of the test (100 to 250 is recommended).
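The assumptions above can be collected into a single specification. The dict below is purely illustrative (the web form, not this dict, defines the real interface); the shape names and the nesting convention follow Ganz (2020), where "con", "dec", and "ius" are listed from most to least restrictive.

```python
# Hypothetical input bundle mirroring the assumptions described above.
spec = {
    "models": ["con", "dec", "ius"],  # shapes to test; the unrestricted model is always included
    "nested": True,                   # con/dec/ius are nested, most restrictive first
    "alpha": 0.05,                    # significance level for each equality test
    "ceiling": 0.05,                  # used only by the *_cei constraints
    "lag": 1,                         # days between inter-day comparisons (default)
    "seed": 0,                        # 0 indicates a random seed
    "bootstrap_samples": 100,         # 100 to 250 recommended
}
print(spec["models"])
```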

Reading output

The model returns the model confidence set (MCS), i.e., the set of models deemed to fit the data equally well at a specified confidence level. In addition, the model returns a summary table in which each row represents one iteration of testing the null hypothesis that all remaining models fit the data equally well.

  • “iter”: Number of tests of equality completed.
  • “N”: Number of bootstrap samples for which the simulated range statistic exceeds the range statistic in the observed data.
  • “P.H0”: P-value for the null hypothesis for the current set of models.
  • “P.MCS”: MCS p-values for the current set of models. See Hansen (2011).
  • “MCS”: Models evaluated in this iteration.
  • “Model.Drop”: Model rejected from the MCS if the p-value of the null hypothesis is less than alpha.

The model also returns graphs illustrating how each user-specified shape compares to the observed daily averages.

Where to find COVID testing data

Regional data can be found at the COVID Tracking Project. Note that the model is only as good as the data it is testing. Ganz (2020) argues that testing regimes in the United States currently emphasize mitigating the spread of the virus, not measuring the underlying intensity of the pandemic in a region, a fact that should be considered when interpreting the output of the test.

Frequent errors

The model will not run if the inputs are in an incorrect format. Here are some common reasons the model might fail:

  • Model entries are not separated by a comma and a space.
  • Positive test counts must be strictly greater than zero and strictly less than the total number of tests performed on each day.
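Both failure modes can be caught before submitting. The helper below is hypothetical (it is not part of the app) and simply encodes the two rules above: model entries separated by a comma and a space, and positive counts strictly between zero and the day's total tests.

```python
VALID_SHAPES = {"cei", "con", "con_cei", "dec", "dec_cei",
                "inc", "inc_cei", "ius", "ius_cei"}

def check_inputs(models_field, positives, totals):
    """Return a list of human-readable problems with the inputs."""
    errors = []
    # Entries must be separated by a comma and a space, so a plain
    # split on ", " should yield only recognized shape names.
    for m in models_field.split(", "):
        if m not in VALID_SHAPES:
            errors.append(f"unrecognized or badly separated model entry: {m!r}")
    # Positives must be strictly between 0 and the day's total tests.
    for day, (pos, tot) in enumerate(zip(positives, totals)):
        if pos <= 0 or pos >= tot:
            errors.append(f"day {day}: positives must be strictly between 0 and total tests")
    return errors

print(check_inputs("con, dec, ius", [10, 8], [100, 100]))  # []
print(check_inputs("con,dec", [0, 100], [100, 100]))       # three problems
```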
