The GitHub repository included sample predictors and prescriptors, guidelines for validating entries and evaluating their performance, and general resources such as links to relevant data and literature. The repository was also used to explore building better predictors and prescriptors.
Judges used descriptions of the technical approaches employed by each team to conduct a qualitative assessment based on factors such as innovation, collaboration, and the general usefulness of the proposed solutions.
A leaderboard displayed the quantitative evaluation of predictor submissions on future (unseen) cases in the December 22, 2020 to January 12, 2021 timeframe. The leaderboard was interactive, making it possible to explore the performance of the different entries across countries and regions.
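The exact scoring code is not given here, but a plausible sketch of such a leaderboard metric is the mean absolute error between predicted and actual daily new cases per region over the evaluation window. The function and data below are illustrative assumptions, not the competition's actual scoring implementation.

```python
def mae_per_region(predicted, actual):
    """Mean absolute error of daily new-case predictions, per region.

    predicted, actual: dict mapping region name -> list of daily new-case
    counts over the same evaluation window.
    """
    scores = {}
    for region, pred in predicted.items():
        act = actual[region]
        # average absolute daily error over the window
        scores[region] = sum(abs(p - a) for p, a in zip(pred, act)) / len(act)
    return scores

# toy example with made-up numbers
predicted = {"US": [100, 110, 120], "FR": [50, 55, 60]}
actual = {"US": [105, 115, 118], "FR": [48, 57, 61]}
print(mae_per_region(predicted, actual))  # lower is better
```

Per-region scores like these can then be averaged or ranked to populate an interactive leaderboard filterable by country or region.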
This leaderboard displayed the first quantitative evaluation (QE1) of the prescriptor submissions. Each submission consisted of up to 10 prescriptors representing different tradeoffs between predicted cases and intervention stringency. Teams could vary the weights and numbers of prescriptors across different time frames, regions, and NPI weights to determine which prescriptors achieved fewer predicted cases with less stringent interventions.
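The tradeoff described above is a two-objective comparison: a prescriptor is preferable if it achieves both fewer predicted cases and lower stringency. A minimal sketch of that comparison, using Pareto dominance over hypothetical (cases, stringency) pairs (the names and values are assumptions for illustration):

```python
def pareto_front(prescriptors):
    """Return names of prescriptors not dominated on (cases, stringency).

    prescriptors: list of (name, predicted_cases, stringency) tuples.
    A prescriptor is dominated if another one is no worse on both
    objectives and strictly better on at least one.
    """
    front = []
    for name, cases, stringency in prescriptors:
        dominated = any(
            c2 <= cases and s2 <= stringency and (c2 < cases or s2 < stringency)
            for _, c2, s2 in prescriptors
        )
        if not dominated:
            front.append(name)
    return front

# toy example: "C" is dominated by "A" (more cases AND more stringency)
candidates = [("A", 100, 5), ("B", 120, 3), ("C", 130, 6)]
print(pareto_front(candidates))  # ['A', 'B']
```

The surviving prescriptors represent genuinely different tradeoffs, which is why a submission could include up to 10 of them rather than a single "best" one.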
The documented results showed how the prescriptors were evaluated in the second quantitative evaluation (QE2), which pooled all entries for the common good. The set of submitted prescriptors was used as an initial population and evolved further with genetic algorithms. In this way, the resulting prescriptors combined the unique and useful aspects of all submissions into a set of super-prescriptors that performed even better. The amount of DNA that each submission contributed to these super-prescriptors was used as a measure of its quality.
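The evolutionary scheme above can be sketched in a few lines: seed a population with one genome per submission, tag every gene with its source, evolve by crossover (which preserves the tags), and measure each submission's contribution as the fraction of genes it supplied to the fittest surviving genome. The bit-string representation and toy fitness function are illustrative assumptions, not the actual prescriptor encoding.

```python
import random

random.seed(0)
GENES = 16  # toy genome length; real prescriptors are far richer

def fitness(genome):
    # stand-in objective: more 1-bits is "better"
    # (a real fitness would balance predicted cases against stringency)
    return sum(bit for bit, _ in genome)

# seed population: one genome per submission, each gene tagged with its source
submissions = ["team_a", "team_b", "team_c"]
population = [[(random.randint(0, 1), team) for _ in range(GENES)]
              for team in submissions]

def crossover(parent1, parent2):
    cut = random.randrange(1, GENES)
    # the child keeps each gene's source tag, so lineage is traceable
    return parent1[:cut] + parent2[cut:]

for _ in range(30):
    population.sort(key=fitness, reverse=True)
    # breed the two fittest genomes; replace the least fit with the child
    population[-1] = crossover(population[0], population[1])

best = max(population, key=fitness)
# each submission's "DNA share" of the best evolved genome
contribution = {team: sum(1 for _, src in best if src == team) / GENES
                for team in submissions}
print(contribution)
```

The `contribution` fractions sum to 1, giving a direct, hedged analogue of the DNA-share quality measure described above: submissions whose genes survive selection into the super-prescriptor score higher.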