

The technology behind the XPRIZE Pandemic Response Challenge



Behind the competition: the technical program design, qualitative entry descriptions, quantitative evaluation methods and results.

The XPRIZE Pandemic Response Challenge asked competitors to create models that predict the course of the COVID-19 pandemic and prescribe non-pharmaceutical interventions (NPIs) to help mitigate it. Cognizant's Evolutionary AI team leveraged its previous research to build the competition, which took place in two phases.
  • In Phase 1—November 2020 to January 2021—competitors used the history of cases and interventions in a country or region as input to predict the number of cases likely in the future.
  • In Phase 2—January through February 2021—competitors prescribed intervention plans that simultaneously minimized future cases and the stringency (i.e., cost) of the interventions. The teams that performed best in Phase 1 were selected to participate in Phase 2.
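The two roles above can be sketched as function interfaces. This is an illustrative sketch only, not the official competition API: the function names, argument types and baseline behaviors are assumptions for clarity.

```python
from typing import Dict, List

def predict_cases(case_history: List[float],
                  npi_history: List[Dict[str, int]],
                  horizon: int) -> List[float]:
    """Phase 1 role: predict daily new cases for the next `horizon` days,
    given the history of cases and interventions in a region.

    Trivial baseline: assume cases continue at the most recent level.
    """
    last = case_history[-1] if case_history else 0.0
    return [last] * horizon

def prescribe_npis(case_history: List[float],
                   npi_max_levels: Dict[str, int]) -> Dict[str, int]:
    """Phase 2 role: prescribe an intervention plan, i.e., a stringency
    level for each NPI.

    Trivial baseline: apply every intervention at half its maximum level.
    """
    return {npi: max_level // 2 for npi, max_level in npi_max_levels.items()}

forecast = predict_cases([120.0, 130.0, 125.0], [], horizon=3)
plan = prescribe_npis([120.0, 130.0, 125.0],
                      {"school_closing": 3, "workplace_closing": 3})
```

A real entry would replace these baselines with a trained model (the predictor) and a search or learned policy over intervention levels (the prescriptor).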


Using leaderboards to track team performance

Interactive leaderboards were used in both phases to visualize the performance of the teams along the two objectives. A variety of countries and regions were studied at different times of the pandemic and with different weightings on the interventions. Phase 2 entries were also evaluated based on how unique and useful their contributions were as part of a combined super-prescriptor.
The technical elements of the competition, presented below, included the two interactive leaderboards. The competitors were anonymized to keep the focus on the methodology rather than on judging individual teams.
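Because entries are scored on two objectives that trade off against each other (fewer cases vs. less stringent interventions), a natural way to visualize them is a Pareto front. The sketch below, with illustrative numbers, shows the standard dominance test: an entry is dominated if another entry is at least as good on both objectives and strictly better on one.

```python
def pareto_front(points):
    """Return the non-dominated (cases, stringency) points; both minimized."""
    front = []
    for i, (c1, s1) in enumerate(points):
        dominated = any(
            (c2 <= c1 and s2 <= s1) and (c2 < c1 or s2 < s1)
            for j, (c2, s2) in enumerate(points) if j != i
        )
        if not dominated:
            front.append((c1, s1))
    return front

# Illustrative entries: (predicted cases, stringency cost).
entries = [(100, 5), (80, 7), (120, 6), (90, 6)]
front = pareto_front(entries)  # (120, 6) is dominated by (90, 6)
```

On a leaderboard, only the entries on the front represent the best available tradeoffs; which one is "best" then depends on how heavily interventions are weighted.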

Technical elements of the competition


    A general description of the competition setup, timelines and rules for the competing teams.


    The GitHub repository included guidelines for sample predictors and prescriptors, for validating entries and for evaluating their performance. It contained general resources such as links to relevant data and literature, and was also used to explore building better predictors and prescriptors.


    Judges used descriptions of the technical approaches employed by each team to conduct a qualitative assessment based on factors such as innovation, collaboration and the general usefulness of the proposed solutions.


    A leaderboard displayed the quantitative evaluation of predictor submissions in predicting future (unseen) cases in the December 22, 2020 to January 12, 2021 timeframe. The leaderboard was interactive, making it possible to explore the performance of the different entries in various countries and regions.
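A per-region error metric is what makes this kind of drill-down view possible. The sketch below uses mean absolute error between predicted and actual daily cases as an illustrative metric; the official scoring formula may differ (for example, normalizing by population or using cumulative error).

```python
def mae(predicted, actual):
    """Mean absolute error between two equal-length case series."""
    assert len(predicted) == len(actual) and len(actual) > 0
    return sum(abs(p - a) for p, a in zip(predicted, actual)) / len(actual)

# Illustrative region names and case counts (not real data).
per_region_error = {
    "RegionA": mae([100, 110, 120], [90, 115, 130]),
    "RegionB": mae([50, 55], [50, 45]),
}
```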


    This leaderboard displayed the first quantitative evaluation (QE1) of the prescriptor submissions. Each submission consisted of up to 10 prescriptors representing different tradeoffs. The prescriptors were evaluated across different time frames, regions and NPI weightings to determine which ones resulted in fewer predicted cases with less stringent interventions.
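Since regions can weight NPIs differently, the stringency objective is naturally a weighted sum of the prescribed intervention levels. The NPI names and weights below are illustrative assumptions, not the official cost definition.

```python
def stringency(plan, weights):
    """Weighted cost of an intervention plan (lower = less stringent).

    `plan` maps each NPI to its prescribed level; `weights` reflects how
    costly each NPI is considered in a given region (default weight 1.0).
    """
    return sum(weights.get(npi, 1.0) * level for npi, level in plan.items())

cost = stringency({"school_closing": 2, "workplace_closing": 1},
                  {"school_closing": 0.5, "workplace_closing": 2.0})
```

Re-running the evaluation with different weight vectors is what lets the leaderboard show how a prescriptor's ranking shifts when, say, school closures are considered more costly than workplace closures.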


    The documented results revealed how the prescriptors were evaluated in the second quantitative evaluation (QE2) as part of the common good. The set of submitted prescriptors was used as an initial population and evolved further with genetic algorithms. In this way, the resulting prescriptors combined the unique and useful aspects of all submissions into a set of super-prescriptors that performed even better. The amount of DNA that each submission contributed to these super-prescriptors was used as a measure of their quality.
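The QE2 idea can be sketched with a toy genetic algorithm: seed the population with submitted "genomes," recombine them for a few generations, and measure each submission's contribution as the share of genes in the evolved population that trace back to it. Everything below is a simplified illustration; real prescriptors are models, not short vectors, and the actual evolutionary setup was more involved.

```python
import random

random.seed(0)  # deterministic toy run

def crossover(a, b):
    """Uniform crossover: each gene is taken from one of the two parents."""
    return [random.choice(pair) for pair in zip(a, b)]

# Each gene is (value, submission_id) so ancestry can be traced.
# Team names and gene values are illustrative.
submissions = {
    "team1": [(x, "team1") for x in [0.1, 0.2, 0.3, 0.4]],
    "team2": [(x, "team2") for x in [0.9, 0.8, 0.7, 0.6]],
}

population = list(submissions.values())
for _ in range(20):  # a few generations of recombination
    parent_a, parent_b = random.sample(population, 2)
    population.append(crossover(parent_a, parent_b))

# "DNA" contribution: fraction of genes in the evolved population per team.
genes = [origin for genome in population for (_, origin) in genome]
contribution = {team: genes.count(team) / len(genes) for team in submissions}
```

A submission whose genes survive recombination at a high rate contributed something unique and useful to the super-prescriptors, which is the quality measure described above.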


Future plans

In the near future, we plan to form an ensemble of the predictor and prescriptor submissions and demonstrate its performance. The idea is to highlight the value of the community effort: by encouraging diverse approaches and working together, we can create solutions that are better than the sum of their parts. We also hope to collect the source code for many of the contributions and make them retrainable, so that the community can use them to build applications for regional use or to improve the models further.
For more details and media inquiries, contact the Cognizant team. For more on Evolutionary AI research at Cognizant, see our research pages.