How A Clever Algorithm Can Help Us "Complete" The 2013 Boston Marathon

In the aftermath of last year's Boston Marathon attacks, the Boston Athletic Association (BAA) was faced with—among other, more serious problems—a pair of logistical issues. It had to consider how to provide the nearly 6,000 runners who were unable to finish the 2013 race with an opportunity to qualify for this year's race; it also had to decide how to treat their missing finish times, which were of real significance to the people who had an opportunity to fulfill a dream taken from them.

Ultimately, all runners who completed the first half of the race but did not finish (DNF) received automatic entries. The BAA also made it a priority to provide official finish times for every runner who was still on the course when the race was halted.

I was part of a team of experts assembled to provide the BAA with guidance on how best to predict the finish times of the runners who did not finish. Our proposed solution was to find, for each DNF runner, a set of comparison runners who finished the race in 2010 or 2011 and whose running patterns were as close to that runner's as possible. These 2010 and 2011 runners were then used to predict the finish times. We did not consider the 2012 marathon because of a heat wave that year.

With the help of the BAA, we created a dataset consisting of all the runners in the 2013 race who reached the halfway point but failed to finish, and all the runners from the 2010 and 2011 Boston Marathons. The two explosions occurred at 2:49 p.m.; based on the latest recorded start time, anyone running faster than 3 hours, 56 minutes would have finished the race. Therefore, we restricted our analysis to runners projected to finish the race in more than four hours. The data consist of "split times" from each of the five-kilometer sections of the course and the final 2.2-kilometer section. Our task was to predict the missing split times for the runners who failed to finish the 2013 marathon. We considered several different prediction algorithms, two of which I will discuss momentarily.

First, it was necessary for us to objectively validate the quality of the different prediction algorithms. We wanted to know how close our predictions were to the true finish times. Of course, for runners in 2013 who failed to finish, this is impossible. However, we were able to create an independent validation dataset using the data from 2010 and 2011 by setting aside the finish time for some of the runners. We then applied all of the proposed algorithms to predict the missing finish times in the validation dataset. We compared our projected finish times with the true finish times of these runners. In this way, we were able to assess the accuracy of the different approaches.
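The holdout idea can be sketched in a few lines of Python. The function name and the data layout here are illustrative choices of mine, not the code we actually used: we set aside the finish times of a random subset of 2010/2011 finishers, treat those runners as if they had stopped mid-race, and later check predictions against their real finish times.

```python
import random

def make_validation_set(finishers, holdout_frac=0.2, seed=42):
    """Split past finishers into a training set and a held-out validation set.

    finishers: list of (splits, finish_time) pairs for 2010/2011 finishers.
    Returns (train, validation); the validation runners' finish times are
    known to us but hidden from the prediction algorithm being evaluated.
    """
    rng = random.Random(seed)
    shuffled = finishers[:]
    rng.shuffle(shuffled)
    cut = int(len(shuffled) * holdout_frac)
    return shuffled[cut:], shuffled[:cut]
```

Any candidate algorithm can then be scored by predicting the held-out finish times from the splits alone and comparing against the truth.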

The most common and straightforward method to predict finish times is to apply a constant pace rule: that is, to assume each runner has run, and will continue to run, at a constant pace. For elite marathoners, this might be a viable solution; however, for the nearly 6,000 runners who did not finish the 2013 Boston Marathon, it is not. To understand why, consider a hypothetical runner who has just crossed the 35-kilometer mark. Will this runner continue at a constant pace for the remaining 7.2 kilometers? Or will this runner hit the "wall" and slow down considerably? A less obvious problem is that the runner may have started too slowly, as is common with first-time marathoners, and might be expected to speed up over the last 7.2 kilometers.
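As a concrete illustration, the constant pace rule is a one-line extrapolation. The function name and the example numbers are mine; the 42.195 km total is the standard marathon distance.

```python
MARATHON_KM = 42.195  # standard marathon distance

def constant_pace_prediction(last_split_km, last_split_time_min):
    """Project the finish time (minutes) by assuming the runner's
    average pace so far continues for the rest of the course."""
    pace = last_split_time_min / last_split_km  # minutes per kilometer
    return pace * MARATHON_KM

# A runner who reached 35 km in 210 minutes (a 6 min/km pace) is
# projected to finish in 6 * 42.195 = 253.17 minutes.
print(round(constant_pace_prediction(35, 210), 2))
```

The rule uses nothing but the overall average pace, which is exactly why it fails for runners whose pace changes late in the race.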

These different running patterns are the fundamental difficulty in predicting finish times, and we need to identify them to predict accurately. For runners who were stopped after 35 kilometers but before 40, the constant pace rule predicted only 26 percent of the runners in our validation dataset to within three minutes of their true finish times. This emphasizes the importance of accounting for a runner speeding up or slowing down.

To achieve better predictions, we proposed five different prediction algorithms that, in one way or another, account for these different running patterns. While each of these methods outperformed the constant pace rule, a method based on a k-nearest-neighbors (KNN) algorithm worked best. KNN is a method that, as a runner, I find very intuitive. The idea is to find, for each DNF runner, a set of comparison runners who finished the race in 2010 or 2011 and whose split times were as close to that runner's as possible. These runners are called the "nearest neighbors." We chose to consider the 200 nearest neighbors and performed a local linear regression among them to produce our final predictions. (Oversimplifying, the local linear regression fits a line through the 200 nearest neighbors, and then uses this line to predict the finish time.)
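A minimal NumPy sketch of this idea follows. The Euclidean distance metric and the single-predictor regression here are simplifications of the method described in our paper, and all names are mine: each past finisher is a vector of split times, we rank finishers by similarity to the DNF runner's observed splits, and fit a line among the nearest neighbors.

```python
import numpy as np

def knn_predict(dnf_splits, finisher_splits, finisher_times, k=200):
    """Predict a DNF runner's finish time from similar past finishers.

    dnf_splits: (m,) split times observed before the runner stopped.
    finisher_splits: (n, m) the same m splits for n past finishers.
    finisher_times: (n,) those finishers' actual finish times.
    """
    # Find the k finishers whose early splits are closest to the DNF runner's.
    dists = np.linalg.norm(finisher_splits - dnf_splits, axis=1)
    nearest = np.argsort(dists)[:k]
    # Local linear regression: among the neighbors, regress finish time on
    # the last observed split, then evaluate at the DNF runner's last split.
    x = finisher_splits[nearest, -1]
    y = finisher_times[nearest]
    slope, intercept = np.polyfit(x, y, 1)
    return slope * dnf_splits[-1] + intercept
```

Because the regression is fit only locally, among runners with similar pacing so far, it picks up whether runners like this one tend to fade or to finish strong.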


Under the KNN approach, we were able to predict the finish times of 69 percent of runners who were stopped after 35 kilometers but before 40 to within three minutes. This highlights the value of incorporating all available information on the running pattern into the prediction algorithm rather than naively assuming a constant pace. Another measure of accuracy is mean absolute error: on average across all DNF runners, the KNN approach predicted the finish time within 1.57 minutes of the true finish time, compared with 3.25 minutes for the constant pace rule.
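Both accuracy measures are easy to state in code. A simple sketch, with hypothetical function names, operating on paired lists of predicted and true finish times in minutes:

```python
def within_three_minutes(pred, true):
    """Fraction of predictions within three minutes of the truth."""
    hits = sum(abs(p - t) <= 3 for p, t in zip(pred, true))
    return hits / len(pred)

def mean_absolute_error(pred, true):
    """Average absolute prediction error, in minutes."""
    return sum(abs(p - t) for p, t in zip(pred, true)) / len(pred)
```

The three-minute threshold captures how often a prediction is "close enough," while the mean absolute error summarizes typical error size across all runners.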

We recommended that the BAA adopt this approach, but it decided to use the constant pace rule instead. Was this a bit surprising? Absolutely. But I still believe our approach predicts finish times much better than the method the BAA employed, and the data support this claim. There is one silver lining: in most cases, the BAA finish times based on the constant pace rule are lower than our projections, meaning the BAA's projected finish times are more favorable to the runners than ours.

Looking beyond the specific issues posed by the 2013 Boston Marathon, there are other applications for this approach. Many large road races provide real-time information on competitors' performances via split times and predicted finish times. We adapted our KNN approach to provide real-time predictions of when each runner will arrive at checkpoints along the course (e.g., the five-kilometer marks of the Boston Marathon), along with more realistic finish-time estimates than the constant pace rule provides. Spectators waiting for specific runners at particular spots along the course would find these updates valuable.

This is an example of a problem for which the data sources are freely available and the problem is easily stated in language that does not require advanced scientific expertise. I encourage you to download the data from our website and produce your own predictions.


This article summarizes the work that Dorit Hammerling, Jessi Cisewski, Francesca Dominici, Giovanni Parmigiani, Charles Paulson, Richard Smith, and I did in conjunction with the BAA to provide official finish times for all of the runners of the 2013 Boston Marathon. Of our team of analysts, Francesca, Giovanni, and Dorit all participated in the 2013 race. For a complete description of the work, see our paper titled "Completing the Results of the 2013 Boston Marathon" appearing in the April 11 edition of PLOS ONE.