While ‘Birdman’ had a great night at the Oscars, winning much more than we had predicted, our model fared more like ‘Boyhood’: a case of so near and yet so far.
The overall performance was rather disappointing, with the model ending up with just 12 correct predictions out of 20. The gory details of the post-mortem reveal even more embarrassing facts.
Yesterday, we had listed three categories as ‘toss-ups’, seven as ‘competitive’ and ten as ‘one-sided’. We got all three toss-ups wrong, four of the seven competitive categories wrong and, somehow, managed to get a sure-shot winner (Best Original Screenplay) wrong as well. The only solace was that none of the winners in the respective categories came from outside our top two picks (although we did have multiple entries as second favourites in some categories).
However, the purpose of the model goes beyond just predicting the eventual winners. Call it a loser’s excuse, but we see the model as just another contribution to the endless sea of subjective and objective predictions that fills up the internet every award season. If the purpose of the model was to eliminate all the eventual losers, it did a fine job.
Moreover, a model ultimately has to depend on data, and as we mentioned in our post yesterday, in some of the down-ballot categories the data was extremely limited, and what little there was came from critics’ and journalists’ awards that have less of a bearing on the eventual Oscar results.
Now, having given our standard litany of excuses, here are some ways we can make the model work better in the coming year.
Rely more on insider awards at least in the top eight categories
In the top eight categories (‘Best Picture’, ‘Best Director’, ‘Best Actor’, ‘Best Actress’, ‘Best Actor in a Supporting Role’, ‘Best Actress in a Supporting Role’, ‘Best Original Screenplay’ and ‘Best Adapted Screenplay’), we got only five right. However, apart from ‘Best Original Screenplay’ (won by ‘Birdman’, which in my view was the biggest surprise of the night), all seven others could have been predicted correctly had we given more weight to the insider awards in the model. This year’s awards presented a superb case study of what happens when two different movies dominate the critics’ awards and the insider awards respectively. We now know who ends up as the winner in that battle. As an aside, it is kind of fitting that a movie that questions the ability of critics should defy them in real life as well.
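To make the idea concrete, here is a minimal sketch of how insider awards could be up-weighted relative to critics’ awards when scoring nominees. The weights and the award tallies below are purely illustrative assumptions, not figures from our actual model.

```python
# Hypothetical weights: an insider (guild) award counts three times as
# much as a critics' award. These figures are illustrative, not fitted.
AWARD_WEIGHTS = {"insider": 3.0, "critics": 1.0}

def precursor_score(wins):
    """wins: list of award types ('insider' or 'critics') a nominee won."""
    return sum(AWARD_WEIGHTS[kind] for kind in wins)

# Illustrative tallies: a guild sweep versus a longer run of critics'
# wins. With insider-heavy weights, the guild sweep scores higher.
guild_sweep = precursor_score(["insider", "insider", "insider", "critics"])
critics_run = precursor_score(["critics"] * 5)
```

With these assumed weights, three guild wins outscore five critics’ wins, which is exactly the ‘Birdman’ versus ‘Boyhood’ dynamic we failed to capture.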
Go for the experts’ opinions
While it is usually safe to stay away from the advice of horse-race journalists and self-proclaimed experts, in the case of the Oscars, where a closed group of a few thousand industry insiders decides the fate of the movies, it is probably advisable to consider the opinions and judgements of critics and journalists who are close to the ground and know which way the wind is blowing. While the critics’ choice awards were invented to act as predictors of the Oscars, tastes and opinions can change in a matter of a few weeks. This was truer this year than in any other, when ‘Boyhood’ peaked too early in the season and just could not match the surging momentum of ‘Birdman’. We should probably also give more weight to awards handed out closer to the Oscar date than to those given earlier in the season.
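One simple way to implement that recency idea is an exponential decay on each precursor award’s weight. The 30-day half-life below is an assumption we would need to tune on past seasons, not a tested parameter.

```python
def recency_weight(days_before_ceremony, half_life_days=30.0):
    """Exponential decay: an award handed out 'half_life_days' before
    the ceremony counts half as much as one given on Oscar eve.
    The 30-day half-life is an assumption, to be tuned on past data."""
    return 0.5 ** (days_before_ceremony / half_life_days)

# A December critics' award (~70 days out) versus a late-January
# guild award (~30 days out):
december_award = recency_weight(70)
january_award = recency_weight(30)
```

Under this scheme, a ‘Boyhood’-style early peak would count for progressively less as the season wears on, while late guild momentum would count for more.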
Experts’ opinions also matter greatly in the lesser-awarded categories, where data is at a premium. It makes more sense to poll a number of experts for their subjective, unscientific opinions than to rely on just two data points to arrive at the expected value of a third.
‘Experts’, of course, is a rather vague term, especially during the Oscar season, when everyone has a prediction to make. It is thus advisable to choose your expert well. In a separate piece, we shall find out how the prominent publications did in their Oscar predictions.
Look at the Betting Market
The betting market is a great source of information about the general opinion on what is going to happen. While it may falter in some cases, the collective opinion of a large enough pool of people with money on the line is too strong a data point to ignore.
In next year’s model, we propose to improve the mathematical model and combine it with subjective expert opinion and betting-market odds when arriving at the predictions. And anyway, considering our dismal performance this year, we can only go up from here.
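The simplest way to combine the three signals is a weighted average of their win probabilities. The 40/30/30 split below is a placeholder assumption; the real weights would have to be fitted against past seasons.

```python
def blended_probability(model_p, expert_p, market_p,
                        weights=(0.4, 0.3, 0.3)):
    """Weighted average of three win-probability estimates for one
    nominee: our model, an expert poll, and the betting market.
    The 40/30/30 weights are placeholders, to be tuned on past data."""
    w_model, w_expert, w_market = weights
    return w_model * model_p + w_expert * expert_p + w_market * market_p
```

Even this naive blend would have helped this year, since the expert and market signals both leaned towards ‘Birdman’ late in the season.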