Thoughts About the Jury Vote

The jury, for us, is one of the most difficult pieces to parse out in the Eurovision prediction game. For a while, we were in denial, arguing that it was a black box and wasn’t worth our time because it couldn’t be figured out. With 3 years of data, it’s time to take a closer look.

In 2009, the EBU changed the rules so that 50% of the points in the final were awarded by national juries. Since then, the jury's role has been expanded to cover both the Semifinals and the Final. Each country selects its own jury of music industry professionals. To produce timely results, the juries vote during the final dress rehearsal.

In theory, the jury vote is supposed to counteract the neighborly voting, diaspora voting, and song-draw effects that critics believed led to unjust results when placement was determined 100% by televote. In practice, juries have indeed changed the Eurovision scoring results, although we are skeptical that their presence offsets these specific issues. Rather, the juries seem to impact the contest in other unintended ways.

The pundit, gambler, or fan must now consider not only what they believe will make viewers pick up the phone to vote, but also what this mystery group of music industry professionals will like. It's a much more speculative task because there is less information. For the public, we can look to ourselves, our family, friends, polls, the Internet conversation, and the oddsmakers. For the jury, all we have is how they voted in previous contests.

This post takes a systematic-but-not-as-scientific-as-it-could-be look at jury scores since 2009. The table below displays the jury and public points received for the top and bottom vote getters in the Eurovision finals 2009-2011. Points received are in parentheses. The bottom row displays countries with the biggest discrepancies in jury vs. public vote.

The story that emerges is more nuanced and less consistent than the conventional wisdom about juries would have us believe. We discuss some ways in which the public and jury vote differ and propose some hypotheses on how these principles, if true, could play out in 2012.

First, there does seem to be credible evidence that the jury vote differs from the public vote. However, it isn't sufficient to say the juries and the public are always looking for different things; it seems to vary from year to year. 2011 was a particularly discordant year between fans and jury: 8 countries had differences of more than 80 points between jury and public. In 2010, which had a similar number of countries participating, only 3 countries had discrepancies of more than 80 points, suggesting more consistency between jury and public. In 2009, the public and jury agreed on Alexander Rybak (Norway 2009) and Yohanna (Iceland 2009), but after that we were back to 7 countries with large differences between public and jury voting.
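The discrepancy counts above boil down to a simple computation. As a sketch, here is how such a count could be run. The point totals below are made-up illustrative numbers, not the actual scoreboard:

```python
# Illustrative sketch of the jury/public discrepancy count described above.
# The totals here are invented for demonstration; they are not real results.
scores = {
    "Country A": {"jury": 180, "public": 60},
    "Country B": {"jury": 90, "public": 95},
    "Country C": {"jury": 30, "public": 150},
}

def big_discrepancies(scores, threshold=80):
    """Return countries whose jury and public totals differ by more than threshold."""
    return [country for country, s in scores.items()
            if abs(s["jury"] - s["public"]) > threshold]

# Country A and Country C exceed the 80-point threshold; Country B does not.
discordant = big_discrepancies(scores)
```

Running the same count per year (with the real scoreboards) is what produces the 8-vs-3-vs-7 comparison above.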

I chalk the 2011 differences up to a balanced year. Prior to the contest, we felt that 10 countries had songs that could win, and I think some of the jury/public disagreement reflects the even footing of the entries. In terms of the openness of the field, 2012 seems closer to 2010: there seem to be about 4 or 5 entries with a realistic shot at winning. It's not a runaway, but it's not wide open either.

Second, the juries tend to be less harsh than the public. The bottom vote-getter category shows all countries receiving fewer than 30 points in the final. In 2010 we have a long list of countries with low public votes and far fewer among the jury. Note also that it's rare for jury points to go into the single digits. This suggests that the jurors do a better job of spreading points across all countries.

In support of conventional wisdom, the juries in the last three years have been more likely than the public to give points to "big vocalists." Consider Maja Keuc (Slovenia 2011), Nadine Beiler (Austria 2011), Harel Skaat (Israel 2010), Sopho Nizharadze (Georgia 2010), and Jade Ewen (UK 2009). Conversely, we wrote in our debrief of the 2011 contest that "anything old and musty does not succeed anymore." That sentiment doesn't ring true for the jury, but it does seem to reflect the public will, and the gap often shows itself in large discrepancies between jury and public vote. Consider how poorly Nadine and Harel did in the public vote, despite having good draws. Consider also Niamh Kavanagh's (Ireland 2010), Filipa Azevedo's (Portugal 2010), and Chiara's (Malta 2009) dismal public votes. Even a successful entry like Jade Ewen's had a large discrepancy between jury and public voting.

Hypothesis 1: the jury rewards “big vocalists.”

The 2012 test:  If true, we should expect the jury to give more points than the public to Spain, Albania, and Macedonia, all of whom have played the “diva” card this year.  Estonia and Iceland have sent big male vocalists and may also do well with the jury.

Hypothesis 2: the public punishes “musty ballads.”

The 2012 test: If true, we should expect the public to give low points to Spain and Portugal, which have submitted old-fashioned ballads this year. The UK also meets this criterion, but Engelbert Humperdinck's celebrity (see below) may offset the "musty ballad" penalty.

We observe the jury has given more points than the public to entries that, for lack of a better word, have "authenticity." Acts with authenticity may be experienced singer-songwriters or simply people who live the music: the Raphael Gualazzi (Italy 2011) factor and the Tom Dice (Belgium 2010) factor. Note the emphasis on experienced singer-songwriters: Paradise Oskar (Finland 2011) was too inexperienced to bring in big points, but he did reasonably well with the juries in both the semifinal and final. Conversely, Alexey Vorobyov's (Russia 2011) song lacked authenticity. We noted in our 2011 debrief that there wasn't a good connection between artist and song. Others described the entry as cynical and pandering. The public liked it, but the juries punished him severely.

The performance package may also be an important piece of this. Blue (UK 2011) gave us an indulgent performance that resulted in a large discrepancy between public and jury vote. Neither jury nor public responded to 3JS (Netherlands 2011), who in interviews "were all about the music" but onstage were all about bad blazers and a sloppy performance. High camp pays a similar price with the juries. Kejsi Tola (Albania 2009) is one of the all-time great camp entries, but the jury did not love the green man. Similar wrath was bestowed on Malena Ernman (Sweden 2009), Homens da Luta (Portugal 2011), and 3+2 (Belarus 2010).

Hypothesis 3: the jury will reward acts with “authenticity.”

The 2012 test:  If true, we should expect the jury to give some points to Montenegro, with the caveat that the song is too inaccessible to get really big points. Nil points, though, should be unlikely. Turkey and BiH may also do better with the juries than the public because of this factor. Watch for Denmark too.

Hypothesis 4: the jury will punish acts that lack “authenticity.”

The 2012 test: If true, we should expect the jury to punish Lithuania and Georgia.

Hypothesis 5: The jury will punish high camp.

The 2012 test: If true, we should expect the jury to punish San Marino. Latvia and Austria are also vulnerable depending on how they are staged.

Finally, in 2010 and 2011 the juries seem to have awarded fewer points to acts that were already widely known. Consider Blue (UK 2011), Dino Merlin (BiH 2011), and maNga (Turkey 2010). These were all established artists and crowd favorites that did not perform as well with the juries as they did with the public. Blue did particularly poorly with the jury, but that may be because they violated the "authenticity" factor with their performance (see above). We are quite interested to see how the "celebrity" factor plays out because there seems to have been a change since 2009. In 2009 the jury rewarded Patricia Kaas (France 2009) and Jade Ewen (UK 2009), who included Andrew Lloyd Webber onstage. The 2009 jury also gave many more points than the public to Denmark, whose song had been penned by Ronan Keating. 2009 was the first use of juries, and we wonder if the 2009 jury had a more favorable view of "celebrity" entrants than subsequent juries.

Hypothesis 6: the jury will award fewer points than the public to “celebrity” acts.

The 2012 test:  If true, we should expect the jury to award fewer points than the public to the UK, Serbia, Ireland, and possibly France.

The jury voting puzzle is likely to be with us a while, but it may be a tough nut to crack since we know so little about who the jurors are, and the challenge is made tougher if what the jury looks for changes from year to year.  So how’d we do? Please drop us a comment if you’ve made other observations about the juries.  We’d love to hear them.

4 thoughts on "Thoughts About the Jury Vote"

  1. This is an interesting analysis. For me, the juries 'throw a curveball' into the process, making the results more exciting to watch and giving small countries hope that they can succeed against the likes of Blue or Alex Sparrow. They also stop song-free novelty acts from winning, e.g. the Grannies. At the moment I think their influence is too strong and should be reduced to 40% or 35%; as you partly suggest, there does seem to be a tendency for juries to vote for more conservative big-voice show-tune songs. I'd rather they vote for songs that would be seen in the iTunes charts.

  2. Thanks Chris. The motives behind the jury are tough to parse out because we know so little about them. I am inclined to believe that the juries are voting for what they believe is the "best song," rather than being motivated by a decision-making agenda for the greater good of the contest. But that is an assumption, and you're right to call me out on it. Writing this post has persuaded me that there are differences in what the jury and the public are voting for. Of course, this isn't new. When was the last time the Grammys were in touch with the public? This, however, isn't an award show; it's a contest where the public has its say. The problem for me is that the juries aren't correcting the issues with the public vote they were meant to correct, they're just another voice in the mix.
    Cheers,
    Jen

  3. Good point! At the Brit Awards white male guitar singers/groups usually win – e.g. this year Ed Sheeran won over Jessie J (despite her having made an impact globally).

    I recently moved to Germany (from the UK) and it is noticeable that Eurovision here has a much more contemporary feel to it. My worry is that the conservatism of the juries will pull Eurovision back to the dark days of the mid nineties and that Germany will have to resort to fielding someone like Helene Fischer (which it could easily do I guess).

    I think the next two years will be interesting – it would be good to revisit the topic in June 2013 to see if there is a clearer picture (and whether other countries have been analysing jury voting patterns too and have consequently adapted their approach).

  4. Very interesting and insightful article! One important factor I think you missed: The juries vote on the rehearsals, not the televised contest. IIRC, Blue’s lousy performance at the jury rehearsal last year was the overriding factor in their point discrepancy.

    I find all the talk about how to make the contest “more fair” endlessly amusing. Judging music (or acting or painting or any other art) is a subjective thing, and the term “best song” is way too ambiguous anyway. Best in terms of what? And who are you to say?

    As long as everyone (performers and voters) follows the rules as they were laid out when the contest was set up, then by definition it was fair. Regardless of the results. Even if the singers that won were off-key most of the time. 😉
