Why this poll is different


The world of theme parks and coasters is chock full of polls and "best of" lists. Every year, there are lots of them competing for your attention. Every year, they release their results and coaster enthusiasts completely lose their minds.

"NO WAY that coaster got #1!" "This poll is rigged!" "I can't take this seriously, WTF?"

Let's face it: most of the "best of" lists in website articles are simply the opinions of the writer.
Most "best coaster" polls don't accurately reflect reality.
Even some very popular polls use a tabulation formula that unfairly favours good coasters that lots of people have ridden over better coasters that fewer people have ridden. As a result, newer or more off-the-beaten-path coasters don't have a chance of being ranked where they belong.
Here's why:

[Image: a typical top-10 ballot, with slots for your #1 through #10 picks]

You might've seen a poll with a ballot that looks like that. You might've seen several of them. This is a very popular way to run a poll because it's easy to understand, it's easy to fill out a ballot, and it's easy to tabulate. Usually, whatever you put at #1 gets 10 points, #2 gets 9 points, and so on down to your #10 choice, which gets just 1 point. All the points from the ballots received are added up, and the coaster with the most total points is declared the winner. Sounds good at first, but it can skew the results. Have a look at this:

[Image: example results from 100 ballots, featuring Awesomesauce, Bananaboater, and Coconut Crush]

The winner should be Awesomesauce. Literally everyone who rode it said it was the best coaster they'd ever ridden.
Every. One.
But since only nine of the 100 voters had ridden it, it didn't have a chance. Bananaboater got more points... lots more points, even though nobody who rode it ranked it as their #1 coaster. Worse, the individual ballots might even show that those nine people who rode Awesomesauce had also ridden Bananaboater... which means every one of them thought Awesomesauce was better than Bananaboater, yet Awesomesauce still lost. Worse still, Coconut Crush comes in second place solely because nearly everyone had been on it. Everyone ranked it #10, meaning every single voter thought nine other coasters were better than Coconut Crush, yet it still lands at #2. There's a better way.
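To make the damage concrete, here's a minimal sketch of that points-based tally in Python. The ballots below are made up in the spirit of the example above - the names and numbers are illustrative, not the actual data behind the image.

```python
# Minimal sketch of a points-based tally: #1 = 10 points, #2 = 9 points, ... #10 = 1 point.
# The ballots are invented for illustration; they are NOT the real data from the example.
from collections import defaultdict

def points_tally(ballots):
    """Each ballot is a list of coaster names, best first (up to 10 entries)."""
    scores = defaultdict(int)
    for ballot in ballots:
        for rank, coaster in enumerate(ballot, start=1):
            scores[coaster] += 11 - rank          # rank 1 -> 10 points ... rank 10 -> 1 point
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)

# Nine voters rode Awesomesauce and every one of them put it at #1.
# The other 91 voters never rode it, so it can never earn more than 90 points.
ballots = (
    [["Awesomesauce", "Bananaboater", "Coconut Crush"]] * 9 +
    [["Some Other Coaster", "Bananaboater", "Coconut Crush"]] * 91
)
print(points_tally(ballots))
# Awesomesauce finishes dead last on points, even though nobody who rode Bananaboater
# ranked it #1 and everyone who rode Awesomesauce did.
```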

Enter Mitch Hawker

Back in 1993, Mitch Hawker launched the Best Wooden Coaster Poll (whether you read that as the "Poll to choose the Best Wooden Coaster" or the "Best Poll to choose a Wooden Coaster" doesn't make any difference - it was accurate either way). Rather than assigning points to coaster votes, he developed a ranked pairs algorithm that looked at each possible pair of coasters across all the ballots and asked a simple question: of the voters who've ridden both coasters in this pair, which one did most of them prefer? It seems like such a simple thing, but it revolutionized the whole coaster poll landscape. Before long, the accolades rolled in, and it was widely deemed the most accurate and most respected coaster poll out there.

In the example above, Awesomesauce would’ve won, since the algorithm would’ve paired Awesomesauce against Bananaboater and found that everyone who rode both preferred Awesomesauce. It would’ve found the same thing when Awesomesauce was paired against Coconut Crush. Then it would’ve compared Bananaboater to Coconut Crush and seen that most of the people who had ridden both of those coasters preferred Bananaboater. The results would’ve reflected that reality.
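For the curious, here's the same head-to-head question as a rough sketch in code. The function and the ballots are mine, for illustration only - not Mitch's or ElloCoaster's actual program.

```python
# Head-to-head question: of the voters who rode BOTH coasters in a pair,
# which one did more of them rank higher? (Illustrative data, as before.)

def head_to_head(ballots, coaster_a, coaster_b):
    prefer_a = prefer_b = 0
    for ballot in ballots:                              # ballot = list of coasters, best first
        if coaster_a in ballot and coaster_b in ballot:
            if ballot.index(coaster_a) < ballot.index(coaster_b):
                prefer_a += 1
            elif ballot.index(coaster_b) < ballot.index(coaster_a):
                prefer_b += 1
    return prefer_a, prefer_b

ballots = (
    [["Awesomesauce", "Bananaboater", "Coconut Crush"]] * 9 +
    [["Some Other Coaster", "Bananaboater", "Coconut Crush"]] * 91
)
print(head_to_head(ballots, "Awesomesauce", "Bananaboater"))   # -> (9, 0): Awesomesauce wins the matchup
```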

Sadly, Mitch stopped conducting the poll in 2013. The ElloCoaster Poll seeks to fill that void and provide an accurate and respectable poll to determine the best of the best. Here's how it works:

Head-to-head pairs ranking

The ballot contains every major wood or steel coaster (depending on which poll you're doing) on earth that was operating this year. Voters simply mark the coasters they've ridden in order of preference, with #1 being their favourite and going down from there. The ElloCoaster tabulation program collects the rankings from the ballots, takes each coaster, and compares it one-by-one to every other coaster to see how many people rode both and whether it won, lost, or tied against the coaster it's being compared to. With 198 wood coasters (as of this writing), there are more than 39,000 possible pairings! The steel ballot has more than SIX MILLION pairings! To ensure accuracy in the calculations, ElloCoaster uses a custom-designed program to do all the tabulation. (Realistically, there are half that many pairing combinations, since pairing Coaster A with Coaster B is the same as pairing Coaster B with Coaster A. Nevertheless, the program looks at the pairings from both sides as a means of error-checking.)
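If you want to check the arithmetic, the pairing count is just n × (n - 1) ordered pairings for n coasters, or half that if each matchup is only counted once:

```python
n = 198                                    # wood coasters on the ballot (as of this writing)
ordered_pairings = n * (n - 1)             # 39,006 - "more than 39,000", counting A-vs-B and B-vs-A separately
unique_matchups = ordered_pairings // 2    # 19,503 - the "half that many" pairing combinations
print(ordered_pairings, unique_matchups)
```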

Determining the results

For each ballot, the tabulation program takes each coaster the voter rode and, one-by-one, pairs it with every other coaster that voter rode. For example, if a voter rode five coasters (A, B, C, D, E), the program looks at coasters A and B. If coaster A got a higher ranking on the ballot, it receives a win. If it ranked lower, it gets a loss. If they got the same rank, it records a tie. Then it compares coaster A with coaster C and records a win, loss, or tie there. Then A/D and A/E. Coaster B is then compared to A, C, D, and E. This continues down the line until every coaster is compared to every other. The wins, losses, and ties are tallied and the program loads the next ballot. Once all the ballots are tallied, it calculates the "win percentage" for each coaster - in other words, of all the head-to-head comparisons that this coaster 'competed' in, what percentage of the time did it win? That percentage determines where a coaster lands in the final rankings, assuming the coaster had enough riders to be counted (the threshold is usually set around 5% of total ballots, so if 200 people vote, a coaster needs at least 10 riders to be included in the results).
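Here's a rough sketch of that tallying loop, with two assumptions on my part: "win percentage" means wins divided by all of a coaster's head-to-head comparisons (wins + losses + ties), and the rider threshold is 5% of ballots. It illustrates the process described above; it isn't ElloCoaster's actual program.

```python
# Sketch of the ballot-by-ballot pairwise tally described above.
# Assumptions (mine): win % = wins / (wins + losses + ties), threshold = 5% of ballots.
from collections import defaultdict
from itertools import combinations

def tabulate(ballots, threshold=0.05):
    wins, losses, ties, riders = (defaultdict(int) for _ in range(4))

    for ballot in ballots:                       # ballot: {coaster: rank}, 1 = favourite
        for coaster in ballot:
            riders[coaster] += 1
        for a, b in combinations(ballot, 2):     # every pair of coasters this voter rode
            if ballot[a] < ballot[b]:
                wins[a] += 1
                losses[b] += 1
            elif ballot[b] < ballot[a]:
                wins[b] += 1
                losses[a] += 1
            else:                                # same rank on this ballot
                ties[a] += 1
                ties[b] += 1

    min_riders = threshold * len(ballots)        # e.g. 200 ballots -> at least 10 riders
    results = {}
    for coaster, ridden in riders.items():
        if ridden < min_riders:
            continue                             # too few riders to be ranked
        total = wins[coaster] + losses[coaster] + ties[coaster]
        results[coaster] = wins[coaster] / total if total else 0.0
    return sorted(results.items(), key=lambda kv: kv[1], reverse=True)

# Tiny example: two voters, five coasters, ranked 1 (best) to 5.
print(tabulate([{"A": 1, "B": 2, "C": 3, "D": 4, "E": 5},
                {"B": 1, "A": 2, "C": 3, "D": 3, "E": 5}]))
# -> A and B tie at the top with a 0.875 win percentage in this tiny example.
```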

The head-to-head comparison model eliminates the bias that most other polls inadvertently give to coasters that have been ridden by more people, since in any given matchup, only the voters who have ridden both coasters have any say as to which one is better. This also gives coasters outside the US a fairer chance at placing where they should: since voter pools tend to be dominated by US voters, overseas coasters usually have fewer riders and rarely get fair placement in other polls. This system fixes that. It's not about how many people rode a coaster; it's about how good the coaster is compared to the others.

Here’s what to do now

Voting is open from Nov 1 to Dec 31 each year.

It can take some time to fill out a ballot, especially if your list is long. Feel free to download the previous year’s ballot and start entering things if you like. On November 1, the updated ballot will drop, complete with instructions on how to update the ballot you’ve been working on. If you voted in the previous year’s poll, you can use that ballot as a starting point, too.

You can get your ballot over on the BALLOT PAGE.