
People who have followed drum corps for decades have seen significant changes in the activity and in the rules and logistics that govern competition. Most people who are passionate about the activity have – at one time or another – expressed strong opinions regarding scores and placements. Quite often, people have been displeased with the results. Admit it: most of us have disagreed with the outcomes of some competitions. That is not surprising in an activity where the content and style presented vary completely from one corps to another, and where innovations continually introduce new challenges for adjudicators. Modern drum corps competition, it can be said without much controversy, has essentially evolved into a sophisticated display of artistry.

Back in the day, there were some obvious obstacles to “fairness”: different judging organizations, contests that gave corps home-field advantages, and geographic biases. Eastern corps traveling West and Midwest corps traveling East often complained that the location and/or the judging association assigned to a show translated into a competitive disadvantage. Today, those obstacles have faded, with DCI as a monolithic judging entity and a more level playing field than in the past. However, there are other factors many people believe affect fairness, and with razor-thin margins and passionate fans, controversy still exists.

With “The Summer Music Games,” as DCI now frames the competition calendar, about to return to what we hope is “normal,” we ask the question: How fair is the judging system?

Let’s look at it from a few different angles:

  • Scoring
  • Objectivity
  • Logistics
  • Consistency
  • Judging “Art”

George Oliviero and Jack Whelan

Scoring

The rules and scoring system have undergone major changes since the 1970s, when DCI emerged as the governing body of the activity. Prior to the early 1980s, execution was judged using the “tick” system and was weighted very heavily in the final scoring. The tick system was a “tear down” system, deducting tenths of points for any errors a judge detected.

“Evaluation of execution” was introduced in the 1980s as performance levels improved and counting mistakes became a less effective way to assess quality. Until the DCI scoring changes began, General Effect accounted for only 30 points, with the remaining 70 points targeting execution. Fast forward to the current system, and that has flipped, with the majority of points now allocated to GE and Analysis captions. Although there are purists who bemoan the changes in the scoring system, most people believe the current approach is more appropriate because of the dramatic evolution of the product on the field. Judges’ prime responsibility is to rank (determine who is 1st, 2nd, 3rd, etc.) and rate (determine spreads between the units), and the current system, although imperfect, does offer a mostly effective framework.
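To make the distinction concrete, here is a minimal sketch (in Python) contrasting a “tear down” tick deduction with a “build up” evaluation award. The caption size, error counts, and achievement levels are hypothetical and are not taken from any actual DCI sheet.

```python
# Hypothetical sketch only: the caption size and numbers below are illustrative,
# not taken from actual DCI scoring sheets.

def tick_score(caption_max: float, errors_detected: int, tick_value: float = 0.1) -> float:
    """'Tear down' (tick) scoring: start at the caption maximum and deduct
    a tenth of a point for each error the judge detects."""
    return round(max(0.0, caption_max - errors_detected * tick_value), 1)

def buildup_score(caption_max: float, achievement: float) -> float:
    """'Build up' (evaluation) scoring: award a share of the caption based on
    the judge's overall assessment of quality (achievement between 0.0 and 1.0)."""
    return round(caption_max * achievement, 1)

# Example: a hypothetical 10-point execution caption
print(tick_score(10.0, errors_detected=7))     # 9.3 after seven ticks
print(buildup_score(10.0, achievement=0.93))   # 9.3 from an evaluative assessment
```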

DCI Judge

Objectivity

Since the current scoring system theoretically gives judges tremendous latitude in ranking and rating, a widely held belief is that if results are not “fair,” it is primarily because of the personal biases of the judges. This could certainly be true, but it appears there is a lot more to the equation. Anyone who has witnessed competitive drum corps this century knows that productions on the field vary widely, with diverse idioms, styles, and designs. To a certain degree, we are asking judges to compare “apples and oranges” and come up with a “fair” assessment. Imagine you are asked to compare an excellent classical symphony against a great jazz piece. Which would you rank higher, and by how much? What about a painting by Rembrandt compared to a Picasso? It is a significant challenge and is analogous to what judges face constantly. There is no doubt that every individual has personal preferences, and judges are not immune to this. They have opinions. So, naturally, when there is a disagreement about the overall results or the scoring in one particular caption, people often point to personal bias or claim that a certain judge favors one corps over another. Although this probably does happen – a judge’s personal preferences may seep in – there are forces at work that tend to counteract this.

With all the latitude judges have, and with differing opinions, you might expect to see some changes in rankings from one judging slate or competition to another. Interestingly, this is not the case. For example, consider two units that are very evenly matched, scoring within tenths of each other every time out. It would seem likely their head-to-head ranking would change during the season due to the human element alone. However, if you review the results from the past two decades, changes in rankings hardly ever occurred. Almost every corps’ ranking from June stayed the same all the way through DCI Finals. Think of it: two corps scoring within tenths of each other all season, and yet the ranking never changes. One reason for the static results may be related to logistics.

DCI Judge

Logistics: Order of Appearance

Two areas of logistics that appear to have an outsized effect on the results of DCI competitions are the order of appearance and an apparent desire for consistency in scoring. The order of appearance at DCI shows has, for many years, been based on the expected reverse order of finish. In other words, the corps that is expected to place first performs last, while the corps expected to trail the field performs first. The order is usually derived from previous scores, which are used to determine “rankings.” This means that if Corps A has been scoring higher than Corps B in previous contests, Corps A will perform after Corps B. There are two potential problems with this approach: 1) if the corps have not yet competed head to head and the ranking is based on scoring from different venues with different judges, it may not properly reflect which corps should be ranked higher, and 2) once the precedent is set (using this initial ranking, which may not be accurate), it is much more difficult for the lower-ranked corps to move above the higher-ranked corps because of a phenomenon called “slotting.”

Slotting is defined as putting or assigning something into a slot, essentially determining its position. In the case of drum corps, this is a corps’ rank or position relative to other corps. The slot or position a corps is assigned early in the season has the potential of becoming somewhat permanent, or, at the very least, quite difficult to alter. That means the corps that gets “slotted” as the 9th-best corps very rarely advances above that slot. There is significant evidence to support this, and it can be easily observed by reviewing contest results for the past two decades. Because the order of appearance is now the reverse order of expected finish, the competing units have essentially been systematically pre-judged. It is expected that the corps that performs last will win, and it is expected that the first corps to perform will place last. Even with valiant attempts by judges to be objective, there is a psychological phenomenon that tends to reinforce the “established” rankings due to slotting. This trend has been accelerating recently, and you can observe the impact most clearly if you examine the scores and placements of the top 12 during the 2019 season.

2019 Mandarins

If you discount the very first scores of that year, it appears that corps became permanently slotted in late June. The ranking/placement of the top 12 corps, from that point forward, did not change for the rest of the season, with two exceptions. Those 12 corps competed in reverse ranked order six times from early July through DCI Finals. The only placements that ever changed: Mandarins overtook Phantom Regiment in DCI Semi-finals, and the Bluecoats and Blue Devils (scoring within a tenth of each other all year) traded the top spot. In many instances, the spreads – the difference in scores between adjacent corps – were just a few tenths, but no placements (other than those mentioned above) ever changed. While it is possible all the placements from 2019 were completely warranted, it seems very strange that they would remain so static throughout the entire season. Were all these corps’ performances, as every corps improved during the season, so incredibly consistent that no corps ever moved up or down? Or was the effect of slotting reinforcing those placements? When you look at the evidence, it is difficult to believe that the order of appearance did not have some influence on the final results. You can view the scores from 2019 here: Scores from the 2019 Season.

In theory, it should not matter when a corps performs, as judges should rank and rate them effectively, regardless of order of appearance. But history shows that it does matter.

Does Slotting Matter?

If there is doubt about the effect of slotting, just ask any corps from the era when the order of appearance was randomized. In the 1970s, performance order was often determined by a random draw. Corps considered it a huge disadvantage to draw an early slot in a big competition. Back then, the effect was so pronounced that the first corps to compete was referred to as the “sacrifice corps.” For some big shows, the order of appearance was the reverse order of the date your application was received. Ask anyone who marched in the 1970s about the importance of the order of appearance and you will hear stories of the effect of an early or late slot. Sacrifice corps were routinely “sacrificed.” If you ever had to compete in prelims while dew was still on the grass, you probably recognize the impact of an early slot.

There was the famous “DCI block” – all the DCI finalists from the previous year were regularly slotted in a large block at the end of most big prelims shows. Non-finalists frequently went on early, and most of them did not fare well. Making finals of those big shows (World Open, US Open, American International, etc.) was very difficult if you performed early. This became most evident when high-ranked corps or those on the cusp of Finals drew an early starting position and suffered accordingly. Note: This discussion about slotting is not intended to advocate for a totally random order of appearance. Would having the best corps perform first and lower-ranked corps perform last create more “fairness”? That is possible, but it could also backfire on the lower corps and would not improve the flow of the event. Audiences universally prefer that the best corps perform later, and that should be a consideration.

1971 World Open prelims lineup – artwork by Don Daber

The 1988 Anomaly

An interesting and controversial situation regarding order of appearance occurred in 1988. DCI changed the logistics around order of appearance and also randomly selected judging panels, in an apparent attempt to increase “fairness.” They also did not announce the scores of the top 12 after Prelims or Semi-finals, creating mystery and suspense. The audience knew who made the top 12, but not their rankings. Based on previous scores, the top 5 corps and those ranked 6 through 12 were grouped, and starting positions were assigned randomly within each group. For the top 5, regardless of previous scores or placements, any of these corps could perform last or in any of the 4 prior positions. During the season, the top 5 were the Blue Devils, Santa Clara Vanguard, Phantom Regiment, the Cavaliers, and the Madison Scouts. Up until that point in the season, the Blue Devils had won all 26 of their competitions, and their margin of victory over Madison was usually more than 4 points. Madison had lost to the Blue Devils, Santa Clara, Phantom Regiment, and the Cavaliers all season. Repeat: the Madison Scouts had lost to all 4 of these corps all season.

In Semi-finals, the random draw gave Madison a later slot than all 4 of those corps. Unknown to the public, since no scores or placements were announced, Madison won the Semi-finals. Then, their random draw for Finals was again later than the other corps. Although they had lost to those 4 corps all season, they were peaking at just the right time. Madison’s powerful Finals performance cemented them as the crowd favorite and gave them the title. In particular, their closer, “Malaguena,” was considered a prime reason for their victory. You can watch the final 5 minutes of that performance here:

Did their late draw give them an advantage? Did they “deserve” to win? Most people in attendance saw Madison as legitimate victors, but fans of SCV and the Blue Devils had to wonder if the order of appearance was a factor. Let’s be clear: the judges made scoring decisions that put Madison first – this was no accidental victory. But did performing after the other top corps have any influence? The Blue Devils’ GE Visual score (6th) and Visual Performance score (4th) were keys to their third-place finish (very disappointing for them), with observers citing their suboptimal visual performance as justification. The majority of people at the event believed that “the best performance” won. The questions remain: did the order of appearance affect the outcome? Did the random order of appearance level the playing field such that the “best performance” could win? Scores for the 1988 season can be viewed here: Scores from the 1988 Season.

Logistics: Consistency

In a perfect world, live drum corps performances would be judged solely on what happens on the field, on that date, and at that time. Every live performance is different, and with humans involved and changes made to programs during the season, you would expect the variability to be significant at times. However, even when it is significant, it is not always reflected in scoring.

During Allentown weekend several years ago, at the site of the annual Eastern Classic, I was involved in a conversation between a group of my friends and a very high-ranking DCI official. It proved to be a very insightful dialogue. One of the topics we discussed was why a certain corps, which had improved very dramatically as the season progressed, was lagging in scoring and stuck in the same ranking they held earlier, when their performance was considerably weaker. The corps in question had a rough start to their season but had made tremendous progress over the preceding few weeks. The improvement they had made was stunning. My friends and I were surprised that their scores were not reflecting the improvement, and we thought they should be ranked higher. The DCI official said “that corps does not deserve to be ranked higher” because “it is too late to get good.” This statement was startling, as it implied that regardless of how well a corps performs, once their position (slot) has been determined, they tend to be stuck there. We pushed back strongly during the discussion, pointing out that these are live performances and that scoring and placement should reflect what is presented on the field, not what was presented in previous performances. This official was adamant that it did not matter how well the corps was performing now, one week before Finals. It was “too late to get good,” and he repeated it over and over again. In other words, certain things like rankings should not change after mid-season. This attitude, from a high-ranking DCI official, suggested indifference to truly objective judgment. It implies that even a dramatic improvement in performance might not be rewarded, and it strongly reinforces the effects of slotting.

If you think individual judges have tremendous latitude to make “their own call” and ignore “established” scores and placements, you might be surprised at how restricted they are in that regard. There are many accounts of judges who went against the grain with out-of-step rankings or spreads that were beyond the expected range. Those judges were issued warnings (either overtly or through subtle suggestions), and those who continued with non-conforming rankings or spreads ceased to be called upon to judge in the future. This aversion to a judge rocking the boat forces strict compliance and breeds an unnatural consistency. (Note: There have been situations when a judge went rogue with scores that were completely out of whack and appropriately faced consequences. The judges referenced here did not go rogue; they simply had opinions that were not completely in line with previous scores/rankings.) With slotting already in place, this creates a situation where rankings seldom change. In an activity with live performances that undoubtedly vary from one to the next, the potential for objectivity – judging each performance on its own merits – appears somewhat limited. The desire for scoring consistency appears, at times, to supersede objectivity.

2013 Carolina Crown

Judging “Art”

Is it possible to determine which tastes better, an apple or an orange? Making that determination is likely to be quite subjective, isn’t it? They are so different that the comparison is difficult. That is the essence of the colloquial phrase “comparing apples to oranges.” And yet, in competitive drum corps, every unit is like a different fruit, as the productions are all totally different. Determining which of these different fruits is “best” is clearly an enormous challenge.

As previously mentioned, if you were comparing the best paintings by Rembrandt and Picasso, which would you “judge” as the “best”? What about trying to pick a winner between an excellent classical symphony and a phenomenal jazz piece? That seems extremely difficult, if not impossible, if the goal is to be “fair.” This highlights a dilemma in judging modern drum corps. The divergent styles and content in the activity today, coupled with the almost universally excellent performance levels, exacerbate the difficulty. If the pitfalls of scoring, objectivity, and consistency do not provide enough obstacles to fairness, the task of judging these musical and visual pieces of art adds a whole new aspect to the dilemma.

Surprisingly, the results of DCI competitions get it right most of the time. Although even this writer takes exception to the results of some competitions, it seems like the best corps usually wins and the placements often seem accurate. The top 6 corps at DCI Championships typically reflect the unofficial consensus of the community, in spite of the challenges of adjudication. People might not agree on the exact order, but that top group is almost always seen as “correct.” Of course, there are situations where a corps gets slotted low in mid-season and is unable to rise despite major improvements. Those situations, thankfully, appear to be infrequent.

2013 Blue Stars

Back to the Question

So, how “fair” is the judging system?

In consideration of all factors, it would be naïve to say that the judging system is completely “fair.” Fairness that depends on human observations and decisions is rife with inherent challenges, in any activity. Even in the NFL, where a critical judgment on pass interference or the coin toss for overtime can determine the outcome of a playoff game, it is evident that “fairness” can be elusive. Even if DCI could achieve total objectivity and mute the desire for consistency, there would still be the enormous challenge of comparing apples to oranges. Judging the art form that modern drum corps has become while achieving complete fairness seems unrealistic.

In an activity where people are passionate about the results, there will always be controversy over scores and placements. There may be steps DCI could take to increase “fairness,” like the random order of appearance and judging panels used in 1988. But even in the absence of any changes, the results – even with all the warts of the judging system – are appropriate almost every time. For people who are unhappy with outcomes they see as unfair, it might be best to simply appreciate the quality of the performances and recognize that no system is perfect. Modern drum corps offers great entertainment that cannot be diminished by final scores and placements. In a youth activity, it is more important to recognize and celebrate the performances; even if you don’t agree with the results, we can all admire and applaud the excellence on the field.

  • Note: For major competitions in 2022, DCI will be grouping adjacently ranked corps into groups of three and having their order of appearance randomly determined within those groups.
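For illustration only, here is a minimal sketch of how such a draw might work, assuming placeholder corps names and a simple grouping of a 12-corps ranking; the details are an interpretation of the note above, not DCI’s actual procedure.

```python
import random

def draw_performance_order(ranked_corps, group_size=3, seed=None):
    """Group adjacently ranked corps into threes and randomize the order within
    each group, while lower-ranked groups still perform before higher-ranked groups.

    ranked_corps: list ordered from highest-ranked (index 0) to lowest-ranked.
    Returns the performance order, first corps on first.
    """
    rng = random.Random(seed)
    # Chunk the ranking into groups of adjacently ranked corps.
    groups = [ranked_corps[i:i + group_size] for i in range(0, len(ranked_corps), group_size)]
    order = []
    # Reverse the groups so the top-ranked group still performs last,
    # then shuffle the order within each group.
    for group in reversed(groups):
        shuffled = group[:]
        rng.shuffle(shuffled)
        order.extend(shuffled)
    return order

# Placeholder names for a hypothetical 12-corps lineup
lineup = [f"Corps {n}" for n in range(1, 13)]
print(draw_performance_order(lineup, seed=2022))
```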

Featured image – VFW Prelims Scoreboard, Cleveland, OH, 1964