Against Aggregation: The Anti-Criticism of Rotten Tomatoes and Metacritic
//Philip Conklin
The concept of review aggregation — the process whereby vast numbers of reviews are collected, collated, and averaged, producing a rating for each product intended to give an idea of its quality — is pervasive to the point that it seems natural. Review aggregators are everywhere you look on the Internet: Rotten Tomatoes, Metacritic, Epinions, Goodreads, Tripadvisor, AllMusic, GameRankings, and on and on. While these sites may allow us to easily compare many different products, why do we feel so comfortable using them to judge the value of works of art? Aggregation may be very helpful in deciding, say, which vacuum cleaner to buy, but why do we insist on also using it to assess films?
For a lot of people, the process of deciding what movie to watch — whether at a theater, streaming online, or in any other form — likely starts at Rotten Tomatoes or Metacritic, the two most popular review aggregators for film. But what do these sites tell us? What would a 100 percent rating mean on Rotten Tomatoes? What would a Metascore of 100 mean on Metacritic? Given their systems, would a score of 100 indicate a perfect film? A perfectly likeable film? A perfectly inoffensive film? A perfect commercial product?
Not only do review aggregators prevent any serious or critical interaction with a film, but they essentially reformulate the relationship between film and film viewer in a way that diminishes both, by transforming the filmgoing experience into a market transaction.
the systems
To explore this further, one first has to delve a little deeper into the workings of each website. Of the two, Rotten Tomatoes operates on a much simpler system. The website amasses reviews from critics — who must meet certain criteria to be included — judges whether each review is favorable or unfavorable — based on the critic’s star rating or grade, or, in the absence of either, on the content of the review itself — and then determines what percentage of reviews is favorable, which becomes the film’s final score. As they put it, “The Tomatometer measures the percentage of Approved Tomatometer Critics who recommend a certain movie.”
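To make the mechanics concrete, here is a minimal sketch of that logic in Python. The 0-to-1 rating scale and the 0.5 cutoff are assumptions for illustration only; Rotten Tomatoes’ actual criteria for counting a review as favorable, and its handling of reviews without a grade, are its own.

```python
# A minimal sketch of the Tomatometer logic described above.
# Assumption: each review arrives as a rating normalized to a 0-1 scale,
# and anything above 0.5 counts as "favorable."

def tomatometer(ratings):
    """Return the percentage of favorable reviews, rounded to an integer."""
    favorable = sum(1 for r in ratings if r > 0.5)
    return round(100 * favorable / len(ratings))

# Twenty critics all rate a film 3/5 -- they liked it, just barely:
print(tomatometer([0.6] * 20))  # 100, a "perfect" score
```

Notice what the thresholding discards: the margin by which a critic liked or disliked a film never survives into the final score.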
Metacritic’s formula is more complicated. From their website: “A METASCORE is a weighted average of reviews from top critics and publications for a given movie.” First, Metacritic’s pool of reviewers, because it’s restricted to “top critics and publications,” is much smaller than Rotten Tomatoes’. And instead of a favorable/unfavorable verdict, Metacritic assigns each review a score out of 100, based, again, either on the critic’s own star rating or grade or on the general tone of the review. Nor is each review equal: Metacritic weights each one, assigning “more importance, or weight, to some critics and publications than others, based on their quality and overall stature.” After each review is rated and all the ratings are averaged, each movie’s score is then “normalized,” or “graded on a curve,” to “prevent scores from clumping together.”
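Again as a rough sketch, and with the caveat that Metacritic’s actual weights and normalization curve are proprietary (the numbers below are invented placeholders), the Metascore amounts to a weighted average along these lines:

```python
# A minimal sketch of a Metascore-style weighted average.
# Assumption: review scores are already converted to a 0-100 scale;
# the weights are invented, since Metacritic does not publish its own.

def metascore(scores, weights):
    """Weighted average of per-review scores, rounded to an integer."""
    return round(sum(s * w for s, w in zip(scores, weights)) / sum(weights))

scores  = [80, 75, 60, 55, 50]
weights = [3, 3, 1, 1, 1]  # two "top" publications outweigh three others

print(metascore(scores, weights))        # 70
print(round(sum(scores) / len(scores)))  # 64, the unweighted average
```

The gap between the two printed numbers is the influence of “stature” at work; the subsequent “normalization” step, not sketched here, would then stretch scores apart across films.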
So in each case, “review aggregation” yields a different result: on Rotten Tomatoes, the percentage of critics who had a generally positive reaction to a film; on Metacritic, an average of assigned scores, weighted by the apparent quality and stature of the reviewer, and normalized to create a hierarchy of related products. Each of these systems is problematic in its own way, but both present a mode of judgment inimical to the project of film criticism.
rewarding mediocrity
Clearly, sites like Rotten Tomatoes and Metacritic — made possible by the vast indexing and assimilation of information provided by the Internet — owe their popularity in part to their convenience. And from this angle, their popularity is well earned; in one place, at one glance, we can see, in theory, how good a movie is, as a clear, digestible numeric value. Given this convenience, the idea that the numbers assigned to films on these sites have anything to do with the films’ quality is particularly alluring.
First, I would like to question what the information provided by review aggregation actually signifies, since I find the idea that these websites offer anything to do with a film’s merit a dubious proposition. Here we’ll have to differentiate between the two websites in question. As explained above, the grade offered by Rotten Tomatoes indicates the percentage of critics who favorably reviewed a film. The question is, of course, what counts as favorable? Judging from critics’ ratings on the website, the rule of thumb seems to be that a rating above 50 percent (that is, to reference the most common scales used by reviewers, 3/5 or above, or 2.5/4 or above) constitutes a favorable review. In other words, if a critic likes a film even slightly more than he or she dislikes it, no matter how small the margin, the review is counted as positive and entered into the equation accordingly. Take these two blurbs from critics on the page for Guardians of the Galaxy: “One of the most pleasant surprises of a mixed-bag movie summer, Guardians of the Galaxy is something akin to Star Wars on Quaaludes,” and “It's like the silly, superhero-loving kids goofing off in the back of the classroom looked at The Avengers and X-Men and declared, ‘Hey, how hard can it be to save the world? Let's do it too!’” Though I’d say they’re roughly equivalent in their level of praise, according to Rotten Tomatoes, the first is positive and the second is negative.
The problem here is as evident as the Tomatometer score is convenient. Essentially, a score of 51 percent gets entered into the equation as 100 percent. This means that if every critic who reviews a movie thought it was marginally above average, it would receive a perfect score on Rotten Tomatoes. That oft-quoted Anchorman joke — “60 percent of the time … it works every time” — bears more than a little resemblance to this system — “100 percent of the time … they liked it 51 percent of the time.” Moreover, a film that every critic unequivocally enjoyed every second of will earn the same score as the film that every critic just barely enjoyed; or, a film that 90 percent of critics thought was great and 10 percent thought was horrible will earn a lower score than a film that 95 percent of critics thought was passable. The permutations go on and on.[1]
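The arithmetic behind these permutations is easy to verify with the Tomatometer sketch from above (same illustrative assumptions):

```python
# Checking the claim above with the earlier Tomatometer sketch.
def tomatometer(ratings):
    favorable = sum(1 for r in ratings if r > 0.5)
    return round(100 * favorable / len(ratings))

divisive = [0.95] * 90 + [0.10] * 10  # 90 critics loved it, 10 hated it
passable = [0.60] * 95 + [0.40] * 5   # 95 critics found it merely passable

print(tomatometer(divisive))  # 90
print(tomatometer(passable))  # 95 -- the blander film scores higher
```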
Essentially, Rotten Tomatoes rewards mediocrity, while paying no heed to the difference between mediocrity and greatness. Theoretically, of course, a great movie will earn a higher score than a mediocre one. But greatness is very rarely a unanimous verdict, while we can all likely agree on slightly-above-average.
With Metacritic, the method is more contrived but the result just as meaningless. Its score is a true average of critics’ scores (adjusted to a 100-point scale) rather than a favorable/unfavorable tally, which is why Metascores tend to run lower than Tomatometer scores; the trouble begins when Metacritic starts to “weight” and “normalize.” The easiest criticism to level against this practice is that it’s arbitrary; there’s no reason to “prevent scores from clumping together” other than to make a cleaner, more distinguishable list, meaning that the quality of a given movie may be misrepresented even according to this flawed system. Further, by assigning more importance to certain critics and publications, the results serve only to reinforce established tastes, rather than consider a film on its own merits.
market imperatives
These can probably all be seen as minor quibbles about a generally useful system. After all, review aggregation is more a tool for comparing a variety of products than an absolute and objective judge of quality. These sites are meant to help you decide which film to see, one might argue, not to tell you precisely how good every film is.
But there’s more to it than that. What is most problematic about the idea of review aggregation proposed by Rotten Tomatoes and Metacritic is that it shifts the nature of the relationship between the film and the filmgoer. Instead of a work of art to be assessed and appreciated on its own terms, a film becomes a product in a market, compared, in a sort of Consumer Reports of culture, against other products with apparently identical possible benefits but varying degrees of success. This turns filmgoing, at its core, into nothing other than an economic transaction.
Review aggregation first of all eliminates differences between films by judging all of them on the same scale and with the same criteria. As an example, let’s compare Closed Curtain with Guardians of the Galaxy, which, at the time of writing, both hold a rating of 92 percent on Rotten Tomatoes. Closed Curtain is an Iranian film directed by Jafar Panahi, a well-known filmmaker who in 2011 was arrested for shooting a film without a permit (necessary for any film production in Iran, whose film industry is heavily censored) and sentenced to six years in prison and a 20-year ban on directing films, giving interviews, and leaving the country. Though Panahi is not currently serving his prison sentence, the ban on filmmaking is still in place, and Closed Curtain is therefore an illegal film. Guardians of the Galaxy is a Hollywood film produced by Marvel Studios (a subsidiary of The Walt Disney Company) on a budget of $170 million.
By this juxtaposition I don’t mean to claim that Closed Curtain is automatically superior to Guardians of the Galaxy, or that any “art film” is by nature better than any “blockbuster.” But it would be even more outrageous to claim that these two films have comparable virtues or similar goals, or that they should be judged by the same system and in relation to one another. For the most part, I enjoyed Guardians of the Galaxy (I suppose that enters me into the “positive” sector on Rotten Tomatoes); I have not seen Closed Curtain. But I’m certain that nobody would have the exact same reaction — that is, 92 percent contentment — to both films. Closed Curtain is as much an act of political subversion as a work of art, and its director is a dissident whose low-budget, independent productions use non-actors and blend fiction and documentary while addressing the political and cultural climate of Iran. Guardians of the Galaxy is a prestigious Hollywood production with high-profile stars and a lavish budget that executes its comic book source material with some charm, albeit of a cloying, crowd-pleasing kind. I would not argue that this makes one better than the other, but rather that such differences should preclude comparison.
But comparing is just what Rotten Tomatoes and Metacritic do — it is their essence. One doesn’t visit these sites to determine how good a film is, but to determine how good it is in relation to other films. This is inscribed in Metacritic through its system of hierarchizing movies; it is prevalent all over Rotten Tomatoes, where you’re assailed with lists of celebrities’ highest-rated work, weekly “Top Rentals,” “24 best and worst sports coaches in film history,” “Robert Rodriguez’s 10 Best Movies.” On these websites, all films are caught up in a perpetual cycle of comparison with one another, in which what is being compared matters less than how many ways it can be compared.
This insistence on comparison is a result of the reconfiguration of filmgoing as a market transaction, which places a premium on the decisions of the participants in the market. “Weekend decision-making is centered around Movies,” proclaims the About page on Rotten Tomatoes’ parent site. And from Metacritic: “Metacritic's mission is to help consumers make an informed decision about how to spend their time and money on entertainment.” A practical mission, but a destructive one, in that it assumes there are only two possible responses to a film: satisfaction or dissatisfaction; the comfort of $12 well spent or the anxiety of a poor economic decision; the pleasure of entertainment or the frustration of anything else. While one is preferable to the other, both of these responses are to my mind bland and undesirable; they make no room for the challenging, controversial, puzzling, or contradictory films that are often the most pleasurable, the ones from which the most is to be gained. Further, I would argue that this framing, which seems so natural because of its prevalence, its apparently unchallenged dominance as the preferred mode of judging culture, is a profoundly unnatural way to deal with films, and with works of art in general.
Imagine, for example, visiting a museum where, instead of a description providing some context or information about the artist, each work was presented with a rating, and every painting in the museum was judged on exactly the same scale, so that Warhol, Brueghel, and an anonymous Russian icon painter were all assessed in exactly the same way. Surely one’s experience of art would be handicapped in such a system, yet we readily accept it for films. The comparison between movies and art in a museum may seem exalted, but that fact alone says a lot about how we think about movies.
studio marketing tools
Film production is, of course, a business, one dominated by a few multinational media conglomerates (The Walt Disney Company, Comcast, Time Warner, National Amusements, Sony, and 21st Century Fox), each of which owns one of the six major Hollywood studios (Walt Disney Pictures, Universal Pictures, Warner Bros., Paramount Pictures, Columbia Pictures, and 20th Century Fox, respectively). Metacritic is owned by CBS, which is owned by National Amusements, which owns Paramount, one of the aforementioned Hollywood studios responsible for a good portion of the drivel Metacritic is dedicated to appraising. Rotten Tomatoes is owned by Flixster, which is owned by Time Warner, which owns Warner Bros., another of those studios. It’s no coincidence that these websites are operated by companies that each own a major movie studio. It is in the interest of these corporate leviathans, not the individual who uses the sites, that the wheel of film assessment turns toward review aggregation and everything it entails.
To understand this, it’s important to start with some historical context. Before television, VHS, DVDs, and long before the Internet, studios could count on a steady flow of filmgoers attending movies at a theater. In 1947, 90 million people (at that time two-thirds of the country’s ambulatory population) went to see a movie in an average week. By 2003, that figure had dropped to less than twelve percent of the population.[2] Whereas in Hollywood’s “golden age” there was a large established audience that could be expected to see whatever was playing at the local theater, today the studios must create an audience for each film they release (cf. Epstein, pp. 177-197).
Rotten Tomatoes and Metacritic serve as essential tools in the studios’ mission of audience creation. They rustle up buzz for upcoming films by promoting clips and trailers; they feature interviews with cast members; and they not only list theatrically released movies but also trumpet DVD releases with weekly lists of “Top Rentals,” “New Releases,” and “Coming Soon to DVD.” And what they present as essential movie news is nothing other than studio promotional material; does the headline "Sean Harris Eyed for Mission: Impossible 5" inform anyone of anything they need to know, or simply make them aware that Mission: Impossible 5 will be released at some point in the future? It's important to realize that awareness of an upcoming film's existence is the single most important element of film marketing, making Rotten Tomatoes and Metacritic vital and successful parts of the studios' marketing apparatuses.
Further, Rotten Tomatoes especially continues to promote awareness of previous studio releases by compiling lists of top-rated work by certain directors, by genre, or by anything else it can think of. Rotten Tomatoes is currently featuring a “60 Worst Summer Movies” list; is this because some completist fans out there are really itching to know the worst-reviewed summer movies since 1975, or is it simply a way to remind people of movies they would otherwise have no occasion ever to think about, thus extending the life of a worthless product beyond anything it deserves or could have hoped for without sites like Rotten Tomatoes?
Rotten Tomatoes and Metacritic are not so much helpful tools for the consumer of films, or at least not entirely; first and foremost they are centers of audience creation for the products they rate. They constantly promote buzz and awareness for theatrically released movies, DVDs, television, and, in the case of Metacritic, video games and music. It’s also important to consider that DVDs earn more money for the studios than theatrical distribution, that each of the major studios owns a television network, which likewise brings in more money than ticket sales at theaters, and that video games and music are big profit-makers for some of the studios, particularly Sony.[3]
Of course, this isn’t due to any grand design on the part of the studios. Rotten Tomatoes and Metacritic were founded by individuals and only later bought out by media companies. And whether or not it’s possible to know to what extent the studios that own the websites have a hand in shaping their content, or promote their own films over those of other studios, the fact that both websites were purchased by a company that owns a major film studio shows that the media companies understand the value inherent in them.
audience rating, critics’ consensus
The aggregation of ratings, the collection of instantaneous value assessments, pervades the culture to a shocking extent. It’s rare in our media-saturated world to encounter a product of any kind presented without some sort of value judgment attached. Netflix, Hulu, Amazon, IMDb, Barnes and Noble, and iTunes all attach an audience rating to their products; television shows like The Voice and American Idol ask viewers to participate by rating their favorite performers; you can hardly watch any TV show without being asked at some point to tweet a response to a poll, the results shown on your screen in real time; it is everywhere. Seemingly, this kind of thing is in line with the noble spirit of democracy. But in his book On Television, Pierre Bourdieu argues otherwise: audience ratings, in his account, subject cultural production to the immediate sanction of the market, a harmful rather than democratizing force.
In this light, consider Charlie Chaplin’s 1947 film Monsieur Verdoux. On the heels of a public scandal (a paternity suit was brought against Chaplin, who was acquitted of related federal charges), and amid rumblings about Chaplin’s political leanings, the film was almost universally panned when it was released; at the premiere, audiences hissed their disapproval; it was a box office flop, surviving only a month-long run before being pulled by its distributor; and Representative John E. Rankin of Mississippi was so morally outraged that he called for Chaplin’s deportation. Safe to say, its Rotten Tomatoes and Metacritic scores would have been less than adulatory. But writing for The Nation, James Agee was among the few critics who praised the film: “In case this leaves any doubt of my opinion of the film, let me say that I think it is one of the best movies ever made, easily the most exciting and most beautiful since Modern Times. I will add that I think most of the press on the picture, and on Chaplin, is beyond disgrace. I urge everyone to see Monsieur Verdoux who can get to it.” (Agee on Film, p. 250) Agee went on to write an impassioned three-part defense of the film over the next month in the magazine. Luckily, time proved him right, and the film is now considered a masterpiece. Review aggregation leaves little room for dissenting opinions like Agee’s; it doesn’t prevent them from existing, but it strips them of their power, subsumes them into the formula, makes them little more than a fraction of a percentage point in a meaningless score. We can’t be sure whether Rotten Tomatoes and Metacritic make it impossible for films ever to bounce back from poor opening notices, but they have certainly made it more difficult; not only do people no longer feel the need to read any reviews, but the apparent “critical consensus” makes it seem as though there are no opposing opinions to consider.
Even if a real “critical consensus” were possible, it would hardly be more helpful than an audience rating, a score both websites also supply. The only real difference between the average score of the users and the average score of the critics is that the latter represents a much smaller and more specific portion of the population. Serving only as a measurement of how much a certain group — one that, to be sure, generally has a better handle on the material than most — enjoyed a specific film, the ratings on Rotten Tomatoes and Metacritic only underline the importance of market imperatives in the film industry. Despite their democratic appearance, these sites actually strip the individual of any power. Your voice, and the voice of the critic, may be heard, but it means nothing.
conclusion
Rotten Tomatoes and Metacritic, and all review aggregators, have nothing to do with film criticism; in fact, they are engaged in an activity exactly opposed to it. When reviews are collated and averaged, critics, even the worthwhile ones — who are few and far between — become nothing more than a small section of the undifferentiated mass of consumers constantly rating products. Film criticism should provide context; it should reveal meanings; it should encourage meaningful engagement with movies; it should challenge us to discover new perspectives, to see more clearly, to analyze more deeply. The value of criticism, moreover, rests on concerted, thoughtful interaction with a film, on the part of both the critic and the reader, but this process is inhibited by instant, averaged value judgments, which can only reduce individual thoughts to a meaningless, homogenized mishmash. Review aggregation places movies in the vicious continuum of the market, reduces them to commercial products rated by ease of digestion, and transfigures criticism into a useful marketing tool for multinational media corporations.
Maybe this sounds lofty or dramatic, but when you see the extent to which these rating systems have infiltrated every aspect of culture, a forceful response proves warranted. Metacritic and Rotten Tomatoes ratings are included on the Wikipedia page of every recent film; film advertisements feature the same ratings; when you google any film title followed by “review,” its Rotten Tomatoes score appears at the top of the page; newspapers have adjusted their publishing schedules so that their reviews may be counted in the Rotten Tomatoes scores published every Friday. These review aggregators are convenient and have been made to appear intuitive, but their destructive logic inhibits meaningful interaction with cinema.
//Philip Conklin is a co-founder of The Periphery.
Footnotes:
[1] Rotten Tomatoes also offers an “Average Rating,” a score out of ten that is a true average of critics’ ratings, making it closer to the Metascore. But since this rating appears only on a movie’s individual page, and then only in small type beneath the Tomatometer score, it doesn’t play a big enough role in Rotten Tomatoes’ functioning to bear discussion here.
[2] Epstein, Edward Jay, The Big Picture: Money and Power in Hollywood, p. 17
[3] In The Big Picture, Epstein shows that in 2003 the big six studios not only lost money on the theatrical release of their films (with a total deficit of $11 billion at the world box office) but on average spent more just on advertising and prints for a given film ($39 million) than they earned from ticket sales ($20.6 million). Their profits come almost entirely from ancillary markets like DVDs, television, theme parks, video games, etc.