Game Reviews – A Critique
Game reviews are an essential part of gaming. For as long as gaming has existed as a hobby, there has been a huge difference in the quality of games on the market. To save money and time, gamers have needed some way of getting advice and information about which games were good and, more importantly, which games were bad.
Back in the days when I grew up with the Commodore 64 and the Spectrum, the bad games were so bad no one would ever want to play them. The good games, meanwhile, were difficult to find and often came from small, obscure developers rather than the big names of the day like Ocean and Codemasters. Reviews served as a kind of formal word-of-mouth recommendation, written as they were by hobbyists and fellow gamers keen to make sure the best games were played and appreciated. Price, publisher and developer were poor indicators of a game’s quality; it was all very random, and money was easily wasted on games that frequently had little in common with their awesome box art.
Avoiding the crap is the key purpose of reviews for me. Gamers want to know about the really bad games: the movie tie-ins rushed out for a quick sale, the also-rans shamelessly copying more popular titles and the broken games that are, as Philip K. Dick would say, buggier than bat-shit. Gamers can be tricked, fooled by doctored screenshots, seduced by box art or tempted by misleading advertising. It’s the reviewers who look out for us. Reviewers don’t tell us what we should think about games; they tell us about their own experiences: what they liked, what they hated and, crucially, whether the game is worth our hard-earned pennies.
There are a number of criticisms of contemporary game reviews, and I’m going to introduce the three biggest here before discussing the first of them in more detail. They are scores, bias and corruption.
Review scores are one of the most controversial aspects of game reviews. Gamers usually want them, journalists usually don’t, and developers and publishers only want them if they’re over some arbitrary number like 8.5. You may think that review scores don’t affect your buying decisions, but recent studies have shown that the link between game scores and how we rate games is causal rather than correlative. In other words, if they say we’ll like the games, we’re more likely to like the games (Joystick).
Review scores have a massive impact on game sales, so there’s pressure on review sites to give inflated scores, especially to games that are advertised heavily on those sites. The truly awful Kane & Lynch was given a low score on GameSpot, and this resulted in the reviewer, Jeff Gerstmann, being shown the door. That led him, along with several other journalists, to jump ship and create the much more reputable site Giant Bomb.
Such outright corruption is probably rare; it’s more likely that reviewers are subtly influenced to review games highly. A poor review may not always lead to demands to reconsider, or cost a reviewer their job, but angry PR people hounding you and threatening to withhold exclusives can be just as effective. Added to this, fanboys of particular franchises can be very vocal, so it’s safer to be kind to the likes of Final Fantasy, Zelda or Halo than to lose those fanboys by pointing out genuine shortcomings.
This means there are a number of factors pushing game scores up and far fewer good reasons for reviewers to score games low. The result has been the slow and horrible adoption of the 7-to-9 scale, wherein most games fall within that narrow range. The occasional terrible or broken title will be given a 4, but on the whole the disappointments get a 7 and the great games get a 9, with everything else falling in between. Particularly cowardly sites like IGN will even re-review a game with a “second opinion” if a major title gets a score fanboys see as too low, in a pitiful attempt to wash their hands of any danger of giving a genuine opinion.
Sites like Giant Bomb and Kotaku avoid the ten-point scale and gleefully mess with the Metacritic scores of games. Many publishers say how much they hate this, which means it must surely be a good thing for gamers. Do we need anything more than the scores provided by an aggregator like Metacritic?
Most journalists will grumpily tell you that the score is just a full stop at the end of the review, designed to bookend the far more important information in the body text. This would be fine if so many reviews didn’t completely miss the point of reviews in the first place. Varying wildly in content, reviews range from narrative descriptions of the reviewer’s experience to histories of the developer to bizarre James-Joyce-style stream-of-consciousness bullshit. In future I’ll write more about bad reviews, but as an example take this self-indulgent nonsense from Tim Stone on Eurogamer.
With reviews showing so little grasp of what matters to gamers, opinions of games are often formed by comparing what the journalist says against the gamer’s own experience, in an attempt to evaluate the journalist’s reliability. Gamers play demos, watch videos and read developers’ blogs, weighing these pieces of evidence to build a complete picture of a game before committing to buying it. This generally works for simpler games, but complex titles like Final Fantasy 13 take many hours of play before offering their best moments and have a structure that is not well conveyed by videos or walkthroughs.
Review scores are here to stay. When Edge magazine experimented with omitting game scores for one issue, the feedback they received was obviously not positive: they went back to their ten-point scale in the very next issue.
As you read this post, you’re free to rate it with the buttons at the top of the screen. I include these because I want to improve the content of the site; I want to know what you think. Game developers need to use scores in the same way. If they don’t listen and instead blame the critics, they end up looking like Kevin Smith. No one wants to look like Kevin Smith.
Next time on this subject: Bias!