Cutting in and out – Do we still need cutscenes in video games?

In the beginning, there was Pong. Okay, so that wasn’t quite the beginning: there are a bunch of video games that pre-date Atari’s 1972 table tennis simulator. But as the first commercially successful arcade game, Pong – with its barebones, black and white graphics and simplistic gameplay – provides a perfect reference point when comparing how far our expectations of video games as an interactive art form have come.

See, whereas gamers back in the ‘70s were content to bat a handful of white pixels back and forth for hours on end, their modern counterparts expect to immerse themselves in fully realised virtual worlds where their onscreen avatars are capable of anything. What’s more, they also expect to take part in well-plotted and emotionally resonant stories worthy of a Hollywood blockbuster. These stories are typically told through cutscenes: lavishly rendered in-game cinematic sequences that do everything from kicking off, driving and concluding a game’s narrative to fleshing out its characters and establishing its overall tone.

There’s no denying that cutscenes are directly responsible for raising the standard of storytelling in video games. However, following the release of games like Death Stranding – famed video game designer/director Hideo Kojima’s ambitious, post-apocalyptic action game which boasts seven hours’ worth of cutscenes! – I’ve found myself wondering whether developers are leaning too heavily on this particular tool. In fact, I’ve started to question whether we really need cutscenes at all.

So today, I’m going to put cutscenes under the microscope – exploring their origins, tracing how they became the go-to storytelling tool for developers, and tallying up what they do (and don’t do) well – before weighing in on whether or not they should have a place in video games’ future.

No story? No problem

Like I said earlier, back in the era of Pong, gamers weren’t really that bothered about story: they were just amazed by the (at the time) cutting-edge technology at their fingertips. As time went on, video games’ complexity continued to grow, but it was clear that the medium still couldn’t tell proper stories. Indeed, the best you could hope for were the cute, barely-animated vignettes that broke up the levels in 1980’s Pac-Man.

Things started to change in the early ‘80s, as developers began to experiment with full motion video cutscenes using the LaserDisc format. These pre-recorded videos could display vastly superior visuals to sprite and vector-based graphics but came with drawbacks of their own. For one thing, the jump between FMV footage and actual gameplay was incredibly jarring, while efforts to build games out of FMV footage entirely – such as in Cinematronics’ landmark Dragon’s Lair, overseen by ex-Disney animator Don Bluth – resulted in brief, barely-interactive movies that were pretty to look at, but dull to play.

In any event, LaserDisc-based games never quite took off, so it would fall to more conventional, less technologically advanced games like Maniac Mansion, Prince of Persia and Zero Wing to carry the torch for narrative-based video games. It must be said that while these games (and others) did a respectable job of telling an engaging story, they simply weren’t backed by the hardware necessary to tell a genuinely compelling one.

And to be honest, nobody cared. It might seem alien to today’s gamers, but the only real focus for those who developed games and those who played them was the gameplay itself. Sure, early entries in role-playing franchises like Final Fantasy or Phantasy Star relied on reams of text to spin a half-decent yarn – and those truly desperate to know their motivation for rescuing Princess Peach from Bowser could consult the game manual for a bit more context – but by and large, a game’s story took a backseat to how much fun it was to actually play…up until the early ‘90s.

Greater storage equals greater storytelling opportunities

What happened in the early ‘90s? CD-ROM drives became commonplace in home computers, and – with the launch of Sony’s PlayStation in 1994 – optical discs would soon supersede cartridges as the primary delivery method for home consoles, as well.

The far greater amount of storage space offered by CD-ROMs was a genuine game-changer in every respect, and cutscenes were no exception. Almost overnight, it wasn’t just feasible for every game to offer FMVs – it was expected.

Looking back, the low-resolution video and cheap costumes and sets are beyond hokey (Star Wars star Mark Hamill probably isn’t putting his role in Wing Commander IV at the top of his résumé), but gamers lapped them up at the time, and suddenly, it wasn’t enough for a game to rely solely on its gameplay: it had to be grounded in a fully-fledged narrative – even if that meant players becoming passive participants in proceedings.

Of course, not all FMVs used film footage. Computer animation had moved on in leaps and bounds by this point, allowing developers to produce pre-rendered cinematic sequences starring digital characters capable of acting. Sure, none of these pixel-powered performers was going to nab an Oscar, but they could emote, and just as importantly, they blended in slightly better with the relatively crude in-game 3D models that had replaced sprites – theoretically leading to a more immersive experience for players.

Nowhere is this better exemplified than in 1997’s Final Fantasy VII, a watershed title that represents the first truly convincing demonstration of cutscene-driven storytelling’s potential. Indeed, the game’s most memorable scene – which depicts love interest Aerith Gainsborough’s shocking demise at the hands of villain Sephiroth – is regularly cited as one of the most influential moments in video game history, and one of the first to elicit a genuine emotional response from players.

Yet even replacing human thespians with synthetic counterparts in FMVs still resulted in a video game experience designed around periodically taking control away from players, which seems to run counter to how video games should work. It’s as if somewhere along the way, developers forgot that cutscenes were ultimately born out of necessity, and consciously or not, decided that storytelling techniques borrowed from films were the best (if not the only) way to tell stories via video games – and gradually, the line between the two began to blur…

Longer cutscenes, less immersion?

That’s no exaggeration, either: the further into the modern era of gaming you get, the more elaborate (and lengthy) cutscenes become. Unsurprisingly, Hideo Kojima has been at the forefront of this – before Death Stranding was demanding players take extended breaks from gameplay to get up to speed on the story, Kojima had already set two Guinness World Records related to cutscene length with Konami’s Metal Gear Solid espionage franchise!

Not that all developers share Kojima’s enthusiasm for extended non-playable sequences. Ground-breaking 1999 open world martial arts epic Shenmue debuted the “quick time event” mechanic, an enhanced version of Dragon’s Lair’s interactive movie conceit. Rather than letting players sit back and watch a cutscene unfold, QTEs require them to press a rapid string of button prompts to ensure that their avatar successfully navigates the scenario they’re in.

Although some critics have decried QTEs for disrupting games’ normal gameplay rhythm and making ostensibly thrilling moments a chore, they’ve since featured prominently (if not always perfectly) in smash-hit action franchises like God of War and Call of Duty, and been deployed masterfully in story-focused efforts like Heavy Rain, Mass Effect and The Walking Dead: The Game.

So, like them or loathe them, QTEs deserve kudos for at least partially solving cutscenes’ “passive player” problem. However, there’s another, arguably superior route: no cutscenes at all.

On the face of it, this might seem impossible: how can you tell a complex story without cinematics? Yet as early as 1998, Valve’s Half-Life proved that an engrossing narrative could be conveyed without ever wresting control from players, relying purely on the in-game graphics engine. All it took was (extremely) clever level design, and if the trade-off was the inability to properly flesh out the player’s character, this was more than made up for by the sensation that you were always an active participant in Half-Life’s sci-fi/action/horror storyline.

In the intervening years, indie darlings like Firewatch and What Remains of Edith Finch have followed Half-Life’s lead, largely eschewing cutscenes in favour of creative use of on-screen text and voice acting to convey their stories, to mesmeric effect.

And then there’s 2007 first-person shooter BioShock, which features minimal cinematics and, more importantly, pivots on a plot twist that forces players to commit a brutal murder themselves (rather than simply watching it unfold), which serves as a harrowing, brilliant piece of meta-commentary on the inherently fatalistic nature of video game storytelling.

All of which brings us neatly back to my original question: cutscenes – do we need ‘em?

So, do we still need cutscenes?

Certainly, several developers and pundits over the years have pushed for the industry to move away from cutscenes – not to mention Hollywood heavyweights like Steven Spielberg and Guillermo del Toro! Like me, they argue that, too often, cutscenes detract from (rather than enhance) the gameplay experience, making it less immersive.

Is it as cut and dried as that? Not by a long shot. In an impassioned op-ed for Game Front, journalist Jim Sterling makes a compelling case for cutscenes’ viability as a storytelling tool, citing examples like Metal Gear Solid 3: Snake Eater and the Uncharted series as games that – even accounting for their strong core gameplay mechanics – wouldn’t have been nearly as impactful sans cinematics.

Sterling rightly points to techniques prevalent in cutscenes such as staging, camera angles and pacing which add immeasurably to a game’s overall narrative context and atmosphere, and which would be exceptionally difficult (if not downright impossible) to replicate within the confines of player-controlled gameplay.

To me, the answer is clear, then: cutscenes are here to stay…but we don’t need them.

There will always be video games that benefit from using cinematics to tell their stories, and as developers continue to innovate, you can expect these sequences to blend more and more seamlessly with actual gameplay itself.

That said, the near-absolute creative freedom at developers’ disposal today means that they should constantly be striving towards truly immersive gameplay that puts control back in gamers’ hands, and this means replacing cutscenes with new or enhanced storytelling alternatives wherever possible.

Ultimately, though, it doesn’t matter whether cutscenes stick around forever or vanish entirely tomorrow. All that really matters is that no matter what tale a video game might be trying to tell, players always feel like they’re the ones helping to tell it.


Agree? Disagree? Let me know in the comments below or on Twitter or Facebook!
