Monday 4 July 2011

Shades of Gray

Representing morality is an increasingly popular trend in gaming. More and more games, RPGs in particular, have taken to assigning karma to players and their actions. There seem to be two main ways of doing this – arbitrarily assigning a moral value to each action, or keeping the morals behind decisions gray. Of course, there are advantages and disadvantages to each, and neither can really be considered objectively superior.

The first method is the most common. It comes with an obvious advantage for both the designer and the player – it is easier to implement and to understand. Since it paints every action as good, evil, or occasionally neutral, all behavior is reduced to a small number of outcomes. This makes it much easier to determine consequences for certain types of behavior – instead of, for instance, determining how scene X plays out by tracking every previous action, the game can simply consult a single core moral statistic that various events adjust throughout the game. While this is by far the most straightforward way of representing morality, it is not without its disadvantages.
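To make the idea concrete, here is a minimal sketch (in Python, with entirely made-up event names, values, and thresholds – none of this is taken from an actual game) of how such a single-stat system might be wired up:

```python
# A minimal sketch of a single-stat karma meter. Event names, deltas and
# thresholds are invented for illustration.
KARMA_EVENTS = {
    "donate_to_beggar": 25,
    "steal_supplies": -30,
    "free_captive": 50,
    "unprovoked_kill": -100,
}

class KarmaMeter:
    def __init__(self) -> None:
        self.value = 0  # the single core moral statistic

    def record(self, event: str) -> None:
        # Every action maps to a fixed karma delta, regardless of context.
        self.value += KARMA_EVENTS.get(event, 0)

    def alignment(self) -> str:
        # Thresholds reduce all behavior to good / neutral / evil.
        if self.value >= 50:
            return "good"
        if self.value <= -50:
            return "evil"
        return "neutral"

def play_scene_x(karma: KarmaMeter) -> str:
    # Scene X never needs to know what the player actually did,
    # only which bucket their karma currently falls into.
    return {
        "good": "The villagers welcome you as a hero.",
        "neutral": "The villagers eye you warily.",
        "evil": "The villagers bar their doors.",
    }[karma.alignment()]

meter = KarmaMeter()
meter.record("free_captive")
meter.record("steal_supplies")
print(meter.alignment(), "->", play_scene_x(meter))  # neutral -> wary villagers
```

The problems described below fall straight out of this structure: the karma value attached to an action is fixed in advance, so it cannot account for what the player actually knew or intended at the time.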

It is generally unrealistic, for instance. In reality, most decisions boil down to something more complex than simply ‘good’ and ‘evil,’ and, indeed, such concepts are not as concrete as they are often shown to be in fiction. As such, creating definite good and evil sides can come across as, at best, strange, and, at worst, immersion-breaking. In fact, this is one of the most common criticisms of this method of representing morality. It also opens the door to a lot of unfortunate implications, such as actions that are not especially evil being frowned upon, or vice versa. Take Fallout 3, which is perhaps the best (or… worst?) example of this: killing evil characters who have yet to show any evidence of being malicious (e.g. Sister, who, as the player can discover, is a slaver) rewards the player with good karma, which can be seen as rather… extreme. To clarify – it is possible for the player to kill Sister without ever learning of his evil nature (he just comes off as a bit of an ass) and be rewarded for what is, for all intents and purposes, a random murder.

Additionally, it tends to struggle with morally gray areas. A choice between two warring factions who are equally gray on any moral scale cannot be adequately represented by a simple karma meter. This can lead to confusion and, at worst, more unfortunate implications. Fallout 3 provides another example (it really is a showcase for poor morality systems), this time in The Pitt DLC, which centers on a very morally ambiguous struggle between a harsh slaver who is genuinely working towards a greater good and a slave who wants to be free, but who will (spoiler alert), assuming he wins, leave the Pitt with no viable cure for the horrible disease sweeping it. This could make for a very good, interesting conflict, were it not for the fact that siding with the slave is shown as definitely good, and siding with the slaver as definitely bad. This may seem obvious at first, but below the surface it is a fight between a well-intentioned man who genuinely seeks to cure his charges, albeit through drastic measures, and a man who is just as selfish as any slaver (he wants to use the cure for profit) and far less competent than the former. A potentially interesting philosophical conflict ruined by game limitations, then.

This is not to say such methods of recording karma have never been implemented successfully, though. Bethesda have actually done it quite well in previous games, with the Elder Scrolls series portraying unprovoked murder as effectively always evil, though those games had their own problems (the ability to goad civilians into attacking you in Morrowind, for instance, allows for what is effectively unprovoked murder regardless of the actual situation). Bioware seemed fond of this approach as well, up until Dragon Age, with everything from Mass Effect to their D&D games using a moral system – fairly obviously, in the case of the latter, given that D&D gave rise to the entire phenomenon of moral systems. Indeed, many of the problems present in videogame morality, and more besides, are present in D&D’s alignment system, but that’s a topic for another time. Other games have successfully harnessed a moral-meter-style system – the Rollercoaster Tycoon series has a variant in its park rating system, which measures how guest-friendly your park is, taking into account things like mess, safety, seating, and the entertainment and facilities provided. It works quite well, except for the fact that it can be exploited (by forcing unhappy guests to leave your park, either by moving them to the exit or by less… ethical means), though, to be fair, this is more of a cheat than a legitimate tactic.
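As an aside, the park rating idea boils down to a weighted composite of several factors rather than a running tally of good and evil deeds. A rough sketch, with invented factor names and weights (the real games use a more involved formula, rated out of roughly 1,000):

```python
# A rough sketch of a composite park-rating-style gauge. Factor names and
# weights are invented; the real formula is more involved.
def park_rating(cleanliness: float, safety: float,
                seating: float, entertainment: float) -> int:
    """Each input is a 0.0-1.0 score; the result is a rating out of 1000."""
    weights = {
        "cleanliness": 0.30,
        "safety": 0.30,
        "seating": 0.15,
        "entertainment": 0.25,
    }
    score = (cleanliness * weights["cleanliness"]
             + safety * weights["safety"]
             + seating * weights["seating"]
             + entertainment * weights["entertainment"])
    return round(score * 1000)

print(park_rating(cleanliness=0.9, safety=0.8, seating=0.6, entertainment=0.7))  # 775
```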

So, that’s black and white morality covered. Now, onto the gray area, and less concrete definitions of morality. Here, rather than assigning choices arbitrary statistics, each choice has its own consequences based on the player’s actions. This is, obviously, harder to implement, as, rather than feeding into a central system for defining morality, each decision has to be considered independently. It can also cause some confusion among players, and, despite the increased potential, it often reverts back to a black and white system in practice. Unfortunate implications are just as common as in the other system, and it can still be immersion-breaking – in the Baldur’s Gate series, one must, at various points, choose between two equally gray factions, which makes it hard to roleplay a paladin or another lawful or good character.
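In implementation terms, this usually means remembering individual decisions and letting each later scene check for the specific ones it cares about, rather than consulting a summary score. A minimal sketch, with decision names and outcomes invented for illustration rather than drawn from any of the games above:

```python
# A minimal sketch of per-decision consequence tracking. Decision names and
# outcomes are invented, not drawn from any particular game.
from dataclasses import dataclass, field

@dataclass
class WorldState:
    # Each choice is stored individually; there is no global morality score.
    decisions: set = field(default_factory=set)

    def choose(self, decision_id: str) -> None:
        self.decisions.add(decision_id)

    def chose(self, decision_id: str) -> bool:
        return decision_id in self.decisions

def epilogue(world: WorldState) -> list:
    # Every consequence is authored against specific earlier choices,
    # which is why each decision has to be considered independently.
    lines = []
    if world.chose("sided_with_rebels"):
        lines.append("The rebels hold the city, but the cure is lost.")
    else:
        lines.append("The city remains in chains, but a cure is found.")
    if world.chose("spared_the_informant"):
        lines.append("Years later, the informant repays the favor.")
    return lines

world = WorldState()
world.choose("sided_with_rebels")
world.choose("spared_the_informant")
print("\n".join(epilogue(world)))
```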

However, it does provide developers with greater potential for nuanced morality. It also makes it easier to create continuity between sequels (Mass Effect, King’s Quest, and so on). Additionally, it can come across as less condescending than the more straightforward method – more like the developers saying “You did this, here are the consequences” as opposed to “You did this, have an extra number on a statistics screen.” When it is implemented, it tends to be done rather well. Interestingly, the best example I can think of comes from a game with an otherwise black and white morality system: Neverwinter Nights. I am referring to Charwood, a quest with four possible outcomes in which, while there is a fairly obvious good option, choosing it has the worst consequences. This does not come across as punishing the player, as they are given this information beforehand and can factor it into the decision. It makes for a nice contrast with the rest of the game, and allows for an interesting situation.

Of course, I don’t mean to imply that there are only two systems of representing morality. That would be incredibly hypocritical on my part, having just criticized morality systems for reducing everything to a two-horse race. There are various points along the scale between these two extremes, as well as systems removed from it entirely. You have things like Dragon Age, in which each decision is taken individually but can also affect a variety of karma meters in the form of your companions’ attitudes towards you, or the Sims 2, in which your Sims’ aspiration meters and lifetime aspirations act as something of a gauge for how well the player has treated them. In truth, there are myriad ways of portraying morals and ethics, and no single blog post can hope to do much more than scratch the surface. Personally, I am looking forward to seeing how developers explore human morality and other aspects of philosophy in the future.
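As a final aside, the Dragon Age-style hybrid mentioned above can be pictured as decisions that are remembered individually but also nudge several approval meters at once. A small sketch – companion names and reaction values are, again, invented:

```python
# A small sketch of a hybrid system: decisions are remembered individually,
# but each one also nudges several approval meters. Companion names and
# reaction values are invented.
class Party:
    def __init__(self, companions):
        self.approval = {name: 0 for name in companions}
        self.decisions = set()

    def decide(self, decision_id: str, reactions: dict) -> None:
        # The decision itself is kept for later, scene-specific consequences...
        self.decisions.add(decision_id)
        # ...while each companion keeps a running opinion of the player.
        for name, delta in reactions.items():
            if name in self.approval:
                self.approval[name] += delta

party = Party(["Alen", "Morra"])
party.decide("spared_the_bandits", {"Alen": 10, "Morra": -15})
print(party.approval)  # {'Alen': 10, 'Morra': -15}
```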

4 comments:

  1. I know this principle from Bioshock! Pretty nice game by the way.

  2. I like your spin on the Karma meter. Good analysis. I'm thinking twice about white lies now.
