Subjective Objectivity

Hello! I was going to sit down today to write you a brief treatise on Hearthstone, so that I could get back to playing more Hearthstone, but, as I thought about it, one of the little worms that lives in my hair crawled into my brain and took it hostage. “Crap!” I thought, “Now I’m going to have to accept its demands or risk never again feeling the warm touch of a greasy controller in a hot-seat hockey match.” So, it was a close thing. But, here we are; you know which decision I made. I’m going to keep this brief, because you don’t come here to be held hostage to the whims of my brain-worms. That’s my job. No, you’re here to root through the words I cobble together for spelling errors. Aren’t you? Someone has to worry about that…

Anyways, the topic of today's "conversation" is subjective-objectivity. That may sound like a contradiction in terms, but it's more accurate than you'd think. If you were alive in the Double O's (and if you weren't, I'm surprised that you're precocious enough to be here), then I'm sure you were aware of the explosion in popularity of abusing the term "objective". The internet and Facebook were beginning to flower, so we needed a way to interact that made sense. Enter: Objectivity. When you lack a lot of cultural common ground with people, it can be difficult to communicate. That's why phonetic kittens became the most popular form of greeting. It was something almost everyone could agree on, he said glibly.

But, I'm here to tell you that our worship of objectivity hasn't died down. On the contrary, we've canonized and deified it. It cackles gleefully as we praise charts and graphs. It draws life-energy from those who worship science without understanding it. And it lives in game review scores. You see, the problem with objectivity is that it requires a standard to be measured against. Mathematics has proofs. Psychology has models and experience. Physics has Maths. The internet has one person yelling at the other until one of them logs off (both of them considering themselves to have been validated). When you say something is objectively better than another thing, you're saying that it is better on a theoretical scale. That scale is where subjectivity comes into play.

You see, the scales themselves are usually developed through rigorous testing and prediction. We don't just throw theoretical models together willy-nilly (unless you're me deciding how I feel about a game, but we'll come back to that later). Personality scales and universal models both rely on a core of tested principles. However, and let's get specific here, are those models objective? Noooo, they're the product of their time and place. Let's look at I.Q. scores, because, as a member of the psychological community, I hate seeing them abused.

Based on my I.Q. score, I'm way above average, but what does that even mean? I mean, you've read my writing. I'm groping in the dark here! Seriously, what problems could I have with a systemic, trusted model that reinforces the words that drip from my fingers? Well, for starters, it's broken as a universal model. Many of the questions used in the old standard I.Q. tests were culturally biased. In fact, they were used early on to confirm the old Imperialistic notion that Westerners ruled and everyone else drooled. Unfortunately, it was a foregone conclusion, because non-Westerners were failing the sub-textual Western Culture portions of the exam. You know when a piece of pop-culture trivia randomly helps you pass a test? Yeah, it was like the opposite of that.

Even worse, I.Q. scores don't stand the test of time. If you get tested when you're a kid, then you're going to need to go again. Not unlike STI tests, the results will vary as you live your life. It's a pretty ephemeral measure of intelligence, honestly. Not that intelligence was ever a solid thing to begin with. Whip out your intelligence right now. Come on! Do it! Let's compare sizes. Sorry, I was having a forum-flashback. It gets worse as you approach the upper ranges. Roughly 68% of the population falls between 85 and 115, and the percentages you start dealing with past 125 are portions of 5%. Since everyone's I.Q. measurements fluctuate daily, you're not really in a hard-and-fast ranking system as much as you are a ranking swamp, drifting in a milieu of people around your intelligence range, never sure where you belong. (Pro-Tip: It doesn't matter that much. How you use your intelligence is far more important and reflective. Look at me, I write about games and culture. >.>)
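For the curious, those percentages fall straight out of the way the test is scored: modern I.Q. scales are fitted to a normal distribution with a mean of 100 and a standard deviation of 15. Here's a minimal sketch of the arithmetic in Python; the cut-offs are the standard ones, but everything else (the helper name, the rounding) is just mine:

```python
from math import erf, sqrt

def normal_cdf(x, mean=100.0, sd=15.0):
    """Probability that a normally distributed I.Q. score falls at or below x."""
    return 0.5 * (1.0 + erf((x - mean) / (sd * sqrt(2.0))))

# Share of the population within one standard deviation (85-115)
within_one_sd = normal_cdf(115) - normal_cdf(85)   # ~0.683

# Share of the population scoring above 125
above_125 = 1.0 - normal_cdf(125)                  # ~0.048

print(f"85-115: {within_one_sd:.1%}, above 125: {above_125:.1%}")
```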

The I.Q. test was originally developed to test for average intelligence, so it's not properly calibrated to sense differences past a certain point. You know how you can tell the difference between a warm and a hot mug of tea, but you can't discern the subtle temperature differences between a branding iron and a glowing stove-top? Part of that is because of (I hope) a lack of exposure to both, but another part of it is because your body hasn't developed the tools necessary to know the difference. It has never needed to. Generally, exposure to extreme temperature requires one response: Get The Fuck Away. GTFA for short. By the same token, we're measuring average intelligences and how they stack up: where on the spectrum they are. I'm sure we could calibrate a genius-level test, and some people have spent a good deal of time working on this for ostensibly non-narcissistic reasons, but would it really be helpful?

Knowing if someone is below 70 I.Q. points has legal ramifications in the States, but we don't have any requirements linked to high I.Q.s. Except Mensa. And, honestly, Engineering-focused Universities SHOULD have an I.Q. cap built into their student recruitment procedures to reduce the sheer number of super-villains they spawn from Mensa's ranks. Biosci and Robotics programs, too. Thanks for reminding me, Spider-man. Prevention > Cure

Right, yes, the point of all this was to illustrate how a carefully designed, rigorously tested system can have shortcomings in some areas. I.Q. examinations are purpose-built tests, and we should be aware of what that purpose is when we employ them. This applies to every model we have. They exist, and function best, in the systems in which they were developed.

But that doesn't always last. Behaviourism was considered the be-all of Psychology at one point. The motto was: Pay attention to and mold the behaviour. Behaviour is everything. You can't prove the other stuff. Eventually, we discovered that the principles of that system didn't hold up. I mean, to anyone with cognitive thought, it can seem ludicrous to consider that there might not be cognitive thought. That is, until you try to prove it exists objectively. Think of it this way: a behaviourist can alter the behaviour of an organism without the organism knowing about it at all. So, what's the point of knowing? Does knowing even come into it? It was all unprovable rubbish to the psycho-orthodoxy, either way.

Before that, we had Freud. If you disagreed with him, well, then he'd say you were in denial or make some other equally unprovable claim. The point is, if you stuck both of those people in a room, and made them watch a third party doing something completely innocuous, like smoking a cigarette, they'd come up with different reasons for why that person was doing it. One would be based on an oral fixation hypothesis. The other would probably revolve around operant conditioning and social learning theory. By today's standards, both of these would seem false to a degree. Now, we're all about Neuroscience and addiction. Each of these three explanations is backed up by years of study. The models they appeal to were rigorously designed. They all make predictions, and, in this case, they've all been validated. But, which one is right?

There's a pervasive belief that there's a right answer. Well, I'm here to ask, "A right answer according to what?" Having a model or a scale, a graph or a chart, doesn't mean anything without interpretation. That's why two people can look at the same climate-change chart and have two very different reactions. One will call it a climate shift in keeping with one of our current models. The other will begin digging a shelter in a backyard somewhere in Canada. Yes, we can objectively see that one part of the graph is higher than the other, but what does it mean?

Models for climate change and human behaviour are pretty abstract. With a proper understanding of Maths and a free stats package, you can bend numbers to your will in unbelievable ways. Even the concept of an outlier is mind-blowing when you think about it. "Why is that one point way over there?" "Oh, don't worry, it's obviously not part of our set. We just won't include it." The logic there is both hilarious and accurate. Something about that dot on the edge of the graph set it apart from the others, but what? We'll never know unless we look, but we rarely look, because we can explain it away. So, interrogate graphs thoroughly (don't skip the thumb-screws), but also don't make life-decisions based on them. You have no idea what went into the making of one. Look to the source data or approach it skeptically. Even the source data isn't above being tampered with.
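To see how mechanical that "explain it away" step can be, here's a minimal sketch of the textbook 1.5 × IQR rule of thumb in Python. The data and the quick-and-dirty quartile lookup are made up for illustration; a real stats package would interpolate the quantiles more carefully:

```python
def iqr_outliers(values):
    """Flag points that fall outside 1.5 * IQR of the quartiles (the usual rule of thumb)."""
    data = sorted(values)
    n = len(data)
    q1, q3 = data[n // 4], data[(3 * n) // 4]      # rough quartiles, fine for a sketch
    spread = q3 - q1
    low, high = q1 - 1.5 * spread, q3 + 1.5 * spread
    return [x for x in values if x < low or x > high]

# One dot sits way off at the edge of the "graph"; the rule happily discards it.
print(iqr_outliers([98, 101, 103, 99, 100, 102, 97, 171]))   # -> [171]
```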

And even if the primary source wasn't tampered with, is it accurate? There's a practice in the scientific orthodoxy that has always pissed me off, because it's ridiculously irresponsible. That practice is only publishing positive results. Again, it makes sense on the surface, because why are you publishing something if you found nothing? Precisely because you found nothing. Why did someone else find something? What method did you use that screwed it up? Why are we throwing this away?! It's as valuable as, if not more valuable than, a positive result, because we have no idea how many times we did something before it worked. How many unpublished papers lie discarded somewhere because they didn't prove anything, except that the results of another paper, somewhere else, failed to replicate? A negative result is a result, and I wish the scientific community would acknowledge them more actively.

If you're interested in this issue, then I encourage you to look into it, because it's going to be an important criticism of the ancient orthodoxy in the coming years as papers get cheaper to publish and distribute. It also means that scientists -and people in general- are going to have to be more careful about what they accept offhand. Don't worry. It's not all meaningless. Often, famous papers will be discredited if no one else can replicate them, but I still think we can learn a lot from what doesn't work out. Science is… a journey. Our Second Great Trial, after not killing ourselves off.

Let's bring this back around to games and subjective-objectivity now. That barb about game review scores wasn't thrown in haphazardly. Review scores are interesting, but they aren't definitive. The reason I say this is because this last year saw a lot of crap being slung about games not getting perfect scores from reviewers. I think I even wrote something about GTA5 not getting a perfect review on GameSpot and the complaints that followed. Some people take these things very seriously, especially people who run gaming companies, but we have a small advantage over them. We -can- not give a shit.

Seriously, if you like a game, and it gets a shitty score, remember that it's essentially meaningless. Even MetaCritic, one of my favourite game-review sites, suffers from the fact that a relatively small number of reviews are collected. Of course, it's an improvement over the old days when review scores were presided over, almost solely, by gaming magazines and V&A Top Ten. *shudder* That being said, your views might not be represented. For years, we thought the world was round and boy-bands were cool, but we just weren't looking at them from the right angle. The metric we were using to measure their value, their popularity, was a poor one. But it worked at the time, so who's to say?

What do you do, then? Do you start a review blog to compete with Trivial Punk? You choose your opponents well, but what metrics are you going to use to judge a game's quality? A vaguely positive or negative feeling related to a number somewhere between 1-3 or 8-10? Do you charge people money for good reviews? (That joke was topical a few years ago. Checking it off my list…) Even so, what standards do you hold the game to? Cinematic? Literary? Engagement? A hodge-podgy spectrum? Some sort of superior inter-internet dialectic about quality? You could take any one of these perspectives and be absolutely right, if that's what you're looking for. For example, Ebert is a film critic you might know from his pairing with Siskel, and, for a while, there was a big fuss made when he said he didn't believe that games would ever be art. To which I respond, in my heart-of-hearts, that he's a fool. But, in my more diplomatic public persona, and in my brain, where I keep all the thoughts that aren't related to Power Rangers, a vague childish need for approval or how much I love chocolate, I recognize that they might not seem that way to someone with very different, highly refined (Read: specific) tastes.

Ebert and I don't have to agree. In the money-, influence- and power-driven world of The Entertainment Industry, things like review scores and "objective" measures can make a real difference. And, yes, that can trickle down and affect me. We may never see another Banjo-Kazooie game, because a lack of popularity leads to a lack of influence and sales, and a lack of money. Therefore, no company is around to make it. Popular I.P.s, meanwhile, can see a loaded fuck-train of new releases. See Call of Duty for that particular example. That could be something to complain about if these titles really upset you. But, is giving a CoD game a low review score because it's unoriginal any more objective than giving it a high one because it's a technical masterpiece? Is the solution to just average the two the way MetaCritic does? Yes, that makes a lot of sense, but don't consider it to be true any more than anything else I've addressed.
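To make the "just average the two" question concrete: MetaCritic describes its score as a weighted average, but the weights aren't public, so the outlets and numbers in this little Python sketch are entirely invented. The point is only that the choice of weights is itself a subjective call:

```python
# Hypothetical review scores (out of 100) and equally hypothetical weights.
reviews = {
    "Outlet A": (95, 1.5),   # "trusted" outlet, counted more heavily
    "Outlet B": (60, 1.0),
    "Outlet C": (80, 0.5),   # small blog, counted less
}

plain_mean = sum(score for score, _ in reviews.values()) / len(reviews)
weighted_mean = (
    sum(score * weight for score, weight in reviews.values())
    / sum(weight for _, weight in reviews.values())
)

print(f"plain: {plain_mean:.1f}, weighted: {weighted_mean:.1f}")   # 78.3 vs 80.8
```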

Finally, let’s get back to how you should score your games. You can use a score, because people understand those very intuitively. A high one is good; a low one is bad. Sums up your feelings pretty quickly, and there’s nothing wrong with using it, providing you explain yourself. From there, people can harp on about them all they want, but, as long as you’ve explained yourself, they’re just being lazy. You could do review scores the way I do them. Take two metaphors and compare them. The content of the metaphors tells you about the experience I had playing it, while the comparison between the two tells you what I thought of its overall quality. Think of them as feely-fractions. Or, you could come up with your own system. Use bar-graphs or whatever. As long as people understand what you mean, then it won’t be a problem.

And that’s really the important thing. If you know that the people you’re talking to understand what you’re saying, then you’re in the clear. Electrical engineers have had the positive and negative symbols flipped on their schematics for untold years, but it’s a convention, so people understand it. We’re not as monolithic a culture as we once were, and, to some people, especially untrained engineers working with schematics, this can be pretty shocking. But, as long as we approach the world with an understanding of subjective-objectivity, we won’t be that upset when someone gives us an 8/10.

And if none of that convinces you of the pervasive nature of subjectivity, then consider this: For the concepts of Good and Evil to exist, you need some sort of Manichaean universe. For something to even just BE good, you can't be talking to a relativist. And, if you want them to concede that something might be better, then they can't be a nihilist. The things we are willing to accept as true or valid are weird, and they have a far larger emotional component than we're often willing to admit.

Objectivity exists on a scale, and it’s usually purpose-built. We’re only able to be objective because we became aware of how subjective we were being. Yet, Awareness doesn’t cancel out Subjectivity any more than having someone call you on your argumentative strategy makes you wrong. Unless you were being fallacious on purpose to make a point. Then again, they could be doing the same thing… I’m going to work through this… and when I do, I’ll see you on the other side.
