Remember Me: Human Revolution, Part 2

Posted in All the Things, Everything Else on April 16, 2014 by trivialpunk

Hey! It’s been a long time again! Long enough that the “Remember Me” portion of the title holds a lovely double-meaning. This is where we normally do a little house-keeping and then move on to the article, but there’s way more house-keeping than article this week, so we’re going to do things in the opposite order, and that will make sense in a minute.

One of the biggest downsides to writing two-part articles separately is that you never know what will strike between setting down one collection of type and the next. It’s the risk we take in indulging serialized media. I’m sure we all know what it’s like to have something simply cease (#Firefly Feels). But, that’s the dread truth of life, isn’t it? The systems, patterns and truths we use to live our daily lives can betray us at any moment. Not that they will, but they can. And we have to live on in spite of that sonorous unknown. We have to, and we do.

Yet, even benign undulations affect us, which brings us nicely back to where we should have been all along: Google. Search engines are a pretty fascinating element of the internet, because they seem to exist as a clear window of exploration. But, even that’s a clever trick. “What trick?” you might ask. The trick of appearing unbiased and helpful while still encouraging a homogeneity of thought. Whether that’s good or bad, I’ll leave to the scholars.

Don’t get me wrong, I’m not about to go off on some paranoid tangent about Big Brother and the NSA. The internet does that enough on its own without me gracelessly adding to it. Believe me, if I had something interesting to say, I’d be all over it, but think of what we’re discussing here as a technology that is related to the algorithm that lets someone build a psychological profile from your browsing history. I’ve beaten around the bush enough, so let’s make sure the question that leads to the actual discussion is a mirrored crystal: How does Google decide the order and availability of search results, and how does that affect knowledge distribution, as well as understanding?

This question, and those related to it, are part of a growing sector of academia called the digital humanities. The Humanities, if you’re not familiar with them, are all areas of study that are related to humans, culture and history. This includes Psychology, English, Sociology, History, Anthropology, etc. The Digital Humanities is a sub-set of the Humanities that focuses on how the advent of the computer is changing the ways we understand things. The information sources we rely on. The mediators through which we experience culture. The gate-keepers that control the flow of information that informs our daily life.

Because, if we could be said to have experienced an Apocalypse in the last few decades, then it’s definitely the Rise of the Machines. Computers, obviously. You, as you are right now, sitting in your chair in front of a high-def computer-screen with a touch-phone in arm’s reach, have probably grown up with the idea of computers and the internet, but it’s a serious historical game-changer. Or rather, it could be, but that depends on how we use it. Completing the circle of blather and bringing us back to…

Google is the most popular search engine on the internet. It’s capable of delivering millions of search results in a matter of seconds. It organizes those results based on your prior viewing history, anticipating what you’re probably looking for. Typing in “Titan?” If you’re an anime fan, then you’ll probably see “Attack On Titan” as an option. More of a Final Fantasy 14 aficionado? You’ll probably find an entry or two about his boss fight, if not get an outright suggestion in your search box as you’re typing. Then again, there’s always the Titan missile. A Wiki page on The Titan Rocket family (Yes, I’m Googling as I type…). The Teen Titans. The Titans of ancient legend. Personally, the first thing that popped up for me was Titan, the moon.

But, remember, it’s not just the things you’re most likely to enjoy. Google uses a process that orders your search results based on the viewing habits of other people. That’s why Wikipedia is usually one of the first results you get: everyone goes there. This is a pretty neat system, but it’s got a few obvious problems. The first is that it encourages group-think by increasing the chances that we’ll all be seeing the same information. Which sounds bad, but it also makes it easier for us to communicate, because we’re more likely to have seen the same entries, so we might have some common ground. Just think about the sheer legitimacy and authority that being the first search result lends you.
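
If you want to see the shape of that idea in miniature, here’s a toy sketch. To be crystal clear: this is not Google’s actual algorithm (that’s a closely guarded, many-signal beast); it’s a made-up, two-signal blend that just shows how a global-popularity bias and a personal-history bias combine to crown “the first result.”

```python
# Toy illustration only -- not Google's real ranking algorithm.
# Assumed model: each page has a global popularity score (everyone's
# habits) and each user has a personal affinity per topic (your habits).
def rank_results(pages, personal_history, popularity_weight=0.7):
    """Order pages by a blend of global popularity and personal habit."""
    def score(page):
        global_score = page["popularity"]                           # everyone
        personal_score = personal_history.get(page["topic"], 0.0)   # you
        return popularity_weight * global_score + (1 - popularity_weight) * personal_score
    return sorted(pages, key=score, reverse=True)

pages = [
    {"title": "Titan (moon)", "topic": "astronomy", "popularity": 0.8},
    {"title": "Attack on Titan", "topic": "anime", "popularity": 0.9},
    {"title": "Titan missile", "topic": "rockets", "popularity": 0.4},
]
# An anime fan's history nudges "Attack on Titan" up; everyone else falls
# back on raw popularity -- which is why we all see the same page one.
for page in rank_results(pages, {"anime": 1.0}):
    print(page["title"])
```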

Of course, it takes a lot of views and a lot of traffic to become the very best. So, if you’re peddling grass-grazer gas, you’re probably not going to get there. It’s a sort of quality-control thing, but it’s not perfect. Then again, what is? It can certainly be manipulated. When I’m working as a freelancer, one of the primary things people worry about is SEO (Search-Engine Optimization). This basically means writing the article such that it’s more likely to be picked out by a search algorithm based on its content. Basically, they want mimetic density in the first paragraph. (If you knew how much I was over-simplifying, you might want an apology. If you do know, then sorry!) If you’re writing about sausages, you’ve gotta make sure you squeeze every sausage-related tid-bit that might be important to the sausage community into the article. Even your jokes should be sausage-based. Pro: Top results are more likely to be related to your topic. Con: It could just be pandering propaganda.
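
Since we’re here, this is roughly what that “mimetic density” business cashes out to: keyword density. The sketch below is deliberately naive, my own invention for illustration, and nobody’s actual SEO scorer:

```python
# A naive keyword-density check -- a sketch of the idea, not real SEO.
def keyword_density(paragraph, keyword):
    """Fraction of words in the paragraph that contain the keyword."""
    words = paragraph.lower().split()
    if not words:
        return 0.0
    hits = sum(1 for word in words if keyword in word)  # catches "sausages", "sausage-maker"
    return hits / len(words)

opening = ("Sausages are the best sausages when the sausage-maker "
           "grinds the sausage fresh.")
print(f"{keyword_density(opening, 'sausage'):.0%} of the words are sausage-words")
```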

Again, though, it’s hard for pandering bollocks to hold a slot. Eventually, the popular, the informative and the useful rise to the top. And, it’s not like there’s going to be a standard search-result screen; it’s an active, adaptive process. Come on, admit it, Google is elegant as balls.

So, now that we’re on the same page, you can see how easy it would be for Google to alter the course of human understanding. Hell, even Wikipedia could become a hive of scum and questionable sources. We rely on their integrity and the work of a bunch of anonymous fact-checkers to ensure that the information that guides large portions of the digital world is reliable. We’ve all learned to be sceptical, though, right? You always follow the blue links at the bottom of the Wiki article to make sure it’s not just bollocks? Because, I sure don’t.

So, to be 100% clear, Google is controlling your mind by showing you: the information you already know you want, and therefore already agree with, and exactly what everyone else is looking at. It’s a truism that no one looks past the first page of search results, but it could also be reality. The real question is: do we use Google because we trust them or do we trust Google because we use them? It’s the main problem with having one source of information or a Total Institution. Churches, Schools, Prison systems, Legal Systems, they all face this reality with greater or lesser denial and greater or lesser concern.

Come on, though, it’s not like you only get information through Google. And it’s not like you can’t trust them. I do, wholeheartedly, but I know how easy it is for me to simply accept the information and understanding that I’m given, so I’m wary. It’s even easier if you think you’re the one that found the information in the first place. Remember, though, you didn’t: Google did.

Where is all this unhealthy scepticism leading us? Right back to the two games we were discussing in the first post. The reason that a search engine needs to be subjected to that level of scrutiny is that it is the tool you use to explore. To perceive. On the internet, Google is like one part of your sensory system. It feeds you information that you can use to direct your behaviour. It also functions as an extension of your body, as a literal tool, because the functions you can perform are restricted and enabled by the engine. Weird, right? Let’s go further.

Now, say, instead of Google, we were talking about a set of prosthetic legs from Deus Ex: Human Revolution. And, let’s say it’s a bit after the game, so we’ve gotten a little creative with our designs. Now, instead of your flesh-sticks, you’re rocking a set of quadruped robot-legs! Pretty cool, right? It’s not all walking up buildings and winning three-legged races, though. The change is going to fundamentally alter how you understand your ability to shimmy. The design of the legs themselves, the way they bend, their length, size and weight, are going to affect how you can move with them. As a result, your understanding of your body and your relationship to the environment will change.

Bringing it all together… What about your mental environment? If you’ll remember, last time, we talked a bit about mental augmentation, storing memories digitally and increasing processing power by fusing metal and flesh. Now, the issue at hand. Take what we said about Google and apply it to your memories. Your brain. Your understanding of everything you are and will be. Make one memory salient and bring someone to their knees. Make the memories of a sickness seem to last too long and you can shatter someone’s resolve. Associate stomach flu with a food, then BAM! Instant life-long distaste. It goes without saying, then, that the search protocols we use in supplemental prosthetic memory must be under even greater scrutiny.

I can’t stress this super obvious, but easy to underestimate, idea enough: your brain and body enable and constrain everything you are. The tools we use, both mental and physical, refine those abilities even further. From those copious refinements comes a multiplicity of interactions that create you. Change any one of those elements and you’ll change the end result. When we start augmenting people, we’ll start changing what we consider to be human.

[Image: Deus Ex: Human Revolution wallpaper]

Okay, stop, I’m going to break the 4th wall even harder, right now. This is all I can really remember from the original post I had prepared. However, looking back, I can see I put a couple points in the wrap-up that I promised to cover, so I’m going to do that, now. But, I’m going to talk to you in my own voice, instead of Triv’s. I know it’s a small difference, but there is one. Trivial Punk can’t discuss Trivial Punk with you, and that’s what I want to do. But first…

Procedural memories are those you use to perform actions and tasks. Think of your procedural memory as an instruction booklet for your motor systems. There are two things that immediately shake out from this concept, based on our discussion here. 1: Changing your procedural memory will change how you perform actions. If you could hack into that digitally, we’d have… issues. 2: More applicable to robotic augmentation: if you change the tools used to perform the actions, then your procedural memory will develop very differently. Obvious again, so let’s go deeper. What do we use procedural memory to do besides just perform actions? Inform understanding.

We have a complex empathy system that lets us try to guess at the intentions and motivations of the people and things around us. We know when people are happy, because they smile, but we all know that smiles can mean different things under different circumstances, and we know that, because we’ve smiled. We’ve smiled under hundreds of different circumstances in many different ways. But, what if you didn’t have a mouth? How would you interpret a nervous foot-tap if you didn’t have a foot? Or legs? Or a body? Don’t worry, you definitely can, but you might experience the interpretation of the gesture differently. The memories you use to do the interpretation might be more symbolic than kinaesthetic. Obviously, I’m in full speculation mode, at this point.

In the coming generation, we’ll have to start talking about the strictures of the human form as we begin to download minds, or human-like minds, into currently inhuman forms. That’s enough to consider on its own. Usually, this whole piece would come down to questions of whether or not they’re human. I don’t have that question; I don’t really care if they’re human or not. If they have minds, then we should unite around that commonality. But, you know, xenophobes will be xenophobes. And maybe they won’t care about us silly flesh-babies, but that’s another future to be decided on at a later date. Anyways, the question I’m eager to ask is: should we place any restrictions on form?

Your first answer would usually be, “Of course not!” That was mine, too, until I really thought about it. If we extend those potential-minds people-privileges (and we DAMN-WELL BETTER! No, seriously, let’s try to never be slavery-genocide stupid ever AGAIN! That’s going to be the social problem of that age, you heard it here 4,940,342nd!), but we can still control their physical form, then how far are we from slavery? No, seriously, if you give something a mind, but only give it the ability to Roomba around a house, then you’re purposely constraining its physical form in order to ensure that it does your chores. It’s a topic to approach carefully; those are some serious future-crimes.

But, okay, maybe you don’t care about robots, so I’ll leave you with this culmination of these ideas. The human brain is a remarkable processor. I actually don’t know the words to make you realize how amazing it is. It contains a universe that actively understands itself based on electrical signals. Seriously, take that all in; now look at yourself… your existence is amazing. Moving on, have you considered the computational abilities of your brain, apart from when it’s actually doing mathematics? Each of us is a phenomenal information processor. You know where I’m going with this. You could take the brain of a developing human, a child, and insert it into a system à la The Matrix, but instead of harnessing bio-energy like an inefficient newb, you control the brain’s environment such that the brain develops into a pure information processor.

Between the man-machine interface we discussed last post and this post’s ramblings about how your physical form affects your mental understandings, can you see the rows and rows of bodies? You don’t need the skin if they’re in a tube. You just need to keep the brain nourished and the nerves alive. Or, maybe you don’t. Maybe you just need it functioning and you’re controlling the nerve-inputs, because they’re inefficient. Plato be damned, brains in vats could represent more computing power than currently exists, if you could access it properly.

Or worse, you could end up a helpless, motionless tube that feels and digests food for another set of organisms, depending on how inefficiently we’re running this thing. So, be glad of your body! Never let anyone tell you that your body is your humanity. And always accept augments from strangers.

Okay, let’s get to that house-cleaning now that there’s a way longer post than I intended to write in front of it!

As you might know, I went on a little hiatus to knock my head-brain around a bit. I’m trying to figure out what to do with my life. I want to write stories. I want to make short movies and silly letsplays. There are so many ideas that I haven’t dared try, yet. You know, standard creativity-dreams. However, I’m also kind of a science-nerd, in some respects. Academia could provide me a life doing the thing I find most interesting: thinking. This year, I’ve got to make a decision about where I begin my future. And, as of right now, Trivial Punk isn’t shaping up into a career.

Don’t get me wrong, I do this because I love it. And I’m taking the chance to keep doing it, because I want to make interesting ideas and the primal grips of fantasy the central conceits of my life. I want to encourage people to dream, because the future is forward. It’s up in space. It’s beautiful, terrifying ideas about the nature of our developing consciousness. It’s equality as a given, not something we have to continually fight for. It’s finding humanity in the post-human and hope in our most crushing defeats. And if I write nothing more for the rest of my life, or if you read not one screed, not one iota, more of my writing, then, please, accept this final plea, “Don’t give up on what we can be.”

In the spirit of that idea, I’m going to spend the summer working on videos, articles, letsplays and the like with my friends. We’re still learning, but we’ll put all we have into it. All I ask is that you share the things you like. I’m one person; I can’t do much on my own. But, if each single person does just a tiny bit, then we can accomplish great things. Or plug our work, either one. I don’t like talking, or even thinking, in a mercenary fashion. I don’t enjoy self-promotion, unless it’s too true to be manipulative. So, that’s what I’m doing here. Please, share the stuff you like so that I can do this with my life.

Regardless, I’ll still finish the novel I’m writing. We’ll still make videos and put in the time. It’s up to us to make this work. But, let’s be honest, all my work means nothing if nobody sees it. That’s it, cards on the table, honest. Any number of things could change in a given life, so who knows if any of this will hold true in the long run. Literally no one can say. But, it will always be helpful to share the things you like on the internet.

I’m sure you’ll notice some new stuff in the next couple of weeks. Here are a couple of new things I’m trying; let me know what you think!

Dead Space Letsplay – Trying something a bit different with this one. Let me know what you think! Did you have enough time to see them all? Were they in the way? Entertaining? Funny? Existentially disturbing?

Doom 3: BFG Edition Letsplay – Simple and sweeeeet.

Dark Souls Letsplay Turbo Preview – The best and fastest way to see us die over and over again.

That’s it for today! Thanks for reading and I’ll see you on the other side.

Dead Men Tell No Tales; They Drop Beats

Posted in All the Things, Everything Else on February 21, 2014 by trivialpunk

I know you were expecting a Part 2 for Remember Me: Human Revolution, and you’re going to get it. But, not today, and you’ll shortly understand why. You see, among the things stewing in my brain is an ongoing fascination with our digital lives. There are so many aspects to it that aren’t readily apparent. For instance: a lot of the things we say hang around longer than we realize. Pictures we put up, videos we up-load. Comments we make. At one point, I almost lost a job over a frustrated post about bureaucracy in the work-place. Many people have lost jobs over exactly that sort of thing. There are many famous cases, you can look them up at your leisure, but just take my word if you’re not privy to it all. (If you want phrases to Google, go with “teacher fired for” or “woman fired for” and you’ll find your controversy)

Just the other week, I watched a Twitter snafu develop between two individuals over a new comment on an old, old Tweet. Totally understandable, I’m not here to judge one way or t’other, but I will note that it has to be incredibly jarring having someone comment on something from over three years ago, especially if social media is second nature to you. You get used to just tossing thoughts into the aether when you think they’re worth sharing. So, this situation would be as if you were approached in the street by a stranger and questioned about an off-hand comment you made in a bar one time, word-for-word. I’m not sure if we have any etiquette for that sort of thing, but we might consider starting a conversation about it somewhere. Or, joining in on the on-going one, whatever’s your style.

That right there was going to be the heart of a discussion I was going to post next week about how permanent a lot of the stuff we throw into the annals becomes when computers are involved. There are pictures of me drinking absinthe in a questionably smokey room floating around on-line somewhere. Not incriminating in the least, but if I were to be competing for an important office, you know someone would dig it up. There’s more out there than your voluntarily posted stuff. Google Search Histories, old e-mails, posts in random forums from when you were 14, off-handed remarks from the early days when Godwin’s Law was hammered out… Some of it’s still out there.

We don’t tell children this enough; we don’t tell adults this enough. The internet being a fairly new thing, it’s hard to tell exactly what the long-term ramifications are going to be for our culture and, quite honestly, our very being. How you understand yourself is morphing as you use the internet. Don’t worry, your understanding of yourself morphs all the time, so it’s nothing new. But, it is important that we consider how we’re changing, not just acknowledge that it’s happening.

All this would have been fascinating conversation fodder, but something else came up just last week. A good friend of mine, an incredibly creative, unquestionably kind man, died of a gun-shot wound outside of the town where we met for the first time, many years ago. Now, I have to approach this thing from another direction, because I can’t think of anything else. I’m going to try to keep this light so it can escape the black hole that my heart has become. That’s not to say I don’t have one; it’s just pretty heavy right now. Bah-ZING! Don’t go off on me about density and weight being different things; now’s not the time to look for the truth in puns.

I discovered that this terrible tragedy had occurred via the internet. In fact, we had done most of our interacting through the internet over the past few years. We’d both moved around a lot and only saw each other about once a year. Your standard long-term Facebook Friendship. You know, the people you keep around because they’re too precious to lose, but your life is too insane to comfortably accommodate. I went to his page, because I felt I had to confirm what had happened. It was immediately strange; it held the muted calm of a graveyard in brightly lit text-boxes. It had become a shrine.

The life that my friend had lived had come to pay its respects. Pictures and videos poured in from people that had known him. They posted their favourite memories and said their farewells. It was touching. I kept scrolling, a tear fighting its way through the mounds of cynicism and unreality that were protecting me from realizing that he was actually gone. That is, until I scrolled down far enough to reach his final post. It was a link, posted two days before his death, to an album he’d just uploaded to sendspace. I finally felt his death with his final post, and, yes, that’s the link to the album. Dead men tell no tales, I’ve realized; dead men drop beats.

It made me reflect for a minute on all of the things we leave behind. The great works left for us by fantastic masters of craft. Lives bottled in miniature. Books that let us see with fresh eyes the days long past. Music that sums up swathes of human history. Movies that hold in a suspended mixture the essence of a moment. Games, like SH2, that let me stalk the streets of my fondest memories. Each life leaves something behind, though, even if it never leaves a discernible mark on our social fabric. I know it can be hard feeling heard on this massive planet, but we do have voices. And, now, we all leave echoes.

What happens to our Facebook accounts when we die? How long should they be maintained? Do we pay to preserve them like a memorial in a graveyard? More disturbing… to me, my friend’s life was revealed through an on-line profile that came to represent him. Now, he’s gone, but the profile remains. His life, to me, at least, has left a distinct stamp that I can still access. Everything about it for the last year has been shared with me digitally. All the information about his life and death was represented in a similar manner. I have no proof of that life one way or the other, except his Facebook profile. Which says to me that it’s a pretty important part of life.

How many MySpace pages exist in this fashion? How many memorial pages persist in the annals of Facebook’s digital library? What is Facebook’s responsibility here? What do we do with those pages, as a people? On the one hand, they’re a disturbingly visceral reminder of the past, like having a corpse you can visit from your screen. Unmoving, dead. On the other, it’s alive with the memories of the people who can visit it. Because, when someone has moved on from this life, a digital symbol is as valid a representation as a physical one, albeit, arguably less sacred. Don’t believe me? You have on-line banking, don’t you? You should really get it.  Memories locked in vines? True enough for me.

If there’s this much of you left on the internet, the record of your life, and, if the internet can destroy something as fundamental as your livelihood with its digital memory, then we must conclude that the internet, and Facebook in particular, is an integral part of your life. But, it is unlike any other part of your life. It leaves a trail of the changes you make, unless you unmake them. Information that you’ve completely forgotten existed is in an old e-mail somewhere or on a weird back-woods forum. How do we deal with the implications of this? Do we put a statute of limitations on these things? You change a lot in five years, but the words you wrote then are just as fresh and accessible as they’ve ever been. Albeit, inconvenient to find, unless stumbled upon accidentally. Which happens, because internet.

People discover old works by artists that were posted over five years ago, and, if questioned, how should the artist be approached? That’s a long time in a life, really. Hell, when I was 13, I thought I was assembling a harem. As it turns out, I was just cheating on a load of people at once. But, if you questioned me, now that I have a brain more in control than my testicles, I would say that I acted like a complete prick. I can’t go back and change that, or anything that shook out of that, but I’m a better person now. Although, if we’re being 100% honest, I’m going to leave you doubting that that story ever happened. There, that’ll do it. Let’s say it did, though.

If I was doing it on-line, there’d be a record of it. People could come by later, see it for the first time, and judge me all over again. They don’t know me; they haven’t been privy to my life so far. As far as they know, today, I’m a pimp in Paris dealing cards at a mob game. Worse still, what if I was being rhetorical? Or joking? Taken out of context, the Sochi Problems meme seems really arrogant. Just first-world people complaining about first-world problems. What about the people who have to live under these conditions? What about the lives that go on within Russia that have to deal with these problems first-hand? Tainted drinking water isn’t a first-world problem; it’s an everyone problem.

Then you realize that that’s the purpose of the Sochi Problems meme. How do you get people to pay attention to something on the internet? You make fun of it. Make it cute, marketable and readily understandable, and hope it catches on. With the attention of the global community focused on Sochi, it was going to catch on. Russia is a country that’s a massive player on the world-stage, and maybe we need to pay it more attention than we have been.

And that’s what I’m kind of doing here. Reminding us to pay attention. My friend was a really good person. To me. To everyone I’ve ever seen him interact with. I don’t know what happened to him exactly. I don’t know what his life has been like for the past few years. All I have are the things I saw and the life I let slip by me. Maybe I should have spent a little more time with him, but that’s a regret we always have. So, I put forth, maybe we should concern ourselves with the physical problems masked by the Sochi Problems. I don’t know; I’m not a politician.

What I do know is that the life we create on-line is non-trivial. Even a punk like me leaves a mark, but what happens when I’m gone? How much of me is even real to you? With over a hundred posts on this site, you could, theoretically, re-live my entire blog for a long time before you ever had to face the fact that I wouldn’t be writing any more. I may be here, right now, typing in a room, but there’s a lot of me out there, too. To you, that might be all I am. Surely, each line is infused with a bit of my essence at the time it is written, but I am not that man any more than I’m still the child who soiled his diapers. What happens when those servers go dead? Do I die with them? No, but what if I’m already gone? Do I live in those pages or just in the memories they evoke? For how long?

Do I die with my history? Does my history die with me? Does silencing my history kill a portion of the living person? If you destroyed my Facebook account, I would be diminished. If you trashed Trivial Punk, I’d lose all I had written down. Those memories would die in your head and in mine. The difference being, those memories are MY history. These are strange questions that we must consider. Or, you could let them pass you by, but they will affect your life. Might as well join in the conversation.

R.I.P. Dear Friend. I’ll see you on the other side.

Remember Me: Human Revolution, Part 1

Posted in All the Things on February 11, 2014 by trivialpunk

There are things we just don’t talk about. And, no, I don’t mean the things we here in Canada would rather you didn’t know about us. I mean, there are things we don’t often associate with each other outside of their respective genres and fan-boys, like horror and science fiction. Yet, these two things are inextricably linked. They can tell different parts of the same story. Any given tale is interwoven with opportunities for genre play. For example: A man murdering his wife. That’s a little SH2, but what site are you reading this on, really?

The portion involving their meeting is a romantic love-story. The trials and tribulations of their everyday lives are a slice-of-life Woody-Allen movie. The story leading up to the murder is a psychological thriller. The murder is a horror story. Then, CSI procedurals mop up. I don’t know whether that would be an awesome movie or just a play on what Simon Pegg and the gang have been accomplishing since Spaced. Either way, depending on what you focus on, the narrative can flip-flop in any direction.

The same goes for games. Without Master Chief, the Halo universe would be a terrifying sci-fi story. Half-Life would be an experiments-gone-wrong extinction story. (And given HL2, I don’t think it turned out peachy, even with the intervention) Because, the strange, the unknown and the apocalyptic are pretty terrifying in the right context. I’m sure we’ve all looked up at the heavens and had that existential moment of ego-shattering obliteration while considering the vastness of forever. Looking up and thinking, “I don’t matter in the slightest,” is rarely life-affirming and hardly true. Were you really expecting nothingness to care? That’s a bit much to ask of the infinite cosmos, regardless of what’s in it.

So, that brings us back to where we started. Science Fiction and horror have had a long-standing relationship. Horror is usually the place where someone can make a political statement or satirically skewer society in a safe, bloody space, while sci-fi eschews the blood for white and chrome. Think of Frankenstein, Aliens, Jason X… no, I’m kidding, but these represent very real fears that bubble below the surface. Frankenstein makes us question the very nature of life and forces us to confront the animal within when the farmers come at it with pitchforks. Aliens is a little more post-industrial, but it’s not a coincidence that it shot into prominence during The Space Age. Are we all on board with this?

Good, because we’re going in a direction that might seem a little weird at first, but it makes perfect sense once I’ve explained a bit more: Neuro-ethics. You see, I study psychology (But, you probably knew that by now). As a result, I have to try to keep up-to-date on all of the weirdest things we’re learning about and thinking about people. That helps tremendously, because, as I’ve already suggested, new shit is perfect fodder for scary shit. …that’s the basic idea. But, before it becomes condensed into thematic form, it has to pass through a barrier of general awareness and social unease. Let’s start with the awareness.

Most horrors begin their lives as social issues. Zombies: consumerism. Vampires: class system. Were-wolves: puberty… and the animal fury that bubbles below the surface of humanity. Remember Me: identity. Plot-twist: That’s right, we’re talking about Remember Me. And, what the hell, a little Deus Ex: Human Revolution, as well. You see, I firmly believe that bio-ethics and neuro-ethics are going to be at the heart of some pretty important up-coming debates. As a result, they’re going to see a lot of play in horror circles. Might as well be ready now.

[Image: Remember Me art book cover]

Yes, that’s the art book, but it’s a little less, “Look at how nice my butt is!” than the game-case cover. Come on, that’s not even subtle, guys. There’s even a little protrusion from the title that points to it. These are thoroughly designed products; you can’t tell me that wasn’t on purpose. Okay, you could, and I would believe you. Anyways, Remember Me’s most applicable-to-this mechanic is one where you go into someone’s brain and re-write their memories so that they remember an event differently. And if that was it, we’d have stuff to talk about until your eyelids drooped far enough to cover your nostrils, causing you to develop a hitherto-unknown snoring problem. But, there’s more.

In the game, at least, this power re-writes how people understand and approach you. Let’s talk a bit about why this wouldn’t work using an example, and then have a little “fun.” The first time you use this power, you’re reaching into someone’s memory of a medical procedure. Then, you change a few details about the memory and completely alter the outcome. In this case, a bounty-hunter is on your ass to claim a reward that she’s going to use to pay for her husband’s memory treatments. Yeah, it’s a whole thing. Just roll with it. Just before she’s about to kill you (Nillin), you change her memory so that her husband was killed during the procedure, rendering her motivation for killing you moot.

From there, she starts to make decisions based on that new memory. Now, she’s here to recruit you to help her take down the corporation that is responsible for the memory tech that drives the plot. This all makes sense, until you really look at it. And it becomes the defining difference between human memory and this conceptualization of how memory works. That’s why we’re tearing it apart here; it’s not JUST because I’m churlish.

First of all, you alter people’s memories by changing the memory’s details. Which, on its face, is ridiculous. How many memories do you have with explicit, unimportant details? We don’t remember everything. We remember important things and things that grab our attention. For instance, you can undo her husband’s mask in the memory. This causes him to start speaking. So, I ask you, where is the new dialogue coming from? Why would the assassin have a specific memory for the state of the mask?

Let me explain why that’s weird. When you remember things, most often, you take a lot of stuff for granted. Sounds bad, but it really isn’t. If you attended (paid attention) to everything you saw, then you’d be incapable of processing it quickly enough to react to it. A keyboard is a complicated device, but you can conjure one to mind. If you looked at one off-handedly, it would take you a while to notice if the I and the L keys had been swapped. However, you would notice if home-row was missing, because that’s a pretty glaring departure from a standard keyboard. If it was half squid, you’d start to wonder if it was a keyboard at all. This is part of the schema theory approach to memory. You have a schema(tic) for the general attributes of an object, from the essential to the out-of-place. This forms part of your long-term declarative memory. Your memory for general things.

BUT, as a sub-set of declarative memory, you have semantic and episodic memory. Semantic: facts you know about stuff; Episodic: things you remember that happened. So, your keyboard is both a keyboard and your particular keyboard. You know that E is beside R because of semantic memory. You know that the R key is sticky because of episodic memory. This is where things get tricky, though, because your keyboard becoming sticky is in both your semantic and episodic memories. You know “your keyboard is sticky” as a concept in semantic memory that’s attached to your particular keyboard. You also remember spilling soda on it, so you know why it’s sticky. Making sense so far? This is all standard, and this is why things usually work out without you noticing.

What if you stuck your hand in there (i.e., your brain) and started fumbling around, though? Well, if you changed your memory of the spillage, you’d still have the pesky memory, and even peskier proof, that your keyboard is sticky. You wouldn’t know why, though. This is where things get interesting. You’ll start trying to figure out why it’s sticky. You’ll make attributions to other spills, wonder if it’s just getting old… You’ll try to explain the dissonance, or you’ll just forget that there is one.
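
If it helps, here’s the keyboard example as a toy bit of bookkeeping. This is emphatically not how a brain stores anything; it’s just the semantic/episodic split from above, written down so you can watch the dissonance appear when the episodic cause is deleted and the semantic fact is left behind:

```python
# Toy model of the keyboard example -- not a real cognitive architecture.
semantic_memory = {"my keyboard is sticky"}   # facts you know
episodic_memory = {                           # events that explain them
    "my keyboard is sticky": "I spilled soda on it",
}

def recall_why(fact):
    """Try to back a semantic fact with the episodic memory behind it."""
    return episodic_memory.get(fact)

print(recall_why("my keyboard is sticky"))  # -> "I spilled soda on it"

# A Remember-Me-style edit: delete the spill, leave the fact alone.
del episodic_memory["my keyboard is sticky"]
print(recall_why("my keyboard is sticky"))  # -> None: sticky, but no idea why
```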

Let’s get off your keyboard for two reasons: to let you type and because assassin memory-remixes are more interesting. (Sorry!) Okay, we’ve altered her memory. Now, she remembers that he’s dead, but she also knows that she came here for some Nillin Killin’. Or, at least, came here for Nillin. In the game, she wants to take down the corp, as I mentioned, because of her husband’s death. But, what about the last little while, during which she’s been planning on killing Nillin? You don’t just waltz into a place and abduct someone, especially when you’re executing a roof-jacking. You have to know, then, that the assassin has hours of memories where she’s sitting on a roof, envisioning the plan she’s about to execute.

Where does that memory go? Where are her feelings of grief over her husband’s death? What about her feelings of animosity towards Memorize Co.? Why are they so fresh? Why is she still depositing cheques for Memorize? Why does she feel a sense of gratitude floating in her memory towards the doctor that (saved) killed her husband? We all know that, eventually, she’ll find out that she was played, especially when the hospital bills keep rolling in, but, at this point, she’s experiencing some pretty intense cognitive dissonance.

I know what you’re thinking: can’t we create new artificial memories already? Wouldn’t that do the same thing? Sort of. Elizabeth Loftus is the name you’ll probably want to Google if you’re unfamiliar with this work. As part of an experiment, people were shown old images of themselves at the carnival when they were kids. Soon, they began generating new memories about the event and became sure that they had actually been to one. The thing is: the pictures were 100% fake. The new memories that they created were also fake, natch. How does that work?

Well, in simple terms: you know what happens at a carnival. You’ve been to a carnival, so you have those memories. It was a long time ago, so the details are going to be fuzzy anyways. There’s no reason for you to believe it’s not true. You’ve got visual proof. Often, the newly created memories would start out being related to a portion of the picture. “Oh, I remember that balloon.” or “We rode the tilt-a-whirl that day!” Seems pretty sinister. Let’s dig deeper.

How does such mad science exist in today’s modern world?! Well, it’s something your brain does. A lot of the time, your brain is generating your memories as you review them. It combines the salient points of episodic memory with the knowledge from semantic memory and makes a thing. Not quite a new memory; not really a faithful reproduction. However, you need those other memories for it to generate them. You can remember that you went on the tilt-a-whirl, but you probably won’t remember the event itself, because it was made up. You could, however, imagine the event. Or, more likely, if you’ve ever been on one, you could just make a quick memory substitution.

Let’s apply this to the assassin. She has the new memory in her head. But, does she have the memory of the physiological reactions that go with it? The game won’t let you kill people in their memories, because it wouldn’t make sense for the brain to remember itself dying, but you have memories like that. We’ve all had the “falling to our death” dream. The memory wouldn’t make sense, but neither does anything to do with the husband’s death. But, okay, you can’t new-memory your way out of being dead. Unless it set off a mind-shattering existential crisis, but that’s just me being imaginative.

And this is actually where the game makes sense. Her husband’s dead, so she goes on a mission to avenge him. She has all of these events: going to get Nillin, great personal loss (comes with being an assassin), and a strong feeling towards Memorize (TM). Her mind reconciles her position, near Nillin, with her new motivation, anger at Memorize (R), and synthesizes a memory that makes sense. She’s here to take down Memorize (RX). So, what’s my problem again? …Why, in that particular memory, does she notice what position the strap on her husband’s arm is in?

Seriously. If you’re creating a memory, and you don’t keep track of the details, then why is that detail a part of the memory? Yes, you could create an entirely new memory using the same parts, but then you’d have to erase the old memory and all those pieces associated with it. If you erase them, then you can’t use them as a reference. If you just cut them to pieces and move them around, then they’re not properly associated with the neuronal matrix that makes the whole endeavour work. The memory remixing works more like a simulation. These are the things that happened. If you changed one thing, this is how the whole situation would have played out, even the parts you were unfamiliar with, like the Quarantine protocol from the game.

But, maybe everyone in this world is familiar with that protocol by now. And maybe the memory creation power lets you hi-jack the whole mnemonic system, including the feels. With all that, there’s still the problem that that’s not how human memory works. It’s pretty robust in some ways, and it’s not nearly that logical. The only time that you might remember that the strap was undone is if it was important. You’d remember it if the patient strangled the doctor, sure, but not if nothing went wrong. Even if you reconstructed the memory in your head, you’re more likely to wonder why the strap was undone than accept that it was. Unless you already accept it, because of Memory-Magic, so let’s talk about that.

Let’s accept the game-world on its own terms now, because I’ve been forcing reality down its throat long enough to wonder where it’s getting oxygen from (And by reality, I mean theories I’ve slap-dashedly applied). Computers work based on logic. If you change one thing, then a subsequent thing is guaranteed to change. Humans, however, don’t really work that way. We all hold conflicting opinions; cognitive dissonance is the tension of holding two of them at the same time. That would make a computer go bat-nuggets. However, we also don’t do logic as well because of it. So, what if you combined the two?

It’s pretty clear that the protagonist of Remember Me is using some kind of woman-machine interface. Yet, everyone else is, too. I know that I’ve got a digital halo hanging out on my body, but it’s become increasingly obvious that the rest of the world does not. Except in the game. The ability to jack-in and alter memory implies some kind of neural infrastructure that allows it. The pop-up memory cues around the level reinforce the idea; they’re sure as hell not holograms. (ARGs. We’ll get back to that…) There is a machine component to this business, and it makes sense of the problem I brought up earlier.

If the machine is the portion responsible for re-assembling the memories, then it would do so logically, just like a simulation. Every time you recall something, your brain would work together with the computer to reconstruct it as accurately as possible. It would probably give people something approaching eidetic memory under normal circumstances. It would also explain how Nillin could so perfectly re-write  the assassin’s memory. Now, we’re back to the neuro-ethics.

We all know it would be unethical to re-write someone’s memory, but what about augmenting it? Well, if this game suggests anything, it’s that we have to be careful how we do that. Keeping any portion of declarative memory outside of the body would fundamentally change how people remember things. And, if you’re ahead of the class on this one, it would also fundamentally alter how people understand themselves, both legally and philosophically. Part of the reason you know who you are is that you remember what you’ve done, how you felt and what you wanted to do. Many of our favourite memories hang on assumptions too fundamental to question. Hooking ourselves up to a computer would force us to do just that.

This isn’t entirely theoretical, either. We can create new memories in lab rats. We’re working on ways to create augmented memories using neural implants. Google Glass is creating an augmented reality environment already. That’s going to be used to scaffold memory, too. This places us firmly in the realm of neuro-ethics. If you can alter memories, then can you sell them? Are witness statements now more reliable or more suspect? Is it appropriate to delete a memory that’s hurting someone? What’s the line between improving someone and killing them? I don’t mean that facetiously, either. If you alter someone so drastically that they’re no longer recognizable as the same person, then what happened to that other person?

Or, more sinister than any of that, what if you altered people’s memories so that they remember enjoying something they hated? That would raise all sorts of frightening implications for sexual assault. Or even murder. What if you kill someone, but then transfer all of their memories to a new body? Was that murder or the gift of a new body? “Who says you even wanted a new body? Oh, you do, now? Okay!” What if you could change people’s minds on a massive scale? Why vote on issues when we can agree on issues? Who are we being altered to agree with? “Why, it’s everybody… now.” You could make someone a slave for an entire year and then make them forget it. Put someone in prison and wipe away the memory. Or, you could save on space and time and make them live out their sentence in a memory-prison.

Under these circumstances, what is free will? Of course, the more frightening implication is, what is it now? Some of these seem obvious or hyperbolic, and they are, but these are the sorts of things we need to consider when implementing new technology. We don’t want to end up in some Total Recall-V for Vendetta hybrid-movie.

That’s Nillin’s frankly terrifying world. And, if you’re still with me, we’re going a little farther down the rabbit hole next time. There will be more about Remember Me, something about procedural memory, the Human Revolution thing will become a bit more prominent, and we’ll get to how our current technological overlords (Google, Facebook, Twitter…) are already controlling our minds. Think about that lot, and I’ll see you on the other side.

Subjective Objectivity

Posted in All the Things, Game Reviews on February 5, 2014 by trivialpunk

Hello! I was going to sit down today to write you a brief treatise on Hearthstone, so that I could get back to playing more Hearthstone, but, as I thought about it, one of the little worms that lives in my hair crawled into my brain and took it hostage. “Crap!” I thought, “Now I’m going to have to accept its demands or risk never again feeling the warm touch of a greasy controller in a hot-seat hockey match.” So, it was a close thing. But, here we are; you know which decision I made. I’m going to keep this brief, because you don’t come here to be held hostage to the whims of my brain-worms. That’s my job. No, you’re here to root through the words I cobble together for spelling errors. Aren’t you? Someone has to worry about that…

Anyways, the topic of today’s “conversation” is subjective-objectivity. That may sound like a conflict of terms, but it’s more accurate than you’d think. If you were alive in the Double O’s (and if you weren’t, I’m surprised that you’re precocious enough to be here), then I’m sure you were aware of the explosion in popularity of abusing the term “objective”. The internet and Facebook were beginning to flower, so we needed a way to interact that made sense. Enter: Objectivity. When you lack a lot of cultural common ground with people, it can be difficult to communicate. That’s why phonetic kittens became the most popular form of greeting. It was something almost everyone could agree on, he said glibly.

But, I’m here to tell you that our worship of objectivity hasn’t died down. On the contrary, we’ve canonized and deified it. It cackles gleefully as we praise charts and graphs. It draws life-energy from those who worship science without understanding it. And it lives in game review scores. You see, the problem with objectivity is that it requires a standard to be measured against. Mathematics has proofs. Psychology has models and experience. Physics has Maths. The internet has one person yelling at the other until one of them logs off. (Both of them considering themselves to have been validated) When you say something is objectively better than another thing, you’re saying that it is better on a theoretical scale. That scale is where subjectivity comes into play.

You see, the scales themselves are usually developed through rigorous testing and prediction. We don’t just throw together theoretical models willy-nilly, unless you’re me deciding how I feel about a game, but we’ll come back to that later. Personality scales and universal models both rely on a core of tested principles. However, and let’s get specific here, are those models objective? Noooo, they’re the product of their time and place. Let’s look at I.Q. scores, because, as a member of the psychological community, I hate seeing them abused.

Based on my I.Q. score, I’m way above average, but what does that even mean? I mean, you’ve read my writing. I’m groping in the dark here! Seriously, what problems could I have with a systemic, trusted model that reinforces the words that drip from my fingers? Well, for starters, it’s broken as a universal model. Many of the questions used in the old standard I.Q. tests were culturally biased. In fact, they were used early on to confirm the old Imperialistic notion that Westerners ruled and everyone else drooled. Unfortunately, it was a foregone thing, because non-Westerners were failing the sub-textual Western Culture portions of the exam. You know when a piece of pop-culture trivia randomly helps you pass a test? Yeah, it was like the opposite of that.

Even worse, I.Q. scores don’t stand the test of time. If you get tested when you’re a kid, then you’re going to need to go again. Not unlike STI tests, the results will vary as you live your life. It’s a pretty ephemeral measure of intelligence, honestly. Not that intelligence was ever a solid thing to begin with. Whip out your intelligence right now. Come on! Do it! Let’s compare sizes. Sorry, I was having a forum-flashback. It gets worse as you approach the upper ranges. Roughly 68% of the population falls between 85 and 115, and the percentages you start dealing with past 125 are portions of 5%. Since everyone’s I.Q. measurements fluctuate daily, you’re not really in a hard-and-fast ranking system as much as you are a ranking swamp, drifting in a milieu of people around your intelligence range, never sure where you belong. (Pro-Tip: It doesn’t matter that much. How you use your intelligence is far more important and reflective. Look at me, I write about games and culture. >.>)
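
If you want to check those percentages yourself, here’s the quick version, assuming I.Q. scores follow a normal distribution with a mean of 100 and a standard deviation of 15 (the usual modern scaling; that assumption is doing all the heavy lifting here):

```python
# Sanity-checking the I.Q. percentages under a normal(100, 15) assumption.
from math import erf, sqrt

def normal_cdf(x, mean=100.0, sd=15.0):
    """P(score <= x) for a normal distribution."""
    return 0.5 * (1.0 + erf((x - mean) / (sd * sqrt(2.0))))

print(f"85-115: {normal_cdf(115) - normal_cdf(85):.1%}")  # ~68.3%
print(f">125:   {1.0 - normal_cdf(125):.1%}")             # ~4.8%
```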

It was originally developed to test for average intelligence, so it’s not properly calibrated to sense differences past a certain point. You know how you can tell the difference between a warm and a hot mug of tea, but you can’t discern the subtle temperature differences between a branding iron and a glowing stove-top? Part of that is because of (I hope) a lack of exposure to both, but another part of it is because your body hasn’t developed the tools necessary to know the difference. It has never needed to. Generally, exposure to extreme temperature requires one response: Get The Fuck Away. GTFA for short. By the same token, we’re measuring average intelligences and how they stack up. Where on the spectrum they are.  I’m sure we could calibrate a genius-level test, and some people have spent a good deal of time working on this for ostensibly non-narcissistic reasons, but would it really be helpful?

Knowing if someone is below 70 I.Q. points has legal ramifications in the States, but there aren’t any requirements linked to high I.Q.s. Except Mensa. And, honestly, Engineering-focused Universities SHOULD have an I.Q. cap built into their student recruitment procedures to reduce the sheer number of super-villains they spawn from Mensa’s ranks. Biosci and Robotics programs, too. Thanks for reminding me, Spider-man. Prevention > Cure

Right, yes, the point of all this was to illustrate how a carefully designed, rigorously tested system can have short-comings in some areas. I.Q. examinations are purpose-built tests, and we should be aware of what that purpose is when we employ them. This applies to every model we have. They exist, and function best, in the systems in which they were developed.

But that doesn’t always last. Behaviourism was considered the be-all of Psychology at one point. The motto was: Pay attention to and mold the behaviour. Behaviour is everything. You can’t prove the other stuff. Eventually, we discovered that the principles of that system didn’t hold up. I mean, to anyone with cognitive thought, it can be ludicrous to consider that there might not be cognitive thought. That is until you try to prove it exists objectively. Think of it this way: a behaviourist can alter the behaviour of an organism without the organism knowing about it at all. So, what’s the point of knowing? Does knowing even come into it? It was all unprovable rubbish to the psycho-orthodoxy, either way.

Before that, we had Freud. If you disagreed with him, well, then he’d say you were in denial or make some other equally unprovable claim. The point is, if you stuck both of those people in a room, and made them watch a third party doing something completely innocuous, like smoking a cigarette, they’d come up with different reasons for why they were doing it. One would be based on an oral fixation hypothesis. The other would probably revolve around operant conditioning and social learning theory. By today’s standards, both of these would seem false to a degree. Now, we’re all about Neuroscience and addiction. Each of these three explanations is backed up by years of study. The models they appeal to were rigorously designed. They all make predictions, and, in this case, they’ve all been validated. But, which one is right?

There’s a pervading belief that there’s a right answer. Well, I’m here to ask, “A right answer according to what?” Having a model or a scale, a graph or a chart, doesn’t mean anything without interpretation. That’s why two people can look at the same climate-change chart and have two very different reactions. One will call it a climate shift in keeping with one of our current models. The other will begin digging a shelter in a backyard somewhere in Canada. Yes, we can objectively see that one part of the graph is higher than the other, but what does it mean?

Models for climate change and human behaviour are pretty abstract. With a proper understanding of Maths and a free stats package, you can bend numbers to your will in unbelievable ways. Even the concept of an outlier is mind-blowing when you think about it. “Why is that one point way over there?” “Oh, don’t worry, it’s obviously not part of our set. We just won’t include it.” The logic there is both hilarious and accurate. Something about that dot on the edge of the graph put it apart from the others, but what? We’ll never know unless we look, but we rarely look, because we can explain it away. So, interrogate graphs thoroughly, don’t skip the thumb-screws, but, also, don’t make life-decisions based on graphs. You have no idea what went into the making of one. Look to the source data or approach it skeptically. Even the source data isn’t above being tampered with.
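
For the record, here’s one common outlier rule, Tukey’s IQR fences, in sketch form. The code can flag the dot on the edge of the graph, but whether it should be thrown away is exactly the judgment call I’m grumbling about; it can’t tell you why the dot is out there:

```python
# Tukey's IQR fences: a common (and commonly abused) outlier rule.
from statistics import quantiles

def iqr_outliers(data, k=1.5):
    """Flag points more than k * IQR beyond the quartiles."""
    q1, _, q3 = quantiles(data, n=4)
    iqr = q3 - q1
    low, high = q1 - k * iqr, q3 + k * iqr
    return [x for x in data if x < low or x > high]

readings = [12, 14, 13, 15, 14, 13, 98]  # one point "way over there"
print(iqr_outliers(readings))  # -> [98]: flagged, but never explained
```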

And even if the primary source wasn’t tampered with, is it accurate? There’s a practice in the scientific orthodoxy that has always pissed me off, because it’s ridiculously irresponsible. That practice is publishing only positive results. Again, it seems to make sense: why would you publish something if you found nothing? Precisely because you found nothing. Why did someone else find something? What did your method do differently that screwed it up? Why are we throwing this away?! It’s as valuable as a positive result, if not more so, because otherwise we have no idea how many times we did something before it worked. How many unpublished papers lie discarded somewhere because they didn’t prove anything, except that the results of another paper, somewhere else, couldn’t be replicated? A negative result is a result, and I wish the scientific community would acknowledge them more actively.

If you’re interested in this issue, then I encourage you to look into it, because it’s going to be an important criticism of the ancient orthodoxy in the coming years as papers get cheaper to publish and distribute. It also means that scientists -and people in general- are going to have to be more careful about what they accept off-hand. Don’t worry. It’s not all meaningless. Often, famous papers will be discredited if no one else can replicate them, but I still think we can learn a lot from what doesn’t work out. Science is… a journey. Our Second Great Trial, after not killing ourselves off.

Let’s bring this back around to games and subjective-objectivity now. That barb about game review scores wasn’t thrown in haphazardly. Review scores are interesting, but they aren’t definitive. The reason I say this is that this last year saw a lot of crap being slung about games not getting perfect scores from reviewers. I think I even wrote something about GTA5 not getting a perfect review on GameSpot and the complaints that followed. Some people take these things very seriously, especially people who run gaming companies, but we have a small advantage over them. We -can- not give a shit.

Seriously, if you like a game, and it gets a shitty score, remember that the score is essentially meaningless. Even MetaCritic, one of my favourite game-review sites, suffers from the fact that it collects a relatively small number of reviews. Of course, it’s an improvement over the old days, when review scores were presided over, almost solely, by gaming magazines and V&A Top Ten. *shudder* That being said, your views might not be being represented. For years, we thought the world was flat and boy-bands were cool, but we just weren’t looking at them from the right angle. The metric we were using to measure their value (their popularity) was a poor one. But it worked at the time, so who’s to say?

What do you do, then? Do you start a review blog to compete with Trivial Punk? You choose your opponents well, but what metrics are you going to use to judge a game’s quality? A vaguely positive or negative feeling related to a number somewhere between 1-3 or 8-10? Do you charge people money for good reviews? (That joke was topical a few years ago. Checking it off my list…) Even so, what standards do you hold the game to? Cinematic? Literary? Engagement? A hodge-podgy spectrum? Some sort of superior inter-internet dialectic about quality? You could take any one of these perspectives and be absolutely right, if that’s what you’re looking for. For example, Ebert is a film critic you might know from his pairing with Siskel, and, for a while, there was a big fuss made when he said he didn’t believe that games would ever be art. To which I respond, in my heart-of-hearts, that he’s a fool. But, in my more diplomatic public persona, and in my brain (where I keep all the thoughts that aren’t related to Power Rangers, a vague childish need for approval or how much I love chocolate), I recognize that games might not seem like art to someone with very different, highly refined (Read: specific) tastes.

Ebert and I don’t have to agree. But, in the money-, influence- and power-driven world of The Entertainment Industry, things like review scores and “objective” measures can make a real difference. And, yes, that can trickle down and affect me. We may never see another Banjo-Kazooie game, because a lack of popularity leads to a lack of influence and sales, and a lack of money. Therefore, no company is around to make it. Whereas popular I.P.s can see a loaded fuck-train of new releases. See Call of Duty for that particular example. That could be something to complain about, if these titles really upset you. But, is giving a CoD game a low review score because it’s unoriginal any more objective than giving it a high one because it’s a technical masterpiece? Is the solution to just average the two the way MetaCritic does? Yes, that makes a lot of sense, but don’t consider it to be true any more than anything else I’ve addressed.
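Since we’re poking at averages anyway, here’s a tiny illustration of what they hide. The scores are invented, and I’m using a plain mean; MetaCritic’s actual formula is a weighted average with weights it doesn’t disclose, so treat this as the naive version.

```python
import statistics

# Two invented sets of review scores with identical averages.
polarized = [10, 10, 2, 2]  # critics violently disagree
lukewarm = [6, 6, 6, 6]     # critics shrug in unison

for name, reviews in (("polarized", polarized), ("lukewarm", lukewarm)):
    print(f"{name}: average {statistics.mean(reviews):.1f}, "
          f"spread {statistics.pstdev(reviews):.1f}")

# Both average to 6.0 (spreads of 4.0 and 0.0). The single number can't
# tell you whether a game is divisive or merely mediocre; the spread can,
# but we rarely report it.
```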

Finally, let’s get back to how you should score your games. You can use a score, because people understand those very intuitively. A high one is good; a low one is bad. It sums up your feelings pretty quickly, and there’s nothing wrong with using it, provided you explain yourself. From there, people can harp on about your scores all they want; as long as you’ve justified them, the harpers are just being lazy. You could do review scores the way I do them: take two metaphors and compare them. The content of the metaphors tells you about the experience I had playing the game, while the comparison between the two tells you what I thought of its overall quality. Think of them as feely-fractions. Or, you could come up with your own system. Use bar-graphs or whatever. As long as people understand what you mean, it won’t be a problem.

And that’s really the important thing. If you know that the people you’re talking to understand what you’re saying, then you’re in the clear. Electrical engineers have drawn current flowing from positive to negative for untold years, even though the electrons actually travel the other way, but it’s a convention, so people understand it. We’re not as monolithic a culture as we once were, and, to some people, especially untrained engineers working with schematics, this can be pretty shocking. But, as long as we approach the world with an understanding of subjective-objectivity, we won’t be that upset when someone gives us an 8/10.

And if none of that convinces you of the pervasive nature of subjectivity, then consider this: for the concepts of Good and Evil to exist, you need some sort of Manichaean universe. For something to even just BE good, you can’t be talking to a relativist. And, if you want them to concede that something might be better, then they can’t be a nihilist. The things we are willing to accept as true or valid are weird, and they have a far larger emotional component than we’re often willing to admit.

Objectivity exists on a scale, and it’s usually purpose-built. We’re only able to be objective because we became aware of how subjective we were being. Yet, Awareness doesn’t cancel out Subjectivity any more than having someone call you on your argumentative strategy makes you wrong. Unless you were being fallacious on purpose to make a point. Then again, they could be doing the same thing… I’m going to work through this… and when I do, I’ll see you on the other side.

You’ve Been Doing WHAT With Your Time?

Posted in All the Things with tags , , , , on January 26, 2014 by trivialpunk

Where is my head at, indeed? It can be difficult to explain precisely why there are days I can write and days I can’t. It’s especially difficult when that day stretches on longer than a week. I mean, it’s not the food at that point, is it?

In any event, today isn’t one of those days I can write, either. I sat down, stared at my post notes for today and said, “I can’t… I can’t make that sound like words right now. I’m just gonna write whatever.” So, that’s what this is.

But, I’m not here to waste your time entirely. I may have been slacking on the posts, but I’ve got three new videos up for those of you who are into that sort of thing! (Silent Hill: Homecoming - Part 3 – The Spider Episode) (Doom 3: BFG Edition - Part 1 – Getting Controls Under Things) (Hotline Miami – One Shot – My Eyes! The Goggles Do Nothing!)

What else have I been up to? Well, I’m assuming you don’t want to hear about class, so let’s get to the game stuff. I replayed a few games with a friend of mine. We hit up LIMBO, Trine, and Intrusion 2. I know. I can’t get enough indie platforming, either.

That transitioned nicely into the next weekend, during which we played The Last of Us. I have to say, even after all this time, and a replay, that game still sticks with me. “I believed him.” HA!

Last weekend, we pulled out a shiny new Xbone and hooked it up to a 60-inch plasma. That was a nice time. I got to check out Ryse: Son of Rome, which looks just as good as you think it does. Then, we played some hockey (NHL 14), despite the fact that I’m not really a fan. (Odd for a Canadian, eh?) Finally, we spun up CoD: Ghosts and played Extinction mode for a while. Extinction is the replacement for Zombies. It’s aliens instead of zombies and hive-destroying instead of crawler-camping.

I’m a big fan of the CoD: BlOps Zombies mode; it’s a lot of fun to team up with a buddy and mow down the shambling remains of the Nazi threat. Or to just take on zombie Romero. Whatever works. Extinction mode is equally fun, and it tacks on a few extras to keep things interesting. You customize your character’s ability load-out before combat begins, unlock those abilities in-game through a levelling system, and then spend the money you earn killing aliens to activate them. But, you also use that money to buy guns and perks, so it fits in nicely.


The alien design isn’t particularly original, sort of Lilo by way of Sunset Overdrive, but it’s fluid and solid. No, not a colloidal suspension; I mean that they’re easy to keep track of but fast enough to miss. At the moment, there’s only one Extinction map available, but it’s a doozy, and more are coming. Also, more aliens show up as you progress, so you won’t be disappointed. Even so, repetition is the lesser-known Dagger of Damocles hanging over the head of Engagement. The levelling system and tactics will buy it some time, but we can only hope that they don’t charge too much for the map-packs.

Those of you following my Twitter won’t be surprised to hear that I’ve been playing Assassin’s Creed IV: Black Flag. I finished it today. The ending was a bit anti-climactic, and there were a few features that clashed… There were even whole sections, like the underwater diving areas with the sharks, that I hated. However, all that being said, for a game of its size, its quality is amazing. This is the best Assassin’s Creed game to date. If you’ve ever been a fan of the series, or you just feel like blending some free-running into your high-seas adventures, then you owe it to yourself to give this game a try.

Just… don’t be a ridiculous completionist like me. Those last couple of sync percentages would take hours to earn… but, I still think of them. In the night. During the alone times.

That’s what I’ve found time to play this last month. If there are any LP series that you would like to see more of, or if there’s a game you’d like to see us play, then let us know! I’m going to go see if I can hammer these notes into actual text for next time.

Cheers!

Critically Critical

Posted in All the Things, Everything Else, Game Reviews with tags , , , , , , on January 6, 2014 by trivialpunk

Hello! We’re back on schedule this time! Who saw that coming? Apparently, you did, because here you are, all polished and shiny. Or maybe that’s just my new keyboard. Sorry, I spend so much time with this thing that it’s kind of a big deal to me. It also means that I won’t have to use my mechanical keyboard for everything (as much as I love it to bits and bytes), so I’ll be able to stream and letsplay previously unplayable games. Anyone who’s watched any of my old series will know what I’m talking about. Before I went full gamepad, it was pretty clacky on the “Trivial Punk Fun-time Roughly 30 Minutes.” Now, you’ll get to see what you’ve always wanted: me playing Half-Life. No, I don’t know, but I’m excited to get started.

In the meantime, here’s part 2 of the Silent Hill: Homecoming letsplay that’s up and running on my channel. I hope that it engenders a little love towards me, because we’re going to approach another topic that I want to get out of the way before I get back to game reviews. I know, I should just do both. Maybe I will! I’m not the boss of me! Oh… er… anyways, this came up because I had a pretty obnoxious conversation over dinner last night with an associate of a friend of mine. Vague association, I know, but let’s continue on to the actual topic: Criticism.

I know that a substantial portion of the people who come here to feast on the mind-worms that droop from my slackened, pit-marked skull once a week are game critics. Or movie critics. Or book critics. A lot of the reason for that is that criticism is one of the things I enjoy, so I spend quite a bit of time reading about other people’s perspectives. It’s good for me to see something through someone else’s eyes. Sometimes, it can completely change my view of something. Thus, it only makes sense, based on the laws of collision that I just made up, that critics are who I would meet on-line. Love the work, keep it up! This isn’t directed at anyone in particular, but I felt that, in the age of Internet Criticism, we should discuss why and how we critique things.

Now, this is going to be a pretty big discussion. Obviously, a lot of the meat of critique is personal and subjective, so I can’t make sweeping generalizations and expect to do any good. Even though I probably will end up doing so, human weakness and all, I want to try to be as fair as I can. In the spirit of that commitment, I’m going to be honest: this is going to be 100% subjective. There are things I’m personally tired of reading, like “B sucked because it didn’t A,” but that doesn’t mean they should be abandoned. It’s complicated, because the road to that example conclusion is a legitimate way to interrogate a work. From my point of view, the crucial element is when the technique is employed. Let’s dive in and I’ll show you what I mean.

Let’s get this out of the way first: no one will ever be truly objective. (Not “can”: “will.”) It’s simply impossible. Which is too bad, because it would make criticism that much easier. Every experience a person has is influenced by all their previous experiences, how they’re approaching the present, whether they’re hungry or not, how many times they’ve experienced something… Let me give you a for instance. I loved most of The Lord of the Rings the first time I saw it. It was truly awe-inspiring. However, I felt that same way when I watched Star Wars: Episode 1 The Phantom Menace. Did you hear that? It was the sound of a hundred collectively gasped breaths, followed by a slow, deliberate move to the unsubscribe button, but hear me out.

When I first watched SWE1, I was eleven years old and, while I was a huge fan of the original trilogy, I wasn’t committed to the integrity of the Star Wars universe. I watched it with my Dad, which was kind of a special treat, because it was something we could both enjoy. I saw it in the theatres, and I wasn’t immersed in the internet culture that so derides the trilogy we whinge about today. And, let’s be honest, the first movie wasn’t as bad as we think it is today. It was fun, exciting and had Jedi, so I really enjoyed it. If I had put together my review then, it would have been a glowing recommendation written about as well as an Engrish translation.

What about The Lord of the Rings? Well, by the time I watched those movies, I was a little bit older and pretty committed to being a pop-culture junkie. I had almost the same reaction to it, because the circumstances were pretty similar for what I was expecting from a movie at that point. Hell, JRR’s The Hobbit was one of the first books I ever read. Even so, I still spent a good deal of the movie comparing it to the book. It’s a pretty common practice now, and, having grown up doing it, it’s one of my least favourite pastimes. Don’t get me wrong, I love the movie, but it didn’t have Tom Bombadil in it, and that’s what I wanted to see the most. Kind of soured the thing for me a bit. Eventually, I got over that and saw it for the good movie it was.

Until… years later, I’m sprawled on the couch, half-and-half vodka-coke in one hand, hangover threatening to pound its way through the booze wall I’m hastily erecting, memories of my recently lost girlfriend drifting through my head… Naturally, I put on a movie to distract me. That movie was… you guessed it… The Fellowship of the Ring. And I HATED it. It was too long, over-written, stuffy, pretentious, needless, unrealistic, unfaithful to the source material and a bunch of other mean things. Legolas was a pretentious jerk; Gimli was an annoying bag of hair that should have died too many times to count. You get it. I believe none of those things, but if I had written a review right then, I would have said it was terrible. Ironically enough, one of the reasons I didn’t enjoy it was that I’d loved it enough to have watched it too many times.

See what I mean? I chose an extreme case because I wanted to illustrate it as clearly as I could, but this occurs to a greater or lesser extent all the time. And it’s quite alright. I’m not saying we shouldn’t hate on movies, games or books. That would be silly and unrealistic. It can be a lot of fun to make light of the things we enjoy or hate. After all, we spent money to experience them, and they have a pretty big impact on our society. If that’s what someone wants to do, then I’m totally behind them (unarmed, even). But, I would suggest that we always ask “why.” I’ll illustrate with the conversation I had last night.

The basics of it were that Starship Troopers, a campy poke at Fascism and the military melded with over-the-top drama and action, was not like the book. Also, my associate thought it was a bad movie. Okay, that’s a legitimate opinion, but why is it important that the movie be like the book? Could a movie ever be exactly like a book and still be a good movie? You can capture the essence of something, but why does that mean we need to follow along blindly behind it? (SST Irony) Ideas can be used as vehicles for other ideas, after all. And they really should be, because some things just aren’t as relevant to popular culture as they once were. Or maybe we want to deconstruct them with a goofy movie. Basically, why is A movie THIS movie, and why do we want it to be a different movie? I could go on…

There are a lot of “whys” there, and I’m not going to go through all of them or we’d be here all night, but I will address the most important question I would ask: why are we proffering this criticism? I’m not saying that rhetorically; I’m asking what the end-goal of the criticism is. That’s a pretty essential question to answer, because it informs everything you do with your critique. It’s not something you can answer once, either. You have to know, going in, what you want to communicate for every critique. For instance, last night’s conversation would probably be akin to arguing personal movie preferences, because neither of us managed to have our opinions altered. We weren’t calmly discussing the merits of the movie; we were arguing over whether or not it was good based on our personal preferences. All well and good, but does that help anyone else? Only if you can explain yourself and provide a recommendation.

You see, to me, a personal-preference review is one that’s helpful to people who are familiar with your preferences or, and this is optimal, to people who share them. That way, you can provide helpful advice for anyone interested in that piece of cultural ephemera. But it probably wouldn’t be helpful to anyone actually making the film. On the creation side of things, you’re familiar with the fact that not everyone will like your film. You don’t make your films for everyone, and you’re aware of the diversity of opinion. I mean, you could try making movies for everyone, but then people just complain that they’re bland, which kind of defeats the purpose if there’s a large group that doesn’t like them. (Still, it might make a lot of money)

Thus, a lot of the time, the why you’re reviewing something comes down to who you’re reviewing it for. On this site, I try to peer into gameplay mechanics, poke at playability, flirt with experience and luxuriate in story. All so you can decide if you want to buy it or not. Also, a part of me wants to influence how you look at games and movies. When I wrote my review for The Hobbit: TDoS, I did it because I loved the movie, yes, but also because I knew some people might get taken aback by some things if they went in expecting the book. Essentially, I wanted to help craft the mind-set you went into the movie with. Mind-set is a big part of getting everything you can out of a movie. You don’t want to watch an arty foreign-language film at a kegger (or maybe, hmm…), and you don’t want to watch LotR sprawled on the couch with a life hangover. (Still…) I vehemently believe that personal context matters.

At the same time, I occasionally focus on portions of a game that I felt affected the overall quality. Maybe there’s a setting I think would improve an experience, or a level that I think could be better. At that point, I’m trying to get other designers, players and producers to think about whether it could be improved, my recommendations be damned. Or, maybe I’ll do a review strictly to make a point, like I did with Super Hexagon and learning curves.

One thing I will never do is seriously denigrate the things I purport to love. I’ll always poke fun at EA, but I’m glad they’re around to draw capital and attention to gaming, and I genuinely enjoyed Dead Space. I didn’t like the second Star Wars prequel, so I didn’t watch the third one. I didn’t need to show up to hate it. And that’s what made last night’s conversation so needlessly obnoxious: “Okay, we’ve said our pieces about the film. Now what?” I’ve seen people work for hours to prove that something was bad, but what does that accomplish if that’s the end of it? Last-night-dude seemed hell-bent on convincing me that SST was a bad film, but all he did was convince me that he didn’t like it. It’s not just him, either. I’ve done it. I see it all the time on-line. We’re stating our opinions, and that’s great, but I think we should draw the line at trying to suck the fun out of it for other people. That’s not a critique, after all. Oh dear, did I not mention? I don’t believe that A critique = A review, but they’ve got a lot of things in common, so I figured I’d leave this segue here.

A critique is like a nuanced review. It takes into account past cultural incarnations (Read: prequels), current trends, multiple preferences, multiple perspectives, historical relevance, contemporary relevance… It basically situates the cultural artefact (Ex. movie) within its context and approaches it on its terms. Then, it discusses what all of that implies to us. It’s done to help the producers of the work to improve, the consumers of it to appreciate it and the critic of it to develop their abilities further. Again, that’s just my opinion, and we’re into semantics here, but that’s what I believe to be the purpose of a comprehensive critique. But, it’s not like every critic has all that background knowledge, especially starting out, or all that much room. I’ve seen some pretty poignant review Tweets, after all. And you don’t have to know or do all this to be a critic. The only reason I bring it up is that I believe it’s what we should shoot for when really diving into a piece of pop-culture.

To me, you should always approach something on its own terms; it really improves the experience. Because I have to approach many different types of popular culture, I’m not familiar with all of them, and I don’t always appreciate them off the bat. However, at some point along the way, I realized that I would never learn to enjoy new things if I always judged them by the standards I already had. I would always data-feed myself things I liked, narrowing my preferences further, until I couldn’t see anyone else’s opinions through my own miasma. I would believe something for so long that I’d begin to think it was true. If I didn’t try, if I didn’t work to cultivate different perspectives, I wouldn’t be much use to you today.

That’s kind of the heart of it. The standards by which we judge things are our own, and if we truly want to criticize something on its own merits, then we need to be aware of how those standards are formed and what they communicate. Just whinging about something doesn’t really forward the cause of improving its overall quality, unless we’re talking about affecting viewing figures and preferences. In which case, complaining about Michael Bay films hasn’t done much to affect their revenue. If someone hates those films, then maybe they’re not for them. I hated Transformers the first time, because it was a fandom movie made for someone other than the fandom. They knew the fandom would show up anyway, so they made the film to appeal to everyone else and capture their business, as well. When I approach it with that in mind and a pint of beer, it’s really not so bad. It’s the same way I watch Elementary on television and just pretend it’s a detective show about a guy who just happens to be named Sherlock Holmes. (If anyone’s a poster-child for the changes wrought by contemporary relevance, it’s Sherlock Holmes… or vampires)

More and more of us are flocking to the internet to give our opinions on movies, games and books, and I think that’s wonderful (That’s why I’m here, after all). A culture of criticism can really do some good, especially if we can add something to the reader’s experience. However, I think we should be extra aware of why we’re personally critiquing something and what we hope to communicate about it. After all, we’re going to be leaving these Tubes to the next wave of critical thinkers someday, and they’ll be looking to us to figure out how to approach criticism. For my piece, I want to leave them a tradition that’s positive and thoughtful. Cheers!

Happy New Years!

Posted in All the Things, Everything Else with tags , , , , , , on December 30, 2013 by trivialpunk

Hello! So, I was thinking about it, and you might be happier knowing the date of my next post. You see, I spent my holidays recovering from this year. Resting and fighting the mid-twenties, graduate-year angst of what I’m going to do with my life. I’m doing much better now, but I’m on my way out the door and I’ve got plans until after New Years. You know me.

So, the next post from Trivial Punk will come on January 5th, 2014. I’m not sure what it’ll be about, but I’m thinking about what kind of stuff I do here and trying to decide how best to proceed in 2014. I’ve worked on quite a few projects over TriviPunk’s life-time. Some were put aside. Others were quietly released. This year, I’m thinking of simplifying and focusing.

Stuff I’m hanging on to that you know about:
Trivial Punk on YouTube
Trivial Writing (Story section only)
Gloam (I’m nowhere near done with it yet)

Stuff I’m hanging on to that you don’t know about:
An Unannounced Novel
The Binder
Code-name: The Pilgrim
The LOMG

Stuff I’m letting go:
Duel (The Online Version) -no funding, no staff

I would also like to offer my sincerest apologies to Recollections of Play and CheeeseToastie, both of whom nominated me for awards: The Versatile Blogger Award and Blog of the Year, respectively. Thank you for your nominations, and I sincerely apologize for having a brain too full of frappe’d guacamole to accept in a timely fashion. I feel that it’s been far too long for me to accept them properly, but I’m grateful for the mention. Go check them out! There’s a reason I follow them.

Good luck to you in this new year. Kick some ass! You know, metaphorically.

Here’s wishing you many bright days and High Scores in 2014! Stand steady against the flow of the years, and I’ll see you on the other side.

-Trivial Punk

Oh, and here’s a little something just for you. My name, Trivial Punk, has many different meanings and ways to shorten it. Trivial Punk was something I thought of as a throw-away name for on-line Go games. I wanted something that reflected my awareness of my own insignificance, but also something that would be ironically poignant if I ever became successful. It also gestures towards my love of trivia.

So, Trivial Punk is a reference to my state as a single, unimportant being, my love of things trivial and the irony of how we think of things of relative unimportance. And, if you’re reeeeal good, it shortens to Tri-Pun, for obvious reasons. What’s in a name? Oh so very much and nothing at all. It’s Trivial.

Happy New Years :-D
