Tagged: gaming

The unreal urbanism of Pokémon Go

Earlier this month the mobile-app game Pokémon Go was released in the U.S., and the game has been ubiquitous ever since. Aside from being a sudden pop culture phenomenon, the game’s success carries some significant implications. First of all, this is clearly a breakthrough moment for augmented reality. Pokémon Go is not the first augmented reality game, nor is it the most ambitious, but it has undoubtedly brought AR into mainstream consciousness. Secondly, the success of Pokémon Go has led me to reconsider my previously held assumptions about the uses of mobile apps and gamification for interfacing with urban spaces. I have historically been cynical about the prospect of using mobile games or AR interfaces to interact with urban space, since they usually strike me as shallow and insignificant, typically producing a fleeting diversion like a flash-mob dance party rather than altering people’s perceptions of place in any lasting or meaningful way. Pokémon Go fits all of my earlier preconceptions, yet despite my best critical instincts, I really like the game.


A lure party in progress at Soldiers & Sailors Memorial in Oakland. Photo credit: user invertedcheese85 from reddit.com/r/pokemongopittsburgh

 

The buzz about Pokémon Go had been building on various forums online, and after it was released it was virtually impossible to avoid Pokémon Go-related posts. Save for maybe 10 minutes with a friend’s Game Boy in the late 90s, I’ve never played a Pokémon game, and I preemptively wrote off Pokémon Go as yet another cultural fad that I would never partake in or understand. Curiosity got the best of my wife, however, and she downloaded the app and we walked around our neighborhood to test it out. To my surprise, the game was a lot of fun; our familiar surroundings were now filled with digital surprises, and we were excited to see neighborhood landmarks and murals represented as Pokéstops, and wild Pokémon hanging out in the doorways of local shops. We meandered around discovering which of our local landmarks had been incorporated into the game, and each discovery increased my enjoyment of the app. Yes, the game is simple and shallow, but I was completely charmed. I downloaded the game so I could play, too.

Reactions to Pokémon Go have been as fascinating as the game’s widespread adoption. Many news articles sensationalized the inherent dangers of playing the game: distracted players wandering into traffic or off of cliffs, people’s homes being designated as Pokéstops and besieged by players, and traps being laid (using the game’s “lures”) to ambush and rob aspiring Pokétrainers. There have also been insightful critical analyses of the game. An early and oft-shared article by Omari Akil considered the implications of Pokémon Go in light of recent police shootings of black men, warning that “Pokemon Go is a death sentence if you are a black man”:

I spent less than 20 minutes outside. Five of those minutes were spent enjoying the game. One of those minutes I spent trying to look as pleasant and nonthreatening as possible as I walked past a somewhat visibly disturbed white woman on her way to the bus stop. I spent the other 14 minutes being distracted from the game by thoughts of the countless Black Men who have had the police called on them because they looked “suspicious” or wondering what a second amendment exercising individual might do if I walked past their window a 3rd or 4th time in search of a Jigglypuff.

Others questioned the distribution of Pokémon across neighborhoods, suggesting that poor or black neighborhoods had disproportionately fewer Pokémon and Pokéstops. Among urbanists, however, reaction to the game has been mixed. Mark Wilson at Fastcodesign declared that Pokémon Go “is quietly helping people fall in love with their cities”. Ross Brady of Architizer celebrated the game for sparking “a global wave of urban exploration”. Writing for Dezeen, Alex Wiltshire boldly stated that the game has “redrawn the map of what people find important about the world”. CityLab contributor Laura Bliss proclaimed “Pokémon Go has created a new kind of flaneur”.

Others have been more critical of the game, with Nicholas Korody at Archinect retorting: “No, Pokémon Go is not an urban fantasy for the new flaneur”. At Jacobin, Sam Kriss implores readers to “resist Pokémon Go”:

Walk around. Explore your neighborhood. Visit the park. Take in the sights. Have your fun. Pokémon Go is coercion, authority, a command issuing from out of a blank universe, which blasts through social and political cleavages to finally catch ‘em all. It must be resisted.

Some, like Jeff Sparrow at Overland, drew direct parallels to the Situationists:

On the one hand, that’s way cool – suddenly, the old pub near your house is inhabited by monsters.

On the other, there’s something faintly distasteful about the recuperation of specific real histories into a billion-dollar corporate mythology. Nearly 150 people lost their lives when the Triangle Shirtwaist Factory burned to the ground, entirely needless deaths caused by the atrocious working conditions of the garment trade. The tragedy became a rallying point for the trade union movement, the name of the factory, a shorthand reference to employers’ greed.

Now, though, it’s three free Pokeballs.

We might also say, then, that, even as the game leads players to embrace the dérive, it also offers a remarkable demonstration of the phenomenon that Debord critiqued.

Writing for The Atlantic, Ian Bogost meditated on “the tragedy of Pokémon Go”:

We can have it both ways; we have to, even: Pokémon Go can be both a delightful new mechanism for urban and social discovery, and also a ghastly reminder that when it comes to culture, sequels rule. It’s easy to look at Pokémon Go and wonder if the game’s success might underwrite other, less trite or brazenly commercial examples of the genre. But that’s what the creators of pervasive games have been thinking for years, and still almost all of them are advertisements. Reality is and always has been augmented, it turns out. But not with video feeds of twenty-year old monsters in balls atop local landmarks. Rather, with swindlers shilling their wares to the everyfolk, whose ensuing dance of embrace and resistance is always as beautiful as it is ugly.

Pokémon Go’s popularity has led to many online comparisons to the Star Trek: TNG episode “The Game,” in which the crew of the Enterprise is overcome by a mind-controlling video game. The game in Star Trek is not, strictly speaking, an augmented reality game, but it does involve projecting images onto the player’s vision, similar to an AR overlay. Previous gaming and gadget fads have been compared to the TNG episode, notably Google Glass (for its similarity to the eye-beaming design used to interface with the game in Star Trek) and the pervasively popular Angry Birds game (as evident in this parody video). The comparison has regained cultural cachet because, unlike Angry Birds, which can be played on the couch, Pokémon Go is played in motion. This, of course, has contributed to the perception of the game’s zombifying effects; we’ve grown accustomed to the fact that everyone’s eyes are glued to a smartphone screen in our public spaces, but now there are whole flocks of people milling around with their eyes on their devices.

My cynical side is inclined to agree with the critics who see Pokémon Go’s proliferation as proof positive of the pacification and banalization of our society; the visions of Orwell, Bradbury, and Phil Dick all realized at once. But there’s something there that has me appreciative, even excited, about this goofy game. As my wife and I wandered our neighborhood looking for pocket monsters, we noticed several other people walking around staring at their phones. This is not an uncommon sight, but it is re-contextualized in light of Pokémon Go’s popularity. “Look,” my wife would say, “I bet they’re playing, too.” After a while she had to know for sure, and started walking up to people and asking, “Are you playing Pokémon Go?” Every person she asked was indeed playing the game. Then we were walking along with these people we’d just met, discussing play strategies, sharing Pokéstop locations, spreading word of upcoming lure parties.

One night around 10:30 last week we went into the Oakland neighborhood, home to both Pitt’s and Carnegie Mellon’s campuses and a hotbed of Pokémon Go activity. When we arrived, at least 20 people sat along the wall in front of the Soldiers & Sailors Memorial, smartphones in hand. We walked around the base of the Cathedral of Learning, where dozens of people in groups of two, three, or more were slowly pacing, stopping to capture a virtual creature. We crossed the street to Schenley Plaza, where still dozens more people trekked through the grass, laughing and exclaiming and running up to their friends to share which Pokémon they had just caught. Sure, most of these people were only talking to their own groups of friends, if they were talking at all, but it was still a cool experience. For me, the greatest thing was not which monsters I caught or the XP my avatar earned; rather, it was the energy, the unspoken but palpable buzz generated by all these people walking around in the dark of a warm summer night. Yes, I was giving attention to my smartphone screen, but what I remember most from that evening are the stars, and the fireflies, and the murmuring voices. Pokémon Go is promoting a sort of communal public activity, even if the sociality it produces is liminal at best. Yes, it is still shallow, still commercial, still programmed, but it’s something; there’s an energy there and a potential that is worth paying attention to.

Pokémon Go is not the be-all-end-all of augmented urban exploration, nor should it be considered the pinnacle of how mobile technology can enable new ways of interfacing with city space. But the game’s popularity, and my personal experience using it, have given me hope for the potential of AR apps to enrich our experience of urban spaces and engender new types of interactions in our shared environments.

McLuhan Monday: Print and Islam, mobile gaming medium theory, McLuhan’s relevance, and more


So, in the Muslim world, books and literacy became generally accessible (instead of being accessible only to the educated male and the wealthy) about a quarter of a millennium later than in European-Western culture. I found this information, together with an assessment of the damage this 250-year lag caused to Muslim society and culture, in the works of Muslim scholars.

This lag could be made up in the blink of an eye as the cultural world moved from Johannes Gutenberg’s galaxy into the era when “The medium is the message,” and with the development of the virtual and digital world (at the expense of the printed one, of course).

McLuhan had a lot of ideas and subsets of ideas. But he had one very big idea: that human civilization had passed through two stages of communication history, oral and print, and was embarking on another: electronic media. He believed the new media would change the way people relate to themselves and others and would change societies dramatically. Is the computer, then, the ubiquitous laptop and other devices, the McLuhan “audile-tactile” dream come true? There is no way to know. And it will take at least another 50 years to make a full evaluation of the work of Marshall McLuhan.

Taking a leaf from McLuhan then, I submit that the message is the product. The tone, approach and strategy of how marketing is conducted shapes what kinds of product can be allowed by a product’s developer. What kind of ad you’ll run determines what kind of game you’ll believe can work, and therefore what kind of game you’ll fund and make.

[…]

The medium is the message and the message is the product, remember. In Marvel’s case the medium of cinema sends the message of the big experience, and the message disseminated through a high value trailer leads to the will to make a high value product: a big splashy movie. That’s how it earned the right to be thought of as premium. That’s how games do that too.

When media guru Marshall McLuhan declared back in the 1960s that “Every innovation has within itself the seeds of its reversal,” I had no idea what he meant. But, like his other catchy quotables — “global village,” “cool media,” “the medium is the message” — it stayed with me. Now, in the Internet age, I am seeing proof of his prophecy every day.

For example, McLuhan predicted that a rapidly expanding automobile culture would lead to more traffic jams, air pollution and longing for space to take long walks or ride bicycles. I’m sure he’d give a knowing I-told-you-so nod to today’s battles between car people and bike people for asphalt space.

[…]

But more recently and less happily, I see far more sinister seeds of reversal in this era’s greatest innovation, the Internet. We greeted the Web as a liberator, but in today’s age of terrorism and post-Cold War autocrats it also poses a growing menace to the press freedoms it otherwise has invigorated.

Two common critiques of McLuhan are his obliviousness to political economy and his technological determinism. McLuhan’s prognosis on media appears to celebrate a burgeoning world order and global capitalism. The way he foreshadows cognitive capitalism appears deterministic. Critics attack McLuhan for being silent on the transformation of global capitalism. This criticism focuses on what McLuhan did not write in Understanding Media as opposed to what he did. It is interesting to note that European scholars, even those with political-economic inclinations, do not scorn McLuhan the way North Americans do. They do not blame him for being the messenger of a cognitive capitalist message.

[…]

McLuhan rightly described, and to some extent predicted, how messages need not be unidirectional. When he argued that technology is an extension of the senses, he did not argue that a select few had agency over the shaping of the message. He argued that any person had that potential. Specifically, he described how alternate modes of literacy allowed non-literary people to participate in a global discourse. This is McLuhan’s legacy and part of why his work should be celebrated today.

Ender’s Game analyzed, the Stanley Parable explored, Political Economy of zombies, semiotics of Twitter, much more

It’s been a long time since the last update (what happened to October?), so this post is extra long in an attempt to catch up.

  • I haven’t seen the new Ender’s Game movie, but this review by abbeyotis at Cyborgology calls the film “a lean and contemporary plunge into questions of morality mediated by technology”:

In a world in which interplanetary conflicts play out on screens, the government needs commanders who will never shrug off their campaigns as merely “virtual.” These same commanders must feel the stakes of their simulated battles to be as high as actual warfare (because, of course, they are). Card’s book makes the nostalgic claim that children are useful because they are innocent. Hood’s movie leaves nostalgia by the roadside, making the more complex assertion that they are useful because of their unique socialization to be intimately involved with, rather than detached from, simulations.

  • In the ongoing discourse about games criticism and its relation to film reviews, Bob Chipman’s latest Big Picture post uses his own review of the Ender’s Game film as an entry point for a breathless treatise on criticism. The video presents a concise and nuanced overview of arts criticism, from the classical era through film reviews as consumer reports up to the very much in-flux conceptions of games criticism. Personally I find this video sub-genre (where spoken content is crammed into a Tommy gun barrage of word bullets so that the narrator can convey a lot of information in a short running time) irritating and mostly worthless, since the verbal information is being presented faster than the listener can really process it. It reminds me of Film Crit Hulk, someone who writes excellent essays with obvious insight into filmmaking, but whose aesthetic choice (or “gimmick”) to write in all caps is often a distraction from the content and a deterrent to readers. Film Crit Hulk has of course addressed this issue and explained the rationale for this choice, but considering that his more recent articles have dropped the third-person “Hulk speak” writing style, the all caps seems to be played out. Nevertheless, I’m sharing the video because Mr. Chipman makes a lot of interesting points, particularly regarding the cultural contexts for the various forms of criticism. Just remember to breathe deeply and monitor your heart rate while watching.
  • In this video of a presentation titled Game design: the medium is the message, Jonathan Blow discusses how commercial constraints dictate the form of products from TV shows to video games.
  • This video from Satchbag’s Goods is ostensibly a review of Hotline Miami, but develops into a discussion of art movements and Kanye West:
  • This short interview with Slavoj Žižek in New York magazine continues a trend I’ve noticed since The Pervert’s Guide to Ideology has been in release, wherein writers interviewing Žižek feel compelled to include themselves and their reactions to/interactions with Žižek in their articles. Something about a Žižek encounter brings out the gonzo in journalists. The NY mag piece is also notable for this succinct positioning of Žižek’s contribution to critical theory:

Žižek, after all, the ­Yugoslav-born, Ljubljana-based academic and Hegelian; mascot of the Occupy movement, critic of the Occupy movement; and former Slovenian presidential candidate, whose most infamous contribution to intellectual history remains his redefinition of ideology from a Marxist false consciousness to a Freudian-Lacanian projection of the unconscious. Translation: To Žižek, all politics—from communist to social-democratic—are formed not by deliberate principles of freedom, or equality, but by expressions of repressed desires—shame, guilt, sexual insecurity. We’re convinced we’re drawing conclusions from an interpretable world when we’re actually just suffering involuntary psychic fantasies.

Following the development of the environment on the team’s blog you can see some of the gaps between what data was deemed noteworthy or worth recording in the seventeenth century and the level of detail we now expect in maps and other infographics. For example, the team struggled to pinpoint the exact location on Pudding Lane of the bakery where the Great Fire of London is thought to have originated and so just ended up placing it halfway along.

  • Stephen Totilo reviewed the new pirate-themed Assassin’s Creed game for the New York Times. I haven’t played the game, but I love that the sections of the game set in the present day have shifted from the standard global conspiracy tropes seen in the earlier installments to postmodern self-referential and meta-fictional framing:

Curiously, a new character is emerging in the series: Ubisoft itself, presented mostly in the form of self-parody in the guise of a fictional video game company, Abstergo Entertainment. We can play small sections as a developer in Abstergo’s Montreal headquarters. Our job is to help turn Kenway’s life — mined through DNA-sniffing gadgetry — into a mass-market video game adventure. We can also read management’s emails. The team debates whether games of this type could sell well if they focused more on peaceful, uplifting moments of humanity. Conflict is needed, someone argues. Violence sells.

It turns out that Abstergo is also a front for the villainous Templars, who search for history’s secrets when not creating entertainment to numb the population. In these sections, Ubisoft almost too cheekily aligns itself with the bad guys and justifies its inevitable 2015 Assassin’s Creed, set during yet another violent moment in world history.

The Stanley Parable wants you to think about it. The Stanley Parable, despite its very limited inputs (you can’t even jump, and very few objects are interactive) looks at those parts of first-person gaming that are least easy to design for – exploration and messing with the game’s engine – and foregrounds them. It takes the very limitations of traditional gaming narratives and uses them to ruthlessly expose their own flaws.

Roy’s research focus prior to founding Bluefin, and continued interest while running the company, has to do with how both artificial and human intelligences learn language. In studying this process, he determined that the most important factor in meaning making was the interaction between human beings: no one learns language in a vacuum, after all. That lesson helped inform his work at Twitter, which started with mapping the connection between social network activity and live broadcast television.

Aspiring to cinematic qualities is not bad in and of itself, nor do I mean to shame fellow game writers, but developers and their attendant press tend to be myopic in their point of view, both figuratively and literally. If we continually view videogames through a monocular lens, we miss much of their potential. And moreover, we begin to use ‘cinematic’ reflexively without taking the time to explain what the hell that word means.

Metaphor is a powerful tool. Thinking videogames through other media can reframe our expectations of what games can do, challenge our design habits, and reconfigure our critical vocabularies. To crib a quote from Andy Warhol, we get ‘a new idea, a new look, a new sex, a new pair of underwear.’ And as I hinted before, it turns out that fashion and videogames have some uncanny similarities.

Zombies started their life in the Hollywood of the 1930s and ‘40s as simplistic stand-ins for racist xenophobia. Post-millennial zombies have been hot-rodded by Danny Boyle and made into a subversive form of utopia. That grim utopianism was globalized by Max Brooks, and now Brad Pitt and his partners are working to transform it into a global franchise. But if zombies are to stay relevant, it will rely on the shambling monsters’ ability to stay subversive – and real subversive shocks and terror are not dystopian. They are utopian.

Ironically, our bodies now must make physical contact with devices dictating access to the real; Apple’s Touch ID sensor can discern for the most part if we are actually alive. This way, we don’t end up trying to find our stolen fingers on the black market, or prevent others from 3D scanning them to gain access to our lives.

This is a monumental shift from when Apple released its first iPhone just six years ago. It’s a touchy subject: fingerprinting authentication means we confer our trust in an inanimate object to manage our animate selves – our biology is verified, digitised, encrypted, as they are handed over to our devices.

Can you really buy heroin on the Web as easily as you might purchase the latest best-seller from Amazon? Not exactly, but as the FBI explained in its complaint, it wasn’t exactly rocket science, thanks to Tor and some bitcoins. Here’s a rundown of how Silk Road worked before the feds swooped in.

  • Henry Jenkins posted the transcript of an interview with Mark J.P. Wolf. The theme of the discussion is “imaginary worlds,” and they touch upon the narratology vs. ludology conflict in gaming; a toy sketch of the authored-choice structure Wolf describes follows the excerpt below:

The interactivity vs. storytelling debate is really a question of the author saying either “You choose” (interaction) or “I choose” (storytelling) regarding the events experienced; it can be all of one or all of the other, or some of each to varying degrees; and even when the author says “You choose”, you are still choosing from a set of options chosen by the author.  So it’s not just a question of how many choices you make, but how many options there are per choice.  Immersion, however, is a different issue, I think, which does not always rely on choice (such as immersive novels), unless you want to count “Continue reading” and “Stop reading” as two options you are constantly asked to choose between.
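
Wolf’s framing lends itself to a concrete illustration. Below is a toy sketch in Python (the story data is entirely hypothetical) of the structure he describes: the player “chooses,” but only ever from the set of options the author wrote into each node.

    story = {
        "start":    {"text": "A stranger blocks the road.",
                     "options": {"talk": "friendly", "run": "alone"}},
        "friendly": {"text": "You make an ally.", "options": {}},
        "alone":    {"text": "You travel on by yourself.", "options": {}},
    }

    def play(node="start"):
        while True:
            scene = story[node]
            print(scene["text"])
            if not scene["options"]:   # a leaf: the author wrote no further branches
                break
            # The player's "freedom" is exactly the keys of this dict:
            # the set of options chosen in advance by the author.
            choice = input(f"Choose one of {sorted(scene['options'])}: ")
            node = scene["options"].get(choice, node)

    play()

However large the tree grows, the player’s agency is bounded by its branching factor: how many options there are per choice, exactly as Wolf puts it.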

Manifesto for a Ludic Century, ludonarrative dissonance in GTA, games and mindf*cks, and more

Systems, play, design: these are not just aspects of the Ludic Century, they are also elements of gaming literacy. Literacy is about creating and understanding meaning, which allows people to write (create) and read (understand).

New literacies, such as visual and technological literacy, have also been identified in recent decades. However, to be truly literate in the Ludic Century also requires gaming literacy. The rise of games in our culture is both cause and effect of gaming literacy in the Ludic Century.

So, perhaps there is one fundamental challenge for the Manifesto for a Ludic Century: would a truly ludic century be a century of manifestos? Of declaring simple principles rather than embracing systems? Or, is the Ludic Manifesto meant to be the last manifesto, the manifesto to end manifestos, replacing simple answers with the complexity of “information at play?”

Might we conclude: videogames are the first creative medium to fully emerge after Marshall McLuhan. By the time they became popular, media ecology as a method was well-known. McLuhan was a popular icon. By the time the first generation of videogame players was becoming adults, McLuhan had become a trope. When the then-new publication Wired Magazine named him their “patron saint” in 1993, the editors didn’t even bother to explain what that meant. They didn’t need to.

By the time videogame studies became a going concern, McLuhan was gospel. So much so that we don’t even talk about him. To use McLuhan’s own language of the tetrad, game studies have enhanced or accelerated media ecology itself, to the point that the idea of studying the medium itself over its content has become a natural order.

Generally speaking, educators have warmed to the idea of the flipped classroom far more than that of the MOOC. That move might be injudicious, as the two are intimately connected. It’s no accident that private, for-profit MOOC startups like Coursera have advocated for flipped classrooms, since those organizations have much to gain from their endorsement by universities. MOOCs rely on the short, video lecture as the backbone of a new educational beast, after all. Whether in the context of an all-online or a “hybrid” course, a flipped classroom takes the video lecture as a new standard for knowledge delivery and transfers that experience from the lecture hall to the laptop.

  • Also, with increased awareness of Animal Crossing following the latest game’s release for the Nintendo 3DS, Bogost recently posted an excerpt from his 2007 book Persuasive Games discussing consumption and naturalism in Animal Crossing:

Animal Crossing deploys a procedural rhetoric about the repetition of mundane work as a consequence of contemporary material property ideals. When my (then) five-year-old began playing the game seriously, he quickly recognized the dilemma he faced. On the one hand, he wanted to spend the money he had earned from collecting fruit and bugs on new furniture, carpets, and shirts. On the other hand, he wanted to pay off his house so he could get a bigger one like mine.

Ludonarrative dissonance is when the story the game is telling you and your gameplay experience somehow don’t match up. As an example, this was a particular issue in Rockstar’s most recent game, Max Payne 3. Max constantly makes remarks about how terrible he is at his job, even though he does more than is humanly possible to try to protect his employers – including making perfect one-handed head shots in mid-air while drunk and high on painkillers. The disparity and the dissonance between the narrative of the story and the gameplay leave things feeling off kilter and poorly interconnected. It doesn’t make sense or fit with your experience, so it feels wrong and damages the cohesiveness of the game world and story. It’s like when you go on an old-lady-only murdering spree as Niko, who is supposed to be a reluctant killer with a traumatic past, not a gerontophobic misogynist.

What I find strange, in light of our supposed anti-irony cultural moment, is a kind of old-fashioned ironic conceit behind a number of recent critical darlings in the commercial videogame space. 2007’s Bioshock and this year’s Bioshock: Infinite are both about the irony of expecting ‘meaningful choice’ to live in an artificial dome of technological and commercial constraints. Last year’s Spec Ops: The Line offers a grim alchemy of self-deprecation and preemptive disdain for its audience. The Grand Theft Auto series has always maintained a cool, dismissive cynicism beneath its gleefully absurd mayhem. These games frame choice as illusory and experience as artificial. They are expensive, explosive parodies of free will.

To cut straight to the heart of it, Bioshock seems to suffer from a powerful dissonance between what it is about as a game, and what it is about as a story. By throwing the narrative and ludic elements of the work into opposition, the game seems to openly mock the player for having believed in the fiction of the game at all. The leveraging of the game’s narrative structure against its ludic structure all but destroys the player’s ability to feel connected to either, forcing the player to either abandon the game in protest (which I almost did) or simply accept that the game cannot be enjoyed as both a game and a story, and to then finish it for the mere sake of finishing it.

The post itself makes a very important point: games, for the most part, can’t pull the Mindfuck like movies can because of the nature of the kind of storytelling to which most games are confined, which is predicated on a particular kind of interaction. Watching a movie may not be an entirely passive experience, but it’s clearly more passive than a game. You may identify with the characters on the screen, but you’re not meant to implicitly think of yourself as them. You’re not engaging in the kind of subtle roleplaying that most (mainstream) games encourage. You are not adopting an avatar. In a game, you are your profile, you are the character you create, and you are also to a certain degree the character that the game sets in front of you. I may be watching everything Lara Croft does from behind her, but I also control her; to the extent that she has choices, I make them. I get her from point A to B, and if she fails it’s my fault. When I talk about something that happened in the game, I don’t say that Lara did it. I say that I did.

Anachrony is a common storytelling technique in which events are narrated out of chronological order. A familiar example is a flashback, where story time jumps to the past for a bit, before returning to the present. The term “nonlinear narrative” is also sometimes used for this kind of out-of-order storytelling (somewhat less precisely).

While it’s a common technique in literature and film, anachrony is widely seen as more problematic to use in games, perhaps even to the point of being unusable. If the player’s actions during a flashback scene imply a future that differs considerably from the one already presented in a present-day scene (say, the player kills someone who they had been talking to in a present-day scene, or commits suicide in a flashback), this produces an inconsistent narrative. The root of the problem is that players generally have a degree of freedom of action, so flashbacks are less like the case in literature and film—where already decided events are simply narrated out of order—and more like time travel, where the player travels back in time and can mess up the timeline.
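
To make the design problem concrete, here is a minimal sketch in Python (scene and fact names invented for illustration) of one defensive approach a game might take: treat the present-day scenes as a set of established facts, and validate flashback actions against them instead of letting the player fork the timeline.

    established_facts = {
        "mentor_alive_in_present",        # the player spoke to the mentor in a present-day scene
        "protagonist_alive_in_present",
    }

    # Which flashback actions would contradict which established facts.
    CONTRADICTIONS = {
        "kill_mentor": "mentor_alive_in_present",
        "commit_suicide": "protagonist_alive_in_present",
    }

    def attempt_flashback_action(action):
        blocked_by = CONTRADICTIONS.get(action)
        if blocked_by in established_facts:
            # Deflect diegetically ("that isn't how it happened")
            # rather than allow an inconsistent narrative.
            return f"{action!r} rejected: it would contradict {blocked_by!r}"
        return f"{action!r} plays out in the flashback"

    print(attempt_flashback_action("kill_mentor"))    # rejected
    print(attempt_flashback_action("spare_mentor"))   # allowed

The trade-off is the one described above: the more facts the present has pinned down, the less freedom the flashback can offer before it starts to feel on-rails.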

The first of the books is set to be published in early 2014. Some of the writers that will be published by Press Select in its first round have written for publications like Edge magazine, Kotaku, Kill Screen, and personal blogs; they include Chris Dahlen, Michael Abbott, Jenn Frank, Jason Killingsworth, Maddy Myers, Tim Rogers, Patricia Hernandez, and Robert Yang.

Inside Korea’s gaming culture, virtual worlds and economic modeling, Hollywood’s Summer of Doom continues, and more

  • I’ve long been fascinated by the gaming culture in South Korea, and Tom Massey has written a great feature piece for Eurogamer titled Seoul Caliber: Inside Korea’s Gaming Culture. From this westerner’s perspective, having never visited Korea, the article reads almost more like cyberpunk fiction than games journalism:

Not quite as ubiquitous, but still extremely common, are PC Bangs: LAN gaming hangouts where 1000 Won nets you an hour of multiplayer catharsis. In Gangnam’s Maxzone, overhead fans rotate at Apocalypse Now speed, slicing cigarette smoke as it snakes through the blades. Korea’s own NCSoft, whose European base is but a stone’s throw from the Eurogamer offices, is currently going strong with its latest MMO, Blade & Soul.

“It’s relaxing,” says Min-Su, sipping a Milkis purchased from the wall-mounted vending machine. “And dangerous,” he adds. “It’s easy to lose track of time playing these games, especially when you have so much invested in them. I’m always thinking about achieving the next level or taking on a quick quest to try to obtain a weapon, and the next thing I know I’ve been here for half the day.”


Creation and simulation in virtual worlds appear to offer the best domain to test the new ideas required to tackle the very real problems of deprivation, inequality, unemployment, and poverty that exist in national economies. On that note, the need to see our socioeconomic institutions for the games that they really are seems even more poignant.

In the words of Vili Lehdonvirta, a leading scholar in virtual goods and currencies, the suffering we see today is “not some consequence of natural or physical law” it instead “is a result of the way we play these games.”

The global economy seems to be bifurcating into a rich/tech track and a poor/non-tech track, not least because new technology will increasingly destroy/replace old non-tech jobs. (Yes, global. Foxconn is already replacing Chinese employees with one million robots.) So far so fairly non-controversial.

The big thorny question is this: is technology destroying jobs faster than it creates them?

[…]

We live in an era of rapid exponential growth in technological capabilities. (Which may finally be slowing down, true, but that’s an issue for decades hence.) If you’re talking about the economic effects of technology in the 1980s, much less the 1930s or the nineteenth century, as if it has any relevance whatsoever to today’s situation, then you do not understand exponential growth. The present changes so much faster that the past is no guide at all; the difference is qualitative, not just quantitative. It’s like comparing a leisurely walk to relativistic speeds.

We begin with a love story–from a man who unwittingly fell in love with a chatbot on an online dating site. Then, we encounter a robot therapist whose inventor became so unnerved by its success that he pulled the plug. And we talk to the man who coded Cleverbot, a software program that learns from every new line of conversation it receives…and that’s chatting with more than 3 million humans each month. Then, five intrepid kids help us test a hypothesis about a toy designed to push our buttons, and play on our human empathy. And we meet a robot built to be so sentient that its creators hope it will one day have a consciousness, and a life, all its own.

“These outages are absolutely going to continue,” said Neil MacDonald, a fellow at technology research firm Gartner. “There has been an explosion in data across all types of enterprises. The complexity of the systems created to support big data is beyond the understanding of a single person and they also fail in ways that are beyond the comprehension of a single person.”

From high volume securities trading to the explosion in social media and the online consumption of entertainment, the amount of data being carried globally over the private networks, such as stock exchanges, and the public internet is placing unprecedented strain on websites and on the networks that connect them.

What I want is systems that have intrinsic rewards; that are disciplines similar to drawing or playing a musical instrument. I want systems which are their own reward.

What videogames almost always give me instead is labor that I must perform for an extrinsic reward. I want to convince you that not only is this not what I want, this isn’t really what anyone wants.

This ‘celebrification’ is enlivening making games and giving players role models, drawing more people into development, especially indie and auteured games. This shift is proving more prosperous than any Skillset-accredited course or government pot could ever hope for. We are making men sitting in pants at their laptops for 12 hours a day as glamorous as it could be.

Creating luminaries will lead to all the benefits that more people in games can bring: a bigger and brighter community, plus new and fresh talent making exciting games. However, celebritydom demands storms, turmoil and gossip.

Spielberg’s theory is essentially that a studio will eventually go under after it releases five or six bombs in a row. The reason: budgets have become so gigantic. And, indeed, this summer has been full of movies with giant budgets and modest grosses, all of which has elicited hand-wringing about financial losses, the lack of a quality product (another post-apocalyptic thriller? more superheroes?), and a possible connection between the two. There has been some hope that Hollywood’s troubles will lead to a rethinking of how movies get made, and which movies get greenlit by studio executives. But a close look at this summer’s grosses suggest a more worrisome possibility: that the studios will become more conservative and even less creative.

Bogost on Facebook feudalism, narrative possibilities in games, the gamification of sex

The short truth is this: Facebook doesn’t care if developers can use the platform easily or at all. In fact, it doesn’t seem to concern itself with any of the factors that might be at play in developers’ professional or personal circumstances. The Facebook Platform is a selfish, self-made altar to Facebook, at which developers are expected to kneel and cower, rather than a generous contribution to the success of developers that also happens to benefit Facebook by its aggregate effects.

A lot of reactions to the narrative of [Bioshock] Infinite that I encountered were that it “didn’t make sense,” and that it was “being weird for the sake of being weird.”

Those reminded me of criticisms leveled at two of my favorite filmmakers: David Lynch and Stanley Kubrick. I think these comments arise because Infinite doesn’t go all the way; it hesitates. It tries to stick to conventional logic. It strews about Voxophones to explain its abstractions.

  • Shujaat Syed at Player Effort writes about “making linear storytelling interesting in video games by acknowledging the fourth wall”:

At their core, video games are authoritarian. They have rules that need to be followed, and you are restricted to the gameplay systems and a story the programmers and designers have created. However, compared to other forms of media, they offer a breadth of freedom that is unmatched. I will not be speaking about the freedom of exploration. What I will be talking about is the freedom of creating a different type of narrative that is only possible through video games by breaking the fourth wall between the game and the player. This is one of our medium’s greatest advantages; however, very rarely is this power explored. With video games, we can have truly powerful forms of narrative, but at most we get ideas that could theoretically work as movies. Open-world sandbox games can dodge this because the player is free to create their own narrative alongside the main plotline, and this is a concept that is entirely unique to video games. It’s the linear story-based games where the narrative is usually much harder to distinguish from what you would get from a book or movie.

In addition to registering your decibel levels (I’m hoping mine will get a boost from the garbage truck always idling outside my window), Spreadsheets will also monitor your overall duration, frequency, and somehow, thrusts per minute. Apparently this does not require supplementary electrodes.

What’s more, you can unlock “badges” and the like. For example, to meet the “Hello Sunshine” achievement, worth 10 points, you must take on the ultimate challenge of our time: “perform morning sex.”

Multiple angles on gamification

  • This week my fiancée told me about an app she had recently installed on her phone. As she excitedly described it, users of the app can “check in” at a retail store (it sounded like your location is verified through GPS) and receive points for doing so, presumably to redeem toward store purchases, though I don’t recall all the details. I should also mention that this app is not Foursquare, though I am not sure how the two apps differ specifically. Apps like this exemplify the gamification trend in marketing and advertising; a bare-bones sketch of the basic check-in mechanic follows the excerpt below. There is an entire wiki dedicated to gamification, with detailed pages like this one describing the various game mechanics used in gamification.

Gamification applies basic game thinking and game mechanics to a non-gaming context. Many gamification models reward users for participating, completing defined user tasks, or achieving goals. A great example is Foursquare, which awards points and perks for “checking-in” to places you go. Although some models introduce distinguishable game-related features, gamification of online shopping includes any type of game thinking applied to an online shopping model.

Gamification makes things fun because it taps into our basic human appetite for competition, stature, and social interaction. Rather than feeling tricked or manipulated, we feel a sense of control when participating in transparent game-oriented shopping. As a result, shopping becomes more exciting and rewarding, while increasing highly sought-after engagement and customer loyalty for retailers and brands.
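
For the curious, here is that sketch: a minimal Python model of the check-in loop the excerpt describes, where a GPS-verified visit earns points and accumulated points unlock a perk. The store coordinates, point values, and thresholds are all invented for illustration.

    import math

    STORES = {"corner_bookshop": (40.4406, -79.9959)}   # store -> (lat, lon)
    POINTS_PER_CHECKIN = 10
    REWARD_THRESHOLD = 50

    def close_enough(pos, store_pos, max_km=0.1):
        # Crude equirectangular distance check; adequate at storefront scales.
        dlat = (pos[0] - store_pos[0]) * 111.0
        dlon = (pos[1] - store_pos[1]) * 111.0 * math.cos(math.radians(store_pos[0]))
        return math.hypot(dlat, dlon) <= max_km

    def check_in(points, store, pos):
        if store in STORES and close_enough(pos, STORES[store]):
            points += POINTS_PER_CHECKIN        # the "game" reward for a real-world visit
            if points >= REWARD_THRESHOLD:
                print("Perk unlocked: redeem at the register.")
        return points

    points = check_in(0, "corner_bookshop", (40.4407, -79.9958))   # verified visit: +10

Everything persuasive about the mechanic lives in those two constants: how much a visit is worth, and how far away the next perk sits.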

  • This LinkedIn post by Dan Sanker describes gamification as “the application of game elements and digital game design techniques to non-game problems” and considers potential applications:

Small tools, influenced by simple game mechanics, can be used to modify people’s behavior. […] There is a long way to go to make some mundane tasks more engaging. I think the paradigm that rang true the most this week, especially after talking with the kids about their experiences, is that we need to start thinking about customers, consumers, employees and/or students less as ‘Users’ and more as ‘Players.’ Are there ways to enjoy the experience of buying, procuring, working and learning? It might be a better way for us to consider interacting with Generation Z and those who come after them.

But gamification hasn’t just grabbed the attention of the corporate world. Teachers are trying to make learning more fun by introducing games into the classroom in the hope of keeping children engaged for longer. This made me think about how many banks, building societies and other financial services providers are using gamification to encourage kids to start saving or educate them about money.

In the minds of Silicon Valley’s eternal optimists, and the journalists who so unconditionally love them, gamification is the possibility of rendering intricately complex processes, such as education or health care, more effective by transforming them into games. If kids aren’t reading, goes the gamified mantra, perhaps some friendly competitive system of badges and leaderboards might provide the missing incentive. And if adults are getting a tad too heavy, just slap a gizmo on their wrists that challenges them to burn more and more calories each day and they’ll play along.

As a professor of video games, I’ve strong doubts that the same principles that compel us to save Princess Zelda or defeat Donkey Kong apply in the classroom, the boardroom, or the emergency room. Like most game scholars, I view gamification as the creation of the TED-circle nabobs, largely empty, feel-good fodder for the intellectually limp. But the idea isn’t totally useless: There are some special categories of human events, rare and far-between, whose own innate absurdities are so profound that a touch of gamification might actually do them good.

I’m talking, of course, about the Israeli-Palestinian peace process.

We are naturally drawn to entertaining, visually appealing, easily digestible information sources and the power is in our hands to choose who, when, where and on what we will engage.  Witness the rise of video consumption on mobile as part of this trend.

Gamification may be the answer but the problem is that businesses can rush into it without necessarily lifting the bonnet to see what is making it work. There are a number of services putting their hands up to execute it for you but executing without a clear view of what motivates your audience can and will prove fatal.

The concept of gamifying products and services came into being when marketers realised that loyalty programmes are becoming too banal to retain consumers. A number of leading brand names, including Hungama, Zapak, Adobe and Microsoft, have used the concept successfully to create a habit of their product amongst users.

Microsoft created a unique gamified tool that allowed users to learn the new MS Office applications and earn rewards, thus making the whole process interactive.

Epic EVE battle, Critical games criticism, indie developer self-publishing

  • I’ve never played EVE Online, and I don’t even really understand how it works, but I find it fascinating. Last week saw the biggest battle in the game’s history. This breakdown from The Verge is headlined like a real-life dispatch from the frontier of mankind’s space-faring endeavors: Largest space battle in history claims 2,900 ships, untold virtual lives

Update, 9:18PM ET: The battle is over. After more than five hours of combat, the CFC has defeated TEST Alliance. Over 2,900 ships were destroyed today in the largest fleet battle in Eve Online’s history. TEST Alliance intended to make a definitive statement in 6VDT, but their defeat at the hands of the CFC was decisive and will likely result in TEST’s withdrawal from the Fountain region.

In a conversation with Whitten, he told us that the commitment to independent developers is full. There won’t be restrictions on the type of titles that can be created, nor will there be limits in scope. In response to a question on whether retail-scale games could be published independently, Whitten told us, “Our goal is to give them access to the power of Xbox One, the power of Xbox Live, the cloud, Kinect, Smartglass. That’s what we think will actually generate a bunch of creativity on the system.” With regard to revenue splitting with developers, we were told that more information will be coming at Gamescom, but that we could think about it “generally like we think about Marketplace today.” According to developers we’ve spoken with, that split can be approximately 50-50.

Another difference between the Xbox One and Xbox 360 is how the games will be published and bought by other gamers. Indie games will not be relegated to the Xbox Live Indie Marketplace like on the Xbox 360 or required to have a Microsoft-certified publisher to distribute physically or digitally outside the Indie Marketplace. All games will be featured in one big area with access to all kinds of games.

If anything has hurt modern video game design over the past several years, it has been the rise of ‘freemium‘. It seems that it is rare to see a top app or game in the app stores with a business model other than ‘free-to-play with in-app purchases’. It has been used as an excuse to make lazy, poorly designed games that are predicated on taking advantage of psychological triggers in their players, and it will have negative long-term consequences for the video game industry if left unchecked.

Many freemium games are designed around the idea of conditioning players to become addicted to playing the game. Many game designers want their games to be heavily played, but in this case the freemium games are designed to trigger a ‘reward’ state in the player’s brain in order to keep the player playing (and ultimately entice the user to make in-app purchases to continue playing). This type of conditioning is often referred to as a ‘Skinner box‘, named after the psychologist who created laboratory boxes used to perform behavioral experiments on animals.
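
The conditioning loop being described is easy to caricature in a few lines of Python. This is only a sketch (the payout probability is invented), but it captures the essence of a variable-ratio reward schedule: the Skinner-box pattern in which each action has a small, unpredictable chance of paying out, so the next reward always feels one tap away.

    import random

    def tap():
        """One in-game action; rewards arrive unpredictably (variable ratio)."""
        return random.random() < 0.15

    taps = rewards = 0
    while rewards < 3:              # the player keeps going, chasing the next hit
        taps += 1
        if tap():
            rewards += 1
            print(f"Reward on tap {taps}")   # the intermittent payoff sustains the loop

Swap “print a reward” for “offer an in-app purchase when the player runs out of taps” and you have the monetization model the article is criticizing.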

It obviously isn’t beyond the realm of possibility that, not only do financial considerations influence a game’s structure and content, financial outcomes affect a studio’s likelihood of survival in the industry, based upon the machinations of its publishing overlords. Activision killed Bizarre Creations, Eidos ruined Looking Glass Studios, EA crushed Westwood, Pandemic, Bullfrog, Origin Systems… well, the list could go on, until I turn a strange, purple color, but you get my point. And, when 3.4 million copies sold for a Tomb Raider reboot isn’t enough by a publisher’s standards, you can’t help but feel concern for a developer’s future.

This relationship between environment-learner-content interaction and transfer puts teachers in the unique position to capitalize on game engagement to promote reflection that positively shapes how students tackle real-world challenges. To some, this may seem like a shocking concept, but it’s definitely not a new one—roleplay as instruction, for example, was very popular among the ancient Greeks and, in many ways, served as the backbone for Plato’s renowned Allegory of the Cave. The same is true of Shakespeare’s works, 18th and 19th century opera, and many of the novels, movies, and other media that define our culture. More recently, NASA has applied game-like simulations to teach astronauts how to maneuver through space, medical schools have used them to teach robotic surgery, and the Federal Aviation Administration has employed them to test pilots.

The relationships between the creator, the product, and the audience are all important contexts to consider during media analysis, especially with games. This is because the audience is an active participant in the media. So if you are creating a game, you always have to keep the audience in mind. Even if you say the audience doesn’t matter to you, it won’t cease to exist, and that does not erase the impact your game will have.

Similarly, if you are critiquing or analyzing any media, you can’t ignore the creator and the creator’s intentions. Despite those who claim the “death of the author,” if the audience is aware of the creator’s intentions, it can affect how they perceive the game. Particularly, if you consider the ease with which creators can release statements talking about their work, you’ll have an audience with varying levels of awareness about the creator’s intentions. These factors all play off of each other–they do not exist in a vacuum.

When we talk about any medium’s legitimacy, be it film or videogames or painting, it’s a very historical phenomenon that is inextricably tied to its artness that allows for them to get in on the ground floor of “legitimate” and “important.” So if we contextualize the qualities that allowed for film or photography to find themselves supported through a panoply of cultural institutions it was a cultural and political economic process that lead them there.

[…]

Videogames, the kind that would be written about in 20 dollar glossy art magazines, would be exactly this. When creators of videogames want to point to their medium’s legitimacy, it would help to have a lot of smart people legitimate your work in a medium (glossy magazines, international newspapers) that you consider to be likewise legitimate. Spector concedes that ‘yes all the critics right now are online’, but the real battle is in getting these critics offline and into more “legitimate” spaces of representation. It’s a kind of unspoken hierarchy of mediums that is dancing before us here: at each step a new gatekeeper steps into play, both legitimating and separating the reader from the critic and the object of criticism.

All three games define fatherhood around the act of protection, primarily physical protection. And in each of these games, the protagonist fails—at least temporarily—to protect their ward. In Ethan’s case, his cheery family reflected in his pristine home collapses when he loses a son in a car accident. Later, when his other son goes missing, the game essentially tests Ethan’s ability to reclaim his protective-father status.

No video game grants absolute freedom; they all have rules or guidelines that govern what you can and can’t do. The sci-fi epic Mass Effect is a series that prides itself on choice, but even that trilogy ends on a variation of choosing between the “good” and “bad” ending. Minecraft, the open-world creation game, is extremely open-ended, but you can’t build a gun or construct a tower into space because it doesn’t let you. BioShock’s ending argues that the choices you think you’re making in these games don’t actually represent freedom. You’re just operating within the parameters set by the people in control, be they the developers or the guy in the game telling you to bash his skull with a golf club.

BioShock’s disappointing conclusion ends up illustrating Ryan’s point. A man chooses, a player obeys. It’s a grim and cynical message that emphasizes the constraints of its own art form. And given that the idea of choice is so important to BioShock’s story, I don’t think it could’ve ended any other way.

Hacker’s death, wearable tech, and some Cyberpunk

His genius was finding bugs in the tiny computers embedded in equipment, such as medical devices and cash machines. He often received standing ovations at conferences for his creativity and showmanship while his research forced equipment makers to fix bugs in their software.

Jack had planned to demonstrate his techniques to hack into pacemakers and implanted defibrillators at the Black Hat hackers convention in Las Vegas next Thursday. He told Reuters last week that he could kill a man from 30 feet away by attacking an implanted heart device.

Without the right approach, the continual distraction of multiple tasks exerts a toll that disrupts performance. It takes time to switch tasks, to get back what attention theorists call “situation awareness.” Interruptions disrupt performance, and even a voluntary switching of attention from one task to another is an interruption of the task being left behind.

Furthermore, it will be difficult to resist the temptation of using powerful technology that guides us with useful side information, suggestions, and even commands. Sure, other people will be able to see that we are being assisted, but they won’t know by whom, just as we will be able to tell that they are being minded, and we won’t know by whom.

9am to 1pm: Throughout the day you connect to your Dekko-powered augmented reality device, which overlays your vision with a broad range of information and entertainment. While many of the products the US software company is proposing are currently still fairly conceptual, Dekko hopes to find ways to integrate an extra layer of visual information into every part of daily life. Dekko is one of the companies supplying software to Google Glass, the wearable computer that gives users information through a spectacle-like visual display. Matt Miesnieks, CEO of Dekko, says that he believes “the power of wearables comes from connecting our senses to sensors.”

Researchers at Belgian nanoelectronics research and development center Imec and Belgium’s Ghent University are in the very early stages of developing such a device, which would bring augmented reality–the insertion of digital imagery such as virtual signs and historical markers into the real world–right to your eyeballs. It’s just one of several such projects (see “Contact Lens Computer: It’s Like Google Glass Without The Glasses”), and while the idea is nowhere near the point where you could ask your eye doctor for a pair, it could become more realistic as the cost and size of electronic components continue to fall and wearable gadgets gain popularity.

Speaking on the sidelines of the Wearable Technologies conference in San Francisco on Tuesday, Eric Dy, Imec’s North America business development manager, said researchers are investigating the feasibility of integrating an array of micro lenses with LEDs, using the lenses to help focus light and project it onto the wearer’s retinas.

The biggest barrier, beyond the translation itself, is speech recognition. In so many words, background noise interferes with the translation software, thus affecting results. But Barra said it works “close to 100 percent” when used in “controlled environments.” Sounds perfect for diplomats, not so much for real-world conversations. Of course, Google’s non-real-time, text-based translation software built into Chrome leaves quite a bit to be desired, making us all the more wary of putting our faith in Google’s verbal solution. As the functionality is still “several years away,” though, there’s still plenty of time to convert us.
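
The architecture behind that claim is worth making concrete: conversation mode is essentially speech recognition chained into machine translation, so any noise that corrupts the first stage poisons the second. Here’s a minimal sketch of that two-stage chain, using the third-party SpeechRecognition and googletrans Python packages as stand-ins; it illustrates the pipeline, not Google’s actual implementation.

```python
# Minimal sketch of the recognize-then-translate chain described above.
# Uses the third-party SpeechRecognition and googletrans packages as
# stand-ins; this is an illustration of the pipeline, not Google's code.
import speech_recognition as sr
from googletrans import Translator

recognizer = sr.Recognizer()
translator = Translator()

with sr.Microphone() as source:
    # Background noise is the weak link, so sample it first and
    # adjust the recognizer's energy threshold before listening.
    recognizer.adjust_for_ambient_noise(source, duration=1)
    audio = recognizer.listen(source)

# Stage 1: speech recognition, the part that only approaches
# "close to 100 percent" accuracy in controlled environments.
english_text = recognizer.recognize_google(audio, language="en-US")

# Stage 2: machine translation of whatever stage 1 produced.
translated = translator.translate(english_text, src="en", dest="es")
print(translated.text)
```

Chaining the stages is also why the failure mode is so ungraceful: a single misheard word in a noisy room gets translated just as confidently as a correct one.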

There will be limitations, however. It’s easy to think that a life-sized human being, standing in your living room, would be capable of giving you a hug, for instance. But if that breakthrough is coming, it hasn’t arrived yet. Holodeck creations these are not. And images projected through the magic of HoloVision won’t be able to follow you into the kitchen for a snack either — not unless you’ve got a whole network of HoloVision cameras, anyway.

The implications of Euclid’s technology do not stop at surveillance or privacy. Remember, these systems are meant to feed data to store owners so that they can rearrange store shelves or entire showroom floors to increase sales. Malls, casinos, and grocery stores have always been carefully planned-out spaces, scientifically arranged and calibrated for maximum profit at minimal cost. Euclid’s systems, however, allow for massive and exceedingly precise quantification and analysis. More than anything, what worries me is the deliberateness of these augmented spaces. Euclid will make spaces designed to do exactly one thing almost perfectly: sell you shit you don’t need. I worry about spaces that are as expertly and diligently designed as Amazon’s home page or the latest Pepsi advertisement. A space built on data so rich and thorough that it’ll make focus groups look quaint in comparison.
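
For a sense of how mechanically simple the core of this is, here’s a toy sketch of the sort of aggregation such a system might run, folding raw device sightings (Euclid reportedly works from the wi-fi signals shoppers’ phones give off) into per-visit dwell times. The session-gap threshold and the hashing scheme are my assumptions for illustration, not Euclid’s documented pipeline.

```python
# Toy sketch of retail-analytics aggregation: turn raw (device, timestamp)
# sightings from in-store sensors into per-visit dwell times.
# The 30-minute session gap and the hashing are illustrative assumptions.
import hashlib
from collections import defaultdict

SESSION_GAP_S = 30 * 60  # sightings further apart count as separate visits

def dwell_times(sightings):
    """sightings: iterable of (device_id, unix_ts); returns seconds per visit."""
    by_device = defaultdict(list)
    for device_id, ts in sightings:
        # Hash the identifier so the store only keeps a pseudonym.
        pseudonym = hashlib.sha256(device_id.encode()).hexdigest()
        by_device[pseudonym].append(ts)

    visits = []
    for stamps in by_device.values():
        stamps.sort()
        start = prev = stamps[0]
        for ts in stamps[1:]:
            if ts - prev > SESSION_GAP_S:
                visits.append(prev - start)  # close out the previous visit
                start = ts
            prev = ts
        visits.append(prev - start)
    return visits
```

Even this toy version shows why “anonymized” is cold comfort: the pseudonym is stable, so the longitudinal record of when and how long you visit survives intact.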

Of course the US is not a totalitarian society, and no equivalent of Big Brother runs it, as the widespread reporting of Snowden’s information shows. We know little about what uses the NSA makes of most information available to it—it claims to have exposed a number of terrorist plots—and it has yet to be shown what effects its activities may have on the lives of most American citizens. Congressional committees and a special federal court are charged with overseeing its work, although they are committed to secrecy, and the court can hear appeals only from the government.

Still, the US intelligence agencies also seem to have adopted Orwell’s idea of doublethink—“to be conscious of complete truthfulness,” he wrote, “while telling carefully constructed lies.” For example, James Clapper, the director of national intelligence, was asked at a Senate hearing in March whether “the NSA collect[s] any type of data at all on millions or hundreds of millions of Americans.” Clapper’s answer: “No, sir…. Not wittingly.”

The drone is carrying a laptop so it can communicate with the headset, but right now the sticking point is range; since it’s using wi-fi to communicate, it’ll only get to around 50-100m.
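
That 50–100m ceiling survives a back-of-the-envelope check. Free-space path loss at wi-fi’s 2.4GHz band eats the link budget quickly, and with typical consumer radio numbers (the transmit power and receiver sensitivity below are representative assumptions, not specs from this rig) the margin is nearly gone by 200m:

```python
# Back-of-the-envelope wi-fi range check. Transmit power and receiver
# sensitivity are assumed, typical consumer values, not measurements
# from this particular drone setup.
import math

def fspl_db(distance_m: float, freq_hz: float) -> float:
    """Free-space path loss in dB for a given distance and frequency."""
    return 20 * math.log10(distance_m) + 20 * math.log10(freq_hz) - 147.55

TX_POWER_DBM = 20         # typical consumer wi-fi radio
RX_SENSITIVITY_DBM = -70  # rough floor for a usable video bitrate

for d in (50, 100, 200):
    margin = TX_POWER_DBM - fspl_db(d, 2.4e9) - RX_SENSITIVITY_DBM
    print(f"{d:>4} m: {margin:+5.1f} dB of link margin")
```

Fading, antenna losses, and a moving airframe can easily cost another 10dB in practice, which lands you right back in the quoted 50–100m range.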

“It’s not a video game movie, it’s a cyberpunk movie,” Cargill said. “Eidos Montreal has given us a lot of freedom in terms of story; they want this movie to be Blade Runner. We want this movie to be Blade Runner.”

INTERVIEWER

There’s a famous story about your being unable to sit through Blade Runner while writing Neuromancer.

GIBSON

I was afraid to watch Blade Runner in the theater because I was afraid the movie would be better than what I myself had been able to imagine. In a way, I was right to be afraid, because even the first few minutes were better. Later, I noticed that it was a total box-office flop, in first theatrical release. That worried me, too. I thought, Uh-oh. He got it right and nobody cares! Over a few years, though, I started to see that in some weird way it was the most influential film of my lifetime, up to that point. It affected the way people dressed, it affected the way people decorated nightclubs. Architects started building office buildings that you could tell they had seen in Blade Runner. It had had an astonishingly broad aesthetic impact on the world.

The concept was formally introduced in William Gibson’s 1984 cyberpunk novel, NEUROMANCER. Although this first novel swept the Triple Crown of science fiction–the Hugo, the Nebula, and the Philip K. Dick awards–it is not really science fiction. It could be called “science faction” in that it occurs not in another galaxy in the far future, but 20 years from now, in a BLADE RUNNER world just a notch beyond our silicon present.

In Gibson’s Cyberworld there is no warp drive and no “beam me up, Scotty.” The high technology is the stuff that appears on today’s screens or that processes data in today’s laboratories: super-computer boards, recombinant DNA chips, AI systems and enormous data banks controlled by multinational combines based in Japan and Zurich.

Thoughts on Oculus Rift, modding, and assessing games journalism and criticism

Gaming journalism is, by some accounts, a broken field. By others, its unjournalistic process is a symptom of reporting online, where advertising revenue is minimal, at least when compared to revenue from newspapers or magazines. And that isn’t exclusive to gaming journalism: most outlets, both online and in print, face an uncertain future under the weight of a change in the way we absorb news and opinion. (The change is evident when you account for how many sites have recently undergone a redesign to better accommodate tablets: USgamer, Kotaku, and Polygon, among others.)

[…]

That’s why the gaming press seems like a corrupt industry, when it should be incorruptible. Corporate apologetics, publisher-granted exclusive reviews, mostly non-hard-hitting, superfluous bits to appease the companies: all of this is how modern journalism operates. (As an experiment, check notable outlets or magazines and look for the term “sponsored content”. More sites do it than you’d think.) But when the revenue stream is one-tenth of historical norms, journalists must find ways to continue writing, and that sometimes involves looking for sponsors. It’s not optimal, it’s not prestigious, it goes against everything I learned in journalism school, but hey, money rules the world.

Initially, I’m excited about using it for actors: there’s no reason it can’t work directly with the MVN mocap suits we use, and having actors able to see the virtual environment they’re acting in is a pretty mind-blowing concept. I may need to invest in a supply of sick-bags, though…

I’m also working on a virtual camera for the Rift, some tests of aiming cameras WITH MY FACE, the previously-mentioned preview suite, and more. Look for a post specifically about the Rift and filmmaking later this week or early next.
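
For anyone wondering what “aiming cameras WITH MY FACE” reduces to in code: you read the headset’s orientation each frame and slew a virtual camera toward it. The sketch below is a toy version; the quaternion axis convention and the Camera type are illustrative assumptions, since every tracking SDK exposes pose data a little differently.

```python
# Toy sketch of driving a virtual camera from head tracking. The
# quaternion axis convention and the Camera type are illustrative
# assumptions; each tracking SDK exposes pose data differently.
import math
from dataclasses import dataclass

@dataclass
class Camera:
    yaw: float = 0.0    # radians
    pitch: float = 0.0  # radians

def quat_to_yaw_pitch(w, x, y, z):
    # Standard ZYX Euler extraction from a unit quaternion.
    yaw = math.atan2(2 * (w * z + x * y), 1 - 2 * (y * y + z * z))
    pitch = math.asin(max(-1.0, min(1.0, 2 * (w * y - z * x))))
    return yaw, pitch

def aim_camera(camera, quat, smoothing=0.2):
    # Low-pass the pose so the camera moves like a steadied rig
    # instead of mirroring every head twitch.
    yaw, pitch = quat_to_yaw_pitch(*quat)
    camera.yaw += smoothing * (yaw - camera.yaw)
    camera.pitch += smoothing * (pitch - camera.pitch)
    return camera

# Example: the identity quaternion leaves the camera pointing forward.
cam = aim_camera(Camera(), (1.0, 0.0, 0.0, 0.0))
```

The smoothing factor is the interesting design choice: raw head motion reads as jittery handheld footage, while a simple low-pass filter makes the shot feel like a steadied rig.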

But for now, if you’ll excuse me, there’s a demon-filled corridor in Doom 3 that I’ve got to go be scared witless by…

Issues like overcrowding start to fade away. Of course, physical education can’t be replaced (yet?), but actual problems that plague education for students, both young and old, could be eradicated completely. Suddenly, post-secondary education becomes affordable once again, taught by real teachers to real students with those social interactions at the core.

Political events could be attended by anyone. Viewing political discussions on the Hill is possible today through various news outlets or public broadcasting, but with the Oculus you could virtually be there, sitting in the chamber, watching anything and everything unfold as if you were actually present. Something like this might increase public knowledge of the workings of government, and help youth become passionate about issues that really require their attention.

With both software and hardware modders growing in number at a staggering rate, one that will presumably continue to climb, it’s safe to say that modding is the future of gaming. A single person or group going out of their way to improve the gaming experience for themselves and others, for no profit, was almost unimaginable during the early stages of the industry. Today it is the norm, albeit still a relatively underground one. Yet just as the number of people who play games has risen dramatically over the years, I believe the same is destined to happen for modders. If gaming companies are to solidify their foothold in the industry, cooperation with their target audience will soon be paramount.