In his 1976 book The Selfish Gene, biologist Richard Dawkins introduced the word “meme” to refer to a hypothetical unit of cultural transmission. The discussion of the meme concept was contained in a single chapter of a book that was otherwise dedicated to genetic transmission, but the idea spread. Over decades, other authors further developed the meme concept, establishing “memetics” as a field of study. Today, the word “meme” has entered the popular lexicon, as well as popular culture, and is primarily associated with specific internet artifacts, or “viral” online content. Although this popular usage of the term is not always in keeping with Dawkins’ original conception, these examples from internet culture do illustrate some key features of how memes have been theorized.
This essay is principally concerned with two strands of memetic theory: the relation of memetic transmission to the reproduction of ideology, and the role of memes in rhetorical analysis, especially in relation to the enthymeme as a persuasive appeal. Drawing on these theories, I will advance two related arguments: that ideology as manifested in discursive acts can be considered to spread memetically, and that ideology functions enthymematically. Lastly, I will present a case study demonstrating how methods and terminology from rhetorical criticism, discourse analysis, and media studies can be employed to analyze artifacts on the basis of these arguments.
Examples of memes presented by Dawkins include “tunes, ideas, catch-phrases, clothes fashions, ways of making pots or building arches” (p.192). The name “meme” was chosen due to its similarity to the word “gene”, as well as its relation to the Greek root “mimeme” meaning “that which is imitated” (p.192). Imitation is key to Dawkins’ notion of the meme because imitation is the means by which memes propagate themselves amongst members of a culture. Dawkins identifies three qualities associated with high survival in memes: longevity, fecundity, and copying-fidelity (p.194).
Distin (2005) further developed the meme hypothesis in The Selfish Meme. Extending the gene/meme analogy, Distin defines memes as “units of cultural information” characterized by the representational content they carry (p.20), content she considers “the cultural equivalent of DNA” (p.37). This conceptualization of memes and their content forms the basis of Distin’s theory of cultural heredity. Distin then seeks to identify the representational system memes use to carry their content (p.142). The first candidate considered is language, in what Distin calls “the memes-as-words hypothesis” (p.145). Distin concludes that language itself is “too narrow to play the role of cultural DNA” (p.147).
Balkin (1998) took up the meme concept to develop a theory of ideology as “cultural software”. Balkin describes memes as “tools of understanding,” and states that there are “as many different kinds of memes as there are things that can be transmitted culturally” (p.48). Stating that the “standard view of memes as beliefs is remarkably similar to the standard view of ideology as a collection of beliefs” (p.49), Balkin links theories of memetic transmission to theories of ideology. Employing metaphors of virality similar to how other authors have written of memes as “mind viruses,” Balkin considers memetic transmission as the spread of “ideological viruses” through social networks of communication, stating that “this model of ideological effects is the model of memetic evolution through cultural communication” (p.109). Balkin also presents a more favorable view of language as a vehicle for memes than Distin did, writing: “Language is the most effective carrier of memes and is itself one of the most widespread forms of cultural software. Hence it is not surprising that many ideological mechanisms either have their source in features of language or are propagated through language” (p.175).
Balkin approaches the subject from a background in law, and although not a rhetorician and skeptical of the discursive turn in theories of ideology, Balkin does employ rhetorical concepts in discussing the influence of memes and ideology: “Rhetoric has power because understanding through rhetorical figures already forms part of our cultural software” (p.19). Balkin also cites Aristotle, remarking that “the successful rhetorician builds upon what the rhetorician and the audience have in common,” and “what the two have in common are shared cultural meanings and symbols” (p.209). In another passage, Balkin expresses a similar notion of the role of shared understanding in communication: “Much human communication requires the parties to infer and supplement what is being conveyed rather than simply uncoding it” (p.51).
Although Balkin never uses the term, these ideas are evocative of the rhetorical concept of the enthymeme. Aristotle himself discussed the enthymeme, though the concept was not elucidated with much specificity. Rhetorical scholars have since debated the nature of the enthymeme as employed in persuasion, and Bitzer (1959) surveyed various accounts to produce a more substantial definition. Bitzer’s analysis comes to focus on the enthymeme in relation to syllogisms, and the notion of the enthymeme as a syllogism with a missing (or unstated) proposition. Bitzer states: “To say that the enthymeme is an ‘incomplete syllogism’ – that is, a syllogism having one or more suppressed premises – means that the speaker does not lay down his premises but lets his audience supply them out of its stock of opinion and knowledge” (p.407).
Bitzer’s formulation of the enthymeme emphasizes that “enthymemes occur only when the speaker and audience jointly produce them” (p.408). That they are “jointly produced” is key to the role of the enthymeme in successful persuasive rhetoric: “Owing to the skill of the speaker, the audience itself helps construct the proofs by which it is persuaded” (p.408). Bitzer defines the “essential character” of the enthymeme as the fact that its “premises are always drawn from the audience” and that its “successful construction is accomplished through the joint efforts of speaker and audience.” This joint construction, and supplying of the missing premise(s), resonates with Balkin’s view of the spread of cultural software, as well as various theories of subjects’ complicity in the functioning of ideology.
McGee (1980) supplied another link between rhetoric and ideology with the “ideograph”. McGee argued that “ideology is a political language composed of slogan-like terms signifying collective commitment” (p.15), and these terms he calls “ideographs”. Examples of ideographs, according to McGee, include “liberty,” “religion,” and “property” (p.16). Johnson (2007) applies the ideograph concept to memetics, to argue for the usefulness of the meme as a tool for materialist criticism. Johnson argues that although “the ideograph has been honed as a tool for political (“P”-politics) discourses, such as those that populate legislative arenas, the meme can better assess ‘superficial’ cultural discourses” (p.29). I also believe that the meme concept can be a productive tool for ideological critique. As an example, I will apply the concepts of ideology reproduction as memetic transmission, and ideological function as enthymematic, in an analysis of artifacts of online culture popularly referred to as “memes”.
As Internet culture evolved, users adapted and mutated the term “meme” to refer to specific online artifacts. Internet memes come in a variety of forms, one of the most prominent being the image macro. One of the oldest and most popular image macro series is “LOLcats,” whose template of superimposing humorous text over a static image became, and remains, the standard format for image macros. Two of the most prominent series in this format are the “First World Problems” (FWP) and “Third World Success” image macros. Analyzing these memes makes it possible to examine how their features and attendant discursive practices demonstrate many of the traits theorists have attributed to memes, and how theories of memetic ideological transmission and enthymematic ideological function can be applied to examine their ideological characteristics.
Balkin, J. M. (1998). Cultural software: A theory of ideology. New Haven, CT: Yale University Press.
Bitzer, L. F. (1959). Aristotle’s enthymeme revisited. Quarterly Journal of Speech, 45(4), 399-408.
Dawkins, R. (2006). The selfish gene. New York, NY: Oxford University Press. (Original work published 1976)
Distin, K. (2005). The selfish meme: A critical reassessment. New York, NY: Cambridge University Press.
McGee, M. C. (1980). The “ideograph”: A link between rhetoric and ideology. Quarterly Journal of Speech, 66(1), 1-16.
In The Cultural Logic of Computation Golumbia raises promising questions and issues, but then proceeds to make an argument that is ultimately unproductive. I am sympathetic to Golumbia’s aims; I share an attitude of skepticism toward the rhetoric surrounding the Internet and new media as inherently democratizing, liberating devices. Golumbia characterizes such narratives as “technological progressivism,” and writes that “technological progressivism […] conditions so much of computational discourse.” Watching the events of the “Arab Spring” unfold was exhilarating, but I was always uncomfortable with the narrative promoted in the mainstream news media characterizing these social movements as a “Twitter revolution,” and I remain skeptical toward hashtag activism and similar trends.
So while I was initially inclined toward the project Golumbia laid out in the book’s introductory pages, the chapters that followed only muddled rather than clarified my understanding of the argument being presented. The first section contains a sustained attack not only on Noam Chomsky’s contributions to linguistics, and their various influences and permutations, but also on Chomsky himself. I don’t know why Golumbia needed to question Chomsky’s “implausible rise to prominence,” or why Chomsky’s “magnetic charisma” needs to be mentioned in a discussion of linguistic theory.
Golumbia focuses on Chomsky’s contributions to linguistics, because that is where his interests and argument draw him; based on my own interests and background I would’ve preferred engagement with the other side of Chomsky’s contributions to communication studies, namely the propaganda model and political economy of the media. I suspect that a fruitful analysis would be possible from considering some of the issues Golumbia brings up in relation to the work of Chomsky and others in ideological analysis of news media content. The notion of computationalism as ideology is compelling to me; so is the institutionalized rhetoric of computationalism, which is a separate, promising argument, I think.
In reading, I have a tendency to focus on what interests me, appeals to me, or may be useful to me. Some of Golumbia’s concepts, such as “technological-progressive neoliberalism” and its relation to centralized power, fall into this category. While I’m still skeptical about computationalism as an operationalizable concept (there already seem to be multiple theoretical models and critical perspectives covering the same territory, and I’m not convinced that Golumbia makes the case for needing the term), others may be more productive. Ultimately, I will use a quote from Golumbia (addressing the Internet and emerging technologies) that reflects my feelings about this book: “We have to learn to critique even that which helps us.”
- Almetria Vaba of PBS Learning Media has posted a collection of resources for exploring media literacy through the legacy of Dr. Martin Luther King, Jr.:
Examine the life and legacy of Dr. Martin Luther King Jr. and the Civil Rights Movement with hundreds of PBS LearningMedia resources. Here is a sampling of resources from the extensive offering in PBS LearningMedia. Use these resources to explore media literacy from historical documentaries to media coverage of social movements.
- Sonia Paul at PBS MediaShift reported on a recent Pew Research study on social media, stress, and the “cost of caring”:
Among the survey’s major findings is that women are much more likely than men to feel stressed after becoming aware of stressful events in the lives of others in their networks.
“Stress is kind of contagious in that way,” said Keith Hampton, an associate professor at Rutgers University and the chief author of the report. “There’s a circle of sharing and caring and stress.”
- Lily Hay Newman reported on the survey for Slate:
In a survey of 1,801 adults, Pew found that frequent engagement with digital services wasn’t directly correlated to increased stress. Women who used social media heavily even recorded lower stress. The survey relied on the Perceived Stress Scale, a widely used stress-measurement tool developed in the early 1980s.
“We began to work fully expecting that the conventional wisdom was right, that these technologies add to stress,” said Lee Rainie, the director of Internet, science, and technology research at Pew. “So it was a real shock when [we] first looked at the data and … there was no association between technology use, especially heavy technology use, and stress.”
- LiveScience writer Elizabeth Palermo looked at the gendered differences found by the study:
The higher incidence of stress among the subset of technology users who are aware of stressful events in the lives of others is something that Hampton and his colleagues call “the cost of caring.”
“You can use these technologies and, as a woman, it’s probably going to be beneficial for your level of stress. But every now and then, bad things are going to happen to people you know, and there’s going to be a cost for that,” Hampton said.
- Nicholas Carr recently penned an editorial for The Guardian considering whether we are becoming too reliant on computers:
The real danger we face from computer automation is dependency. Our inclination to assume that computers provide a sufficient substitute for our own intelligence has made us all too eager to hand important work over to software and accept a subservient role for ourselves. In designing automated systems, engineers and programmers also tend to put the interests of technology ahead of the interests of people. They transfer as much work as possible to the software, leaving us humans with passive and routine tasks, such as entering data and monitoring readouts. Recent studies of the effects of automation on work reveal how easily even very skilled people can develop a deadening reliance on computers. Trusting the software to handle any challenges that may arise, the workers fall victim to a phenomenon called “automation complacency”.
- David Whelan at Vice interviewed Carr on the issue of technology dependency:
Should we be scared of the future?
I think we should be worried of the future. We are putting ourselves passively into the hands of those who design the systems. We need to think critically about that, even as we maintain our enthusiasm of the great inventions that are happening. I’m not a Luddite. I’m not saying we should trash our laptops and run off to the woods.
We’re basically living out Freud’s death drive, trying our best to turn ourselves into inorganic lumps.
Even before Freud, Marx made the point that the underlying desire of technology seemed to be to create animate technology and inanimate humans. If you look at the original radios, they were transmission as well as reception devices, but before long most people just stopped transmitting and started listening.
- Writing at Figure/Ground, John Dowd argues that being there still matters for teaching and learning in the digital age:
From an educational perspective, what we must understand is the relationship between information and meaning. Meaning is not an inevitable outcome of access to information but rather, emerges slowly when one has cultivated his or her abilities to incorporate that information in purposeful and ethical ways. Very often this process requires a slowdown rather than a speedup, the latter of which being a primary bias of many digital technologies. The most powerful educational experiences stem from the relationships formed between teacher and student, peer and peer. A smart classroom isn’t necessarily one that includes the latest technologies, but one that facilitates greater interaction among teachers and students, and responsibility for the environment within which one learns. A smart classroom is thus spatially, not primarily technologically, smart. While the two are certainly not mutually exclusive (and much has been written on both), we do ourselves a disservice when privileging the latter over the former.
- Dowd’s argument here is similar to Carr’s thoughts on MOOCs:
In education, computers are also falling short of expectations. Just a couple of years ago, everyone thought that massive open online courses – Moocs – would revolutionise universities. Classrooms and teachers seemed horribly outdated when compared to the precision and efficiency of computerised lessons. And yet Moocs have largely been a flop. We seem to have underestimated the intangible benefits of bringing students together with a real teacher in a real place. Inspiration and learning don’t flow so well through fibre-optic cables.
- MediaPost editor Steve Smith writes about his relationship with his iPhone, calling it life’s new remote:
The idea that the cell phone is an extension of the self is about as old as the device itself. We all recall the hackneyed “pass your phone to the person next to you” thought experiment at trade shows four or five years ago. It was designed to make the point of how “personally” we take these devices.
And now the extraordinary and unprecedented intimacy of these media devices is a part of legal precedent. The recent Supreme Court ruling limiting searches of cell phone contents grounded the unanimous opinion on an extraordinary observation. Chief Justice John Roberts described these devices as being “such a pervasive and insistent part of daily life that the proverbial visitor from Mars might conclude they were an important feature of human anatomy.”
We are only beginning to understand the extent to which these devices are blending the functionality of media with that of real world tools. And it is in line with one of Marshall McLuhan’s core observations in his “Understanding Media” book decades ago.
- Tomas Chamorro-Premuzic contributed a piece to The Guardian referencing Carr to consider how technology has downgraded attention:
As early as 1971 Herbert Simon observed that “what information consumes is rather obvious: it consumes the attention of its recipients. Hence a wealth of information creates a poverty of attention, and a need to allocate that attention efficiently among the overabundance of information sources that might consume it”. Thus instead of reaping the benefits of the digital revolution we are intellectually deprived by our inability to filter out sensory junk in order to translate information into knowledge. As a result, we are collectively wiser, in that we can retrieve all the wisdom of the world in a few minutes, but individually more ignorant, because we lack the time, self-control, or curiosity to do it.
There are also psychological consequences of the distraction economy. Although it is too soon to observe any significant effects from technology on our brains, it is plausible to imagine that long-term effects will occur. As Nicholas Carr noted in The Shallows: What the internet is doing to our brains, repeated exposure to online media demands a cognitive change from deeper intellectual processing, such as focused and critical thinking, to fast autopilot processes, such as skimming and scanning, shifting neural activity from the hippocampus (the area of the brain involved in deep thinking) to the prefrontal cortex (the part of the brain engaged in rapid, subconscious transactions). In other words, we are trading speed for accuracy and prioritise impulsive decision-making over deliberate judgment. In the words of Carr: “The internet is an interruption system. It seizes our attention only to scramble it”.
- James Vincent at The Verge covered a recent study that links nighttime screen use with less REM sleep:
The research carried out by the Harvard Medical School and published in the journal Proceedings of the National Academy of Sciences studied the sleeping patterns of 12 volunteers over a two-week period. Each individual read a book before their strict 10PM bedtime — spending five days with an iPad and five days with a paper book. The scientists found that when reading on a lit screen, volunteers took an average of 10 minutes longer to fall asleep and received 10 minutes less REM sleep. Regular blood samples showed they also had lower levels of the sleep hormone melatonin consistent with a circadian cycle delayed by one and a half hours.
- At AdBusters, Douglas Haddow writes that sleep is the enemy of capital:
Ever since the frequent cocaine user and hater of sleep Thomas Edison flicked on the first commercially-viable electric lightbulb, a process has taken hold through which the darkness of sleep time has been systematically deconstructed and illuminated.
Most of us now live in insomniac cities with starless skies, full of twinkling neon signage and flickering gadgets that beg us to stay awake longer and longer. But for all this technological innovation, we still must submit to our diurnal rhythm if we want to stay alive.
And even though sleep may “frustrate and confound strategies to exploit and reshape it,” as Crary says, it, like anything, remains a target of exploitation and reshaping – and in some cases, all-out elimination.
- In an interview with TruthOut to discuss his latest book, Robert McChesney addresses telecommunications monopolies, net neutrality, and advocates radical solutions to systemic problems:
What is striking about this corporate monopolization of the internet is that all the wealth and power has gone to a small number of absolutely enormous firms. As we enter 2015, 13 of the 33 most valuable corporations in the United States are internet firms, and nearly all of them enjoy monopolistic market power as economists have traditionally used the term. If you continue to scan down the list there are precious few internet firms to be found. There is not much of a middle class or even an upper-middle class of internet corporations to be found.
This poses a fundamental problem for democracy, though it is one that mainstream commentators and scholars appear reluctant to acknowledge: If economic power is concentrated in a few powerful hands you have the political economy for feudalism, or authoritarianism, not democracy. Concentrated economic power invariably overwhelms the political equality democracy requires, leading to routinized corruption and an end of the rule of law. That is where we are today in the United States.
- In light of recent terrorist attacks and renewed hysteria about fundamentalist ideologies, I revisited Mark Manson’s essay probing why there seems to be more fundamentalism in the world today:
The short answer is technology. Yes, Facebook really did ruin everything. The explosion in communication technologies over the past decades has re-oriented society and put more psychological strain on us all to find our identities and meaning. For some people, the way to ease this strain is to actually reject complexity and ambiguity for absolutist beliefs and traditional ideals.
Philosopher Charles Taylor wrote that it would be just as difficult to not believe in God in 1500 as it is to believe in God in the year 2000. Obviously, most of humanity believes in God today, but it’s certainly become a much more complicated endeavor. With the emergence of modern science, evolution, liberal democracy, and worldwide 24-hour news coverage of corruption, atrocities, war and religious hypocrisy, today a person of faith has their beliefs challenged more in a week than a person a few generations ago would have in half a lifetime.
- In a post at the Jacobin blog, Anthony Galluzzo considers how the mainstream media’s “fucking hipster” show mocks hipsters in the service of capital:
[Marxist geographer Neil] Smith offers a dry, but emphatically structural account of this process, which he first theorized in the late eighties with Soho and the Lower East Side in mind. Gentrification has since become central to neoliberal urbanization generally, and New York City in particular, under the developer-driven Bloomberg administration.
But why bother with “dry” and “structural” when you can tune-in to the “fucking hipster” show?
Unlike Smith’s rigorous Marxian analysis, most popular accounts from the spurious creative class mystifications of Richard Florida to standard issue conservative populist diatribes forget the larger forces and primary movers in this process, which is instead reduced, metonymically, to the catchall figure of the hipster.
On topics ranging from the capitalist dynamics of gentrification to the casualization of employment among ostensibly middle class Millennials, the “fucking hipster” show beats staid structural analysis every time — even for many members of the self-identified Left.
We should retire “hipster” as a term without referent or political salience. Its zombie-like persistence in anti-hipster discourse must be recognized for what it is: an urbane, and socially acceptable, form of ideologically inflected shaming on the part of American elites who must delegitimize those segments of a largely white, college educated population who didn’t do the “acceptable thing.”
The anti-hipster censure here includes a healthy dose of typically American anti-intellectualism, decked out in liberal bunting, subtle homophobia, and recognizably manipulative appeals to white, middle class resentment, now aimed at the lazy hipster, who either lives on his trust fund or, more perniciously, abuses public assistance, proving how racist templates are multi-use tools.
Our power elites’ rhetorical police action becomes increasingly necessary as large swaths of the people lumped under the hipster taxon slip into the ranks of the long-term un- and underemployed. Once innocuous alternative lifestyles could potentially metamorphosize into something else altogether. Better to frame “alternative lifestyle” in terms of avant-garde trend setting without remainder, providing suitably rarefied consumption options for Bloomberg’s new bourgeoisie, as they buy locally sourced creativity on Bedford Ave.
- Anti-homeless features in urban design became a trending media topic earlier this month after pictures of anti-homeless studs in London were shared on social media. The Mirror reports on the background and the eventual removal of the spikes:
Metal spikes designed to stop homeless people sleeping in the doorway of a London apartment block have been removed, after almost 130,000 people signed a petition calling for them to be taken out.
Pictures of the metal studs outside flats in Southwark Bridge Road were widely shared online last weekend, sparking outrage on social media.
Many criticised the spikes as inhumane, and compared them to those used to stop pigeons landing on buildings.
- An Atlantic article by Robert Rosenberger looks at how cities use design to drive homeless people away:
It has been encouraging to see the outrage over the London spikes. But the spikes that caused the uproar are by no means the only form of homeless-deterrent technology; they are simply the most conspicuous. Will public concern over the spikes extend to other less obvious instances of anti-homeless design? Perhaps the first step lies in recognizing the political character of the devices all around us.
An example of an everyday technology that’s used to forbid certain activities is “skateboard deterrents,” that is, those little studs added to handrails and ledges. These devices, sometimes also called “skatestoppers” or “pig ears,” prevent skateboarders from performing sliding—or “grinding”—tricks across horizontal edges. A small skateboard deterrence industry has developed, with vendors with names like “stopagrind.com” and “grindtoahault.com.”
An example of a pervasive homeless deterrence technology is benches designed to discourage sleeping. These include benches with vertical slats between each seat, individual bucket seats, large armrests between seats, and wall railings which enable leaning but not sitting or lying, among many other designs. There are even benches made to be slightly uncomfortable in order to dissuade people from sitting too long. Sadly, such designs are particularly common in subways, bus stops, and parks that present the homeless with the prospect of a safely public place to sleep.
The London spikes provide an opportunity to put a finger on our own intuitions about issues of homelessness and the design of open space. Ask yourself if you were appalled by the idea of the anti-homeless spikes. If so, then by implication you should have the same problems with other less obvious homeless deterrence designs like the sleep-prevention benches and the anti-loitering policies that target homeless people.
- In the Guardian, Ben Quinn writes that anti-homeless spikes are part of a wider phenomenon of “hostile architecture”:
In addition to anti-skateboard devices, with names such as “pig’s ears” and “skate stoppers”, ground-level window ledges are increasingly studded to prevent sitting, slanting seats at bus stops deter loitering and public benches are divided up with armrests to prevent lying down.
To that list, add jagged, uncomfortable paving areas, CCTV cameras with speakers and “anti-teenager” sound deterrents, such as the playing of classical music at stations and so-called Mosquito devices, which emit irritatingly high-pitched sounds that only teenagers can hear.
The architectural historian Iain Borden says the emergence of hostile architecture has its roots in 1990s urban design and public-space management. Its emergence, he said, “suggested we are only public citizens to the degree that we are either working or consuming goods directly.
“So it’s OK, for example, to sit around as long as you are in a cafe or in a designated place where certain restful activities such as drinking a frappucino should take place but not activities like busking, protesting or skateboarding. It’s what some call the ‘mallification’ of public space, where everything becomes like a shopping mall.”
- Immediately following Elliot Rodger’s spree killing in Isla Vista, CA, last month, Internet users discovered his YouTube channel and a 140-page autobiographical screed, dubbed a “manifesto” by the media. The written document and the videos documented Rodger’s sexual frustration and his chronic inability to connect with other people. He specifically lashed out at women for forcing him “to endure an existence of loneliness, rejection and unfulfilled desires” and causing his violent “retribution”. Commentators and the popular press framed the killings as an outcome of misogynistic ideology, with headlines such as “How misogyny kills men,” “Further proof that misogyny kills,” and “Elliot Rodger proves the danger of everyday sexism.” Slate contributor Amanda Hess wrote:
Elliot Rodger targeted women out of entitlement, their male partners out of jealousy, and unrelated male bystanders out of expedience. This is not ammunition for an argument that he was a misandrist at heart—it’s evidence of the horrific extent of misogyny’s cultural reach.
- Writing at Cyborgology, Jenny Davis saw the tragedy as a terrible lesson in misogyny and digital dualism, the Cyborgology blog’s pet theory:
His parents saw the digitally mediated rants and contacted his therapist and a social worker, who contacted a mental health hotline. These were the proper steps. But those who interviewed Rodger found him to be a “perfectly polite, kind and wonderful human.” They deemed his involuntary holding unnecessary and a search of his apartment unwarranted. That is, authorities defined Rodger and assessed his intentions based upon face-to-face interaction, privileging this interaction over and above a “vast digital trail.” This is digital dualism taken to its worst imaginable conclusion.
- Eryk Salvaggio, writing at Like Fish, posted a thorough analysis of Rodger’s manifesto looking at how women function as objects and symbols in the text:
In fact, the entire 140-odd-page memoir he left behind, “My Twisted World,” documents with agonizing repetition the daily tortured minutiae of his life, and barely has any interactions with women. What it has is interactions with the symbols of women, a non-stop shuffling of imaginary worlds that women represented access to. Women weren’t objects of desire per se, they were currency.
What exists in painstaking detail are the male figures in his life. The ones he meets who then reveal that they have kissed a girl, or slept with a girl, or slept with a few girls. These are the men who have what Elliot can’t have, and these are the men that he obsesses over.
Women don’t merely serve as objects for Elliot. Women are the currency used to buy whatever he’s missing. Just as a dollar bill used to get you a dollar’s worth of silver, a woman is an indicator of spending power. He wants to throw this money around for other people. Bring them home to prove something to his roommates. Show the bullies who picked on him that he deserves the same things they do.
There’s another, slightly more obscure recurring theme in Elliot’s manifesto: The frequency with which he discusses either his desire or attempt to throw a glass of some liquid at happy couples, particularly if the girl is a ‘beautiful tall blonde.’ […] These are the only interactions Elliot has with women: marking his territory.
When we don’t know how else to say what we need, like entitled children, we scream, and the loudest scream we have is violence. Violence is not an act of expressing the inexpressible, it’s an act of expressing our frustration with the inexpressible. When we surround ourselves by closed ideology, anger and frustration and rage come to us when words can’t. Some ideologies prey on fear and hatred and shift them into symbols that all other symbols are defined by. It limits your vocabulary.
- Some of these analyses recall Douglas Kellner’s take on school shootings as crises of masculinity:
While the motivations for the shootings may vary, they have in common crises in masculinity in which young men use guns and violence to create ultra-masculine identities as part of a media spectacle that produces fame and celebrity for the shooters.
Crises in masculinity are grounded in the deterioration of socio-economic possibilities for young men and are inflamed by economic troubles. Gun carnage is also encouraged in part by media that repeatedly illustrates violence as a way of responding to problems. Explosions of male rage and rampage are also embedded in the escalation of war and militarism in the United States from the long nightmare of Vietnam through the military interventions in Afghanistan and Iraq.
- Influenced by Debord, Kellner used the term “spectacle” in discussing the role of media coverage in events like rampage shootings:
For Debord, “spectacle” constituted the overarching concept to describe the media and consumer society, including the packaging, promotion, and display of commodities and the production and effects of all media. Using the term “media spectacle,” I am largely focusing on various forms of technologically-constructed media productions that are produced and disseminated through the so-called mass media, ranging from radio and television to the Internet and the latest wireless gadgets.
- Kellner’s comments from a 2008 interview about the Virginia Tech shooter’s videos, broadcast after the massacre, and his remarks on critical media literacy remain relevant to the current situation:
Cho’s multimedia video dossier, released after the Virginia Tech shootings, showed that he was consciously creating a spectacle of terror to create a hypermasculine identity for himself and avenge himself to solve his personal crises and problems. The NIU shooter, dressed in black emerged from a curtain onto a stage and started shooting, obviously creating a spectacle of terror, although as of this moment we still do not know much about his motivations. As for the television networks, since they are profit centers in a highly competitive business, they will continue to circulate school shootings and other acts of domestic terrorism as “breaking events” and will constitute the murderers as celebrities. Some media have begun to not publicize the name of teen suicides, to attempt to deter copy-cat effects, and the media should definitely be concerned about creating celebrities out of school shooters and not sensationalize them.
People have to become critical of the media scripts of hyperviolence and hypermasculinity that are projected as role models for men in the media, or that help to legitimate violence as a means to resolve personal crises or solve problems. We need critical media literacy to analyze how the media construct models of masculinities and femininities, good and evil, and become critical readers of the media who ourselves seek alternative models of identity and behavior.
- Almost immediately after news of the violence broke, and word of the killer’s YouTube videos spread, there was a spike of online backlash against the media saturation and warnings against promoting the perpetrator to celebrity status through omnipresent news coverage. Just two days after the killings Isla Vista residents and UCSB students let the news crews at the scene know that they were not welcome to intrude upon the community’s mourning. As they are wont to do, journalists reported on their role in the story while ignoring the wishes of the residents, as in this LA Times brief:
More than a dozen reporters were camped out on Pardall Road in front of the deli — and had been for days, their cameras and lights and gear taking up an entire lane of the street. At one point, police officers showed up to ensure that tensions did not boil over.
The students stared straight-faced at reporters. Some held signs expressing their frustration with the news media:
“OUR TRAGEDY IS NOT YOUR COMMODITY.”
“Remembrance NOT ratings.”
“Stop filming our tears.”
“Let us heal.”
“NEWS CREWS GO HOME!”
- Commemorating the 25th anniversary of the publication of his infamous essay, “The End of History?”, Francis Fukuyama wrote a piece for the Wall Street Journal reflecting on how the world has changed since he declared the end of history:
I argued that History (in the grand philosophical sense) was turning out very differently from what thinkers on the left had imagined. The process of economic and political modernization was leading not to communism, as the Marxists had asserted and the Soviet Union had avowed, but to some form of liberal democracy and a market economy. History, I wrote, appeared to culminate in liberty: elected governments, individual rights, an economic system in which capital and labor circulated with relatively modest state oversight.
So has my end-of-history hypothesis been proven wrong, or if not wrong, in need of serious revision? I believe that the underlying idea remains essentially correct, but I also now understand many things about the nature of political development that I saw less clearly during the heady days of 1989.
Twenty-five years later, the most serious threat to the end-of-history hypothesis isn’t that there is a higher, better model out there that will someday supersede liberal democracy; neither Islamist theocracy nor Chinese capitalism cuts it. Once societies get on the up escalator of industrialization, their social structure begins to change in ways that increase demands for political participation. If political elites accommodate these demands, we arrive at some version of democracy.
- An article by Eliane Glaser in The Guardian considers whether Fukuyama’s hypothesis is a rightwing argument in disguise:
When he wrote “The End of History?”, Fukuyama was a neocon. He was taught by Leo Strauss’s protege Allan Bloom, author of The Closing of the American Mind; he was a researcher for the Rand Corporation, the thinktank for the American military-industrial complex; and he followed his mentor Paul Wolfowitz into the Reagan administration. He showed his true political colours when he wrote that “the class issue has actually been successfully resolved in the west … the egalitarianism of modern America represents the essential achievement of the classless society envisioned by Marx.” This was a highly tendentious claim even in 1989.
Fukuyama distinguished his own position from that of the sociologist Daniel Bell, who published a collection of essays in 1960 titled The End of Ideology. Bell had found himself, at the end of the 1950s, at a “disconcerting caesura”. Political society had rejected “the old apocalyptic and chiliastic visions”, he wrote, and “in the west, among the intellectuals, the old passions are spent.” Bell also had ties to neocons but denied an affiliation to any ideology. Fukuyama claimed not that ideology per se was finished, but that the best possible ideology had evolved. Yet the “end of history” and the “end of ideology” arguments have the same effect: they conceal and naturalise the dominance of the right, and erase the rationale for debate.
While I recognise the ideological subterfuge (the markets as “natural”), there is a broader aspect to Fukuyama’s essay that I admire, and cannot analyse away. It ends with a surprisingly poignant passage: “The end of history will be a very sad time. The struggle for recognition, the willingness to risk one’s life for a purely abstract goal, the worldwide ideological struggle that called forth daring, courage, imagination, and idealism, will be replaced by economic calculation, the endless solving of technical problems, environmental concerns, and the satisfaction of sophisticated consumer demands.”
- Late last year the International Forum for Democratic Studies interviewed Fukuyama about his article “Democracy and the Quality of the State.”
- Finally, the CATO Institute just held a conference where Fukuyama and several other scholars discussed “The End of History 25 Years Later”. Videos and podcasts of the panels are available at the conference site. Description of the conference and list of participants:
In an article that went viral in 1989, Francis Fukuyama advanced the notion that with the death of communism history had come to an end in the sense that liberalism — democracy and market capitalism — had triumphed as an ideology. Fukuyama will be joined by other scholars to examine this proposition in the light of experience during the subsequent quarter century.
Featuring Francis Fukuyama, author of “The End of History?”; Michael Mandelbaum, School of Advanced International Studies, Johns Hopkins University; Marian Tupy, Cato Institute; Adam Garfinkle, editor, American Interest; Paul Pillar, Nonresident Senior Fellow, Foreign Policy, Center for 21st Century Security and Intelligence, Brookings Institution; and John Mueller, Ohio State University and Cato Institute.
It’s been a long time since the last update (what happened to October?), so this post is extra long in an attempt to catch up.
- I haven’t seen the new Ender’s Game movie, but this review by abbeyotis at Cyborgology calls the film “a lean and contemporary plunge into questions of morality mediated by technology”:
In a world in which interplanetary conflicts play out on screens, the government needs commanders who will never shrug off their campaigns as merely “virtual.” These same commanders must feel the stakes of their simulated battles to be as high as actual warfare (because, of course, they are). Card’s book makes the nostalgic claim that children are useful because they are innocent. Hood’s movie leaves nostalgia by the roadside, making the more complex assertion that they are useful because of their unique socialization to be intimately involved with, rather than detached from, simulations.
- In the ongoing discourse about games criticism and its relation to film reviews, Bob Chipman’s latest Big Picture post uses his own review of the Ender’s Game film as an entry point for a breathless treatise on criticism. The video presents a concise and nuanced overview of arts criticism, from the classical era through film reviews as consumer reports up to the very much in-flux conceptions of games criticism. Personally I find this video sub-genre (where spoken content is crammed into a Tommy gun barrage of word bullets so that the narrator can convey a lot of information in a short running time) irritating and mostly worthless, since the verbal information is being presented faster than the listener can really process it. It reminds me of Film Crit Hulk, someone who writes excellent essays with obvious insight into filmmaking, but whose aesthetic choice (or “gimmick”) to write in all caps is often a distraction from the content and a deterrent to readers. Film Crit Hulk has of course addressed this issue and explained the rationale for this choice, but considering that his more recent articles have dropped the third-person “Hulk speak” writing style the all caps seems to be played out. Nevertheless, I’m sharing the video because Mr. Chipman makes a lot of interesting points, particularly regarding the cultural contexts for the various forms of criticism. Just remember to breathe deeply and monitor your heart rate while watching.
- In this video of a presentation titled Game design: the medium is the message, Jonathan Blow discusses how commercial constraints dictate the form of products from TV shows to video games.
- This somewhat related video from mynextappliance contextualizes the Valve Steam Machine’s place in gaming history.
- This video from Satchbag’s Goods is ostensibly a review of Hotline Miami, but develops into a discussion of art movements and Kanye West.
- This short interview with Slavoj Žižek in New York magazine continues a trend I’ve noticed since The Pervert’s Guide to Ideology began its release: writers interviewing Žižek feel compelled to include themselves and their reactions to, and interactions with, Žižek in their articles. Something about a Žižek encounter brings out the gonzo in journalists. The NY mag piece is also notable for this succinct positioning of Žižek’s contribution to critical theory:
Žižek, after all, the Yugoslav-born, Ljubljana-based academic and Hegelian; mascot of the Occupy movement, critic of the Occupy movement; and former Slovenian presidential candidate, whose most infamous contribution to intellectual history remains his redefinition of ideology from a Marxist false consciousness to a Freudian-Lacanian projection of the unconscious. Translation: To Žižek, all politics—from communist to social-democratic—are formed not by deliberate principles of freedom, or equality, but by expressions of repressed desires—shame, guilt, sexual insecurity. We’re convinced we’re drawing conclusions from an interpretable world when we’re actually just suffering involuntary psychic fantasies.
- Wired UK reported on university students who turned maps of seventeenth century London into a detailed 3D world:
Following the development of the environment on the team’s blog you can see some of the gaps between what data was deemed noteworthy or worth recording in the seventeenth century and the level of detail we now expect in maps and other infographics. For example, the team struggled to pinpoint the exact location on Pudding Lane of the bakery where the Great Fire of London is thought to have originated and so just ended up placing it halfway along.
- Stephen Totilo reviewed the new pirate-themed Assassin’s Creed game for the New York Times. I haven’t played the game, but I love that the sections of the game set in the present day have shifted from the standard global conspiracy tropes seen in the earlier installments to postmodern self-referential and meta-fictional framing:
Curiously, a new character is emerging in the series: Ubisoft itself, presented mostly in the form of self-parody in the guise of a fictional video game company, Abstergo Entertainment. We can play small sections as a developer in Abstergo’s Montreal headquarters. Our job is to help turn Kenway’s life — mined through DNA-sniffing gadgetry — into a mass-market video game adventure. We can also read management’s emails. The team debates whether games of this type could sell well if they focused more on peaceful, uplifting moments of humanity. Conflict is needed, someone argues. Violence sells.
It turns out that Abstergo is also a front for the villainous Templars, who search for history’s secrets when not creating entertainment to numb the population. In these sections, Ubisoft almost too cheekily aligns itself with the bad guys and justifies its inevitable 2015 Assassin’s Creed, set during yet another violent moment in world history.
- Speaking of postmodern, self-referential, meta-fictional video games: The Stanley Parable was released late last month. There has already been a bevy of analysis written about the game, but I am waiting for the Mac release to play it and am doing my best to avoid spoilers in the meantime. Brenna Hillier’s post at VG24/7 is spoiler-free (assuming you are at least familiar with the game’s premise, or its original incarnation as a Half-Life mod), and calls The Stanley Parable “a reaction against, commentary upon, critique and celebration of narrative-driven game design”:
The Stanley Parable wants you to think about it. The Stanley Parable, despite its very limited inputs (you can’t even jump, and very few objects are interactive) looks at those parts of first-person gaming that are least easy to design for – exploration and messing with the game’s engine – and foregrounds them. It takes the very limitations of traditional gaming narratives and uses them to ruthlessly expose their own flaws.
- An article at TechCrunch looks at how the Twitter-acquired Bluefin Labs “took the academic subject of semiotics and made it something ‘central’ to the future of Twitter’s business”:
Roy’s research focus prior to founding Bluefin, and continued interest while running the company, has to do with how both artificial and human intelligences learn language. In studying this process, he determined that the most important factor in meaning making was the interaction between human beings: no one learns language in a vacuum, after all. That lesson helped inform his work at Twitter, which started with mapping the connection between social network activity and live broadcast television.
- Nathan at metopal posted their paper posing the question: What happens when we stop thinking about videogames as cinema and instead think of them through other media, like fashion, dance, or architecture?
Aspiring to cinematic qualities is not bad in and of itself, nor do I mean to shame fellow game writers, but developers and their attendant press tend to be myopic in their point of view, both figuratively and literally. If we continually view videogames through a monocular lens, we miss much of their potential. And moreover, we begin to use ‘cinematic’ reflexively without taking the time to explain what the hell that word means.
Metaphor is a powerful tool. Thinking videogames through other media can reframe our expectations of what games can do, challenge our design habits, and reconfigure our critical vocabularies. To crib a quote from Andy Warhol, we get ‘a new idea, a new look, a new sex, a new pair of underwear.’ And as I hinted before, it turns out that fashion and videogames have some uncanny similarities.
- John Powers at the Airship posted this great longform piece on the political economy of zombies:
Zombies started their life in the Hollywood of the 1930s and ‘40s as simplistic stand-ins for racist xenophobia. Post-millennial zombies have been hot-rodded by Danny Boyle and made into a subversive form of utopia. That grim utopianism was globalized by Max Brooks, and now Brad Pitt and his partners are working to transform it into a global franchise. But if zombies are to stay relevant, it will rely on the shambling monsters’ ability to stay subversive – and real subversive shocks and terror are not dystopian. They are utopian.
- This article at The Conversation addresses the “touchy subject” of Apple’s Touch ID:
Ironically, our bodies now must make physical contact with devices dictating access to the real; Apple’s Touch ID sensor can discern for the most part if we are actually alive. This way, we don’t end up trying to find our stolen fingers on the black market, or prevent others from 3D scanning them to gain access to our lives.
This is a monumental shift from when Apple released its first iPhone just six years ago. It’s a touchy subject: fingerprinting authentication means we confer our trust in an inanimate object to manage our animate selves – our biology is verified, digitised, encrypted, as they are handed over to our devices.
- In the wake of the Silk Road shutdown last month, Chloe Albanesius at PC Mag asks: What was Silk Road and how did it work?
Can you really buy heroin on the Web as easily as you might purchase the latest best-seller from Amazon? Not exactly, but as the FBI explained in its complaint, it wasn’t exactly rocket science, thanks to Tor and some bitcoins. Here’s a rundown of how Silk Road worked before the feds swooped in.
- Henry Jenkins posted the transcript of an interview with Mark J.P. Wolf. The theme of the discussion is “imaginary worlds,” and they touch upon the narratology vs. ludology conflict in gaming:
The interactivity vs. storytelling debate is really a question of the author saying either “You choose” (interaction) or “I choose” (storytelling) regarding the events experienced; it can be all of one or all of the other, or some of each to varying degrees; and even when the author says “You choose”, you are still choosing from a set of options chosen by the author. So it’s not just a question of how many choices you make, but how many options there are per choice. Immersion, however, is a different issue, I think, which does not always rely on choice (such as immersive novels), unless you want to count “Continue reading” and “Stop reading” as two options you are constantly asked to choose between.
- Finally, GamesForChange has uploaded video of Ian Bogost’s keynote address from this year’s Games for Change Festival, in which Bogost extols the virtues of “earnestness” over “seriousness” in game design.
- Slate writer Forrest Wickman was recently shocked to discover that The Texas Chainsaw Massacre functions as pro-vegetarian propaganda. Wickman writes that Chainsaw Massacre is “the last movie you’d expect” to be pro-vegetarian, but I thought this had been the general reading of the movie for years. I recall a reviewer’s blurb on my video copy of TCM saying the film “does for meat eating what Psycho did for taking showers”. The article includes a short video analysis by the great and always interesting Rob Ager.
As we learn early on, the movie’s killers, the murderous Sawyer family (comprised of Leatherface, Grandpa, et al), used to run a slaughterhouse, and the means they use to slaughter their victims are the same as those used to slaughter cattle. They knock them over the head with sledgehammers, hang them on meat hooks, and stuff them into freezers. Often this takes place as the victims are surrounded by animal bones, a detail that could be explained away as the evidence of their former occupation—except that the cries of farm animals (there are none around) are played over the scenes.
- Back when people were sizing up the Lone Ranger flop, Michael Agresta wrote this reflective piece for The Atlantic on “how the Western was lost (and why it matters)”:
Through the past century of Western movies, we can trace America’s self-image as it evolved from a rough-and-tumble but morally confident outsider in world affairs to an all-powerful sheriff with a guilty conscience. After World War I and leading into World War II, Hollywood specialized in tales of heroes taking the good fight to savage enemies and saving defenseless settlements in the process. In the Great Depression especially, as capitalism and American exceptionalism came under question, the cowboy hero was often mistaken for a criminal and forced to prove his own worthiness–which he inevitably did. Over the ’50s, ’60s, and ’70s however, as America enforced its dominion over half the planet with a long series of coups, assassinations, and increasingly dubious wars, the figure of the cowboy grew darker and more complicated. If you love Westerns, most of your favorites are probably from this era–Shane, The Searchers, Butch Cassidy and the Sundance Kid, McCabe & Mrs. Miller, the spaghetti westerns, etc. By the height of the Vietnam protest era, cowboys were antiheroes as often as they were heroes.
The dawn of the 1980s brought the inauguration of Ronald Reagan and the box-office debacle of the artsy, overblown Heaven’s Gate. There’s a sense of disappointment to the decade that followed, as if the era of revisionist Westerns had failed and a less nuanced patriotism would have to carry the day. Few memorable Westerns were made in the ’80s, and Reagan himself proudly associated himself with an old-fashioned, pre-Vietnam cowboy image. But victory in the Cold War coincided with a revival of the genre, including the revisionist strain, exemplified in Clint Eastwood’s career-topping Unforgiven. A new, gentler star emerged in Kevin Costner, who scored a post-colonial megahit with Dances With Wolves. Later, in the 2000s, George W. Bush reclaimed the image of the cowboy for a foreign policy far less successful than Reagan’s, and the genre retreated to the art house again.
- Responding to the Atlantic article Metafilter user justsomebodythatyouusedtoknow offered this insight:
Westerns are fundamentally about political isolation. The government is far away and weak. Institutions are largely irrelevant in a somewhat isolated town of 100 people. The law is what the sheriff says it is, or what the marshall riding through town says, or the posse. At that scale, there may be no meaningful distinction between war and crime. A single individual’s choices can tilt the balance of power. Samurai and Western stories cross-pollinated because when you strip away the surface detail the settings are surprisingly similar. The villagers in Seven Samurai and the women in Unforgiven are both buying justice/revenge because there is no one to appeal to from whom they could expect justice. Westerns are interesting in part because they are stories where individual moral judgment is almost totally unsupported by institutions.
Westerns clearly are not dying. We get a really great film in the genre once every few years. However, they’ve lost a lot of their place at the center of pop culture because the idea of an isolated community has grown increasingly implausible. In what has become a surveillance state, the idea of a place where the state has no authority does not resonate as relevant.
- In a piece for Vulture, Warren Ellis explains “why we need violent stories“:
The function of fiction is being lost in the conversation on violence. My book editor, Sean McDonald, thinks of it as “radical empathy.” Fiction, like any other form of art, is there to consider aspects of the real world in the ways that simple objective views can’t — from the inside. We cannot Other characters when we are seeing the world from the inside of their skulls. This is the great success of Thomas Harris’s Hannibal Lecter, both in print and as so richly embodied by Mads Mikkelsen in the Hannibal television series: For every three scary, strange things we discover about him, there is one thing that we can relate to. The Other is revealed as a damaged or alienated human, and we learn something about the roots of violence and the traps of horror.
- In an op-ed for CNN, Douglas Rushkoff examines what lessons the Bradley Manning verdict offers in the digital age:
We are just beginning to learn what makes a free people secure in a digital age. It really is different. The Cold War was an era of paper records, locked vaults and state secrets, for which a cloak-and-dagger mindset may have been appropriate. In a digital environment, our security comes not from our ability to keep our secrets but rather our ability to live our truth.
- Writing for The Guardian, Greg Burris considers the Chomsky-Žižek debate in light of Snowden’s NSA revelations (or vice-versa):
In light of the recent NSA surveillance scandal, Chomsky and Žižek offer us very different approaches, both of which are helpful for leftist critique. For Chomsky, the path ahead is clear. Faced with new revelations about the surveillance state, Chomsky might engage in data mining, juxtaposing our politicians’ lofty statements about freedom against their secretive actions, thereby revealing their utter hypocrisy. Indeed, Chomsky is a master at this form of argumentation, and he does it beautifully in Hegemony or Survival when he contrasts the democratic statements of Bush regime officials against their anti-democratic actions. He might also demonstrate how NSA surveillance is not a strange historical aberration but a continuation of past policies, including, most infamously, the FBI’s counter intelligence programme in the 1950s, 60s, and early 70s.
Žižek, on the other hand, might proceed in a number of ways. He might look at the ideology of cynicism, as he did so famously in the opening chapter of The Sublime Object of Ideology, in order to demonstrate how expressions of outrage regarding NSA surveillance practices can actually serve as a form of inaction, as a substitute for meaningful political struggle. We know very well what we are doing, but still we are doing it; we know very well that our government is spying on us, but still we continue to support it (through voting, etc). Žižek might also look at how surveillance practices ultimately fail as a method of subjectivisation, how the very existence of whistleblowers like Thomas Drake, Bradley Manning, Edward Snowden, and the others who are sure to follow in their footsteps demonstrates that technologies of surveillance and their accompanying ideologies of security can never guarantee the full participation of the people they are meant to control. As Žižek emphasises again and again, subjectivisation fails.
- Indiewire debuted the psychedelic poster for Žižek’s new film The Pervert’s Guide to Ideology. You can also watch the film’s trailer through that link; it opens in limited release in November.
- The Literary Review of Canada has posted a 12-part series titled Looking for Marshall McLuhan in Afghanistan:
In early 2011, award-winning photographer Rita Leistner was embedded with a U.S. marine battalion deployed to Helmand province as a member of Project Basetrack, an experiment in using new technologies in social media to extend traditional war reporting. This new LRC series draws on Leistner’s remarkable iPhone photos and her writings from her time in Afghanistan to use the ideas of Marshall McLuhan to make sense of what she saw there – “to examine the face of war through the extensions of man.”
- I’ve never played EVE Online, and I don’t even really understand how it works, but I find it fascinating. Last week saw the biggest battle in the game’s history. This breakdown from The Verge is headlined like a real-life dispatch from the frontier of mankind’s space-faring endeavors: Largest space battle in history claims 2,900 ships, untold virtual lives
Update, 9:18PM ET: The battle is over. After more than five hours of combat, the CFC has defeated TEST Alliance. Over 2,900 ships were destroyed today in the largest fleet battle in Eve Online’s history. TEST Alliance intended to make a definitive statement in 6VDT, but their defeat at the hands of the CFC was decisive and will likely result in TEST’s withdrawal from the Fountain region.
- Also last week, Microsoft confirmed that retail Xbox One consoles will function as developer kits and that Xbox Live Arcade will allow independent developers to self-publish games. From the Game Informer article:
In a conversation, Whitten told us that the commitment to independent developers is full. There won’t be restrictions on the type of titles that can be created, nor will there be limits in scope. In response to a question on whether retail-scale games could be published independently, Whitten told us, “Our goal is to give them access to the power of Xbox One, the power of Xbox Live, the cloud, Kinect, Smartglass. That’s what we think will actually generate a bunch of creativity on the system.” With regard to revenue splitting with developers, we were told that more information will be coming at Gamescom, but that we could think about it “generally like we think about Marketplace today.” According to developers we’ve spoken with, that split can be approximately 50-50.
- Kris Ligman at Gamasutra reports that self-published games will also be available on the Xbox 360. This HuffPo post aggregates links and provides an overview of the Xbox One self-publishing story.
Another difference between the Xbox One and Xbox 360 is how the games will be published and bought by other gamers. Indie games will not be relegated to the Xbox Live Indie Marketplace like on the Xbox 360 or required to have a Microsoft-certified publisher to distribute physically or digitally outside the Indie Marketplace. All games will be featured in one big area with access to all kinds of games.
- DevWithTheHair argues that freemium is hurting modern video game design:
If anything has hurt modern video game design over the past several years, it has been the rise of ‘freemium’. It is rare to see a top app or game in the app stores with a business model other than ‘free-to-play with in-app purchases’. The model has been used as an excuse to make lazy, poorly designed games predicated on taking advantage of psychological triggers in their players, and it will have negative long-term consequences for the video game industry if left unchecked.
Many freemium games are designed around conditioning players to become addicted to playing the game. Many game designers want their games to be heavily played, but freemium games in particular are designed to trigger a ‘reward’ state in the player’s brain in order to keep the player playing (and ultimately entice the user to make in-app purchases to continue playing). This type of conditioning is often referred to as a ‘Skinner box’, named after the psychologist who created laboratory boxes used to perform behavioral experiments on animals.
- Grey Matter Gaming posted a thorough piece as the first part of a series on The Publisher-Developer Money-Go-Round and the Boom of the Indie Industry:
It obviously isn’t beyond the realm of possibility that not only do financial considerations influence a game’s structure and content, but financial outcomes also affect a studio’s likelihood of survival in the industry, based upon the machinations of its publishing overlords. Activision killed Bizarre Creations, Eidos ruined Looking Glass Studios, EA crushed Westwood, Pandemic, Bullfrog, Origin Systems… well, the list could go on until I turn a strange purple color, but you get my point. And when 3.4 million copies sold for a Tomb Raider reboot isn’t enough by a publisher’s standards, you can’t help but feel concern for a developer’s future.
- Doctoral student Stephen Slota writes about how video games can enhance learning and problem-solving:
This relationship between environment-learner-content interaction and transfer puts teachers in the unique position to capitalize on game engagement to promote reflection that positively shapes how students tackle real-world challenges. To some, this may seem like a shocking concept, but it’s definitely not a new one—roleplay as instruction, for example, was very popular among the ancient Greeks and, in many ways, served as the backbone for Plato’s renowned Allegory of the Cave. The same is true of Shakespeare’s works, 18th and 19th century opera, and many of the novels, movies, and other media that define our culture. More recently, NASA has applied game-like simulations to teach astronauts how to maneuver through space, medical schools have used them to teach robotic surgery, and the Federal Aviation Administration has employed them to test pilots.
- Alex Law at Nightmare Mode posted a great article titled Player-Character Dynamics, Identity, and Sexuality in Video Games:
The relationships between the creator, the product, and the audience are all important contexts to consider during media analysis, especially with games, because the audience is an active participant in the medium. So if you are creating a game, you always have to keep the audience in mind. Even if you say the audience doesn’t matter to you, it won’t cease to exist, and saying so does not erase the impact your game will have.
Similarly, if you are critiquing or analyzing any media, you can’t ignore the creator and the creator’s intentions. Despite those who proclaim the “death of the author,” if the audience is aware of the creator’s intentions, that awareness can affect how they perceive the game. Particularly given the ease with which creators can release statements about their work, you’ll have an audience with varying levels of awareness of the creator’s intentions. These factors all play off of each other–they do not exist in a vacuum.
- DROP OUT. HANG OUT. SPACE OUT. chimes in on Warren Spector’s call for better games criticism:
When we talk about any medium’s legitimacy, be it film or videogames or painting, legitimacy is a historical phenomenon, inextricably tied to a medium’s “artness,” that allows it to get in on the ground floor of “legitimate” and “important.” If we contextualize the qualities that allowed film and photography to find themselves supported by a panoply of cultural institutions, it was a cultural and political-economic process that led them there.
Videogames, the kind that would be written about in twenty-dollar glossy art magazines, would be exactly this. When creators of videogames want to point to their medium’s legitimacy, it helps to have a lot of smart people legitimate your work in a medium (glossy magazines, international newspapers) that you consider likewise legitimate. Spector concedes that ‘yes, all the critics right now are online’, but the real battle is in getting these critics offline and into more “legitimate” spaces of representation. There is a kind of unspoken hierarchy of mediums dancing before us here: at each step a new gatekeeper steps into play, both legitimating and separating the reader from the critic and the object of criticism.
- Pop Matters looks at how fatherhood is represented in Heavy Rain, The Walking Dead, and The Last of Us:
All three games define fatherhood around the act of protection, primarily physical protection. And in each of these games, the protagonist fails—at least temporarily—to protect their ward. In Ethan’s case, his cheery family reflected in his pristine home collapses when he loses a son in a car accident. Later, when his other son goes missing, the game essentially tests Ethan’s ability to reclaim his protective-father status.
- Sam Barsanti at The Gameological Society explains how the much-derided final act of BioShock actually drives home one of its most important themes:
No video game grants absolute freedom; they all have rules or guidelines that govern what you can and can’t do. The sci-fi epic Mass Effect is a series that prides itself on choice, but even that trilogy ends on a variation of choosing between the “good” and “bad” ending. Minecraft, the open-world creation game, is extremely open-ended, but you can’t build a gun or construct a tower into space because it doesn’t let you. BioShock’s ending argues that the choices you think you’re making in these games don’t actually represent freedom. You’re just operating within the parameters set by the people in control, be they the developers or the guy in the game telling you to bash his skull with a golf club.
BioShock’s disappointing conclusion ends up illustrating Ryan’s point. A man chooses, a player obeys. It’s a grim and cynical message that emphasizes the constraints of its own art form. And given that the idea of choice is so important to BioShock’s story, I don’t think it could’ve ended any other way.
- The latest “this week in videogame blogging” post at Critical Distance includes The Last of Us as exercise in emotional manipulation, analyzing Saints Row the Third through the lens of thematic self-sabotage, and a “Let’s Critique” commentary for Dishonored.