
Memes, Enthymemes, and the Reproduction of Ideology

[Image: Žižek ideology meme]

In his 1976 book The Selfish Gene, biologist Richard Dawkins introduced the word “meme” to refer to a hypothetical unit of cultural transmission. The discussion of the meme concept was contained in a single chapter of a book that was otherwise dedicated to genetic transmission, but the idea spread. Over decades, other authors further developed the meme concept, establishing “memetics” as a field of study. Today, the word “meme” has entered the popular lexicon, as well as popular culture, and is primarily associated with specific internet artifacts, or “viral” online content. Although this popular usage of the term is not always in keeping with Dawkins’ original conception, these examples from internet culture do illustrate some key features of how memes have been theorized.

This essay is principally concerned with two strands of memetic theory: the relation of memetic transmission to the reproduction of ideology; and the role of memes in rhetorical analysis, especially in relation to the enthymeme as a persuasive appeal. Drawing on these theories, I will advance two related arguments: that ideology as manifested in discursive acts can be considered to spread memetically; and that ideology functions enthymematically. Lastly, I will present a case study analysis to demonstrate how methods and terminology from rhetorical criticism, discourse analysis, and media studies can be employed to analyze artifacts in light of these arguments.

Examples of memes presented by Dawkins include “tunes, ideas, catch-phrases, clothes fashions, ways of making pots or building arches” (p.192). The name “meme” was chosen for its similarity to the word “gene”, as well as its relation to the Greek root “mimeme”, meaning “that which is imitated” (p.192). Imitation is key to Dawkins’ notion of the meme because imitation is the means by which memes propagate themselves amongst members of a culture. Dawkins identifies three qualities associated with high survival value in memes: longevity, fecundity, and copying-fidelity (p.194).

Distin (2005) further developed the meme hypothesis in The Selfish Meme. Extending the gene/meme analogy, Distin defines memes as “units of cultural information” characterized by the representational content they carry (p.20), with that content considered “the cultural equivalent of DNA” (p.37). This conceptualization of memes and their content forms the basis of Distin’s theory of cultural heredity. Distin then seeks to identify the representational system used by memes to carry their content (p.142). The first representational system considered is language, in what Distin calls “the memes-as-words hypothesis” (p.145). Distin concludes that language itself is “too narrow to play the role of cultural DNA” (p.147).

Balkin (1998) took up the meme concept to develop a theory of ideology as “cultural software”. Balkin describes memes as “tools of understanding,” and states that there are “as many different kinds of memes as there are things that can be transmitted culturally” (p.48). Stating that the “standard view of memes as beliefs is remarkably similar to the standard view of ideology as a collection of beliefs” (p.49), Balkin links theories of memetic transmission to theories of ideology. Employing metaphors of virality similar to how other authors have written of memes as “mind viruses,” Balkin considers memetic transmission as the spread of “ideological viruses” through social networks of communication, stating that “this model of ideological effects is the model of memetic evolution through cultural communication” (p.109). Balkin also presents a more favorable view of language as a vehicle for memes than Distin does, writing: “Language is the most effective carrier of memes and is itself one of the most widespread forms of cultural software. Hence it is not surprising that many ideological mechanisms either have their source in features of language or are propagated through language” (p.175).
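
Purely as an illustration of this epidemiological metaphor (a toy sketch of my own, not a model proposed by Dawkins, Balkin, or Distin), the “ideological virus” idea can be rendered as a minimal independent-cascade simulation over a hypothetical social network: each carrier gets one chance to pass the meme to each contact, with a fixed transmission probability standing in crudely for Dawkins’ “fecundity”.

```python
import random

# Toy independent-cascade sketch of memetic spread through a social network.
# Purely illustrative: the network, the probability, and all names are
# assumptions for this example, not anything proposed in the sources above.

def spread_meme(network, seeds, transmission_prob=0.4, seed=0):
    """Return the set of people who end up carrying the meme."""
    rng = random.Random(seed)
    carriers = set(seeds)
    frontier = list(seeds)
    while frontier:
        person = frontier.pop()
        for contact in network.get(person, []):
            if contact not in carriers and rng.random() < transmission_prob:
                carriers.add(contact)      # the meme is imitated (copied)
                frontier.append(contact)   # and can now be passed on in turn
    return carriers

# A small hypothetical network of communication (who talks to whom).
network = {
    "ana": ["ben", "cal"],
    "ben": ["ana", "dee", "eli"],
    "cal": ["ana", "eli"],
    "dee": ["ben"],
    "eli": ["ben", "cal", "fay"],
    "fay": ["eli"],
}

print(spread_meme(network, seeds={"ana"}))
```

Copying-fidelity and longevity could be added as mutation and decay rates, but even this bare sketch shows why the epidemiological vocabulary is tempting: transmission depends entirely on the structure of the communication network and the receptivity of its members, which is where Balkin’s account locates ideological effects.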

Balkin approaches the subject from a background in law, and although he is not a rhetorician and is skeptical of the discursive turn in theories of ideology, he does employ rhetorical concepts in discussing the influence of memes and ideology: “Rhetoric has power because understanding through rhetorical figures already forms part of our cultural software” (p.19). Balkin also cites Aristotle, remarking that “the successful rhetorician builds upon what the rhetorician and the audience have in common,” and “what the two have in common are shared cultural meanings and symbols” (p.209). In another passage, Balkin expresses a similar notion of the role of shared understanding in communication: “Much human communication requires the parties to infer and supplement what is being conveyed rather than simply uncoding it” (p.51).

Although Balkin never uses the term, these ideas are evocative of the rhetorical concept of the enthymeme. Aristotle himself discussed the enthymeme, though the concept was not elucidated with much specificity. Rhetorical scholars have since debated the nature of the enthymeme as employed in persuasion, and Bitzer (1959) surveyed various accounts to produce a more substantial definition. Bitzer’s analysis comes to focus on the enthymeme in relation to syllogisms, and the notion of the enthymeme as a syllogism with a missing (or unstated) proposition. Bitzer states: “To say that the enthymeme is an ‘incomplete syllogism’ – that is, a syllogism having one or more suppressed premises – means that the speaker does not lay down his premises but lets his audience supply them out of its stock of opinion and knowledge” (p.407).

Bitzer’s formulation of the enthymeme emphasizes that “enthymemes occur only when the speaker and audience jointly produce them” (p.408). That they are “jointly produced” is key to the role of the enthymeme in successful persuasive rhetoric: “Owing to the skill of the speaker, the audience itself helps construct the proofs by which it is persuaded” (p.408). Bitzer defines the “essential character” of the enthymeme as the fact that its “premises are always drawn from the audience” and that its “successful construction is accomplished through the joint efforts of speaker and audience.” This joint construction, and the supplying of the missing premise(s), resonates with Balkin’s view of the spread of cultural software, as well as with various theories of subjects’ complicity in the functioning of ideology.
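
To make the suppressed-premise structure concrete, here is another toy sketch of my own (the premises come from the standard textbook syllogism, not from Bitzer): the speaker states only a minor premise and a conclusion, and the argument “completes” only if the unstated major premise is already present in the audience’s stock of opinion and knowledge.

```python
# Toy sketch of Bitzer's "jointly produced" enthymeme: the speaker suppresses
# a premise, and the audience supplies it from its own stock of beliefs.
# The example premises are the standard textbook syllogism, not Bitzer's own.

audience_beliefs = {
    "All men are mortal",   # the unstated major premise, held by the audience
    "Athens is in Greece",
}

enthymeme = {
    "stated_premise": "Socrates is a man",
    "suppressed_premise": "All men are mortal",   # never uttered by the speaker
    "conclusion": "Socrates is mortal",
}

def jointly_produced(enthymeme, audience_beliefs):
    """The 'proof' completes only if the audience holds the missing premise."""
    return enthymeme["suppressed_premise"] in audience_beliefs

print(jointly_produced(enthymeme, audience_beliefs))  # True: the audience co-constructs the proof
```

The point of the sketch is only that nothing the speaker says supplies the major premise; the audience does, which is what makes the persuasion a joint production.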

McGee (1980) supplied another link between rhetoric and ideology with the “ideograph”. McGee argued that “ideology is a political language composed of slogan-like terms signifying collective commitment” (p.15), and these terms he calls “ideographs”. Examples of ideographs, according to McGee, include “liberty,” “religion,” and “property” (p.16). Johnson (2007) applies the ideograph concept to memetics, to argue for the usefulness of the meme as a tool for materialist criticism. Johnson argues that although “the ideograph has been honed as a tool for political (“P”-politics) discourses, such as those that populate legislative arenas, the meme can better assess ‘superficial’ cultural discourses” (p.29). I also believe that the meme concept can be a productive tool for ideological critique. As an example, I will apply the concepts of ideology reproduction as memetic transmission, and ideological function as enthymematic, in an analysis of artifacts of online culture popularly referred to as “memes”.

As Internet culture evolved, users adapted and mutated the term “meme” to refer to specific online artifacts. Although they may be considered a single type of online artifact, Internet memes come in a variety of forms. One of the oldest and most prominent series of image macro memes is “LOLcats.” The template established by LOLcats, superimposing humorous text over static images, became and remains the standard format for image macro memes. Two prominent series in this format are the “First World Problems” (FWP) and “Third World Success” image macros. Analyzing these memes makes it possible to examine how such artifacts and the discursive practices surrounding them exhibit many of the traits theorists have attributed to memes, and how theories of memetic ideological transmission and enthymematic ideological function can be applied to examine their ideological characteristics.

References

Balkin, J. M. (1998). Cultural software: A theory of ideology. New Haven, CT: Yale University Press.

Bitzer, L. F. (1959). Aristotle’s enthymeme revisited. Quarterly Journal of Speech, 45(4), 399-408.

Dawkins, R. (2006). The selfish gene. New York, NY: Oxford University Press. (Original work published 1976)

Distin, K. (2005). The selfish meme: A critical reassessment. New York, NY: Cambridge University Press.

McGee, M. C. (1980). The “ideograph”: A link between rhetoric and ideology. Quarterly Journal of Speech, 66(1), 1-16.

Media Ecology Monday: Golumbia and the Political Economy of Computationalism

In The Cultural Logic of Computation, Golumbia raises questions and addresses issues that are promising, but then proceeds to make an argument that is ultimately unproductive. I am sympathetic to Golumbia’s aims; I share an attitude of skepticism toward the rhetoric surrounding the Internet and new media as inherently democratizing, liberating devices. Golumbia characterizes such narratives as “technological progressivism,” and writes that “technological progressivism […] conditions so much of computational discourse.” Watching the events of the “Arab Spring” unfold was exhilarating, but I was always uncomfortable with the narrative promoted in the mainstream news media characterizing these social movements as a “Twitter revolution,” and I remain skeptical toward hashtag activism and similar trends.

So while I was initially inclined toward the project Golumbia laid out in the book’s introductory pages, the chapters that followed only muddled rather than clarified my understanding of the argument being presented. The first section contains a sustained attack on Noam Chomsky’s contributions to linguistics, and their various influences and permutations, but also on Chomsky himself. I don’t know why Golumbia needed to question Chomsky’s “implausible rise to prominence,” or why Chomsky’s “magnetic charisma” needs to be mentioned in this discussion of linguistic theory.

Golumbia focuses on Chomsky’s contributions to linguistics because that is where his interests and argument draw him; based on my own interests and background, I would’ve preferred engagement with the other side of Chomsky’s contributions to communication studies, namely the propaganda model and the political economy of the media. I suspect that a fruitful analysis would be possible from considering some of the issues Golumbia brings up in relation to the work of Chomsky and others in the ideological analysis of news media content. The notion of computationalism as ideology is compelling to me; so is the institutionalized rhetoric of computationalism, which strikes me as a separate, promising argument.

In reading, I have a tendency to focus on what interests me, appeals to me, or may be useful to me. Some of Golumbia’s concepts, such as “technological-progressive neoliberalism” and its relation to centralized power, fall into this category. While I’m still skeptical about computationalism as an operationalizable concept (there are already multiple theoretical models and critical perspectives that cover the same territory, and I’m not convinced that Golumbia makes the case for needing the term), other concepts may be more productive. Ultimately, I will close with a quote from Golumbia (addressing the Internet and emerging technologies) that reflects my feelings about this book: “We have to learn to critique even that which helps us.”

MISC Monday: MLK media literacy; social media stress; the attention economy, and more

[Image: Woman reading a book on an eReader]

Examine the life and legacy of Dr. Martin Luther King Jr. and the Civil Rights Movement with hundreds of PBS LearningMedia resources. Here is a sampling from that extensive offering; use these resources to explore media literacy, from historical documentaries to media coverage of social movements.

Among the survey’s major findings is that women are much more likely than men to feel stressed after becoming aware of stressful events in the lives of others in their networks.

“Stress is kind of contagious in that way,” said Keith Hampton, an associate professor at Rutgers University and the chief author of the report. “There’s a circle of sharing and caring and stress.”

In a survey of 1,801 adults, Pew found that frequent engagement with digital services wasn’t directly correlated with increased stress. Women who used social media heavily even recorded lower stress. The survey relied on the Perceived Stress Scale, a widely used stress-measurement tool developed in the early 1980s.

“We began to work fully expecting that the conventional wisdom was right, that these technologies add to stress,” said Lee Rainie, the director of Internet, science, and technology research at Pew. “So it was a real shock when [we] first looked at the data and … there was no association between technology use, especially heavy technology use, and stress.”

The higher incidence of stress among the subset of technology users who are aware of stressful events in the lives of others is something that Hampton and his colleagues call “the cost of caring.”

“You can use these technologies and, as a woman, it’s probably going to be beneficial for your level of stress. But every now and then, bad things are going to happen to people you know, and there’s going to be a cost for that,” Hampton said.

The real danger we face from computer automation is dependency. Our inclination to assume that computers provide a sufficient substitute for our own intelligence has made us all too eager to hand important work over to software and accept a subservient role for ourselves. In designing automated systems, engineers and programmers also tend to put the interests of technology ahead of the interests of people. They transfer as much work as possible to the software, leaving us humans with passive and routine tasks, such as entering data and monitoring readouts. Recent studies of the effects of automation on work reveal how easily even very skilled people can develop a deadening reliance on computers. Trusting the software to handle any challenges that may arise, the workers fall victim to a phenomenon called “automation complacency”.

Should we be scared of the future?
I think we should be worried about the future. We are putting ourselves passively into the hands of those who design the systems. We need to think critically about that, even as we maintain our enthusiasm for the great inventions that are happening. I’m not a Luddite. I’m not saying we should trash our laptops and run off to the woods.

We’re basically living out Freud’s death drive, trying our best to turn ourselves into inorganic lumps.
Even before Freud, Marx made the point that the underlying desire of technology seemed to be to create animate technology and inanimate humans. If you look at the original radios, they were transmission as well as reception devices, but before long most people just stopped transmitting and started listening.

From an educational perspective, what we must understand is the relationship between information and meaning. Meaning is not an inevitable outcome of access to information but rather emerges slowly when one has cultivated his or her abilities to incorporate that information in purposeful and ethical ways. Very often this process requires a slowdown rather than a speedup, the latter of which is a primary bias of many digital technologies. The most powerful educational experiences stem from the relationships formed between teacher and student, peer and peer. A smart classroom isn’t necessarily one that includes the latest technologies, but one that facilitates greater interaction among teachers and students, and responsibility for the environment within which one learns. A smart classroom is thus spatially, not primarily technologically, smart. While the two are certainly not mutually exclusive (and much has been written on both), we do ourselves a disservice when privileging the latter over the former.

  • Dowd’s argument here is similar to Carr’s thoughts on MOOCs:

In education, computers are also falling short of expectations. Just a couple of years ago, everyone thought that massive open online courses – Moocs – would revolutionise universities. Classrooms and teachers seemed horribly outdated when compared to the precision and efficiency of computerised lessons. And yet Moocs have largely been a flop. We seem to have underestimated the intangible benefits of bringing students together with a real teacher in a real place. Inspiration and learning don’t flow so well through fibre-optic cables.

  • MediaPost editor Steve Smith writes about his relationship with his iPhone, calling it life’s new remote:

The idea that the cell phone is an extension of the self is about as old as the device itself. We all recall the hackneyed “pass your phone to the person next to you” thought experiment at trade shows four or five years ago. It was designed to make the point of how “personally” we take these devices.

And now the extraordinary and unprecedented intimacy of these media devices is a part of legal precedent. The recent Supreme Court ruling limiting searches of cell phone contents grounded the unanimous opinion on an extraordinary observation. Chief Justice John Roberts described these devices as being “such a pervasive and insistent part of daily life that the proverbial visitor from Mars might conclude they were an important feature of human anatomy.”

We are only beginning to understand the extent to which these devices are blending the functionality of media with that of real world tools. And it is in line with one of Marshall McLuhan’s core observations in his “Understanding Media” book decades ago.

As early as 1971 Herbert Simon observed that “what information consumes is rather obvious: it consumes the attention of its recipients. Hence a wealth of information creates a poverty of attention, and a need to allocate that attention efficiently among the overabundance of information sources that might consume it”. Thus instead of reaping the benefits of the digital revolution we are intellectually deprived by our inability to filter out sensory junk in order to translate information into knowledge. As a result, we are collectively wiser, in that we can retrieve all the wisdom of the world in a few minutes, but individually more ignorant, because we lack the time, self-control, or curiosity to do it.

There are also psychological consequences of the distraction economy. Although it is too soon to observe any significant effects from technology on our brains, it is plausible to imagine that long-term effects will occur. As Nicholas Carr noted in The Shallows: What the internet is doing to our brains, repeated exposure to online media demands a cognitive change from deeper intellectual processing, such as focused and critical thinking, to fast autopilot processes, such as skimming and scanning, shifting neural activity from the hippocampus (the area of the brain involved in deep thinking) to the prefrontal cortex (the part of the brain engaged in rapid, subconscious transactions). In other words, we are trading speed for accuracy and prioritise impulsive decision-making over deliberate judgment. In the words of Carr: “The internet is an interruption system. It seizes our attention only to scramble it”.

The research, carried out by the Harvard Medical School and published in the journal Proceedings of the National Academy of Sciences, studied the sleeping patterns of 12 volunteers over a two-week period. Each individual read a book before their strict 10PM bedtime — spending five days with an iPad and five days with a paper book. The scientists found that when reading on a lit screen, volunteers took an average of 10 minutes longer to fall asleep and received 10 minutes less REM sleep. Regular blood samples showed they also had lower levels of the sleep hormone melatonin, consistent with a circadian cycle delayed by one and a half hours.

Ever since the frequent cocaine user and hater of sleep Thomas Edison flicked on the first commercially-viable electric lightbulb, a process has taken hold through which the darkness of sleep time has been systematically deconstructed and illuminated.

Most of us now live in insomniac cities with starless skies, full of twinkling neon signage and flickering gadgets that beg us to stay awake longer and longer. But for all this technological innovation, we still must submit to our diurnal rhythm if we want to stay alive.

And even though sleep may “frustrate and confound strategies to exploit and reshape it,” as Crary says, it, like anything, remains a target of exploitation and reshaping – and in some cases, all-out elimination.

What is striking about this corporate monopolization of the internet is that all the wealth and power has gone to a small number of absolutely enormous firms. As we enter 2015, 13 of the 33 most valuable corporations in the United States are internet firms, and nearly all of them enjoy monopolistic market power as economists have traditionally used the term. If you continue to scan down the list there are precious few internet firms to be found. There is not much of a middle class or even an upper-middle class of internet corporations to be found.

This poses a fundamental problem for democracy, though it is one that mainstream commentators and scholars appear reluctant to acknowledge: If economic power is concentrated in a few powerful hands you have the political economy for feudalism, or authoritarianism, not democracy. Concentrated economic power invariably overwhelms the political equality democracy requires, leading to routinized corruption and an end of the rule of law. That is where we are today in the United States.

The short answer is technology. Yes, Facebook really did ruin everything. The explosion in communication technologies over the past decades has re-oriented society and put more psychological strain on us all to find our identities and meaning. For some people, the way to ease this strain is to actually reject complexity and ambiguity for absolutist beliefs and traditional ideals.

Philosopher Charles Taylor wrote that it would be just as difficult to not believe in God in 1500 as it is to believe in God in the year 2000. Obviously, most of humanity believes in God today, but it’s certainly become a much more complicated endeavor. With the emergence of modern science, evolution, liberal democracy, and worldwide 24-hour news coverage of corruption, atrocities, war and religious hypocrisy, today a person of faith has their beliefs challenged more in a week than a person a few generations ago would have in half a lifetime.

Gentrification and ‘the fucking hipster show’; hostile architecture and defensive urban design

[Photo: Linda Nylind for the Guardian]

[Marxist geographer Neil] Smith offers a dry, but emphatically structural account of this process, which he first theorized in the late eighties with Soho and the Lower East Side in mind. Gentrification has since become central to neoliberal urbanization generally, and New York City in particular, under the developer-driven Bloomberg administration.

But why bother with “dry” and “structural” when you can tune in to the “fucking hipster” show?

Unlike Smith’s rigorous Marxian analysis, most popular accounts, from the spurious creative-class mystifications of Richard Florida to standard-issue conservative populist diatribes, forget the larger forces and primary movers in this process, which is instead reduced, metonymically, to the catchall figure of the hipster.

[…]

On topics ranging from the capitalist dynamics of gentrification to the casualization of employment among ostensibly middle class Millennials, the “fucking hipster” show beats staid structural analysis every time — even for many members of the self-identified Left.

[…]

We should retire “hipster” as a term without referent or political salience. Its zombie-like persistence in anti-hipster discourse must be recognized for what it is: an urbane, and socially acceptable, form of ideologically inflected shaming on the part of American elites who must delegitimize those segments of a largely white, college educated population who didn’t do the “acceptable thing.”

The anti-hipster censure here includes a healthy dose of typically American anti-intellectualism, decked out in liberal bunting, subtle homophobia, and recognizably manipulative appeals to white, middle class resentment, now aimed at the lazy hipster, who either lives on his trust fund or, more perniciously, abuses public assistance, proving how racist templates are multi-use tools.

Our power elites’ rhetorical police action becomes increasingly necessary as large swaths of the people lumped under the hipster taxon slip into the ranks of the long-term un- and underemployed. Once innocuous alternative lifestyles could potentially metamorphosize into something else altogether. Better to frame “alternative lifestyle” in terms of avant-garde trend setting without remainder, providing suitably rarefied consumption options for Bloomberg’s new bourgeoisie, as they buy locally sourced creativity on Bedford Ave.

Metal spikes designed to stop homeless people sleeping in the doorway of a London apartment block have been removed, after almost 130,000 people signed a petition calling for them to be taken out.

Pictures of the metal studs outside flats in Southwark Bridge Road were widely shared online last weekend, sparking outrage on social media.

Many criticised the spikes as inhumane, and compared them to those used to stop pigeons landing on buildings.

It has been encouraging to see the outrage over the London spikes. But the spikes that caused the uproar are by no means the only form of homeless-deterrent technology; they are simply the most conspicuous. Will public concern over the spikes extend to other less obvious instances of anti-homeless design? Perhaps the first step lies in recognizing the political character of the devices all around us.

An example of an everyday technology that’s used to forbid certain activities is “skateboard deterrents,” that is, those little studs added to handrails and ledges. These devices, sometimes also called “skatestoppers” or “pig ears,” prevent skateboarders from performing sliding—or “grinding”—tricks across horizontal edges. A small skateboard deterrence industry has developed, with vendors bearing names like “stopagrind.com” and “grindtoahault.com.”

[…]

An example of a pervasive homeless deterrence technology is benches designed to discourage sleeping. These include benches with vertical slats between each seat, individual bucket seats, large armrests between seats, and wall railings which enable leaning but not sitting or lying, among many other designs. There are even benches made to be slightly uncomfortable in order to dissuade people from sitting too long. Sadly, such designs are particularly common in subway stations, bus stops, and parks that present the homeless with the prospect of a safely public place to sleep.

[…]

The London spikes provide an opportunity to put a finger on our own intuitions about issues of homelessness and the design of open space. Ask yourself if you were appalled by the idea of the anti-homeless spikes. If so, then by implication you should have the same problems with other less obvious homeless deterrence designs like the sleep-prevention benches and the anti-loitering policies that target homeless people.

In addition to anti-skateboard devices, with names such as “pig’s ears” and “skate stoppers”, ground-level window ledges are increasingly studded to prevent sitting, slanting seats at bus stops deter loitering and public benches are divided up with armrests to prevent lying down.

To that list, add jagged, uncomfortable paving areas, CCTV cameras with speakers and “anti-teenager” sound deterrents, such as the playing of classical music at stations and so-called Mosquito devices, which emit irritatingly high-pitched sounds that only teenagers can hear.

[…]

The architectural historian Iain Borden says the emergence of hostile architecture has its roots in 1990s urban design and public-space management. The emergence, he said, “suggested we are only republic citizens to the degree that we are either working or consuming goods directly.

“So it’s OK, for example, to sit around as long as you are in a cafe or in a designated place where certain restful activities such as drinking a frappuccino should take place but not activities like busking, protesting or skateboarding. It’s what some call the ‘mallification’ of public space, where everything becomes like a shopping mall.”

Critical perspectives on the Isla Vista spree killer, media coverage

 

[Photo: Reuters/Lucy Nicholson]

  • Immediately following Elliot Rodger’s spree killing in Isla Vista, CA last month, Internet users discovered his YouTube channel and a 140-page autobiographical screed, dubbed a “manifesto” by the media. The written document and the videos documented Rodger’s sexual frustration and his chronic inability to connect with other people. He specifically lashed out at women for forcing him “to endure an existence of loneliness, rejection and unfulfilled desires” and causing his violent “retribution”. Commentators and the popular press framed the killings as an outcome of misogynistic ideology, with headlines such as “How misogyny kills men,” “Further proof that misogyny kills,” and “Elliot Rodger proves the danger of everyday sexism.” Slate contributor Amanda Hess wrote:

Elliot Rodger targeted women out of entitlement, their male partners out of jealousy, and unrelated male bystanders out of expedience. This is not ammunition for an argument that he was a misandrist at heart—it’s evidence of the horrific extent of misogyny’s cultural reach.

His parents saw the digitally mediated rants and contacted his therapist and a social worker, who contacted a mental health hotline. These were the proper steps. But those who interviewed Rodger found him to be a “perfectly polite, kind and wonderful human.” They deemed his involuntary holding unnecessary and a search of his apartment unwarranted. That is, authorities defined Rodger and assessed his intentions based upon face-to-face interaction, privileging this interaction over and above a “vast digital trail.” This is digital dualism taken to its worst imaginable conclusion.

In fact, the entire 140-odd-page memoir he left behind, “My Twisted World,” documents with agonizing repetition the daily tortured minutiae of his life, and barely has any interactions with women. What it has is interactions with the symbols of women, a non-stop shuffling of imaginary worlds that women represented access to. Women weren’t objects of desire per se; they were currency.

[…]

What exists in painstaking detail are the male figures in his life. The ones he meets who then reveal that they have kissed a girl, or slept with a girl, or slept with a few girls. These are the men who have what Elliot can’t have, and these are the men that he obsesses over.

[…]

Women don’t merely serve as objects for Elliot. Women are the currency used to buy whatever he’s missing. Just as a dollar bill used to get you a dollar’s worth of silver, a woman is an indicator of spending power. He wants to throw this money around for other people. Bring them home to prove something to his roommates. Show the bullies who picked on him that he deserves the same things they do.

[…]

There’s another, slightly more obscure recurring theme in Elliot’s manifesto: The frequency with which he discusses either his desire or attempt to throw a glass of some liquid at happy couples, particularly if the girl is a ‘beautiful tall blonde.’ […] These are the only interactions Elliot has with women: marking his territory.

[…]

When we don’t know how else to say what we need, like entitled children, we scream, and the loudest scream we have is violence. Violence is not an act of expressing the inexpressible, it’s an act of expressing our frustration with the inexpressible. When we surround ourselves by closed ideology, anger and frustration and rage come to us when words can’t. Some ideologies prey on fear and hatred and shift them into symbols that all other symbols are defined by. It limits your vocabulary.

While the motivations for the shootings may vary, they have in common crises in masculinity in which young men use guns and violence to create ultra-masculine identities as part of a media spectacle that produces fame and celebrity for the shooters.

[…]

Crises in masculinity are grounded in the deterioration of socio-economic possibilities for young men and are inflamed by economic troubles. Gun carnage is also encouraged in part by media that repeatedly illustrates violence as a way of responding to problems. Explosions of male rage and rampage are also embedded in the escalation of war and militarism in the United States from the long nightmare of Vietnam through the military interventions in Afghanistan and Iraq.

For Debord, “spectacle” constituted the overarching concept to describe the media and consumer society, including the packaging, promotion, and display of commodities and the production and effects of all media. Using the term “media spectacle,” I am largely focusing on various forms of technologically-constructed media productions that are produced and disseminated through the so-called mass media, ranging from radio and television to the Internet and the latest wireless gadgets.

  • Kellner’s comments from a 2008 interview discussing the Virginia Tech shooter’s videos broadcast after the massacre, and his comments on critical media literacy, remain relevant to the current situation:

Cho’s multimedia video dossier, released after the Virginia Tech shootings, showed that he was consciously creating a spectacle of terror to create a hypermasculine identity for himself and avenge himself to solve his personal crises and problems. The NIU shooter, dressed in black, emerged from a curtain onto a stage and started shooting, obviously creating a spectacle of terror, although as of this moment we still do not know much about his motivations. As for the television networks, since they are profit centers in a highly competitive business, they will continue to circulate school shootings and other acts of domestic terrorism as “breaking events” and will constitute the murderers as celebrities. Some media have begun to not publicize the names of teen suicides, to attempt to deter copy-cat effects, and the media should definitely be concerned about creating celebrities out of school shooters and not sensationalizing them.

[…]

People have to become critical of the media scripts of hyperviolence and hypermasculinity that are projected as role models for men in the media, or that help to legitimate violence as a means to resolve personal crises or solve problems. We need critical media literacy to analyze how the media construct models of masculinities and femininities, good and evil, and become critical readers of the media who ourselves seek alternative models of identity and behavior.

  • Almost immediately after news of the violence broke, and word of the killer’s YouTube videos spread, there was a spike of online backlash against the media saturation, along with warnings against promoting the perpetrator to celebrity status through omnipresent news coverage. Just two days after the killings, Isla Vista residents and UCSB students let the news crews at the scene know that they were not welcome to intrude upon the community’s mourning. As they are wont to do, journalists reported on their role in the story while ignoring the wishes of the residents, as in this LA Times brief:

More than a dozen reporters were camped out on Pardall Road in front of the deli — and had been for days, their cameras and lights and gear taking up an entire lane of the street. At one point, police officers showed up to ensure that tensions did not boil over.

The students stared straight-faced at reporters. Some held signs expressing their frustration with the news media:

“OUR TRAGEDY IS NOT YOUR COMMODITY.”

“Remembrance NOT ratings.”

“Stop filming our tears.”

“Let us heal.”

“NEWS CREWS GO HOME!”

Fukuyama: 25 years after the “End of History”

I argued that History (in the grand philosophical sense) was turning out very differently from what thinkers on the left had imagined. The process of economic and political modernization was leading not to communism, as the Marxists had asserted and the Soviet Union had avowed, but to some form of liberal democracy and a market economy. History, I wrote, appeared to culminate in liberty: elected governments, individual rights, an economic system in which capital and labor circulated with relatively modest state oversight.

[…]

So has my end-of-history hypothesis been proven wrong, or if not wrong, in need of serious revision? I believe that the underlying idea remains essentially correct, but I also now understand many things about the nature of political development that I saw less clearly during the heady days of 1989.

[…]

Twenty-five years later, the most serious threat to the end-of-history hypothesis isn’t that there is a higher, better model out there that will someday supersede liberal democracy; neither Islamist theocracy nor Chinese capitalism cuts it. Once societies get on the up escalator of industrialization, their social structure begins to change in ways that increase demands for political participation. If political elites accommodate these demands, we arrive at some version of democracy.

When he wrote “The End of History?”, Fukuyama was a neocon. He was taught by Leo Strauss’s protege Allan Bloom, author of The Closing of the American Mind; he was a researcher for the Rand Corporation, the thinktank for the American military-industrial complex; and he followed his mentor Paul Wolfowitz into the Reagan administration. He showed his true political colours when he wrote that “the class issue has actually been successfully resolved in the west … the egalitarianism of modern America represents the essential achievement of the classless society envisioned by Marx.” This was a highly tendentious claim even in 1989.

[…]

Fukuyama distinguished his own position from that of the sociologist Daniel Bell, who published a collection of essays in 1960 titled The End of Ideology. Bell had found himself, at the end of the 1950s, at a “disconcerting caesura”. Political society had rejected “the old apocalyptic and chiliastic visions”, he wrote, and “in the west, among the intellectuals, the old passions are spent.” Bell also had ties to neocons but denied an affiliation to any ideology. Fukuyama claimed not that ideology per se was finished, but that the best possible ideology had evolved. Yet the “end of history” and the “end of ideology” arguments have the same effect: they conceal and naturalise the dominance of the right, and erase the rationale for debate.

While I recognise the ideological subterfuge (the markets as “natural”), there is a broader aspect to Fukuyama’s essay that I admire, and cannot analyse away. It ends with a surprisingly poignant passage: “The end of history will be a very sad time. The struggle for recognition, the willingness to risk one’s life for a purely abstract goal, the worldwide ideological struggle that called forth daring, courage, imagination, and idealism, will be replaced by economic calculation, the endless solving of technical problems, environmental concerns, and the satisfaction of sophisticated consumer demands.”

In an article that went viral in 1989, Francis Fukuyama advanced the notion that with the death of communism history had come to an end in the sense that liberalism — democracy and market capitalism — had triumphed as an ideology. Fukuyama will be joined by other scholars to examine this proposition in the light of experience during the subsequent quarter century.

Featuring Francis Fukuyama, author of “The End of History?”; Michael Mandelbaum, School of Advanced International Studies, Johns Hopkins University; Marian Tupy, Cato Institute; Adam Garfinkle, editor, American Interest; Paul Pillar, Nonresident Senior Fellow, Foreign Policy, Center for 21st Century Security and Intelligence, Brookings Institution; and John Mueller, Ohio State University and Cato Institute.

Ender’s Game analyzed, the Stanley Parable explored, Political Economy of zombies, semiotics of Twitter, much more

It’s been a long time since the last update (what happened to October?), so this post is extra long in an attempt to catch up.

  • I haven’t seen the new Ender’s Game movie, but this review by abbeyotis at Cyborgology calls the film “a lean and contemporary plunge into questions of morality mediated by technology”:

In a world in which interplanetary conflicts play out on screens, the government needs commanders who will never shrug off their campaigns as merely “virtual.” These same commanders must feel the stakes of their simulated battles to be as high as actual warfare (because, of course, they are). Card’s book makes the nostalgic claim that children are useful because they are innocent. Hood’s movie leaves nostalgia by the roadside, making the more complex assertion that they are useful because of their unique socialization to be intimately involved with, rather than detached from, simulations.

  • In the ongoing discourse about games criticism and its relation to film reviews, Bob Chipman’s latest Big Picture post uses his own review of the Ender’s Game film as an entry point for a breathless treatise on criticism. The video presents a concise and nuanced overview of arts criticism, from the classical era through film reviews as consumer reports up to the very much in-flux conceptions of games criticism. Personally, I find this video sub-genre (where spoken content is crammed into a Tommy-gun barrage of word bullets so that the narrator can convey a lot of information in a short running time) irritating and mostly worthless, since the verbal information is presented faster than the listener can really process it. It reminds me of Film Crit Hulk, someone who writes excellent essays with obvious insight into filmmaking, but whose aesthetic choice (or “gimmick”) to write in all caps is often a distraction from the content and a deterrent to readers. Film Crit Hulk has of course addressed this issue and explained the rationale for this choice, but considering that his more recent articles have dropped the third-person “Hulk speak” writing style, the all caps seems to be played out. Nevertheless, I’m sharing the video because Mr. Chipman makes a lot of interesting points, particularly regarding the cultural contexts for the various forms of criticism. Just remember to breathe deeply and monitor your heart rate while watching.
  • In this video of a presentation titled Game design: the medium is the message, Jonathan Blow discusses how commercial constraints dictate the form of products from TV shows to video games.
  • This video from Satchbag’s Goods is ostensibly a review of Hotline Miami, but develops into a discussion of art movements and Kanye West:
  • This short interview with Slavoj Žižek in New York magazine continues a trend I’ve noticed since Pervert’s Guide to Ideology has been in release, wherein writers interviewing Žižek feel compelled to include themselves and their reactions to and interactions with Žižek in their articles. Something about a Žižek encounter brings out the gonzo in journalists. The NY mag piece is also notable for this succinct positioning of Žižek’s contribution to critical theory:

Žižek, after all, the Yugoslav-born, Ljubljana-based academic and Hegelian; mascot of the Occupy movement, critic of the Occupy movement; and former Slovenian presidential candidate, whose most infamous contribution to intellectual history remains his redefinition of ideology from a Marxist false consciousness to a Freudian-Lacanian projection of the unconscious. Translation: To Žižek, all politics—from communist to social-democratic—are formed not by deliberate principles of freedom, or equality, but by expressions of repressed desires—shame, guilt, sexual insecurity. We’re convinced we’re drawing conclusions from an interpretable world when we’re actually just suffering involuntary psychic fantasies.

Following the development of the environment on the team’s blog, you can see some of the gaps between what data was deemed noteworthy or worth recording in the seventeenth century and the level of detail we now expect in maps and other infographics. For example, the team struggled to pinpoint the exact location on Pudding Lane of the bakery where the Great Fire of London is thought to have originated and so just ended up placing it halfway along.

  • Stephen Totilo reviewed the new pirate-themed Assassin’s Creed game for the New York Times. I haven’t played the game, but I love that the sections of the game set in the present day have shifted from the standard global conspiracy tropes seen in the earlier installments to postmodern self-referential and meta-fictional framing:

Curiously, a new character is emerging in the series: Ubisoft itself, presented mostly in the form of self-parody in the guise of a fictional video game company, Abstergo Entertainment. We can play small sections as a developer in Abstergo’s Montreal headquarters. Our job is to help turn Kenway’s life — mined through DNA-sniffing gadgetry — into a mass-market video game adventure. We can also read management’s emails. The team debates whether games of this type could sell well if they focused more on peaceful, uplifting moments of humanity. Conflict is needed, someone argues. Violence sells.

It turns out that Abstergo is also a front for the villainous Templars, who search for history’s secrets when not creating entertainment to numb the population. In these sections, Ubisoft almost too cheekily aligns itself with the bad guys and justifies its inevitable 2015 Assassin’s Creed, set during yet another violent moment in world history.

The Stanley Parable wants you to think about it. The Stanley Parable, despite its very limited inputs (you can’t even jump, and very few objects are interactive), looks at those parts of first-person gaming that are least easy to design for – exploration and messing with the game’s engine – and foregrounds them. It takes the very limitations of traditional gaming narratives and uses them to ruthlessly expose their own flaws.

Roy’s research focus prior to founding Bluefin, and his continued interest while running the company, has to do with how both artificial and human intelligences learn language. In studying this process, he determined that the most important factor in meaning making was the interaction between human beings: no one learns language in a vacuum, after all. That lesson helped inform his work at Twitter, which started with mapping the connection between social network activity and live broadcast television.

Aspiring to cinematic qualities is not bad in and of itself, nor do I mean to shame fellow game writers, but developers and their attendant press tend to be myopic in their point of view, both figuratively and literally. If we continually view videogames through a monocular lens, we miss much of their potential. And moreover, we begin to use ‘cinematic’ reflexively without taking the time to explain what the hell that word means.

Metaphor is a powerful tool. Thinking videogames through other media can reframe our expectations of what games can do, challenge our design habits, and reconfigure our critical vocabularies. To crib a quote from Andy Warhol, we get ‘a new idea, a new look, a new sex, a new pair of underwear.’ And as I hinted before, it turns out that fashion and videogames have some uncanny similarities.

Zombies started their life in the Hollywood of the 1930s and ‘40s as simplistic stand-ins for racist xenophobia. Post-millennial zombies have been hot-rodded by Danny Boyle and made into a subversive form of utopia. That grim utopianism was globalized by Max Brooks, and now Brad Pitt and his partners are working to transform it into a global franchise. But if zombies are to stay relevant, it will rely on the shambling monsters’ ability to stay subversive – and real subversive shocks and terror are not dystopian. They are utopian.

Ironically, our bodies now must make physical contact with devices dictating access to the real; Apple’s Touch ID sensor can discern for the most part if we are actually alive. This way, we don’t end up trying to find our stolen fingers on the black market, or prevent others from 3D scanning them to gain access to our lives.

This is a monumental shift from when Apple released its first iPhone just six years ago. It’s a touchy subject: fingerprinting authentication means we confer our trust in an inanimate object to manage our animate selves – our biology is verified, digitised, encrypted, as they are handed over to our devices.

Can you really buy heroin on the Web as easily as you might purchase the latest best-seller from Amazon? Not exactly, but as the FBI explained in its complaint, it wasn’t exactly rocket science, thanks to Tor and some bitcoins. Here’s a rundown of how Silk Road worked before the feds swooped in.

  • Henry Jenkins posted the transcript of an interview with Mark J.P. Wolf. The theme of the discussion is “imaginary worlds,” and they touch upon the narratology vs. ludology conflict in gaming:

The interactivity vs. storytelling debate is really a question of the author saying either “You choose” (interaction) or “I choose” (storytelling) regarding the events experienced; it can be all of one or all of the other, or some of each to varying degrees; and even when the author says “You choose”, you are still choosing from a set of options chosen by the author.  So it’s not just a question of how many choices you make, but how many options there are per choice.  Immersion, however, is a different issue, I think, which does not always rely on choice (such as immersive novels), unless you want to count “Continue reading” and “Stop reading” as two options you are constantly asked to choose between.