Category: Theory

MISC Monday: MLK media literacy, social media stress, the attention economy, and more


Examine the life and legacy of Dr. Martin Luther King Jr. and the Civil Rights Movement with hundreds of PBS LearningMedia resources. Here is a sampling from that extensive offering. Use these resources to explore media literacy, from historical documentaries to media coverage of social movements.

Among the survey’s major findings is that women are much more likely than men to feel stressed after becoming aware of stressful events in the lives of others in their networks.

“Stress is kind of contagious in that way,” said Keith Hampton, an associate professor at Rutgers University and the chief author of the report. “There’s a circle of sharing and caring and stress.”

In a survey of 1,801 adults, Pew found that frequent engagement with digital services wasn’t directly correlated with increased stress. Women who used social media heavily even recorded lower stress. The survey relied on the Perceived Stress Scale, a widely used stress-measurement tool developed in the early 1980s.

“We began to work fully expecting that the conventional wisdom was right, that these technologies add to stress,” said Lee Rainie, the director of Internet, science, and technology research at Pew. “So it was a real shock when [we] first looked at the data and … there was no association between technology use, especially heavy technology use, and stress.”

The higher incidence of stress among the subset of technology users who are aware of stressful events in the lives of others is something that Hampton and his colleagues call “the cost of caring.”

“You can use these technologies and, as a woman, it’s probably going to be beneficial for your level of stress. But every now and then, bad things are going to happen to people you know, and there’s going to be a cost for that,” Hampton said.

The real danger we face from computer automation is dependency. Our inclination to assume that computers provide a sufficient substitute for our own intelligence has made us all too eager to hand important work over to software and accept a subservient role for ourselves. In designing automated systems, engineers and programmers also tend to put the interests of technology ahead of the interests of people. They transfer as much work as possible to the software, leaving us humans with passive and routine tasks, such as entering data and monitoring readouts. Recent studies of the effects of automation on work reveal how easily even very skilled people can develop a deadening reliance on computers. Trusting the software to handle any challenges that may arise, the workers fall victim to a phenomenon called “automation complacency”.

Should we be scared of the future?
I think we should be worried about the future. We are putting ourselves passively into the hands of those who design the systems. We need to think critically about that, even as we maintain our enthusiasm for the great inventions that are happening. I’m not a Luddite. I’m not saying we should trash our laptops and run off to the woods.

We’re basically living out Freud’s death drive, trying our best to turn ourselves into inorganic lumps.
Even before Freud, Marx made the point that the underlying desire of technology seemed to be to create animate technology and inanimate humans. If you look at the original radios, they were transmission as well as reception devices, but before long most people just stopped transmitting and started listening.

From an educational perspective, what we must understand is the relationship between information and meaning. Meaning is not an inevitable outcome of access to information; rather, it emerges slowly when one has cultivated the ability to incorporate that information in purposeful and ethical ways. Very often this process requires a slowdown rather than a speedup, the latter being a primary bias of many digital technologies. The most powerful educational experiences stem from the relationships formed between teacher and student, peer and peer. A smart classroom isn’t necessarily one that includes the latest technologies, but one that facilitates greater interaction among teachers and students, and responsibility for the environment within which one learns. A smart classroom is thus spatially, not primarily technologically, smart. While the two are certainly not mutually exclusive (and much has been written on both), we do ourselves a disservice when privileging the latter over the former.

  • Dowd’s argument here is similar to Carr’s thoughts on MOOCs:

In education, computers are also falling short of expectations. Just a couple of years ago, everyone thought that massive open online courses – Moocs – would revolutionise universities. Classrooms and teachers seemed horribly outdated when compared to the precision and efficiency of computerised lessons. And yet Moocs have largely been a flop. We seem to have underestimated the intangible benefits of bringing students together with a real teacher in a real place. Inspiration and learning don’t flow so well through fibre-optic cables.

  • MediaPost editor Steve Smith writes about his relationship with his iPhone, calling it life’s new remote:

The idea that the cell phone is an extension of the self is about as old as the device itself. We all recall the hackneyed “pass your phone to the person next to you” thought experiment at trade shows four or five years ago. It was designed to make the point of how “personally” we take these devices.

And now the extraordinary and unprecedented intimacy of these media devices is a part of legal precedent. The recent Supreme Court ruling limiting searches of cell phone contents grounded the unanimous opinion on an extraordinary observation. Chief Justice John Roberts described these devices as being “such a pervasive and insistent part of daily life that the proverbial visitor from Mars might conclude they were an important feature of human anatomy.”

We are only beginning to understand the extent to which these devices are blending the functionality of media with that of real world tools. And it is in line with one of Marshall McLuhan’s core observations in his “Understanding Media” book decades ago.

As early as 1971, Herbert Simon observed that “what information consumes is rather obvious: it consumes the attention of its recipients. Hence a wealth of information creates a poverty of attention, and a need to allocate that attention efficiently among the overabundance of information sources that might consume it”. Thus, instead of reaping the benefits of the digital revolution, we are intellectually deprived by our inability to filter out sensory junk in order to translate information into knowledge. As a result, we are collectively wiser, in that we can retrieve all the wisdom of the world in a few minutes, but individually more ignorant, because we lack the time, self-control, or curiosity to do it.

There are also psychological consequences of the distraction economy. Although it is too soon to observe any significant effects from technology on our brains, it is plausible to imagine that long-term effects will occur. As Nicholas Carr noted in The Shallows: What the Internet Is Doing to Our Brains, repeated exposure to online media demands a cognitive change from deeper intellectual processing, such as focused and critical thinking, to fast autopilot processes, such as skimming and scanning, shifting neural activity from the hippocampus (the area of the brain involved in deep thinking) to the prefrontal cortex (the part of the brain engaged in rapid, subconscious transactions). In other words, we are trading accuracy for speed and prioritising impulsive decision-making over deliberate judgment. In the words of Carr: “The internet is an interruption system. It seizes our attention only to scramble it”.

The research, carried out by Harvard Medical School and published in the journal Proceedings of the National Academy of Sciences, studied the sleeping patterns of 12 volunteers over a two-week period. Each individual read a book before their strict 10PM bedtime — spending five days with an iPad and five days with a paper book. The scientists found that when reading on a lit screen, volunteers took an average of 10 minutes longer to fall asleep and received 10 minutes less REM sleep. Regular blood samples showed they also had lower levels of the sleep hormone melatonin, consistent with a circadian cycle delayed by one and a half hours.

Ever since the frequent cocaine user and hater of sleep Thomas Edison flicked on the first commercially viable electric lightbulb, a process has taken hold through which the darkness of sleep time has been systematically deconstructed and illuminated.

Most of us now live in insomniac cities with starless skies, full of twinkling neon signage and flickering gadgets that beg us to stay awake longer and longer. But for all this technological innovation, we still must submit to our diurnal rhythm if we want to stay alive.

And even though sleep may “frustrate and confound strategies to exploit and reshape it,” as Crary says, it, like anything, remains a target of exploitation and reshaping – and in some cases, all-out elimination.

What is striking about this corporate monopolization of the internet is that all the wealth and power has gone to a small number of absolutely enormous firms. As we enter 2015, 13 of the 33 most valuable corporations in the United States are internet firms, and nearly all of them enjoy monopolistic market power as economists have traditionally used the term. Scanning further down the list, there are precious few internet firms to be found. There is not much of a middle class, or even an upper-middle class, of internet corporations.

This poses a fundamental problem for democracy, though it is one that mainstream commentators and scholars appear reluctant to acknowledge: If economic power is concentrated in a few powerful hands you have the political economy for feudalism, or authoritarianism, not democracy. Concentrated economic power invariably overwhelms the political equality democracy requires, leading to routinized corruption and an end of the rule of law. That is where we are today in the United States.

The short answer is technology. Yes, Facebook really did ruin everything. The explosion in communication technologies over the past decades has re-oriented society and put more psychological strain on us all to find our identities and meaning. For some people, the way to ease this strain is to actually reject complexity and ambiguity for absolutist beliefs and traditional ideals.

Philosopher Charles Taylor wrote that it would be just as difficult to not believe in God in 1500 as it is to believe in God in the year 2000. Obviously, most of humanity believes in God today, but it’s certainly become a much more complicated endeavor. With the emergence of modern science, evolution, liberal democracy, and worldwide 24-hour news coverage of corruption, atrocities, war and religious hypocrisy, today a person of faith has their beliefs challenged more in a week than a person a few generations ago would have in half a lifetime.

McLuhan Monday: Print and Islam, mobile gaming, medium theory, McLuhan’s relevance, and more


So, in the Muslim world, books and literacy became generally accessible (instead of being accessible only to the educated male and the wealthy) about a quarter of a millennium later than in European-Western culture. I found this information, together with an assessment of the damage this 250-year lag caused to Muslim society and culture, in the works of Muslim scholars.

This lag could be made up in the blink of an eye as the cultural world moved from Johannes Gutenberg’s galaxy into the era when “The medium is the message,” and with the development of the virtual and digital world (at the expense of the printed one, of course).

McLuhan had a lot of ideas and subsets of ideas. But he had one very big idea: that human civilization had passed through two stages of communication history, oral and print, and was embarking on another: electronic media. He believed the new media would change the way people relate to themselves and others and would change societies dramatically. Is the computer, then, the ubiquitous laptop and other devices, the McLuhan “audile-tactile” dream come true? There is no way to know. And it will take at least another 50 years to make a full evaluation of the work of Marshall McLuhan.

Taking a leaf from McLuhan then, I submit that the message is the product. The tone, approach and strategy of how marketing is conducted shapes what kinds of product can be allowed by a product’s developer. What kind of ad you’ll run determines what kind of game you’ll believe can work, and therefore what kind of game you’ll fund and make.

[…]

The medium is the message and the message is the product, remember. In Marvel’s case the medium of cinema sends the message of the big experience, and the message disseminated through a high value trailer leads to the will to make a high value product: a big splashy movie. That’s how it earned the right to be thought of as premium. That’s how games do that too.

When media guru Marshall McLuhan declared back in the 1960s that “Every innovation has within itself the seeds of its reversal,” I had no idea what he meant. But, like his other catchy quotables — “global village,” “cool media,” “the medium is the message” — it stayed with me. Now, in the Internet age, I am seeing proof of his prophecy every day.

For example, McLuhan predicted that a rapidly expanding automobile culture would lead to more traffic jams, air pollution and longing for space to take long walks or ride bicycles. I’m sure he’d give a knowing I-told-you-so nod to today’s battles between car people and bike people for asphalt space.

[…]

But more recently and less happily, I see far more sinister seeds of reversal in this era’s greatest innovation, the Internet. We greeted the Web as a liberator, but in today’s age of terrorism and post-Cold War autocrats it also poses a growing menace to the press freedoms it otherwise has invigorated.

Two common critiques of McLuhan are his obliviousness to political economy and his technological determinism. McLuhan’s prognosis on media appears to celebrate a burgeoning world order and global capitalism. The way he foreshadows cognitive capitalism appears deterministic. Critics attack McLuhan for being silent on the transformation of global capitalism. This criticism focuses on what McLuhan did not write in Understanding Media as opposed to what he did. It is interesting to note that European scholars, even those with political-economic inclinations, do not scorn McLuhan the way North Americans do. They do not blame him for being the messenger of a cognitive capitalist message.

[…]

McLuhan rightly described and to some extent predicted how messages need not be unidirectional. When he argued that technology is an extension of the senses, he did not argue that a select few had agency over the shaping of the message. He argued that any person had that potential. Specifically, he described how alternate modes of literacy allowed non-literate people to participate in a global discourse. This is McLuhan’s legacy and part of why his work should be celebrated today.

Political Economy in Mumford’s “Technics & Civilization”


I’ve written about the media ecology tradition, attended the Media Ecology Association’s conferences and had an article published in their journal, but up to now Marshall McLuhan’s Understanding Media and Neil Postman’s Amusing Ourselves to Death are the only primary texts associated with the tradition that I’ve read. To broaden my knowledge of the tradition I’m reading some of the books considered foundational in the media ecology canon, beginning with Lewis Mumford’s Technics & Civilization. I paid special attention to Mumford’s references to capitalism in Technics & Civilization because I have an abiding interest in the marriage of critical/Marxian analysis and media ecological perspectives. One of the most common criticisms of McLuhan’s writings on media is the charge of technological determinism: that his media theory focuses on wide-reaching social and psychological effects while ignoring the historical, political, and economic factors involved in the development and dissemination of technologies. Although this is a valid criticism, as McLuhan’s approach did not emphasize the political economy of media, a number of scholars have re-evaluated McLuhan and other media ecologists to identify parallels in their work with critical theory and the Marxian critique of capitalism. The same criticisms cannot be legitimately levied against Mumford, whose account of successive technological complexes demonstrates careful consideration of the historical, political, and economic situations in which these complexes developed. Technics & Civilization makes clear that a media ecology perspective can incorporate a pronounced political orientation and an analysis of political economy.

Reading through Mumford’s account of the phases of technological complexes, I noted how the capitalist mode of economics is heavily dependent on technology. The interdependence seemed so crucial that the history of capitalism almost reads as the history of technological development, though Mumford does distinguish technics and capitalism as separate but interrelated forces. In the conclusion of the final chapter, “Orientation,” Mumford writes “we have advanced as far as seems possible in considering mechanical civilization as an isolated system” (p. 434). Technics & Civilization was first published in 1934; a contemporary reader will likely extend Mumford’s analysis to account for the last 80 years of technological progress, particularly in consideration of the information and telecommunications revolutions (an editorial note before the main text states that Mumford “would have loved” the Internet). Such an extension must account for the associated developments in capitalism. Scholars have used terms like “hypercapitalism” and “network and informational capitalism” to describe the new outlets of capital accumulation made possible by the global telecommunications infrastructure. Mumford wrote that “we are now entering a phase of dissociation between capitalism and technics” (p. 366), due in part to the over-working of “the machine”. Hypercapitalism has seen new forms of over-exploitation, and the continued commodification of intangibles such as information and attention, calling into question the dissociation of capitalism and technics. Mumford’s warning of the capitalist threat to physical resources, however, remains pertinent today.

The attention Mumford gives to the psychological effects of technics is a fascinating component of his analysis that prefigures McLuhan’s observations on technology as extensions of the human organism. The introduction of introspection and self-reflection instigated by the mirror’s effect on the individual ego; the metamorphosis of thought from flowing and organic to verbal and categorical brought on by print and paper; the shift from self-examination to self-exposure ushered in by the introduction of cameras; these are just some of the examples cited by Mumford to establish that the technological complexes built up from every individual innovation are not constrained to the obvious external manifestations but involve dramatic internal changes as well. In fact, the psychological and material transformations are not distinct processes, but are necessarily interlinked, two sides of the same coin.

Gentrification and ‘the fucking hipster show’; hostile architecture and defensive urban design

Linda Nylind for the Guardian

[Marxist geographer Neil] Smith offers a dry, but emphatically structural account of this process, which he first theorized in the late eighties with Soho and the Lower East Side in mind. Gentrification has since become central to neoliberal urbanization generally, and New York City in particular, under the developer-driven Bloomberg administration.

But why bother with “dry” and “structural” when you can tune in to the “fucking hipster” show?

Unlike Smith’s rigorous Marxian analysis, most popular accounts, from the spurious “creative class” mystifications of Richard Florida to standard-issue conservative populist diatribes, forget the larger forces and primary movers in this process, which is instead reduced, metonymically, to the catchall figure of the hipster.

[…]

On topics ranging from the capitalist dynamics of gentrification to the casualization of employment among ostensibly middle class Millennials, the “fucking hipster” show beats staid structural analysis every time — even for many members of the self-identified Left.

[…]

We should retire “hipster” as a term without referent or political salience. Its zombie-like persistence in anti-hipster discourse must be recognized for what it is: an urbane, and socially acceptable, form of ideologically inflected shaming on the part of American elites who must delegitimize those segments of a largely white, college educated population who didn’t do the “acceptable thing.”

The anti-hipster censure here includes a healthy dose of typically American anti-intellectualism, decked out in liberal bunting, subtle homophobia, and recognizably manipulative appeals to white, middle class resentment, now aimed at the lazy hipster, who either lives on his trust fund or, more perniciously, abuses public assistance, proving how racist templates are multi-use tools.

Our power elites’ rhetorical police action becomes increasingly necessary as large swaths of the people lumped under the hipster taxon slip into the ranks of the long-term un- and underemployed. Once-innocuous alternative lifestyles could potentially metamorphose into something else altogether. Better to frame “alternative lifestyle” in terms of avant-garde trend setting without remainder, providing suitably rarefied consumption options for Bloomberg’s new bourgeoisie as they buy locally sourced creativity on Bedford Ave.

Metal spikes designed to stop homeless people sleeping in the doorway of a London apartment block have been removed, after almost 130,000 people signed a petition calling for them to be taken out.

Pictures of the metal studs outside flats in Southwark Bridge Road were widely shared online last weekend, sparking outrage on social media.

Many criticised the spikes as inhumane, and compared them to those used to stop pigeons landing on buildings.

It has been encouraging to see the outrage over the London spikes. But the spikes that caused the uproar are by no means the only form of homeless-deterrent technology; they are simply the most conspicuous. Will public concern over the spikes extend to other less obvious instances of anti-homeless design? Perhaps the first step lies in recognizing the political character of the devices all around us.

An example of an everyday technology that’s used to forbid certain activities is the “skateboard deterrent,” that is, the little studs added to handrails and ledges. These devices, sometimes also called “skatestoppers” or “pig ears,” prevent skateboarders from performing sliding (or “grinding”) tricks across horizontal edges. A small skateboard-deterrence industry has developed, with vendors bearing names like “stopagrind.com” and “grindtoahault.com.”

[…]

An example of a pervasive homeless-deterrence technology is the bench designed to discourage sleeping. These include benches with vertical slats between each seat, individual bucket seats, large armrests between seats, and wall railings that enable leaning but not sitting or lying, among many other designs. There are even benches made to be slightly uncomfortable in order to dissuade people from sitting too long. Sadly, such designs are particularly common in subway stations, bus stops, and parks, the very places that present the homeless with the prospect of a safe, public place to sleep.

[…]

The London spikes provide an opportunity to put a finger on our own intuitions about issues of homelessness and the design of open space. Ask yourself if you were appalled by the idea of the anti-homeless spikes. If so, then by implication you should have the same problems with other less obvious homeless deterrence designs like the sleep-prevention benches and the anti-loitering policies that target homeless people.

In addition to anti-skateboard devices, with names such as “pig’s ears” and “skate stoppers”, ground-level window ledges are increasingly studded to prevent sitting, slanting seats at bus stops deter loitering and public benches are divided up with armrests to prevent lying down.

To that list, add jagged, uncomfortable paving areas, CCTV cameras with speakers and “anti-teenager” sound deterrents, such as the playing of classical music at stations and so-called Mosquito devices, which emit irritatingly high-pitched sounds that only teenagers can hear.

[…]

The architectural historian Iain Borden says the emergence of hostile architecture has its roots in 1990s urban design and public-space management. The emergence, he said, “suggested we are only citizens to the degree that we are either working or consuming goods directly.

“So it’s OK, for example, to sit around as long as you are in a cafe or in a designated place where certain restful activities such as drinking a frappuccino should take place but not activities like busking, protesting or skateboarding. It’s what some call the ‘mallification’ of public space, where everything becomes like a shopping mall.”

Chris Hedges interviews Chomsky; Žižek on illusion of freedom; Bogost on ‘Darmok’ and ‘Yo’

AP/Nader Daoud

Chomsky believes that the propaganda used to manufacture consent, even in the age of digital media, is losing its effectiveness as our reality bears less and less resemblance to the portrayal of reality by the organs of mass media. While state propaganda can still “drive the population into terror and fear and war hysteria, as we saw before the invasion of Iraq,” it is failing to maintain an unquestioned faith in the systems of power. Chomsky credits the Occupy movement, which he describes as a tactic, with “lighting a spark” and, most important, “breaking through the atomization of society.”

“There are all sorts of efforts to separate people from one another,” he said. “The ideal social unit [in the world of state propagandists] is you and your television screen. The Occupy actions brought that down for a large part of the population. People recognized that we could get together and do things for ourselves. We can have a common kitchen. We can have a place for public discourse. We can form our ideas. We can do something. This is an important attack on the core of the means by which the public is controlled. You are not just an individual trying to maximize consumption. You find there are other concerns in life. If those attitudes and associations can be sustained and move in new directions, that will be important.”

Not only have we learned a lot about the illegal activities of the US and other great powers. Not only have the WikiLeaks revelations put secret services on the defensive and set in motion legislative acts to better control them. WikiLeaks has achieved much more: millions of ordinary people have become aware of the society in which they live. Something that until now we silently tolerated as unproblematic is rendered problematic.

This is why Assange has been accused of causing so much harm. Yet there is no violence in what WikiLeaks is doing. We all know the classic scene from cartoons: the character reaches a precipice but goes on running, ignoring the fact that there is no ground underfoot; they start to fall only when they look down and notice the abyss. What WikiLeaks is doing is just reminding those in power to look down.

The reaction of all too many people, brainwashed by the media, to WikiLeaks’ revelations could best be summed up by the memorable lines of the final song from Altman’s film Nashville: “You may say I ain’t free but it don’t worry me.” WikiLeaks does make us worry. And, unfortunately, many people don’t like that.

“Darmok” gives us one vision of a future in which procedural rhetoric takes precedence over verbal and visual rhetoric, indeed in which the logic of logics subsumes the logics of description, appearances, and even of narrative—that preeminent form that even Troi mistakes as paramount to the Children of Tama. The Tamarians’ media ecosystem is the opposite of ours, one in which behaviors are taken as primary, and descriptions as secondary, almost incidental. The Children of Tama are less interesting as aliens than they are as counterfactual versions of us, if we preferred logic over image or description.

At the end of “Darmok,” Riker finds Captain Picard sitting in his ready room, reading from an ancient book rather than off a tablet. “Greek, sir?” Riker asks. “The Homeric Hymns,” Picard responds, “one of the root metaphors of our own culture.” “For the next time we encounter the Tamarians…” suggests the first officer. To which his captain replies, “More familiarity with our own mythology might help us relate to theirs.” A charming sentiment, and a move that always works for Star Trek—the juxtaposition of classical antiquity and science-fictional futurism. But Picard gets it wrong one last time. To represent the world as systems of interdependent logics we need not elevate those logics to the level of myth, nor focus on the logics of our myths. Instead, we would have to meditate on the logics in everything, to see the world as one built of weird, rusty machines whose gears squeal as they grind against one another, rather than as stories into which we might write ourselves as possible characters.

It’s stupid. There’s no other word for it. But according to TechCrunch, 50,000 people have sent 4 million Yos since the app was launched on, uhm, April Fool’s Day of this year. But sometimes in stupidity we find a kind of frankness, an honesty. For his part, Arbel has rather overstated the matter. “We like to call it context-based messaging,” he told The New York Times. “You understand by the context what is being said.”

[…]

Perhaps the problem with Yo isn’t what makes it stupid—its attempt to formalize the meta-communication common to online life—but what makes it gross: the need to contain all human activity within the logics of tech startups. The need to expect something from every idea, even the stupid ones, to feel that they deserve attention, users, data, and, inevitably, payout. Perhaps this is the greatest meta-communicative message of today’s technology scene. And it might not be inaccurate to summarize that message with a singular, guttural “yo.”

Critical perspectives on the Isla Vista spree killer, media coverage

 

Reuters/Lucy Nicholson

  • Immediately following Elliot Rodger’s spree killing in Isla Vista, CA last month, Internet users discovered his YouTube channel and a 140-page autobiographical screed, dubbed a “manifesto” by the media. The written document and the videos documented Rodger’s sexual frustration and his chronic inability to connect with other people. He specifically lashed out at women for forcing him “to endure an existence of loneliness, rejection and unfulfilled desires” and causing his violent “retribution”. Commentators and the popular press framed the killings as an outcome of misogynistic ideology, with headlines such as “How misogyny kills men,” “Further proof that misogyny kills,” and “Elliot Rodger proves the danger of everyday sexism.” Slate contributor Amanda Hess wrote:

Elliot Rodger targeted women out of entitlement, their male partners out of jealousy, and unrelated male bystanders out of expedience. This is not ammunition for an argument that he was a misandrist at heart—it’s evidence of the horrific extent of misogyny’s cultural reach.

His parents saw the digitally mediated rants and contacted his therapist and a social worker, who contacted a mental health hotline. These were the proper steps. But those who interviewed Rodger found him to be a “perfectly polite, kind and wonderful human.” They deemed his involuntary holding unnecessary and a search of his apartment unwarranted. That is, authorities defined Rodger and assessed his intentions based upon face-to-face interaction, privileging this interaction over and above a “vast digital trail.” This is digital dualism taken to its worst imaginable conclusion.

In fact, the entire 140-odd-page memoir he left behind, “My Twisted World,” documents with agonizing repetition the daily tortured minutiae of his life, and barely contains any interactions with women. What it has is interactions with the symbols of women, a non-stop shuffling of imaginary worlds that women represented access to. Women weren’t objects of desire per se, they were currency.

[…]

What exists in painstaking detail are the male figures in his life. The ones he meets who then reveal that they have kissed a girl, or slept with a girl, or slept with a few girls. These are the men who have what Elliot can’t have, and these are the men that he obsesses over.

[…]

Women don’t merely serve as objects for Elliot. Women are the currency used to buy whatever he’s missing. Just as a dollar bill used to get you a dollar’s worth of silver, a woman is an indicator of spending power. He wants to throw this money around for other people. Bring them home to prove something to his roommates. Show the bullies who picked on him that he deserves the same things they do.

[…]

There’s another, slightly more obscure recurring theme in Elliot’s manifesto: The frequency with which he discusses either his desire or attempt to throw a glass of some liquid at happy couples, particularly if the girl is a ‘beautiful tall blonde.’ […] These are the only interactions Elliot has with women: marking his territory.

[…]

When we don’t know how else to say what we need, like entitled children, we scream, and the loudest scream we have is violence. Violence is not an act of expressing the inexpressible, it’s an act of expressing our frustration with the inexpressible. When we surround ourselves by closed ideology, anger and frustration and rage come to us when words can’t. Some ideologies prey on fear and hatred and shift them into symbols that all other symbols are defined by. They limit your vocabulary.

While the motivations for the shootings may vary, they have in common crises in masculinity in which young men use guns and violence to create ultra-masculine identities as part of a media spectacle that produces fame and celebrity for the shooters.

[…]

Crises in masculinity are grounded in the deterioration of socio-economic possibilities for young men and are inflamed by economic troubles. Gun carnage is also encouraged in part by media that repeatedly illustrates violence as a way of responding to problems. Explosions of male rage and rampage are also embedded in the escalation of war and militarism in the United States from the long nightmare of Vietnam through the military interventions in Afghanistan and Iraq.

For Debord, “spectacle” constituted the overarching concept to describe the media and consumer society, including the packaging, promotion, and display of commodities and the production and effects of all media. Using the term “media spectacle,” I am largely focusing on various forms of technologically-constructed media productions that are produced and disseminated through the so-called mass media, ranging from radio and television to the Internet and the latest wireless gadgets.

  • Kellner’s comments from a 2008 interview talking about the Virginia Tech shooter’s videos broadcast after the massacre, and his comments on critical media literacy, remain relevant to the current situation:

Cho’s multimedia video dossier, released after the Virginia Tech shootings, showed that he was consciously creating a spectacle of terror to create a hypermasculine identity for himself and avenge himself to solve his personal crises and problems. The NIU shooter, dressed in black, emerged from a curtain onto a stage and started shooting, obviously creating a spectacle of terror, although as of this moment we still do not know much about his motivations. As for the television networks, since they are profit centers in a highly competitive business, they will continue to circulate school shootings and other acts of domestic terrorism as “breaking events” and will constitute the murderers as celebrities. Some media have begun to not publicize the names of teen suicides, to attempt to deter copy-cat effects, and the media should definitely be concerned about creating celebrities out of school shooters and not sensationalize them.

[…]

People have to become critical of the media scripts of hyperviolence and hypermasculinity that are projected as role models for men in the media, or that help to legitimate violence as a means to resolve personal crises or solve problems. We need critical media literacy to analyze how the media construct models of masculinities and femininities, good and evil, and become critical readers of the media who ourselves seek alternative models of identity and behavior.

  • Almost immediately after news of the violence broke, and word of the killer’s YouTube videos spread, there was a spike of online backlash against the media saturation and warnings against promoting the perpetrator to celebrity status through omnipresent news coverage. Just two days after the killings, Isla Vista residents and UCSB students let the news crews at the scene know that they were not welcome to intrude upon the community’s mourning. As they are wont to do, journalists reported on their role in the story while ignoring the wishes of the residents, as in this LA Times brief:

More than a dozen reporters were camped out on Pardall Road in front of the deli — and had been for days, their cameras and lights and gear taking up an entire lane of the street. At one point, police officers showed up to ensure that tensions did not boil over.

The students stared straight-faced at reporters. Some held signs expressing their frustration with the news media:

“OUR TRAGEDY IS NOT YOUR COMMODITY.”

“Remembrance NOT ratings.”

“Stop filming our tears.”

“Let us heal.”

“NEWS CREWS GO HOME!”

Fukuyama: 25 years after the “End of History”

I argued that History (in the grand philosophical sense) was turning out very differently from what thinkers on the left had imagined. The process of economic and political modernization was leading not to communism, as the Marxists had asserted and the Soviet Union had avowed, but to some form of liberal democracy and a market economy. History, I wrote, appeared to culminate in liberty: elected governments, individual rights, an economic system in which capital and labor circulated with relatively modest state oversight.

[…]

So has my end-of-history hypothesis been proven wrong, or if not wrong, in need of serious revision? I believe that the underlying idea remains essentially correct, but I also now understand many things about the nature of political development that I saw less clearly during the heady days of 1989.

[…]

Twenty-five years later, the most serious threat to the end-of-history hypothesis isn’t that there is a higher, better model out there that will someday supersede liberal democracy; neither Islamist theocracy nor Chinese capitalism cuts it. Once societies get on the up escalator of industrialization, their social structure begins to change in ways that increase demands for political participation. If political elites accommodate these demands, we arrive at some version of democracy.

When he wrote “The End of History?”, Fukuyama was a neocon. He was taught by Leo Strauss’s protege Allan Bloom, author of The Closing of the American Mind; he was a researcher for the Rand Corporation, the thinktank for the American military-industrial complex; and he followed his mentor Paul Wolfowitz into the Reagan administration. He showed his true political colours when he wrote that “the class issue has actually been successfully resolved in the west … the egalitarianism of modern America represents the essential achievement of the classless society envisioned by Marx.” This was a highly tendentious claim even in 1989.

[…]

Fukuyama distinguished his own position from that of the sociologist Daniel Bell, who published a collection of essays in 1960 titled The End of Ideology. Bell had found himself, at the end of the 1950s, at a “disconcerting caesura”. Political society had rejected “the old apocalyptic and chiliastic visions”, he wrote, and “in the west, among the intellectuals, the old passions are spent.” Bell also had ties to neocons but denied an affiliation to any ideology. Fukuyama claimed not that ideology per se was finished, but that the best possible ideology had evolved. Yet the “end of history” and the “end of ideology” arguments have the same effect: they conceal and naturalise the dominance of the right, and erase the rationale for debate.

While I recognise the ideological subterfuge (the markets as “natural”), there is a broader aspect to Fukuyama’s essay that I admire, and cannot analyse away. It ends with a surprisingly poignant passage: “The end of history will be a very sad time. The struggle for recognition, the willingness to risk one’s life for a purely abstract goal, the worldwide ideological struggle that called forth daring, courage, imagination, and idealism, will be replaced by economic calculation, the endless solving of technical problems, environmental concerns, and the satisfaction of sophisticated consumer demands.”

In an article that went viral in 1989, Francis Fukuyama advanced the notion that with the death of communism history had come to an end in the sense that liberalism — democracy and market capitalism — had triumphed as an ideology. Fukuyama will be joined by other scholars to examine this proposition in the light of experience during the subsequent quarter century.

Featuring Francis Fukuyama, author of “The End of History?”; Michael Mandelbaum, School of Advanced International Studies, Johns Hopkins University; Marian Tupy, Cato Institute; Adam Garfinkle, editor, American Interest; Paul Pillar, Nonresident Senior Fellow, Foreign Policy, Center for 21st Century Security and Intelligence, Brookings Institution; and John Mueller, Ohio State University and Cato Institute.