
MISC Monday: MLK media literacy, social media stress, the attention economy, and more


Examine the life and legacy of Dr. Martin Luther King Jr. and the Civil Rights Movement with hundreds of PBS LearningMedia resources. Here is a sampling from that extensive offering; use these resources to explore media literacy, from historical documentaries to media coverage of social movements.

Among the survey’s major findings is that women are much more likely than men to feel stressed after becoming aware of stressful events in the lives of others in their networks.

“Stress is kind of contagious in that way,” said Keith Hampton, an associate professor at Rutgers University and the chief author of the report. “There’s a circle of sharing and caring and stress.”

In a survey of 1,801 adults, Pew found that frequent engagement with digital services wasn’t directly correlated with increased stress. Women who used social media heavily even recorded lower stress. The survey relied on the Perceived Stress Scale, a widely used stress-measurement tool developed in the early 1980s.

“We began to work fully expecting that the conventional wisdom was right, that these technologies add to stress,” said Lee Rainie, the director of Internet, science, and technology research at Pew. “So it was a real shock when [we] first looked at the data and … there was no association between technology use, especially heavy technology use, and stress.”

The higher incidence of stress among the subset of technology users who are aware of stressful events in the lives of others is something that Hampton and his colleagues call “the cost of caring.”

“You can use these technologies and, as a woman, it’s probably going to be beneficial for your level of stress. But every now and then, bad things are going to happen to people you know, and there’s going to be a cost for that,” Hampton said.

The real danger we face from computer automation is dependency. Our inclination to assume that computers provide a sufficient substitute for our own intelligence has made us all too eager to hand important work over to software and accept a subservient role for ourselves. In designing automated systems, engineers and programmers also tend to put the interests of technology ahead of the interests of people. They transfer as much work as possible to the software, leaving us humans with passive and routine tasks, such as entering data and monitoring readouts. Recent studies of the effects of automation on work reveal how easily even very skilled people can develop a deadening reliance on computers. Trusting the software to handle any challenges that may arise, the workers fall victim to a phenomenon called “automation complacency”.

Should we be scared of the future?
I think we should be worried about the future. We are putting ourselves passively into the hands of those who design the systems. We need to think critically about that, even as we maintain our enthusiasm for the great inventions that are happening. I’m not a Luddite. I’m not saying we should trash our laptops and run off to the woods.

We’re basically living out Freud’s death drive, trying our best to turn ourselves into inorganic lumps.
Even before Freud, Marx made the point that the underlying desire of technology seemed to be to create animate technology and inanimate humans. If you look at the original radios, they were transmission as well as reception devices, but before long most people just stopped transmitting and started listening.

From an educational perspective, what we must understand is the relationship between information and meaning. Meaning is not an inevitable outcome of access to information; rather, it emerges slowly when one has cultivated the ability to incorporate that information in purposeful and ethical ways. Very often this process requires a slowdown rather than a speedup, the latter being a primary bias of many digital technologies. The most powerful educational experiences stem from the relationships formed between teacher and student, peer and peer. A smart classroom isn’t necessarily one that includes the latest technologies, but one that facilitates greater interaction among teachers and students, and responsibility for the environment within which one learns. A smart classroom is thus spatially, not primarily technologically, smart. While the two are certainly not mutually exclusive (and much has been written on both), we do ourselves a disservice when privileging the latter over the former.

  • Dowd’s argument here is similar to Carr’s thoughts on MOOCs:

In education, computers are also falling short of expectations. Just a couple of years ago, everyone thought that massive open online courses – Moocs – would revolutionise universities. Classrooms and teachers seemed horribly outdated when compared to the precision and efficiency of computerised lessons. And yet Moocs have largely been a flop. We seem to have underestimated the intangible benefits of bringing students together with a real teacher in a real place. Inspiration and learning don’t flow so well through fibre-optic cables.

  • MediaPost editor Steve Smith writes about his relationship with his iPhone, calling it life’s new remote:

The idea that the cell phone is an extension of the self is about as old as the device itself. We all recall the hackneyed “pass your phone to the person next to you” thought experiment at trade shows four or five years ago. It was designed to make the point about how “personally” we take these devices.

And now the extraordinary and unprecedented intimacy of these media devices is a part of legal precedent. The recent Supreme Court ruling limiting searches of cell phone contents grounded the unanimous opinion on an extraordinary observation. Chief Justice John Roberts described these devices as being “such a pervasive and insistent part of daily life that the proverbial visitor from Mars might conclude they were an important feature of human anatomy.”

We are only beginning to understand the extent to which these devices are blending the functionality of media with that of real world tools. And it is in line with one of Marshall McLuhan’s core observations in his “Understanding Media” book decades ago.

As early as 1971 Herbert Simon observed that “what information consumes is rather obvious: it consumes the attention of its recipients. Hence a wealth of information creates a poverty of attention, and a need to allocate that attention efficiently among the overabundance of information sources that might consume it”. Thus instead of reaping the benefits of the digital revolution we are intellectually deprived by our inability to filter out sensory junk in order to translate information into knowledge. As a result, we are collectively wiser, in that we can retrieve all the wisdom of the world in a few minutes, but individually more ignorant, because we lack the time, self-control, or curiosity to do it.

There are also psychological consequences of the distraction economy. Although it is too soon to observe any significant effects from technology on our brains, it is plausible to imagine that long-term effects will occur. As Nicholas Carr noted in The Shallows: What the Internet Is Doing to Our Brains, repeated exposure to online media demands a cognitive change from deeper intellectual processing, such as focused and critical thinking, to fast autopilot processes, such as skimming and scanning, shifting neural activity from the hippocampus (the area of the brain involved in deep thinking) to the prefrontal cortex (the part of the brain engaged in rapid, subconscious transactions). In other words, we are trading accuracy for speed and prioritising impulsive decision-making over deliberate judgment. In the words of Carr: “The internet is an interruption system. It seizes our attention only to scramble it”.

The research carried out by Harvard Medical School and published in the journal Proceedings of the National Academy of Sciences studied the sleeping patterns of 12 volunteers over a two-week period. Each individual read a book before their strict 10PM bedtime — spending five days with an iPad and five days with a paper book. The scientists found that when reading on a lit screen, volunteers took an average of 10 minutes longer to fall asleep and received 10 minutes less REM sleep. Regular blood samples showed they also had lower levels of the sleep hormone melatonin, consistent with a circadian cycle delayed by one and a half hours.

Ever since the frequent cocaine user and hater of sleep Thomas Edison flicked on the first commercially viable electric lightbulb, a process has taken hold through which the darkness of sleep time has been systematically deconstructed and illuminated.

Most of us now live in insomniac cities with starless skies, full of twinkling neon signage and flickering gadgets that beg us to stay awake longer and longer. But for all this technological innovation, we still must submit to our diurnal rhythm if we want to stay alive.

And even though sleep may “frustrate and confound strategies to exploit and reshape it,” as Crary says, it, like anything, remains a target of exploitation and reshaping – and in some cases, all-out elimination.

What is striking about this corporate monopolization of the internet is that all the wealth and power has gone to a small number of absolutely enormous firms. As we enter 2015, 13 of the 33 most valuable corporations in the United States are internet firms, and nearly all of them enjoy monopolistic market power as economists have traditionally used the term. If you continue to scan down the list, there are precious few internet firms to be found. There is not much of a middle class, or even an upper-middle class, of internet corporations.

This poses a fundamental problem for democracy, though it is one that mainstream commentators and scholars appear reluctant to acknowledge: If economic power is concentrated in a few powerful hands you have the political economy for feudalism, or authoritarianism, not democracy. Concentrated economic power invariably overwhelms the political equality democracy requires, leading to routinized corruption and an end of the rule of law. That is where we are today in the United States.

The short answer is technology. Yes, Facebook really did ruin everything. The explosion in communication technologies over the past decades has re-oriented society and put more psychological strain on us all to find our identities and meaning. For some people, the way to ease this strain is to actually reject complexity and ambiguity for absolutist beliefs and traditional ideals.

Philosopher Charles Taylor wrote that it would be just as difficult to not believe in God in 1500 as it is to believe in God in the year 2000. Obviously, most of humanity believes in God today, but it’s certainly become a much more complicated endeavor. With the emergence of modern science, evolution, liberal democracy, and worldwide 24-hour news coverage of corruption, atrocities, war and religious hypocrisy, today a person of faith has their beliefs challenged more in a week than a person a few generations ago would have in half a lifetime.

McLuhan Monday: Print and Islam, mobile gaming medium theory, McLuhan’s relevance, and more


So, in the Muslim world, books and literacy became generally accessible (instead of being accessible only to the educated male and the wealthy) about a quarter of a millennium later than in European-Western culture. I found this information, together with an assessment of the damage this 250-year lag caused to Muslim society and culture, in the works of Muslim scholars.

This lag could be made up in the blink of an eye as the cultural world moved from Johannes Gutenberg’s galaxy into the era when “The medium is the message,” and with the development of the virtual and digital world (at the expense of the printed one, of course).

McLuhan had a lot of ideas and subsets of ideas. But he had one very big idea: that human civilization had passed through two stages of communication history, oral and print, and was embarking on another: electronic media. He believed the new media would change the way people relate to themselves and others and would change societies dramatically. Is the computer, then, the ubiquitous laptop and other devices, the McLuhan “audile-tactile” dream come true? There is no way to know. And it will take at least another 50 years to make a full evaluation of the work of Marshall McLuhan.

Taking a leaf from McLuhan, then, I submit that the message is the product. The tone, approach, and strategy of how marketing is conducted shape what kinds of product a developer can allow themselves to make. What kind of ad you’ll run determines what kind of game you’ll believe can work, and therefore what kind of game you’ll fund and make.

[…]

The medium is the message and the message is the product, remember. In Marvel’s case the medium of cinema sends the message of the big experience, and the message disseminated through a high-value trailer leads to the will to make a high-value product: a big splashy movie. That’s how it earned the right to be thought of as premium. That’s how games do it, too.

When media guru Marshall McLuhan declared back in the 1960s that “Every innovation has within itself the seeds of its reversal,” I had no idea what he meant. But, like his other catchy quotables — “global village,” “cool media,” “the medium is the message” — it stayed with me. Now, in the Internet age, I am seeing proof of his prophecy every day.

For example, McLuhan predicted that a rapidly expanding automobile culture would lead to more traffic jams, air pollution and longing for space to take long walks or ride bicycles. I’m sure he’d give a knowing I-told-you-so nod to today’s battles between car people and bike people for asphalt space.

[…]

But more recently and less happily, I see far more sinister seeds of reversal in this era’s greatest innovation, the Internet. We greeted the Web as a liberator, but in today’s age of terrorism and post-Cold War autocrats it also poses a growing menace to the press freedoms it otherwise has invigorated.

Two common critiques of McLuhan’s work are his obliviousness to political economy and his technological determinism. McLuhan’s prognosis on media appears to celebrate a burgeoning world order and global capitalism. The way he foreshadows cognitive capitalism appears deterministic. Critics attack McLuhan for being silent on the transformation of global capitalism. This criticism focuses on what McLuhan did not write in Understanding Media as opposed to what he did. It is interesting to note that European scholars, even those with political-economic inclinations, do not scorn McLuhan the way North Americans do. They do not blame him for being the messenger of a cognitive capitalist message.

[…]

McLuhan rightly described and to some extent predicted how messages need not be unidirectional. When he argued that technology is an extension of the senses, he did not argue that a select few had agency over the shaping of the message. He argued that any person had that potential. Specifically, he described how alternative modes of literacy allowed non-literary people to participate in a global discourse. This is McLuhan’s legacy and part of why his work should be celebrated today.

Political Economy in Mumford’s “Technics & Civilization”


I’ve written about the media ecology tradition, attended the Media Ecology Association’s conferences and had an article published in their journal, but up to now Marshall McLuhan’s Understanding Media and Neil Postman’s Amusing Ourselves to Death are the only primary texts associated with the tradition that I’ve read. To broaden my knowledge of the tradition I’m reading some of the books considered foundational in the media ecology canon, beginning with Lewis Mumford’s Technics & Civilization. I paid special attention to Mumford’s references to capitalism in Technics & Civilization because I have an abiding interest in the marriage of critical/Marxian analysis and media ecological perspectives. One of the most common criticisms of McLuhan’s writings on media is the charge of technological determinism: that his media theory focuses on wide-reaching social and psychological effects while ignoring the historical, political, and economic factors involved in the development and dissemination of technologies. Although this is a valid criticism, as McLuhan’s approach did not emphasize the political economy of media, a number of scholars have re-evaluated McLuhan and other media ecologists to identify parallels in their work with critical theory and the Marxian critique of capitalism. The same criticisms cannot be legitimately leveled against Mumford, whose account of successive technological complexes demonstrates careful consideration of the historical, political, and economic situations in which these complexes developed. Technics & Civilization makes clear that a media ecology perspective can incorporate a pronounced political orientation and an analysis of political economy.

Reading through Mumford’s account of the phases of technological complexes, I noted how the capitalist mode of economics is heavily dependent on technology. The interdependence seemed so crucial to both that the history of capitalism almost reads as the history of technological development, though Mumford does distinguish technics and capitalism as separate but interrelated forces. In the conclusion of the final chapter, “Orientation,” Mumford writes “we have advanced as far as seems possible in considering mechanical civilization as an isolated system” (p. 434). Technics & Civilization was first published in 1934; a contemporary reader will likely extend Mumford’s analysis to account for the last 80 years of technological progress, particularly in consideration of the information and telecommunications revolutions (an editorial note before the main text states that Mumford “would have loved” the Internet). Such an extension must account for the associated developments in capitalism. Scholars have used terms like “hypercapitalism” and “network and informational capitalism” to describe the new outlets of capital accumulation made possible by the global telecommunications infrastructure. Mumford wrote that “we are now entering a phase of dissociation between capitalism and technics” (p. 366), due in part to the over-working of “the machine”. Hypercapitalism has seen new forms of over-exploitation, and the continued commodification of intangibles such as information and attention, calling into question the dissociation of capitalism and technics. Mumford’s warning of the capitalist threat to physical resources, however, remains pertinent today.

The attention Mumford gives to the psychological effects of technics is a fascinating component of his analysis that prefigures McLuhan’s observations on technology as extensions of the human organism. The introduction of introspection and self-reflection instigated by the mirror’s effect on the individual ego; the metamorphosis of thought from flowing and organic to verbal and categorical brought on by print and paper; the shift from self-examination to self-exposure ushered in by the introduction of cameras; these are just some of the examples cited by Mumford to establish that the technological complexes built up from every individual innovation are not constrained to the obvious external manifestations but involve dramatic internal changes as well. In fact, the psychological and material transformations are not distinct processes, but are necessarily interlinked, two sides of the same coin.

Graeber on labor and leisure; the perils of hipster economics; and the educational value of MOOCs

Right after my original bullshit jobs piece came out, I used to think that if I wanted, I could start a whole career in job counseling – because so many people were writing to me saying “I realize my job is pointless, but how can I support a family doing something that’s actually worthwhile?” A lot of people who worked the information desk at Zuccotti Park, and other occupations, told me the same thing: young Wall Street types would come up to them and say “I mean, I know you’re right, we’re not doing the world any good doing what we’re doing. But I don’t know how to live on less than a six figure income. I’d have to learn everything over. Could you teach me?”

But I don’t think we can solve the problem by mass individual defection. Or some kind of spiritual awakening. That’s what a lot of people tried in the ‘60s and the result was a savage counter-offensive which made the situation even worse. I think we need to attack the core of the problem, which is that we have an economic system that, by its very nature, will always reward people who make other people’s lives worse and punish those who make them better. I’m thinking of a labor movement, but one very different than the kind we’ve already seen. A labor movement that manages to finally ditch all traces of the ideology that says that work is a value in itself, but rather redefines labor as caring for other people.

Proponents of gentrification will vouch for its benevolence by noting it “cleaned up the neighbourhood”. This is often code for a literal white-washing. The problems that existed in the neighbourhood – poverty, lack of opportunity, struggling populations denied city services – did not go away. They were simply priced out to a new location.

That new location is often an impoverished suburb, which lacks the glamour to make it the object of future renewal efforts. There is no history to attract preservationists because there is nothing in poor suburbs viewed as worth preserving, including the futures of the people forced to live in them. This is blight without beauty, ruin without romance: payday loan stores, dollar stores, unassuming homes and unpaid bills. In the suburbs, poverty looks banal and is overlooked.

In cities, gentrifiers have the political clout – and accompanying racial privilege – to reallocate resources and repair infrastructure. The neighbourhood is “cleaned up” through the removal of its residents. Gentrifiers can then bask in “urban life” – the storied history, the selective nostalgia, the carefully sprinkled grit – while avoiding responsibility to those they displaced.

Hipsters want rubble with guarantee of renewal. They want to move into a memory they have already made.

In the pedagogic trenches, MOOCs are considered a symptom of wider economic patterns which effectively vacuum resources up into the financial stratosphere, leaving those doing the actual work with many more responsibilities, and far less compensation. Basic questions about the sustainability of this model remain unanswered, but it is clear that there is little room for enfranchised, full-time, fully-compensated faculty. Instead, we find an army of adjuncts servicing thousands of students; a situation which brings to mind scenes from Metropolis rather than Dead Poets Society.

[…]

For companies pushing MOOCs, education is no different from entertainment: it is simply a question of delivering ‘content.’ But learning to think exclusively via modem is like learning to dance by watching YouTube videos. You may get a sense of it, but no-one is there to point out mistakes, deepen your understanding, contextualise the gestures, shake up your default perspective, and facilitate the process. The role of the professor or instructor is not simply to shepherd the transmission of information from point A to point B, but to co-forge new types of knowledge and critically test these for various versions of soundness and feasibility. Wisdom may be eternal, but knowledge – both practical and theoretical – evolves over time, exponentially so in the last century, with all its accelerated technologies. Knowledge is always mediated, so we must consciously take the tools of mediation into account. Hence the need for a sensitive and responsive guide: someone students can bounce new notions off, rather than simply absorb information from. Without this element, distance learning all too often becomes distanced learning, just as a class taken remotely usually leads to a sea of remote students.

[…]

Marshall McLuhan was half-right when he insisted that the electronic age is ushering in a post-literate society. But no matter how we like to talk of new audio-visual forms of literacy, there is still the ‘typographic man’ pulling the strings, encouraging us to express ourselves alphabetically. Indeed, the electronic and the literate are not mutually exclusive, much as people like to pit them against each other.

  • Pettman also quotes Ian Bogost’s comments on distance learning:

The more we buy into the efficiency argument, the more we cede ground to the technolibertarians who believe that a fusion of business and technology will solve all ills. But then again, I think that’s what the proponents of MOOCs want anyway. The issue isn’t online education per se, it’s the logics and rationales that come along with certain implementations of it.

Manifesto for a Ludic Century, ludonarrative dissonance in GTA, games and mindf*cks, and more

Systems, play, design: these are not just aspects of the Ludic Century, they are also elements of gaming literacy. Literacy is about creating and understanding meaning, which allows people to write (create) and read (understand).

New literacies, such as visual and technological literacy, have also been identified in recent decades. However, to be truly literate in the Ludic Century also requires gaming literacy. The rise of games in our culture is both cause and effect of gaming literacy in the Ludic Century.

So, perhaps there is one fundamental challenge for the Manifesto for a Ludic Century: would a truly ludic century be a century of manifestos? Of declaring simple principles rather than embracing systems? Or, is the Ludic Manifesto meant to be the last manifesto, the manifesto to end manifestos, replacing simple answers with the complexity of “information at play?”

Might we conclude that videogames are the first creative medium to fully emerge after Marshall McLuhan? By the time they became popular, media ecology as a method was well-known. McLuhan was a popular icon. By the time the first generation of videogame players was reaching adulthood, McLuhan had become a trope. When the then-new publication Wired Magazine named him their “patron saint” in 1993, the editors didn’t even bother to explain what that meant. They didn’t need to.

By the time videogame studies became a going concern, McLuhan was gospel. So much so that we don’t even talk about him. To use McLuhan’s own language of the tetrad, game studies have enhanced or accelerated media ecology itself, to the point that the idea of studying the medium itself over its content has become a natural order.

Generally speaking, educators have warmed to the idea of the flipped classroom far more than that of the MOOC. That move might be injudicious, as the two are intimately connected. It’s no accident that private, for-profit MOOC startups like Coursera have advocated for flipped classrooms, since those organizations have much to gain from their endorsement by universities. MOOCs rely on the short, video lecture as the backbone of a new educational beast, after all. Whether in the context of an all-online or a “hybrid” course, a flipped classroom takes the video lecture as a new standard for knowledge delivery and transfers that experience from the lecture hall to the laptop.

  • Also, with increased awareness of Animal Crossing following the latest game’s release for the Nintendo 3DS, Bogost recently posted an excerpt from his 2007 book Persuasive Games discussing consumption and naturalism in Animal Crossing:

Animal Crossing deploys a procedural rhetoric about the repetition of mundane work as a consequence of contemporary material property ideals. When my (then) five-year-old began playing the game seriously, he quickly recognized the dilemma he faced. On the one hand, he wanted to spend the money he had earned from collecting fruit and bugs on new furniture, carpets, and shirts. On the other hand, he wanted to pay off his house so he could get a bigger one like mine.

Ludonarrative dissonance is when the story the game is telling you and your gameplay experience somehow don’t match up. As an example, this was a particular issue in Rockstar’s most recent game, Max Payne 3. Max constantly makes remarks about how terrible he is at his job, even though he does more than is humanly possible to try to protect his employers – including making perfect one-handed head shots in mid-air while drunk and high on painkillers. The disparity and the dissonance between the narrative of the story and the gameplay leave things feeling off-kilter and poorly interconnected. It doesn’t make sense or fit with your experience, so it feels wrong and damages the cohesiveness of the game world and story. It’s like when you go on an old-lady-only murdering spree as Niko, who is supposed to be a reluctant killer with a traumatic past, not a gerontophobic misogynist.

What I find strange, in light of our supposed anti-irony cultural moment, is a kind of old-fashioned ironic conceit behind a number of recent critical darlings in the commercial videogame space. 2007’s Bioshock and this year’s Bioshock: Infinite are both about the irony of expecting ‘meaningful choice’ to live in an artificial dome of technological and commercial constraints. Last year’s Spec Ops: The Line offers a grim alchemy of self-deprecation and preemptive disdain for its audience. The Grand Theft Auto series has always maintained a cool, dismissive cynicism beneath its gleefully absurd mayhem. These games frame choice as illusory and experience as artificial. They are expensive, explosive parodies of free will.

To cut straight to the heart of it, Bioshock seems to suffer from a powerful dissonance between what it is about as a game, and what it is about as a story. By throwing the narrative and ludic elements of the work into opposition, the game seems to openly mock the player for having believed in the fiction of the game at all. The leveraging of the game’s narrative structure against its ludic structure all but destroys the player’s ability to feel connected to either, forcing the player to either abandon the game in protest (which I almost did) or simply accept that the game cannot be enjoyed as both a game and a story, and to then finish it for the mere sake of finishing it.

The post itself makes a very important point: games, for the most part, can’t pull the Mindfuck like movies can because of the nature of the kind of storytelling to which most games are confined, which is predicated on a particular kind of interaction. Watching a movie may not be an entirely passive experience, but it’s clearly more passive than a game. You may identify with the characters on the screen, but you’re not meant to implicitly think of yourself as them. You’re not engaging in the kind of subtle roleplaying that most (mainstream) games encourage. You are not adopting an avatar. In a game, you are your profile, you are the character you create, and you are also to a certain degree the character that the game sets in front of you. I may be watching everything Lara Croft does from behind her, but I also control her; to the extent that she has choices, I make them. I get her from point A to B, and if she fails it’s my fault. When I talk about something that happened in the game, I don’t say that Lara did it. I say that I did.

Anachrony is a common storytelling technique in which events are narrated out of chronological order. A familiar example is a flashback, where story time jumps to the past for a bit, before returning to the present. The term “nonlinear narrative” is also sometimes used for this kind of out-of-order storytelling (somewhat less precisely).

While it’s a common technique in literature and film, anachrony is widely seen as more problematic to use in games, perhaps even to the point of being unusable. If the player’s actions during a flashback scene imply a future that differs considerably from the one already presented in a present-day scene (say, the player kills someone who they had been talking to in a present-day scene, or commits suicide in a flashback), this produces an inconsistent narrative. The root of the problem is that players generally have a degree of freedom of action, so flashbacks are less like the case in literature and film—where already decided events are simply narrated out of order—and more like time travel, where the player travels back in time and can mess up the timeline.

The first books are set to be published in early 2014. The writers to be published by Press Select in its first round have written for publications like Edge magazine, Kotaku, Kill Screen and personal blogs; they include Chris Dahlen, Michael Abbott, Jenn Frank, Jason Killingsworth, Maddy Myers, Tim Rogers, Patricia Hernandez and Robert Yang.

Videodrome turns 30

Videodrome’s depiction of techno-body synthesis is, to be sure, intense; Cronenberg has the unusual talent of making violent, disgusting, and erotic things seem even more so. The technology is veiny and lubed. It breathes and moans; after watching the film, I want to cut my phone open just to see if it will bleed. Fittingly, the film was originally titled “Network of Blood,” which is precisely how we should understand social media, as a technology not just of wires and circuits, but of bodies and politics. There’s nothing anti-human about technology: the smartphone that you rub and take to bed is a technology of flesh. Information penetrates the body in increasingly intimate ways.

  • I also came across this short piece by Joseph Matheny at Alterati on Videodrome and YouTube:

Videodrome is even more relevant now that YouTube is delivering what cable television promised to in the 80s: a world where everyone has their own television station. Although digital video tools began to democratize video creation, it’s taken the further proliferation of broadband Internet and the emergence of convenient platforms like YouTube and Google Video to democratize video distribution.

  • There’s also my Videodrome-centric post from a couple of years ago. Coincidentally, I watched eXistenZ for the first time last week. I didn’t know much about the film going in, and initially I was enthusiastic that it seemed to be a spiritual successor to Videodrome, updating the media metaphor for the New Flesh from television to video games. I remained engaged throughout the movie (although about two-thirds of the way in I turned to my fiancée and asked “Do you have any idea what’s going on?”), and there were elements that I enjoyed, but ultimately I was disappointed. I had a similar reaction at the end of Cronenberg’s Spider, thinking “What was the point of all that?” when the closing credits started to roll, though it was much easier to stay awake during eXistenZ.

Rushkoff on Manning verdict, Chomsky/Žižek on NSA leaks, looking for McLuhan in Afghanistan

We are just beginning to learn what makes a free people secure in a digital age. It really is different. The Cold War was an era of paper records, locked vaults and state secrets, for which a cloak-and-dagger mindset may have been appropriate. In a digital environment, our security comes not from our ability to keep our secrets but rather from our ability to live our truth.

In light of the recent NSA surveillance scandal, Chomsky and Žižek offer us very different approaches, both of which are helpful for leftist critique. For Chomsky, the path ahead is clear. Faced with new revelations about the surveillance state, Chomsky might engage in data mining, juxtaposing our politicians’ lofty statements about freedom against their secretive actions, thereby revealing their utter hypocrisy. Indeed, Chomsky is a master at this form of argumentation, and he does it beautifully in Hegemony or Survival when he contrasts the democratic statements of Bush regime officials against their anti-democratic actions. He might also demonstrate how NSA surveillance is not a strange historical aberration but a continuation of past policies, including, most infamously, the FBI’s counter-intelligence programme in the 1950s, 60s, and early 70s.

Žižek, on the other hand, might proceed in a number of ways. He might look at the ideology of cynicism, as he did so famously in the opening chapter of The Sublime Object of Ideology, in order to demonstrate how expressions of outrage regarding NSA surveillance practices can actually serve as a form of inaction, as a substitute for meaningful political struggle. We know very well what we are doing, but still we are doing it; we know very well that our government is spying on us, but still we continue to support it (through voting, etc). Žižek might also look at how surveillance practices ultimately fail as a method of subjectivisation, how the very existence of whistleblowers like Thomas Drake, Bradley Manning, Edward Snowden, and the others who are sure to follow in their footsteps demonstrates that technologies of surveillance and their accompanying ideologies of security can never guarantee the full participation of the people they are meant to control. As Žižek emphasises again and again, subjectivisation fails.

In early 2011, award-winning photographer Rita Leistner was embedded with a U.S. marine battalion deployed to Helmand province as a member of Project Basetrack, an experiment in using new social media technologies to extend traditional war reporting. This new LRC series draws on Leistner’s remarkable iPhone photos and her writings from her time in Afghanistan, using the ideas of Marshall McLuhan to make sense of what she saw there – “to examine the face of war through the extensions of man.”