In The Cultural Logic of Computation Golumbia raises questions and addresses issues that are promising, but then proceeds to make an argument that is ultimately unproductive. I am sympathetic to Golumbia’s aims; I share an attitude of skepticism toward the rhetoric surrounding the Internet and new media as inherently democratizing, liberating devices. Golumbia characterizes such narratives as “technological progressivism,” and writes that “technological progressivism […] conditions so much of computational discourse.” Watching the events of the “Arab Spring” unfold was exhilarating, but I was always uncomfortable with the mainstream news media’s narrative characterizing those social movements as a “Twitter revolution,” and I remain skeptical toward hashtag activism and similar trends.
So while I was initially inclined toward the project Golumbia laid out in the book’s introductory pages, the chapters that followed muddled rather than clarified my understanding of the argument being presented. The first section contains a sustained attack not only on Noam Chomsky’s contributions to linguistics, and their various influences and permutations, but on Chomsky himself. I don’t know why Golumbia needed to question Chomsky’s “implausible rise to prominence,” or why Chomsky’s “magnetic charisma” needs to be mentioned in a discussion of linguistic theory.
Golumbia focuses on Chomsky’s contributions to linguistics because that is where his interests and argument draw him; based on my own interests and background, I would’ve preferred engagement with the other side of Chomsky’s contributions to communication studies, namely the propaganda model and the political economy of the media. I suspect that a fruitful analysis would be possible from considering some of the issues Golumbia brings up in relation to the work of Chomsky and others in the ideological analysis of news media content. The notion of computationalism as ideology is compelling to me; so is the institutionalized rhetoric of computationalism, which is a separate, promising argument, I think.
In reading I have a tendency to focus on what interests me, appeals to me, or may be useful to me. Some of Golumbia’s concepts, such as “technological-progressive neoliberalism” and its relation to centralized power, fall into this category. I’m still skeptical about computationalism as an operationalizable concept (multiple theoretical models and critical perspectives already seem to cover the same territory, and I’m not convinced that Golumbia makes the case for a new term), but other concepts may prove more productive. Ultimately I will use a quote from Golumbia (addressing the Internet and emerging technologies) that reflects my feelings on this book: “We have to learn to critique even that which helps us.”
- In an article for Haaretz reflecting on last week’s terror attacks in Paris, Michael Handelzalts invokes McLuhan’s infamous aphorism in relation to the emergence of print culture in the Islamic world:
So, in the Muslim world, books and literacy became generally accessible (instead of being accessible only to the educated male and the wealthy) about a quarter of a millennium later than in European-Western culture. I found this information, together with an assessment of the damage this 250-year lag caused to Muslim society and culture, in the works of Muslim scholars.
This lag could be made up in the blink of an eye as the cultural world moved from Johannes Gutenberg’s galaxy into the era when “The medium is the message,” and with the development of the virtual and digital world (at the expense of the printed one, of course).
- Today the Santa Barbara Independent published an article by Dean Stewart looking at McLuhan’s message 50 years after the publication of Understanding Media:
McLuhan had a lot of ideas and subsets of ideas. But he had one very big idea: that human civilization had passed through two stages of communication history, oral and print, and was embarking on another: electronic media. He believed the new media would change the way people relate to themselves and others and would change societies dramatically. Is the computer, then, the ubiquitous laptop and other devices, the McLuhan “audile-tactile” dream come true? There is no way to know. And it will take at least another 50 years to make a full evaluation of the work of Marshall McLuhan.
- At TechCrunch, Tadhg Kelly uses McLuhan’s famous formula to consider how mobile games are marketed:
Taking a leaf from McLuhan then, I submit that the message is the product. The tone, approach and strategy of how marketing is conducted shapes what kinds of product can be allowed by a product’s developer. What kind of ad you’ll run determines what kind of game you’ll believe can work, and therefore what kind of game you’ll fund and make.
The medium is the message and the message is the product, remember. In Marvel’s case the medium of cinema sends the message of the big experience, and the message disseminated through a high value trailer leads to the will to make a high value product: a big splashy movie. That’s how it earned the right to be thought of as premium. That’s how games do that too.
- At Newsday, Clarence Page uses McLuhan’s media theory to argue that the Internet can be used to undermine freedom:
When media guru Marshall McLuhan declared back in the 1960s that “Every innovation has within itself the seeds of its reversal,” I had no idea what he meant. But, like his other catchy quotables — “global village,” “cool media,” “the medium is the message” — it stayed with me. Now, in the Internet age, I am seeing proof of his prophecy every day.
For example, McLuhan predicted that a rapidly expanding automobile culture would lead to more traffic jams, air pollution and longing for space to take long walks or ride bicycles. I’m sure he’d give a knowing I-told-you-so nod to today’s battles between car people and bike people for asphalt space.
But more recently and less happily, I see far more sinister seeds of reversal in this era’s greatest innovation, the Internet. We greeted the Web as a liberator, but in today’s age of terrorism and post-Cold War autocrats it also poses a growing menace to the press freedoms it otherwise has invigorated.
- Lastly, Hervé St-Louis at ComicBookBin considers whether McLuhan is still relevant:
Two common critiques of McLuhan are his obliviousness to political economy and his technological determinism. McLuhan’s prognosis on media appears to celebrate a burgeoning world order and global capitalism. The way he foreshadows cognitive capitalism appears deterministic. Critics attack McLuhan for being silent on the transformation of global capitalism. This criticism focuses on what McLuhan did not write in Understanding Media as opposed to what he did. It is interesting to note that European scholars, even those with political economic inclinations, do not scorn McLuhan the way North Americans do. They do not blame him for being the messenger of a cognitive capitalist message.
McLuhan rightly described and to some extent predicted how messages need not be unidirectional. When he argued that technology is an extension of the senses, he did not argue that a select few had agency over the shaping of the message. He argued that any person had that potential. Specifically, he described how alternate modes of literacy allowed non-literary people to participate in a global discourse. This is McLuhan’s legacy and part of why his work should be celebrated today.
- Radio program New Tech City from WNYC interviewed Mike Rosenwald on his research into the effects of reading from a screen as opposed to print. Article and audio from the interview available here:
Neuroscience, in fact, has revealed that humans use different parts of the brain when reading from a piece of paper or from a screen. So the more you read on screens, the more your mind shifts towards “non-linear” reading — a practice that involves things like skimming a screen or having your eyes dart around a web page.
- Henry Jenkins recently posted a conversation with Tessa Jolls, President and CEO of the Center for Media Literacy, on the value of Media Literacy education in the 21st Century:
Using the technology approach, the iPhone is the “school” and anyone who uses it adeptly is the master and anyone over 30 is, well, handicapped at best. New technologies enable this approach because now, hardware and software are available and production has been democratized — everyone is a producer, a collaborator, a distributor and a participant. While experiential and project-based learning is truly exciting and an important component of media literacy, it is not synonymous because the outcome of the technology approach is often limited to technical proficiency without critical autonomy. Whether using an iPad, a pencil or a videocam, pressing the right buttons is important but not enough!
- In a Truthout op-ed, Henry Giroux explores how Disney magic and the corporate media shape youth identity in the digital age:
The information, entertainment and cultural pedagogy disseminated by massive multimedia corporations have become central in shaping and influencing every waking moment of children’s daily lives – all toward a lifetime of constant, unthinking consumption. Consumer culture in the United States and increasingly across the globe, does more than undermine the ideals of a secure and happy childhood: it exhibits the bad faith of a society in which, for children, “there can be only one kind of value, market value; one kind of success, profit; one kind of existence, commodities; and one kind of social relationship, markets.”
I’ve written about the media ecology tradition, attended the Media Ecology Association’s conferences and had an article published in their journal, but up to now Marshall McLuhan’s Understanding Media and Neil Postman’s Amusing Ourselves to Death are the only primary texts associated with the tradition that I’ve read. To broaden my knowledge of the tradition I’m reading some of the books considered foundational in the media ecology canon, beginning with Lewis Mumford’s Technics & Civilization. I paid special attention to Mumford’s references to capitalism in Technics & Civilization because I have an abiding interest in the marriage of critical/Marxian analysis and media ecological perspectives. One of the most common criticisms of McLuhan’s writings on media is the charge of technological determinism: that McLuhan’s media theory focuses on wide-reaching social and psychological effects while ignoring the historical, political, and economic factors involved in the development and dissemination of technologies. Although this is a valid criticism, as McLuhan’s approach did not emphasize the political economy of media, a number of scholars have re-evaluated McLuhan and other media ecologists to identify parallels in their work with critical theory and the Marxian critique of capitalism. The same criticisms cannot be legitimately levied against Mumford, whose account of successive technological complexes demonstrates careful consideration of the historical, political, and economic situations in which these complexes developed. Technics & Civilization makes clear that a media ecology perspective can incorporate a pronounced political orientation and an analysis of political economy.
Reading through Mumford’s account of the phases of technological complexes, I noted how heavily the capitalist mode of economics depends on technology. The interdependence seemed so crucial to both that the history of capitalism almost reads as the history of technological development, though Mumford does distinguish technics and capitalism as separate but interrelated forces. In the conclusion of the final chapter, “Orientation,” Mumford writes “we have advanced as far as seems possible in considering mechanical civilization as an isolated system” (p. 434). Technics & Civilization was first published in 1934; a contemporary reader will likely extend Mumford’s analysis to account for the last 80 years of technological progress, particularly in consideration of the information and telecommunications revolutions (an editorial note before the main text states that Mumford “would have loved” the Internet). Such an extension must account for the associated developments in capitalism. Scholars have used terms like “hypercapitalism” and “network and informational capitalism” to describe the new outlets of capital accumulation made possible by the global telecommunications infrastructure. Mumford wrote that “we are now entering a phase of dissociation between capitalism and technics” (p. 366), due in part to the over-working of “the machine”. Hypercapitalism has seen new forms of over-exploitation, and the continued commodification of intangibles such as information and attention, calling into question the dissociation of capitalism and technics. Mumford’s warning of the capitalist threat to physical resources, however, remains pertinent today.
The attention Mumford gives to the psychological effects of technics is a fascinating component of his analysis that prefigures McLuhan’s observations on technology as extensions of the human organism. The introduction of introspection and self-reflection instigated by the mirror’s effect on the individual ego; the metamorphosis of thought from flowing and organic to verbal and categorical brought on by print and paper; the shift from self-examination to self-exposure ushered in by the introduction of cameras; these are just some of the examples cited by Mumford to establish that the technological complexes built up from every individual innovation are not constrained to the obvious external manifestations but involve dramatic internal changes as well. In fact, the psychological and material transformations are not distinct processes, but are necessarily interlinked, two sides of the same coin.
- Commemorating the 25th anniversary of the publication of his infamous essay, “The End of History?”, Francis Fukuyama wrote an essay for the Wall Street Journal reflecting on how the world has changed since he declared the end of history:
I argued that History (in the grand philosophical sense) was turning out very differently from what thinkers on the left had imagined. The process of economic and political modernization was leading not to communism, as the Marxists had asserted and the Soviet Union had avowed, but to some form of liberal democracy and a market economy. History, I wrote, appeared to culminate in liberty: elected governments, individual rights, an economic system in which capital and labor circulated with relatively modest state oversight.
So has my end-of-history hypothesis been proven wrong, or if not wrong, in need of serious revision? I believe that the underlying idea remains essentially correct, but I also now understand many things about the nature of political development that I saw less clearly during the heady days of 1989.
Twenty-five years later, the most serious threat to the end-of-history hypothesis isn’t that there is a higher, better model out there that will someday supersede liberal democracy; neither Islamist theocracy nor Chinese capitalism cuts it. Once societies get on the up escalator of industrialization, their social structure begins to change in ways that increase demands for political participation. If political elites accommodate these demands, we arrive at some version of democracy.
- An article by Eliane Glaser in The Guardian considers whether Fukuyama’s hypothesis is a rightwing argument in disguise:
When he wrote “The End of History?”, Fukuyama was a neocon. He was taught by Leo Strauss’s protege Allan Bloom, author of The Closing of the American Mind; he was a researcher for the Rand Corporation, the thinktank for the American military-industrial complex; and he followed his mentor Paul Wolfowitz into the Reagan administration. He showed his true political colours when he wrote that “the class issue has actually been successfully resolved in the west … the egalitarianism of modern America represents the essential achievement of the classless society envisioned by Marx.” This was a highly tendentious claim even in 1989.
Fukuyama distinguished his own position from that of the sociologist Daniel Bell, who published a collection of essays in 1960 titled The End of Ideology. Bell had found himself, at the end of the 1950s, at a “disconcerting caesura”. Political society had rejected “the old apocalyptic and chiliastic visions”, he wrote, and “in the west, among the intellectuals, the old passions are spent.” Bell also had ties to neocons but denied an affiliation to any ideology. Fukuyama claimed not that ideology per se was finished, but that the best possible ideology had evolved. Yet the “end of history” and the “end of ideology” arguments have the same effect: they conceal and naturalise the dominance of the right, and erase the rationale for debate.
While I recognise the ideological subterfuge (the markets as “natural”), there is a broader aspect to Fukuyama’s essay that I admire, and cannot analyse away. It ends with a surprisingly poignant passage: “The end of history will be a very sad time. The struggle for recognition, the willingness to risk one’s life for a purely abstract goal, the worldwide ideological struggle that called forth daring, courage, imagination, and idealism, will be replaced by economic calculation, the endless solving of technical problems, environmental concerns, and the satisfaction of sophisticated consumer demands.”
- Late last year the International Forum for Democratic Studies interviewed Fukuyama about his article Democracy and the Quality of the State:
- Finally, the CATO Institute just held a conference where Fukuyama and several other scholars discussed “The End of History 25 Years Later”. Videos and podcasts of the panels are available at the conference site. Description of the conference and list of participants:
In an article that went viral in 1989, Francis Fukuyama advanced the notion that with the death of communism history had come to an end in the sense that liberalism — democracy and market capitalism — had triumphed as an ideology. Fukuyama will be joined by other scholars to examine this proposition in the light of experience during the subsequent quarter century.
Featuring Francis Fukuyama, author of “The End of History?”; Michael Mandelbaum, School of Advanced International Studies, Johns Hopkins University; Marian Tupy, Cato Institute; Adam Garfinkle, editor, American Interest; Paul Pillar, Nonresident Senior Fellow, Foreign Policy, Center for 21st Century Security and Intelligence, Brookings Institution; and John Mueller, Ohio State University and Cato Institute.
It’s been a long time since the last update (what happened to October?), so this post is extra long in an attempt to catch up.
- I haven’t seen the new Ender’s Game movie, but this review by abbeyotis at Cyborgology calls the film “a lean and contemporary plunge into questions of morality mediated by technology”:
In a world in which interplanetary conflicts play out on screens, the government needs commanders who will never shrug off their campaigns as merely “virtual.” These same commanders must feel the stakes of their simulated battles to be as high as actual warfare (because, of course, they are). Card’s book makes the nostalgic claim that children are useful because they are innocent. Hood’s movie leaves nostalgia by the roadside, making the more complex assertion that they are useful because of their unique socialization to be intimately involved with, rather than detached from, simulations.
- In the ongoing discourse about games criticism and its relation to film reviews, Bob Chipman’s latest Big Picture post uses his own review of the Ender’s Game film as an entry point for a breathless treatise on criticism. The video presents a concise and nuanced overview of arts criticism, from the classical era through film reviews as consumer reports up to the very much in-flux conceptions of games criticism. Personally I find this video sub-genre (where spoken content is crammed into a Tommy gun barrage of word bullets so that the narrator can convey a lot of information in a short running time) irritating and mostly worthless, since the verbal information is presented faster than the listener can really process it. It reminds me of Film Crit Hulk, someone who writes excellent essays with obvious insight into filmmaking, but whose aesthetic choice (or “gimmick”) to write in all caps is often a distraction from the content and a deterrent to readers. Film Crit Hulk has of course addressed this issue and explained the rationale for this choice, but considering that his more recent articles have dropped the third-person “Hulk speak” writing style, the all caps seems to be played out. Nevertheless, I’m sharing the video because Mr. Chipman makes a lot of interesting points, particularly regarding the cultural contexts for the various forms of criticism. Just remember to breathe deeply and monitor your heart rate while watching.
- In this video of a presentation titled Game design: the medium is the message, Jonathan Blow discusses how commercial constraints dictate the form of products from TV shows to video games.
- This somewhat related video from mynextappliance contextualizes the place of Valve’s Steam Machine in gaming history.
- This video from Satchbag’s Goods is ostensibly a review of Hotline Miami, but develops into a discussion of art movements and Kanye West:
- This short interview with Slavoj Žižek in New York magazine continues a trend I’ve noticed since Pervert’s Guide to Ideology has been in release, wherein writers interviewing Žižek feel compelled to write themselves, and their reactions to and interactions with Žižek, into their articles. Something about a Žižek encounter brings out the gonzo in journalists. The NY mag piece is also notable for this succinct positioning of Žižek’s contribution to critical theory:
Žižek, after all, the Yugoslav-born, Ljubljana-based academic and Hegelian; mascot of the Occupy movement, critic of the Occupy movement; and former Slovenian presidential candidate, whose most infamous contribution to intellectual history remains his redefinition of ideology from a Marxist false consciousness to a Freudian-Lacanian projection of the unconscious. Translation: To Žižek, all politics—from communist to social-democratic—are formed not by deliberate principles of freedom, or equality, but by expressions of repressed desires—shame, guilt, sexual insecurity. We’re convinced we’re drawing conclusions from an interpretable world when we’re actually just suffering involuntary psychic fantasies.
- Wired UK reported on university students who turned maps of seventeenth century London into a detailed 3D world:
Following the development of the environment on the team’s blog you can see some of the gaps between what data was deemed noteworthy or worth recording in the seventeenth century and the level of detail we now expect in maps and other infographics. For example, the team struggled to pinpoint the exact location on Pudding Lane of the bakery where the Great Fire of London is thought to have originated and so just ended up placing it halfway along.
- Stephen Totilo reviewed the new pirate-themed Assassin’s Creed game for the New York Times. I haven’t played the game, but I love that the sections of the game set in the present day have shifted from the standard global conspiracy tropes seen in the earlier installments to postmodern self-referential and meta-fictional framing:
Curiously, a new character is emerging in the series: Ubisoft itself, presented mostly in the form of self-parody in the guise of a fictional video game company, Abstergo Entertainment. We can play small sections as a developer in Abstergo’s Montreal headquarters. Our job is to help turn Kenway’s life — mined through DNA-sniffing gadgetry — into a mass-market video game adventure. We can also read management’s emails. The team debates whether games of this type could sell well if they focused more on peaceful, uplifting moments of humanity. Conflict is needed, someone argues. Violence sells.
It turns out that Abstergo is also a front for the villainous Templars, who search for history’s secrets when not creating entertainment to numb the population. In these sections, Ubisoft almost too cheekily aligns itself with the bad guys and justifies its inevitable 2015 Assassin’s Creed, set during yet another violent moment in world history.
- Speaking of postmodern, self-referential, meta-fictional video games: The Stanley Parable was released late last month. There has already been a bevy of analysis written about the game, but I am waiting for the Mac release to play it and doing my best to avoid spoilers in the meantime. Brenna Hillier’s post at VG24/7 is spoiler-free (assuming you are at least familiar with the game’s premise, or its original incarnation as a Half-Life mod), and calls The Stanley Parable “a reaction against, commentary upon, critique and celebration of narrative-driven game design”:
The Stanley Parable wants you to think about it. The Stanley Parable, despite its very limited inputs (you can’t even jump, and very few objects are interactive) looks at those parts of first-person gaming that are least easy to design for – exploration and messing with the game’s engine – and foregrounds them. It takes the very limitations of traditional gaming narratives and uses them to ruthlessly expose their own flaws.
- An article at TechCrunch looks at how the Twitter-acquired Bluefin Labs “took the academic subject of semiotics and made it something ‘central’ to the future of Twitter’s business”:
Roy’s research focus prior to founding Bluefin, and continued interest while running the company, has to do with how both artificial and human intelligences learn language. In studying this process, he determined that the most important factor in meaning making was the interaction between human beings: no one learns language in a vacuum, after all. That lesson helped inform his work at Twitter, which started with mapping the connection between social network activity and live broadcast television.
- Nathan at metopal posted their paper posing the question: What happens when we stop thinking about videogames as cinema and instead think of them through other media, like fashion, dance, or architecture?
Aspiring to cinematic qualities is not bad in and of itself, nor do I mean to shame fellow game writers, but developers and their attendant press tend to be myopic in their point of view, both figuratively and literally. If we continually view videogames through a monocular lens, we miss much of their potential. And moreover, we begin to use ‘cinematic’ reflexively without taking the time to explain what the hell that word means.
Metaphor is a powerful tool. Thinking videogames through other media can reframe our expectations of what games can do, challenge our design habits, and reconfigure our critical vocabularies. To crib a quote from Andy Warhol, we get ‘a new idea, a new look, a new sex, a new pair of underwear.’ And as I hinted before, it turns out that fashion and videogames have some uncanny similarities.
- John Powers at the Airship posted this great longform piece on the political economy of zombies:
Zombies started their life in the Hollywood of the 1930s and ‘40s as simplistic stand-ins for racist xenophobia. Post-millennial zombies have been hot-rodded by Danny Boyle and made into a subversive form of utopia. That grim utopianism was globalized by Max Brooks, and now Brad Pitt and his partners are working to transform it into a global franchise. But if zombies are to stay relevant, it will rely on the shambling monsters’ ability to stay subversive – and real subversive shocks and terror are not dystopian. They are utopian.
- This article at The Conversation addresses the “touchy subject” of Apple’s Touch ID:
Ironically, our bodies now must make physical contact with devices dictating access to the real; Apple’s Touch ID sensor can discern for the most part if we are actually alive. This way, we don’t end up trying to find our stolen fingers on the black market, or prevent others from 3D scanning them to gain access to our lives.
This is a monumental shift from when Apple released its first iPhone just six years ago. It’s a touchy subject: fingerprinting authentication means we confer our trust in an inanimate object to manage our animate selves – our biology is verified, digitised, encrypted, as they are handed over to our devices.
- In the wake of the Silk Road shut down last month, Chloe Albanesius at PC Mag asks: What was Silk Road and how did it work?
Can you really buy heroin on the Web as easily as you might purchase the latest best-seller from Amazon? Not exactly, but as the FBI explained in its complaint, it wasn’t exactly rocket science, thanks to Tor and some bitcoins. Here’s a rundown of how Silk Road worked before the feds swooped in.
- Henry Jenkins posted the transcript of an interview with Mark J.P. Wolf. The theme of the discussion is “imaginary worlds,” and they touch upon the narratology vs. ludology conflict in gaming:
The interactivity vs. storytelling debate is really a question of the author saying either “You choose” (interaction) or “I choose” (storytelling) regarding the events experienced; it can be all of one or all of the other, or some of each to varying degrees; and even when the author says “You choose”, you are still choosing from a set of options chosen by the author. So it’s not just a question of how many choices you make, but how many options there are per choice. Immersion, however, is a different issue, I think, which does not always rely on choice (such as immersive novels), unless you want to count “Continue reading” and “Stop reading” as two options you are constantly asked to choose between.
- Finally, GamesForChange has uploaded video of Ian Bogost’s keynote address from this year’s Games for Change Festival. Bogost extolls the virtues of “earnestness” over “seriousness” in game design: