Earlier this month the mobile-app game Pokémon Go was released in the U.S., and the game has been ubiquitous ever since. Aside from being a sudden pop-culture phenomenon, the game's success carries some significant implications. First, this is clearly a breakthrough moment for augmented reality. Pokémon Go is not the first augmented reality game, nor is it the most ambitious, but it has undoubtedly brought AR into mainstream consciousness. Second, the success of Pokémon Go has led me to reconsider all my previously held assumptions about the uses of mobile apps and gamification for interfacing with urban spaces. I have historically been cynical about the prospect of using mobile games or AR interfaces to interact with urban space: they usually strike me as shallow and insignificant, typically producing a fleeting diversion like a flash-mob dance party rather than altering people's perceptions of place in any lasting or meaningful way. Pokémon Go fits all of those earlier preconceptions, yet despite my best critical instincts, I really like the game.
The buzz about Pokémon Go had been building on various forums online, and after it was released it was virtually impossible to avoid Pokémon Go-related posts. Save for maybe 10 minutes with a friend's Game Boy in the late 90s, I had never played a Pokémon game, and I preemptively wrote off Pokémon Go as yet another cultural fad that I would never partake in or understand. Curiosity got the best of my wife, however; she downloaded the app, and we walked around our neighborhood to test it out. To my surprise, the game was a lot of fun: our familiar surroundings were now filled with digital surprises, and we were excited to see neighborhood landmarks and murals represented as Pokéstops, and wild Pokémon hanging out in the doorways of local shops. We meandered around discovering which of our local landmarks had been incorporated into the game, and each discovery increased my enjoyment of the app. Yes, the game is simple and shallow, but I was completely charmed. I downloaded the game so I could play, too.
Reactions to Pokémon Go have been as fascinating as the game's widespread adoption. Many news articles sensationalized the inherent dangers of playing the game: distracted players wandering into traffic or off of cliffs, people's homes being designated as Pokéstops and besieged by players, and traps being laid (using the game's "lures") to ambush and rob aspiring Pokétrainers. There have also been insightful critical analyses of the game. An early and oft-shared article by Omari Akil considered the implications of Pokémon Go in light of recent police shootings of black men, warning that "Pokemon Go is a death sentence if you are a black man":
I spent less than 20 minutes outside. Five of those minutes were spent enjoying the game. One of those minutes I spent trying to look as pleasant and nonthreatening as possible as I walked past a somewhat visibly disturbed white woman on her way to the bus stop. I spent the other 14 minutes being distracted from the game by thoughts of the countless Black Men who have had the police called on them because they looked “suspicious” or wondering what a second amendment exercising individual might do if I walked past their window a 3rd or 4th time in search of a Jigglypuff.
Others questioned the distribution of Pokémon across neighborhoods, suggesting that poor or black neighborhoods had disproportionately fewer Pokémon and Pokéstops. Among urbanists, however, reaction to the game has been mixed. Mark Wilson at Fastcodesign declared that Pokémon Go "is quietly helping people fall in love with their cities". Ross Brady of Architizer celebrated the game for sparking "a global wave of urban exploration". Writing for Dezeen, Alex Wiltshire boldly stated that the game has "redrawn the map of what people find important about the world". CityLab contributor Laura Bliss proclaimed "Pokémon Go has created a new kind of flaneur".
Others have been more critical of the game, with Nicholas Korody at Archinect retorting: “No, Pokémon Go is not an urban fantasy for the new flaneur“. At Jacobin, Sam Kriss implores readers to “resist Pokémon Go“:
Walk around. Explore your neighborhood. Visit the park. Take in the sights. Have your fun. Pokémon Go is coercion, authority, a command issuing from out of a blank universe, which blasts through social and political cleavages to finally catch ‘em all. It must be resisted.
Some, like Jeff Sparrow at Overland, drew direct parallels to the Situationists:
On the one hand, that’s way cool – suddenly, the old pub near your house is inhabited by monsters.
On the other, there’s something faintly distasteful about the recuperation of specific real histories into a billion-dollar corporate mythology. Nearly 150 people lost their lives when the Triangle Shirtwaist Factory burned to the ground, entirely needless deaths caused by the atrocious working conditions of the garment trade. The tragedy became a rallying point for the trade union movement, the name of the factory, a shorthand reference to employers’ greed.
Now, though, it’s three free Pokeballs.
We might also say, then, that, even as the game leads players to embrace the dérive, it also offers a remarkable demonstration of the phenomenon that Debord critiqued.
Writing for The Atlantic, Ian Bogost meditated on "the tragedy of Pokémon Go":
We can have it both ways; we have to, even: Pokémon Go can be both a delightful new mechanism for urban and social discovery, and also a ghastly reminder that when it comes to culture, sequels rule. It’s easy to look at Pokémon Go and wonder if the game’s success might underwrite other, less trite or brazenly commercial examples of the genre. But that’s what the creators of pervasive games have been thinking for years, and still almost all of them are advertisements. Reality is and always has been augmented, it turns out. But not with video feeds of twenty-year old monsters in balls atop local landmarks. Rather, with swindlers shilling their wares to the everyfolk, whose ensuing dance of embrace and resistance is always as beautiful as it is ugly.
Pokémon Go's popularity has led to many online comparisons to the Star Trek: TNG episode "The Game," in which the crew of the Enterprise is overcome by a mind-controlling video game. The game in Star Trek is not, strictly speaking, an augmented reality game, but it does involve projecting images onto the player's vision, similar to an AR overlay. Previous gaming and gadget fads have been compared to the TNG episode, notably Google Glass (for its similarity to the eye-beaming design used to interface with the game in Star Trek) and the pervasively popular Angry Birds game (as evident in this parody video). The comparison has regained cultural cachet because, unlike Angry Birds, which can be played on the couch, Pokémon Go is played in motion. This, of course, has contributed to the perception of the game's zombifying effects; we've grown accustomed to the fact that everyone's eyes are glued to a smartphone screen in our public spaces, but now there are whole flocks of people milling around with their eyes on their devices.
My cynical side is inclined to agree with the critics who see Pokémon Go's proliferation as proof positive of the pacification and banalization of our society; the visions of Orwell, Bradbury, and Phil Dick all realized at once. But there's something there that has me appreciative, even excited, about this goofy game. As my wife and I wandered our neighborhood looking for pocket monsters, we noticed several other people walking around staring at their phones. This is not an uncommon sight, but it is re-contextualized in light of Pokémon Go's popularity. "Look," my wife would say, "I bet they're playing, too." After a while she had to know for sure, and started walking up to people and asking, "Are you playing Pokémon Go?" Every person she asked was indeed playing the game. Then we were walking along with these people we'd just met, discussing play strategies, sharing Pokéstop locations, spreading word of upcoming lure parties.
One night around 10:30 last week we went into the Oakland neighborhood, home to both Pitt's and Carnegie Mellon's campuses and a hotbed of Pokémon Go activity. When we arrived, at least 20 people sat along the wall in front of the Soldiers & Sailors Memorial, smartphones in hand. We walked around the base of the Cathedral of Learning, where dozens of people in groups of two, three, or more were slowly pacing, stopping to capture a virtual creature. We crossed the street to Schenley Plaza, where still dozens more people trekked through the grass, laughing and exclaiming and running up to their friends to share which Pokémon they had just caught. Sure, most of these people were only talking to their own groups of friends, if they were talking at all, but it was still a cool experience. For me, the greatest thing was not which monsters I caught or how much XP my avatar earned; rather, it was the energy, the unspoken but palpable buzz generated by all these people walking around in the dark of a warm summer night. Yes, I was giving attention to my smartphone screen, but what I remember most from that evening are the stars, and the fireflies, and the murmuring voices. Pokémon Go is promoting a sort of communal public activity, even if the sociality it produces is liminal at best. Yes, it is still shallow, still commercial, still programmed, but it's something; there's an energy there and a potential that is worth paying attention to.
Pokémon Go is not the be-all-end-all of augmented urban exploration, nor should it be considered the pinnacle of how mobile technology can enable new ways of interfacing with city space. But the game's popularity, and my personal experience using it, have given me hope for the potential of AR apps to enrich our experience of urban spaces and engender new types of interactions in our shared environments.
Seltzer's essay on serial killers and the pathological public sphere immediately calls J.G. Ballard to mind. Eventually Seltzer does cite Ballard, but it is in reference to Ballard's The Atrocity Exhibition, a selection that renders the author's omission of Ballard's subsequent novel, Crash, all the more conspicuous (Crash was adapted into a film by David Cronenberg in 1996, the year after Seltzer's article was published). The article's introductory anecdote about Sylvestre Matushka, who engineered train wrecks and claimed to achieve sexual satisfaction only when witnessing these accidents, is obviously evocative of Crash. Ballard's story follows characters who are sexually excited by car crashes, and who stage car accidents and recreate famous wrecks. Seltzer cites The Atrocity Exhibition in order to borrow Ballard's phrase and relate it to his own notion of the pathological public sphere: "spectacular corporeal/machine violence, a drive to make mass technology and public space a vehicle of private desire in public spectacle: the spectacles of public sex and public violence" (p. 124). Though he never refers to Crash, Seltzer's language here could have come directly from the book's dust jacket: "The coupling of bodies and machines is thus also, at least in these cases, a coupling of private and public spaces" (p. 125).
Seltzer's argument is also evocative of a different Crash: the identically titled but textually dissimilar Crash, a 2004 film exploring race relations in contemporary Los Angeles through the interweaving of multiple characters and plotlines. Los Angeles is famous for its iconic freeway system, and the city is often regarded as the apotheosis of car culture, an alternatingly visionary or dystopic manifestation of car-dependent society. The film Crash uses the city's freeway network as a thematic device, beyond the resemblance of the story's interweaving plot threads and intersecting characters to the on-ramps and cloverleaf interchanges of L.A.'s freeways as seen from above. The film opens at the scene of a car accident on one of these L.A. freeways, and the first lines of dialogue (spoken by a character riding in a car involved in the accident) establish the thematic significance of the film's Los Angeles setting:
Graham: It’s the sense of touch. In any real city, you walk, you know? You brush past people, people bump into you. In L.A., nobody touches you. We’re always behind this metal and glass. I think we miss that touch so much, that we crash into each other, just so we can feel something.
Compare this sentiment with these words of serial killer Ted Bundy quoted in Seltzer’s article:
“Another factor that is almost indispensable to this kind of behavior is the mobility of contemporary American life. Living in a large center of population and living with lots of people, you can get used to dealing with strangers. It’s the anonymity factor.” (p. 133)
Seltzer does cite a Los Angeles-based film in his discussion of public and private space: the action-thriller Speed, a sort of wish-fulfillment Hollywood fantasy for Angelenos where the city’s congested freeways are cleared of all traffic and the hero’s speedometer never drops below 50 miles per hour. Seltzer notes the film’s use of “public vehicles of what might be called stranger-intimacy” (p. 125): elevators, buses, airplanes, and the city subway system. Seltzer’s highlighting of transit systems to illustrate the collisions of public and private space resonated with my own research in this area. Seltzer cites urban sociologist Georg Simmel’s account of “the stranger” in urban life; Simmel’s theories have influenced a great deal of urban studies, including theories of transportation and public space.
Toiskallio (2000) applied Simmel's notion of sociability to an analysis of "the interaction between the taxi driver and the fare as an example of an intensive urban semi-public situation where feasible and face-saving social interaction is needed" (p. 4). The term "semi-public" refers to spaces that are neither fully public nor totally private, just as taxicabs are neither public nor private transportation but "paratransit" (p. 8). Such distinctions are further complicated by the recent advent of "car-share" or rideshare services such as Uber and Lyft. These services are essentially hired car services, and function much like taxicabs, but with significant differences. Most relevant to the current discussion is the fact that rideshare drivers do not drive company vehicles as taxi drivers do, but operate their private vehicles to transport customers. This situation transforms a person's private car into a space of stranger-intimacy. There are consequences here not only for transformations of public and private space, but also for the coupling of bodies and machines, as well as implications for affective labor and transportation services.
- Almetria Vaba of PBS LearningMedia has posted a collection of resources for exploring media literacy through the legacy of Dr. Martin Luther King Jr.:
Examine the life and legacy of Dr. Martin Luther King Jr. and the Civil Rights Movement with hundreds of PBS LearningMedia resources. Here is a sampling of resources from the extensive offering in PBS LearningMedia. Use these resources to explore media literacy from historical documentaries to media coverage of social movements.
- Sonia Paul at PBS MediaShift reported on a recent Pew Research study on social media, stress, and the “cost of caring”:
Among the survey’s major findings is that women are much more likely than men to feel stressed after becoming aware of stressful events in the lives of others in their networks.
“Stress is kind of contagious in that way,” said Keith Hampton, an associate professor at Rutgers University and the chief author of the report. “There’s a circle of sharing and caring and stress.”
- Lily Hay Newman reported on the survey for Slate:
In a survey of 1,801 adults, Pew found that frequent engagement with digital services wasn’t directly correlated to increased stress. Women who used social media heavily even recorded lower stress. The survey relied on the Perceived Stress Scale, a widely used stress-measurement tool developed in the early 1980s.
“We began to work fully expecting that the conventional wisdom was right, that these technologies add to stress,” said Lee Rainie, the director of Internet, science, and technology research at Pew. “So it was a real shock when [we] first looked at the data and … there was no association between technology use, especially heavy technology use, and stress.”
- LiveScience writer Elizabeth Palermo looked at the gendered differences found by the study:
The higher incidence of stress among the subset of technology users who are aware of stressful events in the lives of others is something that Hampton and his colleagues call “the cost of caring.”
“You can use these technologies and, as a woman, it’s probably going to be beneficial for your level of stress. But every now and then, bad things are going to happen to people you know, and there’s going to be a cost for that,” Hampton said.
- Nicholas Carr recently penned an editorial for The Guardian considering whether we are becoming too reliant on computers:
The real danger we face from computer automation is dependency. Our inclination to assume that computers provide a sufficient substitute for our own intelligence has made us all too eager to hand important work over to software and accept a subservient role for ourselves. In designing automated systems, engineers and programmers also tend to put the interests of technology ahead of the interests of people. They transfer as much work as possible to the software, leaving us humans with passive and routine tasks, such as entering data and monitoring readouts. Recent studies of the effects of automation on work reveal how easily even very skilled people can develop a deadening reliance on computers. Trusting the software to handle any challenges that may arise, the workers fall victim to a phenomenon called “automation complacency”.
- David Whelan at Vice interviewed Carr on the issue of technology dependency:
Should we be scared of the future?
I think we should be worried about the future. We are putting ourselves passively into the hands of those who design the systems. We need to think critically about that, even as we maintain our enthusiasm for the great inventions that are happening. I'm not a Luddite. I'm not saying we should trash our laptops and run off to the woods.
We’re basically living out Freud’s death drive, trying our best to turn ourselves into inorganic lumps.
Even before Freud, Marx made the point that the underlying desire of technology seemed to be to create animate technology and inanimate humans. If you look at the original radios, they were transmission as well as reception devices, but before long most people just stopped transmitting and started listening.
- Writing at Figure/Ground, John Dowd argues that being there still matters for teaching and learning in the digital age:
From an educational perspective, what we must understand is the relationship between information and meaning. Meaning is not an inevitable outcome of access to information but rather, emerges slowly when one has cultivated his or her abilities to incorporate that information in purposeful and ethical ways. Very often this process requires a slowdown rather than a speedup, the latter of which being a primary bias of many digital technologies. The most powerful educational experiences stem from the relationships formed between teacher and student, peer and peer. A smart classroom isn’t necessarily one that includes the latest technologies, but one that facilitates greater interaction among teachers and students, and responsibility for the environment within which one learns. A smart classroom is thus spatially, not primarily technologically, smart. While the two are certainly not mutually exclusive (and much has been written on both), we do ourselves a disservice when privileging the latter over the former.
- Dowd’s argument here is similar to Carr’s thoughts on MOOCs:
In education, computers are also falling short of expectations. Just a couple of years ago, everyone thought that massive open online courses – Moocs – would revolutionise universities. Classrooms and teachers seemed horribly outdated when compared to the precision and efficiency of computerised lessons. And yet Moocs have largely been a flop. We seem to have underestimated the intangible benefits of bringing students together with a real teacher in a real place. Inspiration and learning don’t flow so well through fibre-optic cables.
- MediaPost editor Steve Smith writes about his relationship with his iPhone, calling it life’s new remote:
The idea that the cell phone is an extension of the self is about as old as the device itself. We all recall the hackneyed “pass your phone to the person next to you” thought experiment at trade shows four or five years ago. It was designed to make the point of how “personally” we take these devices.
And now the extraordinary and unprecedented intimacy of these media devices is a part of legal precedent. The recent Supreme Court ruling limiting searches of cell phone contents grounded the unanimous opinion on an extraordinary observation. Chief Justice John Roberts described these devices as being “such a pervasive and insistent part of daily life that the proverbial visitor from Mars might conclude they were an important feature of human anatomy.”
We are only beginning to understand the extent to which these devices are blending the functionality of media with that of real world tools. And it is in line with one of Marshall McLuhan’s core observations in his “Understanding Media” book decades ago.
- Tomas Chamorro-Premuzic contributed a piece to The Guardian referencing Carr to consider how technology has downgraded attention:
As early as 1971 Herbert Simon observed that “what information consumes is rather obvious: it consumes the attention of its recipients. Hence a wealth of information creates a poverty of attention, and a need to allocate that attention efficiently among the overabundance of information sources that might consume it”. Thus instead of reaping the benefits of the digital revolution we are intellectually deprived by our inability to filter out sensory junk in order to translate information into knowledge. As a result, we are collectively wiser, in that we can retrieve all the wisdom of the world in a few minutes, but individually more ignorant, because we lack the time, self-control, or curiosity to do it.
There are also psychological consequences of the distraction economy. Although it is too soon to observe any significant effects from technology on our brains, it is plausible to imagine that long-term effects will occur. As Nicholas Carr noted in The Shallows: What the internet is doing to our brains, repeated exposure to online media demands a cognitive change from deeper intellectual processing, such as focused and critical thinking, to fast autopilot processes, such as skimming and scanning, shifting neural activity from the hippocampus (the area of the brain involved in deep thinking) to the prefrontal cortex (the part of the brain engaged in rapid, subconscious transactions). In other words, we are trading speed for accuracy and prioritise impulsive decision-making over deliberate judgment. In the words of Carr: “The internet is an interruption system. It seizes our attention only to scramble it”.
- James Vincent at The Verge covered a recent study that links nighttime screen use with less REM sleep:
The research, carried out by Harvard Medical School and published in the journal Proceedings of the National Academy of Sciences, studied the sleeping patterns of 12 volunteers over a two-week period. Each individual read a book before their strict 10PM bedtime — spending five days with an iPad and five days with a paper book. The scientists found that when reading on a lit screen, volunteers took an average of 10 minutes longer to fall asleep and received 10 minutes less REM sleep. Regular blood samples showed they also had lower levels of the sleep hormone melatonin, consistent with a circadian cycle delayed by one and a half hours.
- At AdBusters, Douglas Haddow writes that sleep is the enemy of capital:
Ever since the frequent cocaine user and hater of sleep Thomas Edison flicked on the first commercially-viable electric lightbulb, a process has taken hold through which the darkness of sleep time has been systematically deconstructed and illuminated.
Most of us now live in insomniac cities with starless skies, full of twinkling neon signage and flickering gadgets that beg us to stay awake longer and longer. But for all this technological innovation, we still must submit to our diurnal rhythm if we want to stay alive.
And even though sleep may “frustrate and confound strategies to exploit and reshape it,” as Crary says, it, like anything, remains a target of exploitation and reshaping – and in some cases, all-out elimination.
- In an interview with TruthOut to discuss his latest book, Robert McChesney addresses telecommunications monopolies, net neutrality, and advocates radical solutions to systemic problems:
What is striking about this corporate monopolization of the internet is that all the wealth and power has gone to a small number of absolutely enormous firms. As we enter 2015, 13 of the 33 most valuable corporations in the United States are internet firms, and nearly all of them enjoy monopolistic market power as economists have traditionally used the term. If you continue to scan down the list there are precious few internet firms to be found. There is not much of a middle class or even an upper-middle class of internet corporations to be found.
This poses a fundamental problem for democracy, though it is one that mainstream commentators and scholars appear reluctant to acknowledge: If economic power is concentrated in a few powerful hands you have the political economy for feudalism, or authoritarianism, not democracy. Concentrated economic power invariably overwhelms the political equality democracy requires, leading to routinized corruption and an end of the rule of law. That is where we are today in the United States.
- In light of recent terrorist attacks and renewed hysteria about fundamentalist ideologies, I revisited Mark Manson’s essay probing why there seems to be more fundamentalism in the world today:
The short answer is technology. Yes, Facebook really did ruin everything. The explosion in communication technologies over the past decades has re-oriented society and put more psychological strain on us all to find our identities and meaning. For some people, the way to ease this strain is to actually reject complexity and ambiguity for absolutist beliefs and traditional ideals.
Philosopher Charles Taylor wrote that it would be just as difficult to not believe in God in 1500 as it is to believe in God in the year 2000. Obviously, most of humanity believes in God today, but it’s certainly become a much more complicated endeavor. With the emergence of modern science, evolution, liberal democracy, and worldwide 24-hour news coverage of corruption, atrocities, war and religious hypocrisy, today a person of faith has their beliefs challenged more in a week than a person a few generations ago would have in half a lifetime.
- The 2014 World Cup kicked off yesterday with a futuristic twist on the opening ceremonies. A paraplegic kicked a soccer ball using an exoskeleton designed by the Walk Again Project:
The exoskeleton — a system comprising a helmet implanted with a microchip that sticks out from the underside; a T-shirt loaded with sensors; metal leg braces; and a battery worn in a backpack — is set in motion when the user envisions himself making the kick. The chip translates those electronic commands to a digital language that powers the skeleton, which then moves accordingly. The T-shirt vibrates to enhance the user’s sensation of movement (and eliminate the need to look at his feet to see if he’s stepping forward).
- Unfortunately, as io9 reports, the moment was not well-covered by TV networks:
Talk about dropping the ball. Earlier today, Juliano Pinto — a 29 year-old paraplegic — successfully kicked off the 2014 FIFA World Cup by using a mind-controlled exoskeleton. But sadly, most TV networks failed to show it.
After months of hype, the official broadcast of the opening ceremonies showed only a fraction of it, while some TV networks missed the event altogether. Commentators criticized the organizers for casting aside the moment in favor of performing acts.
- Thomas Frey at the Futurist Speaker blog forecasts the coming AI crash wars:
The invasion of high-frequency trading machines is now forcing capitalism far away from anything either Adam Smith or the founders of the NYSE could possibly find virtuous.
We’re not about to let robots compete in the Olympics, driverless cars race in the Indianapolis 500, or automated machines play sports like football, basketball, or baseball. So why is it we allow them to play a role in the most valuable contest of all, the world wide stock exchange?
With crude forms of AI now entering the quant manipulator’s toolbox, we are now teetering dangerously close to a total collapse of the stock market, one that will leave many corporations and individuals financially destitute.
- Microsoft has announced its version of Apple's Siri virtual assistant. Named Cortana, after the AI character from the Halo video game series, she is coming to Windows smartphones, and, as Brad Molen at Engadget reports, developers programmed her with a distinct personality:
Confident, caring, competent, loyal; helpful, but not bossy: These are just some of the words Susan Hendrich, the project manager in charge of overseeing Cortana’s personality, used to describe the program’s most significant character traits. “She’s eager to learn and can be downright funny, peppering her answers with banter or a comeback,” Hendrich said. “She seeks familiarity, but her job is to be a personal assistant.” With that kind of list, it sure sounds like Hendrich’s describing a human. Which is precisely what she and her team set out to do during Cortana’s development; create an AI with human-like qualities.
Microsoft's decision to infuse Cortana with a personality stemmed from one end goal: user attachment. "We did some research and found that people are more likely to interact with [AI] when it feels more human," said Hendrich. To illustrate that desired human-machine dynamic, Hendrich pointed to her grandmother's experience with a Roomba vacuum: "She gave a name and a personality to an inanimate object, and it brought her joy." That sense of familiarity is exactly what Microsoft wants Windows Phone users to feel when interacting with Cortana on their own devices.
- Tech companies and weapons manufacturers are exploring the crossover potential for firearms and wearable technology devices like Google Glass. Brian Anderson at Motherboard reported on Austin tech startup TrackingPoint's foray into this inevitable extension of augmented reality applications and posted the company's concept video:
“When paired with wearable technology, PGFs can provide unprecedented benefits to shooters, such as the ability to shoot around corners, from behind low walls, and from other positions that provide exceptional cover,” according to a TrackingPoint press release. “Without PGF technology, such positions would be extremely difficult, if not impossible, to fire from.”
The steady rise of wearable technology is unlocking a dizzying number of potential killer apps. Indeed, if there was any lingering doubt that wearable tech is coming to the battlefield, the Glassification of a high-profile smart weapon should put any uncertainties to rest.
If being able to track and drop a moving target with single-shot accuracy at 1,500 feet using a long-range robo rifle wasn’t sobering enough already, to think basically anyone can now do so over a hill, perhaps overlooking a so-called “networked battlefield” shot through with data-driven soldiers, is sure to be even more so.
- Another recent Motherboard article reported on a model city being built in Michigan for the purpose of test driving self-driving cars:
The simulation is run by a proprietary software, and programmers will code in dangerous situations—traffic jams and potential collisions—so engineers can anticipate problems and, ideally, solve for them before the automated autos hit the streets. It’s laying the groundwork for the real-world system planned for 2021 in Ann Arbor.
There will surely be some technical barriers to work out, but the biggest hurdles self-driving cars will have to clear are likely regulatory, legal, and political. Will driverless cars be subsidized like public transit? If autonomous cars eliminate crashes, will insurance companies start tanking? Will the data-driven technology be a privacy invasion?
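The scenario-coding idea described in the quote can be pictured with a toy loop. This is purely an illustrative sketch of the general technique (scripted hazard injection), not the Michigan project's actual software; every name and event here is hypothetical.

```python
# Toy test-track simulation: engineers script hazardous events at
# specific timesteps, and a stubbed autonomous controller must choose
# a maneuver for each one. Entirely hypothetical -- for illustration only.

SCENARIOS = {
    5: "traffic_jam",
    12: "pedestrian_crossing",
    20: "potential_collision",
}

def controller_response(event):
    """Stub controller: maps a scripted hazard to a safe maneuver."""
    responses = {
        "traffic_jam": "slow_down",
        "pedestrian_crossing": "full_stop",
        "potential_collision": "evasive_brake",
    }
    return responses.get(event, "continue")

def run_simulation(steps=25):
    """Step through the scenario timeline, logging each hazard and response."""
    log = []
    for t in range(steps):
        event = SCENARIOS.get(t)
        if event:
            log.append((t, event, controller_response(event)))
    return log
```

The point of running scripted hazards rather than random traffic is repeatability: the same collision scenario can be replayed after every software change.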
- In other robo-car news, this article by Philip Ross provides an extensive look at the future of driverless vehicles:
Today you can buy a top-of-the-line S-Class car from Mercedes-Benz that figuratively says “ahem” when you begin to stray out of your lane or tailgate. If you do nothing, it’ll turn the wheel slightly or lightly apply the brakes. And if you’re still intent on crashing, it will take command. In 5 years, cars will be quicker to intervene; in 20, they won’t need your advice; and in 30, they won’t take it.
Accident rates will plummet, parking problems will vanish, streets will narrow, cities will bulk up, and commuting by automobile will become a mere extension of sleep, work, and recreation. With no steering column and no need for a crush zone in front of the passenger compartment (after all, there aren’t going to be any crashes), car design will run wild: Collapsibility! Stackability! Even interchangeability, because when a car can come when called, pick up a second or third passenger for a fee, and park itself, even the need to own the thing will dwindle.
- Ray Kurzweil recently delivered a TED talk on hybrid computing. Video is available on the TED site, which summarizes the talk thusly:
Two hundred million years ago, our mammal ancestors developed a new brain feature: the neocortex. This stamp-sized piece of tissue (wrapped around a brain the size of a walnut) is the key to what humanity has become. Now, futurist Ray Kurzweil suggests, we should get ready for the next big leap in brain power, as we tap into the computing power in the cloud.
- Finally, this post at SingularityHub covers Muse, a headband that monitors brain activity and may enable brain-to-computer interaction:
The headband picks up four channels from seven EEG sensors, five across the forehead and two conductive rubber ear sensors. Together, the sensors detect the five basic types of brain waves, and, unlike conventional sensors, they don’t need to be surrounded by gel to work. Software helps filter out the noise and syncs the signal, via Bluetooth, to a companion app. The app shows the user the brainwave information and offers stress-reduction exercises.
A bit further down the road of possibilities is brain-to-brain networking. Last year, researchers at the University of Washington used EEG sensors to detect one person’s intention to move his arm and used it to stimulate the other person’s brain with an external coil and watched as the second person moved his hand without planning to.
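The "five basic types of brain waves" mentioned above are conventionally distinguished by frequency. As a rough illustration (textbook band edges, which vary by source, and in no way Muse's actual implementation), classifying a dominant frequency into a band looks like this:

```python
# Illustrative only: common textbook EEG frequency bands. Exact cutoffs
# differ between sources, and this is not how any particular headset works.

EEG_BANDS = [
    ("delta", 0.5, 4.0),    # deep sleep
    ("theta", 4.0, 8.0),    # drowsiness, meditation
    ("alpha", 8.0, 13.0),   # relaxed wakefulness
    ("beta",  13.0, 30.0),  # active concentration
    ("gamma", 30.0, 100.0), # high-level cognitive processing
]

def classify_band(freq_hz):
    """Return the brain-wave band containing the given dominant frequency."""
    for name, low, high in EEG_BANDS:
        if low <= freq_hz < high:
            return name
    return "out of range"
```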
It’s been a long time since the last update (what happened to October?), so this post is extra long in an attempt to catch up.
- I haven’t seen the new Ender’s Game movie, but this review by abbeyotis at Cyborgology calls the film “a lean and contemporary plunge into questions of morality mediated by technology”:
In a world in which interplanetary conflicts play out on screens, the government needs commanders who will never shrug off their campaigns as merely “virtual.” These same commanders must feel the stakes of their simulated battles to be as high as actual warfare (because, of course, they are). Card’s book makes the nostalgic claim that children are useful because they are innocent. Hood’s movie leaves nostalgia by the roadside, making the more complex assertion that they are useful because of their unique socialization to be intimately involved with, rather than detached from, simulations.
- In the ongoing discourse about games criticism and its relation to film reviews, Bob Chipman’s latest Big Picture post uses his own review of the Ender’s Game film as an entry point for a breathless treatise on criticism. The video presents a concise and nuanced overview of arts criticism, from the classical era through film reviews as consumer reports up to the very much in-flux conceptions of games criticism. Personally I find this video sub-genre (where spoken content is crammed into a Tommy gun barrage of word bullets so that the narrator can convey a lot of information in a short running time) irritating and mostly worthless, since the verbal information is being presented faster than the listener can really process it. It reminds me of Film Crit Hulk, someone who writes excellent essays with obvious insight into filmmaking, but whose aesthetic choice (or “gimmick”) to write in all caps is often a distraction from the content and a deterrent to readers. Film Crit Hulk has of course addressed this issue and explained the rationale for this choice, but considering that his more recent articles have dropped the third-person “Hulk speak” writing style, the all-caps style seems played out. Nevertheless, I’m sharing the video because Mr. Chipman makes a lot of interesting points, particularly regarding the cultural contexts for the various forms of criticism. Just remember to breathe deeply and monitor your heart rate while watching.
- In this video of a presentation titled Game design: the medium is the message, Jonathan Blow discusses how commercial constraints dictate the form of products from TV shows to video games.
- This somewhat related video from mynextappliance contextualizes Valve’s Steam Machine’s place in gaming history.
- This video from Satchbag’s Goods is ostensibly a review of Hotline Miami, but develops into a discussion of art movements and Kanye West:
- This short interview with Slavoj Žižek in New York magazine continues a trend I’ve noticed since The Pervert’s Guide to Ideology was released, wherein writers interviewing Žižek feel compelled to include themselves and their reactions to/interactions with Žižek in their articles. Something about a Žižek encounter brings out the gonzo in journalists. The NY mag piece is also notable for this succinct positioning of Žižek’s contribution to critical theory:
Žižek, after all, the Yugoslav-born, Ljubljana-based academic and Hegelian; mascot of the Occupy movement, critic of the Occupy movement; and former Slovenian presidential candidate, whose most infamous contribution to intellectual history remains his redefinition of ideology from a Marxist false consciousness to a Freudian-Lacanian projection of the unconscious. Translation: To Žižek, all politics—from communist to social-democratic—are formed not by deliberate principles of freedom, or equality, but by expressions of repressed desires—shame, guilt, sexual insecurity. We’re convinced we’re drawing conclusions from an interpretable world when we’re actually just suffering involuntary psychic fantasies.
- Wired UK reported on university students who turned maps of seventeenth century London into a detailed 3D world:
Following the development of the environment on the team’s blog you can see some of the gaps between what data was deemed noteworthy or worth recording in the seventeenth century and the level of detail we now expect in maps and other infographics. For example, the team struggled to pinpoint the exact location on Pudding Lane of the bakery where the Great Fire of London is thought to have originated and so just ended up placing it halfway along.
- Stephen Totilo reviewed the new pirate-themed Assassin’s Creed game for the New York Times. I haven’t played the game, but I love that the sections of the game set in the present day have shifted from the standard global conspiracy tropes seen in the earlier installments to postmodern self-referential and meta-fictional framing:
Curiously, a new character is emerging in the series: Ubisoft itself, presented mostly in the form of self-parody in the guise of a fictional video game company, Abstergo Entertainment. We can play small sections as a developer in Abstergo’s Montreal headquarters. Our job is to help turn Kenway’s life — mined through DNA-sniffing gadgetry — into a mass-market video game adventure. We can also read management’s emails. The team debates whether games of this type could sell well if they focused more on peaceful, uplifting moments of humanity. Conflict is needed, someone argues. Violence sells.
It turns out that Abstergo is also a front for the villainous Templars, who search for history’s secrets when not creating entertainment to numb the population. In these sections, Ubisoft almost too cheekily aligns itself with the bad guys and justifies its inevitable 2015 Assassin’s Creed, set during yet another violent moment in world history.
- Speaking of postmodern, self-referential, meta-fictional video games: The Stanley Parable was released late last month. There has already been a bevy of analysis written about the game, but I am waiting for the Mac release to play the game and doing my best to avoid spoilers in the meantime. Brenna Hillier’s post at VG24/7 is spoiler-free (assuming you are at least familiar with the game’s premise, or its original incarnation as a Half-Life mod), and calls The Stanley Parable “a reaction against, commentary upon, critique and celebration of narrative-driven game design”:
The Stanley Parable wants you to think about it. The Stanley Parable, despite its very limited inputs (you can’t even jump, and very few objects are interactive) looks at those parts of first-person gaming that are least easy to design for – exploration and messing with the game’s engine – and foregrounds them. It takes the very limitations of traditional gaming narratives and uses them to ruthlessly expose their own flaws.
- An article at Techcrunch looks at how the Twitter-acquired Bluefin Labs “took the academic subject of semiotics and made it something ‘central’ to the future of Twitter’s business”:
Roy’s research focus prior to founding Bluefin, and continued interest while running the company, has to do with how both artificial and human intelligences learn language. In studying this process, he determined that the most important factor in meaning making was the interaction between human beings: no one learns language in a vacuum, after all. That lesson helped inform his work at Twitter, which started with mapping the connection between social network activity and live broadcast television.
- Nathan at metopal posted their paper posing the question: What happens when we stop thinking about videogames as cinema and instead think of them through other media, like fashion, dance, or architecture?
Aspiring to cinematic qualities is not bad in and of itself, nor do I mean to shame fellow game writers, but developers and their attendant press tend to be myopic in their point of view, both figuratively and literally. If we continually view videogames through a monocular lens, we miss much of their potential. And moreover, we begin to use ‘cinematic’ reflexively without taking the time to explain what the hell that word means.
Metaphor is a powerful tool. Thinking videogames through other media can reframe our expectations of what games can do, challenge our design habits, and reconfigure our critical vocabularies. To crib a quote from Andy Warhol, we get ‘a new idea, a new look, a new sex, a new pair of underwear.’ And as I hinted before, it turns out that fashion and videogames have some uncanny similarities.
- John Powers at the Airship posted this great longform piece on the political economy of zombies:
Zombies started their life in the Hollywood of the 1930s and ‘40s as simplistic stand-ins for racist xenophobia. Post-millennial zombies have been hot-rodded by Danny Boyle and made into a subversive form of utopia. That grim utopianism was globalized by Max Brooks, and now Brad Pitt and his partners are working to transform it into a global franchise. But if zombies are to stay relevant, it will rely on the shambling monsters’ ability to stay subversive – and real subversive shocks and terror are not dystopian. They are utopian.
- This article at The Conversation addresses the “touchy subject” of Apple’s Touch ID:
Ironically, our bodies now must make physical contact with devices dictating access to the real; Apple’s Touch ID sensor can discern for the most part if we are actually alive. This way, we don’t end up trying to find our stolen fingers on the black market, or prevent others from 3D scanning them to gain access to our lives.
This is a monumental shift from when Apple released its first iPhone just six years ago. It’s a touchy subject: fingerprinting authentication means we confer our trust in an inanimate object to manage our animate selves – our biology is verified, digitised, encrypted, as they are handed over to our devices.
- In the wake of the Silk Road shut down last month, Chloe Albanesius at PC Mag asks: What was Silk Road and how did it work?
Can you really buy heroin on the Web as easily as you might purchase the latest best-seller from Amazon? Not exactly, but as the FBI explained in its complaint, it wasn’t exactly rocket science, thanks to Tor and some bitcoins. Here’s a rundown of how Silk Road worked before the feds swooped in.
- Henry Jenkins posted the transcript of an interview with Mark J.P. Wolf. The theme of the discussion is “imaginary worlds,” and they touch upon the narratology vs. ludology conflict in gaming:
The interactivity vs. storytelling debate is really a question of the author saying either “You choose” (interaction) or “I choose” (storytelling) regarding the events experienced; it can be all of one or all of the other, or some of each to varying degrees; and even when the author says “You choose”, you are still choosing from a set of options chosen by the author. So it’s not just a question of how many choices you make, but how many options there are per choice. Immersion, however, is a different issue, I think, which does not always rely on choice (such as immersive novels), unless you want to count “Continue reading” and “Stop reading” as two options you are constantly asked to choose between.
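Wolf's point that the space of experiences depends not just on how many choices you make but on how many options there are per choice can be made concrete with a toy calculation (mine, not the interview's): the number of distinct playthroughs is the product of the options at each author-defined decision point.

```python
from math import prod

def playthroughs(options_per_choice):
    """Count distinct paths through a branching narrative, given the
    number of options the author offers at each choice point."""
    return prod(options_per_choice)

# Three binary choices yield fewer paths than two four-way choices:
# playthroughs([2, 2, 2]) -> 8, playthroughs([4, 4]) -> 16.
```

Either way, as Wolf notes, the player is always choosing from a menu the author wrote.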
- Finally, GamesForChange has uploaded video of Ian Bogost’s keynote address from this year’s Games for Change Festival. Bogost extols the virtues of “earnestness” over “seriousness” in game design:
- I’ve long been fascinated by the gaming culture in South Korea, and Tom Massey has written a great feature piece for Eurogamer titled Seoul Caliber: Inside Korea’s Gaming Culture. From this westerner’s perspective, having never visited Korea, the article reads almost more like cyberpunk fiction than games journalism:
Not quite as ubiquitous, but still extremely common, are PC Bangs: LAN gaming hangouts where 1000 Won nets you an hour of multiplayer catharsis. In Gangnam’s Maxzone, overhead fans rotate at Apocalypse Now speed, slicing cigarette smoke as it snakes through the blades. Korea’s own NCSoft, whose European base is but a stone’s throw from the Eurogamer offices, is currently going strong with its latest MMO, Blade & Soul.
“It’s relaxing,” says Min-Su, sipping a Milkis purchased from the wall-mounted vending machine. “And dangerous,” he adds. “It’s easy to lose track of time playing these games, especially when you have so much invested in them. I’m always thinking about achieving the next level or taking on a quick quest to try to obtain a weapon, and the next thing I know I’ve been here for half the day.”
- As a cyberpunk/hyperreality aside, the city of Hong Kong has put up a blue-sky backdrop for when the real sky is too smoggy for tourist photos.
- In yet another cyberpunk dystopian tangent, I recently came across Chris Rogers’ site Fragments of a Hologram Rose: Re-seeing Blade Runner, with an assortment of content and analysis relating to the film.
- And one final cyberpunk diversion: this video is the first part of a lecture by University of Michigan professor Eric Rabkin covering cyberpunk, postmodernism, and beyond:
- Writing for The New Economy, Aaran Franda examines how the virtual economies seen in games like EVE Online provide valuable perspectives on real world economic activity:
Creation and simulation in virtual worlds appear to offer the best domain to test the new ideas required to tackle the very real problems of deprivation, inequality, unemployment, and poverty that exist in national economies. On that note the need to see our socioeconomic institutions for the games that they really are seems even more poignant.
In the words of Vili Lehdonvirta, a leading scholar in virtual goods and currencies, the suffering we see today is “not some consequence of natural or physical law” it instead “is a result of the way we play these games.”
- Jon Evans at Tech Crunch looks at jobs, robots, capitalism, inequality, and you:
The global economy seems to be bifurcating into a rich/tech track and a poor/non-tech track, not least because new technology will increasingly destroy/replace old non-tech jobs. (Yes, global. Foxconn is already replacing Chinese employees with one million robots.) So far so fairly non-controversial.
The big thorny question is this: is technology destroying jobs faster than it creates them?
We live in an era of rapid exponential growth in technological capabilities. (Which may finally be slowing down, true, but that’s an issue for decades hence.) If you’re talking about the economic effects of technology in the 1980s, much less the 1930s or the nineteenth century, as if it has any relevance whatsoever to today’s situation, then you do not understand exponential growth. The present changes so much faster that the past is no guide at all; the difference is qualitative, not just quantitative. It’s like comparing a leisurely walk to relativistic speeds.
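Evans's "leisurely walk vs. relativistic speeds" contrast can be illustrated with a toy calculation (my numbers, not his; the doubling period is an arbitrary assumption, not a claim about any real technology trend):

```python
# Toy comparison of linear vs. exponential capability growth.
# Both start at 1.0; the parameters are illustrative assumptions only.

def linear_growth(years, annual_increment=1.0):
    """Capability after `years` of steady additive growth."""
    return 1.0 + annual_increment * years

def exponential_growth(years, doubling_period_years=2.0):
    """Capability after `years` of repeated doubling."""
    return 2 ** (years / doubling_period_years)

# Over 30 years, linear growth yields 31x while doubling every two
# years yields 2**15 = 32768x -- a qualitative, not just quantitative, gap.
```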
- This recent episode of Radiolab focused on talking to machines:
We begin with a love story–from a man who unwittingly fell in love with a chatbot on an online dating site. Then, we encounter a robot therapist whose inventor became so unnerved by its success that he pulled the plug. And we talk to the man who coded Cleverbot, a software program that learns from every new line of conversation it receives…and that’s chatting with more than 3 million humans each month. Then, five intrepid kids help us test a hypothesis about a toy designed to push our buttons, and play on our human empathy. And we meet a robot built to be so sentient that its creators hope it will one day have a consciousness, and a life, all its own.
- This video shows a demo of using Google Glass for interactive augmented reality:
- A recent Guardian article by Juliette Garside warns that our digital infrastructure is exceeding the limits of human control:
“These outages are absolutely going to continue,” said Neil MacDonald, a fellow at technology research firm Gartner. “There has been an explosion in data across all types of enterprises. The complexity of the systems created to support big data is beyond the understanding of a single person and they also fail in ways that are beyond the comprehension of a single person.”
From high volume securities trading to the explosion in social media and the online consumption of entertainment, the amount of data being carried globally over the private networks, such as stock exchanges, and the public internet is placing unprecedented strain on websites and on the networks that connect them.
- In an “anti-videogame manifesto,” Keith Burgun argues for intrinsic rewards and against grinding in videogames:
What I want is systems that have intrinsic rewards; that are disciplines similar to drawing or playing a musical instrument. I want systems which are their own reward.
What videogames almost always give me instead are labor that I must perform for an extrinsic reward. I want to convince you that not only is this not what I want, this isn’t really what anyone wants.
- This video from PBS Digital Studios’ Off Book looks at the rise of competitive gaming & e-sports:
- Will Luton at GamesIndustry International writes about the celebrification of game developers:
This ‘celebrification’ is enlivening making games and giving players role models, drawing more people in to development, especially indie and auteured games. This shift is proving more prosperous than any Skillset-accredited course or government pot could ever hope for. We are making men sitting in pants at their laptops for 12 hours a day as glamorous as it could be.
Creating luminaries will lead to all the benefits that more people in games can bring: a bigger and brighter community, plus new and fresh talent making exciting games. However, celebritydom demands storms, turmoil and gossip.
- The ongoing survey of Hollywood’s Summer of Doom continues with Isaac Chotiner’s New Republic article, Hollywood is in trouble and we’re all going to pay:
Spielberg’s theory is essentially that a studio will eventually go under after it releases five or six bombs in a row. The reason: budgets have become so gigantic. And, indeed, this summer has been full of movies with giant budgets and modest grosses, all of which has elicited hand-wringing about financial losses, the lack of a quality product (another post-apocalyptic thriller? more superheroes?), and a possible connection between the two. There has been some hope that Hollywood’s troubles will lead to a rethinking of how movies get made, and which movies get greenlit by studio executives. But a close look at this summer’s grosses suggest a more worrisome possibility: that the studios will become more conservative and even less creative.
- Finally, video of Slavoj Žižek and Paul A. Taylor discussing the difficulty of conveying philosophical ideas in today’s media:
- David Cronenberg’s Videodrome premiered in February 1983. To mark the 30th anniversary of the film’s release, Cyborgology co-founder Nathan Jurgenson reflects on the New Flesh in relation to contemporary social media:
Videodrome’s depiction of techno-body synthesis is, to be sure, intense; Cronenberg has the unusual talent of making violent, disgusting, and erotic things seem even more so. The technology is veiny and lubed. It breathes and moans; after watching the film, I want to cut my phone open just to see if it will bleed. Fittingly, the film was originally titled “Network of Blood,” which is precisely how we should understand social media, as a technology not just of wires and circuits, but of bodies and politics. There’s nothing anti-human about technology: the smartphone that you rub and take to bed is a technology of flesh. Information penetrates the body in increasingly more intimate ways.
- I also came across this short piece by Joseph Matheny at Alterati on Videodrome and YouTube:
Videodrome is even more relevant now that YouTube is delivering what cable television promised to in the 80s: a world where everyone has their own television station. Although digital video tools began to democratize video creation, it’s taken the further proliferation of broadband Internet and the emergence of convenient platforms like YouTube and Google Video to democratize video distribution.
- There’s also my Videodrome-centric post from a couple of years ago. Coincidentally, I watched eXistenZ for the first time last week. I didn’t know much about the film going in, and initially I was enthusiastic that it seemed to be a spiritual successor to Videodrome, updating the media metaphor for the New Flesh from television to video games. I remained engaged throughout the movie (although about two thirds into the film I turned to my fiancee and asked “Do you have any idea what’s going on?”), and there were elements that I enjoyed but ultimately I was disappointed. I had a similar reaction at the ending of Cronenberg’s Spider, thinking “What was the point of all that?” when the closing credits started to roll, though it was much easier to stay awake during eXistenZ.
- White hat hacker Barnaby Jack was found dead in San Francisco this week; he was 35 years old. From the Reuters article on his death:
His genius was finding bugs in the tiny computers embedded in equipment, such as medical devices and cash machines. He often received standing ovations at conferences for his creativity and showmanship while his research forced equipment makers to fix bugs in their software.
Jack had planned to demonstrate his techniques to hack into pacemakers and implanted defibrillators at the Black Hat hackers convention in Las Vegas next Thursday. He told Reuters last week that he could kill a man from 30 feet away by attacking an implanted heart device.
- Writing in the MIT Technology Review, Don Norman asks whether wearable devices can augment our activities without distracting us from the real world:
Without the right approach, the continual distraction of multiple tasks exerts a toll that disrupts performance. It takes time to switch tasks, to get back what attention theorists call “situation awareness.” Interruptions disrupt performance, and even a voluntary switching of attention from one task to another is an interruption of the task being left behind.
Furthermore, it will be difficult to resist the temptation of using powerful technology that guides us with useful side information, suggestions, and even commands. Sure, other people will be able to see that we are being assisted, but they won’t know by whom, just as we will be able to tell that they are being minded, and we won’t know by whom.
- This CNN.com article looks at “wearable tech that will turn man into machine by 2015” via an hour-by-hour breakdown of a hypothetical day in the life of a wearable tech aficionado:
9am to 1pm: Throughout the day you connect to your Dekko-powered augmented reality device, which overlays your vision with a broad range of information and entertainment. While many of the products the US software company is proposing are currently still fairly conceptual, Dekko hopes to find ways to integrate an extra layer of visual information into every part of daily life. Dekko is one of the companies supplying software to Google Glass, the wearable computer that gives users information through a spectacle-like visual display. Matt Miesnieks, CEO of Dekko, says that he believes “the power of wearables comes from connecting our senses to sensors.”
- In another article from the MIT Technology Review, Rachel Metz ponders the possibility of smart contact lenses:
Researchers at Belgian nanoelectronics research and development center Imec and Belgium’s Ghent University are in the very early stages of developing such a device, which would bring augmented reality–the insertion of digital imagery such as virtual signs and historical markers into the real world–right to your eyeballs. It’s just one of several such projects (see “Contact Lens Computer: It’s Like Google Glass Without The Glasses”), and while the idea is nowhere near the point where you could ask your eye doctor for a pair, it could become more realistic as the cost and size of electronic components continue to fall and wearable gadgets gain popularity.
Speaking on the sidelines of the Wearable Technologies conference in San Francisco on Tuesday, Eric Dy, Imec’s North America business development manager, said researchers are investigating the feasibility of integrating an array of micro lenses with LEDs, using the lenses to help focus light and project it onto the wearer’s retinas.
- Ben Gilbert at engadget reports on prototypical Universal Translators ala Star Trek:
The biggest barrier, beyond the translation itself, is speech recognition. In so many words, background noise interferes with the translation software, thus affecting results. But Barra said it works “close to 100 percent” when used in “controlled environments.” Sounds perfect for diplomats, not so much for real-world conversations. Of course, Google’s non-real-time, text-based translation software built into Chrome leaves quite a bit to be desired, making us all the more wary of putting our faith into Google’s verbal solution. As the functionality is still “several years away,” though, there’s still plenty of time to convert us.
- DVICE writer Colin Druce-Mcfadden looks at the potential of life-size humanoid holograms:
There will be limitations, however. It’s easy to think that a life-sized human being, standing in your living room, would be capable of giving you a hug, for instance. But if that breakthrough is coming, it hasn’t arrived yet. Holodeck creations these are not. And images projected through the magic of HoloVision won’t be able to follow you into the kitchen for a snack either — not unless you’ve got a whole network of HoloVision cameras, anyway.
- Cyborgology contributor davidbanks addresses “what’s really disturbing about retailers tracking your every move“:
The implications of Euclid’s technology do not stop at surveillance or privacy. Remember, these systems are meant to feed data to store owners so that they can rearrange store shelves or entire showroom floors to increase sales. Malls, casinos, and grocery stores have always been carefully planned out spaces—scientifically arranged and calibrated for maximum profit at minimal cost. Euclid’s systems however, allow for massive and exceedingly precise quantification and analysis. More than anything, what worries me is the deliberateness of these augmented spaces. Euclid will make spaces designed to do exactly one thing almost perfectly: sell you shit you don’t need. I worry about spaces that are as expertly and diligently designed as Amazon’s home page or the latest Pepsi advertisement. A space built on data so rich and thorough that it’ll make focus groups look quaint in comparison.
- In the New York Review of Books, James Bamford says the NSA knows “much more than you think”:
Of course the US is not a totalitarian society, and no equivalent of Big Brother runs it, as the widespread reporting of Snowden’s information shows. We know little about what uses the NSA makes of most information available to it—it claims to have exposed a number of terrorist plots—and it has yet to be shown what effects its activities may have on the lives of most American citizens. Congressional committees and a special federal court are charged with overseeing its work, although they are committed to secrecy, and the court can hear appeals only from the government.
Still, the US intelligence agencies also seem to have adopted Orwell’s idea of doublethink—“to be conscious of complete truthfulness,” he wrote, “while telling carefully constructed lies.” For example, James Clapper, the director of national intelligence, was asked at a Senate hearing in March whether “the NSA collect[s] any type of data at all on millions or hundreds of millions of Americans.” Clapper’s answer: “No, sir…. Not wittingly.”
- Kotaku reports on using the Oculus Rift to pilot a drone:
The drone is carrying a laptop so it can communicate with the headset, but right now the sticking point is range; since it’s using wi-fi to communicate, it’ll only get to around 50-100m.
- Now for the Cyberpunk promised in the post title: the script-writer of the film adaptation of the Deus Ex video game series says the movie will be “a cyberpunk film, not a video game film”:
“It’s not a video game movie, it’s a cyberpunk movie,” Cargill said. “Eidos Montreal has given us a lot of freedom in terms of story; they want this movie to be Blade Runner. We want this movie to be Blade Runner.”
- io9 recently linked to this interview with William Gibson from the Paris Review:
There’s a famous story about your being unable to sit through Blade Runner while writing Neuromancer.
I was afraid to watch Blade Runner in the theater because I was afraid the movie would be better than what I myself had been able to imagine. In a way, I was right to be afraid, because even the first few minutes were better. Later, I noticed that it was a total box-office flop, in first theatrical release. That worried me, too. I thought, Uh-oh. He got it right and nobody cares! Over a few years, though, I started to see that in some weird way it was the most influential film of my lifetime, up to that point. It affected the way people dressed, it affected the way people decorated nightclubs. Architects started building office buildings that you could tell they had seen in Blade Runner. It had had an astonishingly broad aesthetic impact on the world.
- Finally, a piece by Tim Leary on the “Cyberpunks”:
The concept was formally introduced in William Gibson’s 1984 punk novel, NEUROMANCER. Although this first novel swept the Triple Crown of science fiction–the Hugo, the Nebula, and the Philip K. Dick awards–it is not really science fiction. It could be called “science faction” in that it occurs not in another galaxy in the far future, but 20 years from now, in a BLADE RUNNER world just a notch beyond our silicon present.
In Gibson’s Cyberworld there is no warp drive and no “beam me up, Scotty.” The high technology is the stuff that appears on today’s screens or that processes data in today’s laboratories: Super-computer boards. Recombinant DNA chips. AI systems and enormous data banks controlled by multinational combines based in Japan and Zurich.
- In case you haven’t already heard, scientists have implanted false memories into the brains of mice.
Scientists have created a false memory in mice by manipulating neurons that bear the memory of a place. The work further demonstrates just how unreliable memory can be. It also lays new ground for understanding the cell behavior and circuitry that controls memory, and could one day help researchers discover new ways to treat mental illnesses influenced by memory.
- The inevitable Total Recall references have already appeared. Others have gone with Inception as the pop culture touchstone.
- I recently discovered the Augmented Reality Trends website. Some noteworthy posts: How augmented reality aids advertising.
Augmented reality blurs the line between the virtual and real-world environment. This capability of augmented reality often confuses users, making them unable to determine the difference between the real world experience and the computer generated experience. It creates an interactive world in real-time and using this technology, businesses can give customers the opportunity to feel their products and service as if it is real right from their current dwelling.
AR technology imposes on the real world view with the help of computer-generated sensory, changing what we see. It can use any kind of object to alter our senses. The enhancements usually include sound, video, graphics and GPS data. And its potentials are tremendous as developers have just started exploring the world of augmented reality. However, you must not confuse between virtual reality and augmented reality, as there is a stark difference between them. Virtual reality, as the name suggests, is not real. It is just a made up world. On the other hand, augmented reality is enhancing the real world, providing an augmented view of the reality. The enhancements can be minor or major, but AR technology only changes how the real world around the user looks like.
And a profile of SeeMore Interactive and their work on augmented reality shopping:
Augmentedrealitytrends.com: Why augmented reality and why your prime focus is on retail industry?
SeeMore Interactive: We recognize the importance of merging brick-and-mortar retail with cloud-based technology to create the ultimate dynamic shopping experience. It’s simply a matter of tailoring a consumer’s shopping experience based on how he or she wants to shop; the ability to research reviews, compare prices, receive new merchandise recommendations, share photos and make purchases while shopping in-store or from the comfort of their home.
- Brian Matchick at Geek Exchange writes about how deep learning brings A.I. one step closer to HAL, Skynet, and the Matrix:
Deep learning is based on neural networks, simplified models of the way clusters of neurons act within the brain that were first proposed in the 1950s. The difference now is that new programming techniques combined with the incredible computing power we have today are allowing these neural networks to learn on their own, just as humans do. The computer is given a huge pile of data and asked to sort the information into categories on its own, with no specific instruction. This is in contrast to previous systems that had to be programmed by hand. By learning incrementally, the machine can grasp the low-level stuff before the high-level stuff. For example, sorting through 10,000 handwritten letters and grouping them into like categories, the machine can then move on to entire words, sentences, signage, etc. This is called “unsupervised learning,” and deep learning systems are very good at it.
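The “unsupervised learning” described above—handing a machine unlabeled data and letting it discover the categories on its own—can be illustrated with something far simpler than a neural network. The sketch below uses k-means clustering, a classic algorithm that groups points into categories with no labels or instruction; it is an illustrative stand-in, not how deep learning systems actually work internally.

```python
# Minimal sketch of unsupervised learning: k-means groups unlabeled points
# into categories on its own, loosely analogous to sorting handwritten
# letters into like piles with no instruction. (Illustrative only; deep
# learning uses neural networks, not k-means.)
import random

def kmeans(points, k, iterations=20, seed=0):
    random.seed(seed)
    centers = random.sample(points, k)  # start from k random points
    clusters = [[] for _ in range(k)]
    for _ in range(iterations):
        # assignment step: each point joins the cluster of its nearest center
        clusters = [[] for _ in range(k)]
        for x, y in points:
            i = min(range(k),
                    key=lambda c: (x - centers[c][0])**2 + (y - centers[c][1])**2)
            clusters[i].append((x, y))
        # update step: move each center to the mean of its cluster
        for i, cl in enumerate(clusters):
            if cl:
                centers[i] = (sum(p[0] for p in cl) / len(cl),
                              sum(p[1] for p in cl) / len(cl))
    return centers, clusters

# two obvious blobs of points -- but the algorithm is never told which is which
data = [(0.1, 0.2), (0.0, 0.0), (0.2, 0.1),
        (5.0, 5.1), (5.2, 4.9), (4.9, 5.0)]
centers, clusters = kmeans(data, k=2)
```

Run on the six points above, the algorithm recovers the two blobs by itself; labeling those groups (“these are all the letter A”) is the part that still needs a human.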
- This Economist article looks at predictive policing and American company PredPol (amazingly, a sly sub-section heading is the only reference to the book or film Minority Report and its pre-crime unit and pre-cog mutants):
Intelligent policing can convert these modest gains into significant reductions in crime. Cops working with predictive systems respond to call-outs as usual, but when they are free they return to the spots which the computer suggests. Officers may talk to locals or report problems, like broken lights or unsecured properties, that could encourage crime. Within six months of introducing predictive techniques in the Foothill area of Los Angeles, in late 2011, property crimes had fallen 12% compared with the previous year; in neighbouring districts they rose 0.5% (see chart). Police in Trafford, a suburb of Manchester in north-west England, say relatively simple and sometimes cost-free techniques, including routing police driving instructors through high-risk areas, helped them cut burglaries 26.6% in the year to May 2011, compared with a decline of 9.8% in the rest of the city.
- The BBC web site has published an article on the cities of the future:
Although they may all look very different, the cities of the future share a new way of doing things, from sustainable buildings to walkable streets to energy-efficient infrastructure. While some are not yet complete – or even built – these five locations showcase the cutting edge of urban planning, both in developing new parts of an existing metropolitan area and building entirely new towns. By 2050, it is forecast that 70% of the world’s population will live in cities. These endeavours may help determine the way we will live then, and in decades beyond.
- This piece from Vice’s Motherboard examines Bill Gates’ nuclear power company TerraPower and alternative nuclear fuel thorium:
Mention thorium—an alternative fuel for nuclear power—to the right crowd, and faces will alight with the same look of spirited devotion you might see in, say, Twin Peaks and Chicago Cubs fans. People love thorium against the odds. And now Bill Gates has given them a new reason to keep rooting for the underdog element.
TerraPower, the Gates-chaired nuclear power company, has garnered attention for pursuing traveling wave reactor tech, which runs entirely on spent uranium and would rarely need to be refueled. But the concern just quietly announced that it’s going to start seriously exploring thorium power, too.
- Unsurprisingly, a porno movie filmed using Google Glass has already wrapped:
Google might have put the kibosh on allowing x-rated apps onto Glass (for now) but that hasn’t stopped the porn industry from doing what they do best: using new technology to enhance the, um, adult experience. The not yet titled film stars James Deen and Andy San Dimas.
- Speaking of Google, the company’s research department recently announced details about their Machine Vision visual recognition program:
There has always been a basic split in machine vision work. The engineering approach tries to solve the problem by treating it as a signal detection task using standard engineering techniques. The more “soft” approach has been to try to build systems that are more like the way humans do things. Recently it has been this human approach that seems to have been on top, with DNNs managing to learn to recognize important features in sample videos. This is very impressive and very important, but as is often the case the engineering approach also has a trick or two up its sleeve.
- From Google Research:
We demonstrate the advantages of our approach by scaling object detection from the current state of the art involving several hundred or at most a few thousand of object categories to 100,000 categories requiring what would amount to more than a million convolutions. Moreover, our demonstration was carried out on a single commodity computer requiring only a few seconds for each image. The basic technology is used in several pieces of Google infrastructure and can be applied to problems outside of computer vision such as auditory signal processing.
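The trick that makes 100,000 categories feasible on one machine is avoiding the exhaustive convolutions: instead of scoring every image patch against every category template, a locality-sensitive hash maps each patch to a small bucket of likely-matching templates. The sketch below is my own toy illustration of that idea (the random-hyperplane hashing scheme, bucket table, and `cat_N` filter names are all assumptions for demonstration, not Google’s actual system).

```python
# Toy sketch of locality-sensitive hashing (LSH) for scaling detection to
# many categories: hash each patch, then score only the handful of
# templates in its bucket instead of all of them.
# (Assumption: this mirrors the spirit of the hashing approach, not the
# real implementation; all names here are made up.)
import random

random.seed(1)
DIM, BITS = 8, 12

# random hyperplanes define the hash: each bit is the sign of a dot product
planes = [[random.gauss(0, 1) for _ in range(DIM)] for _ in range(BITS)]

def lsh(v):
    bits = 0
    for p in planes:
        dot = sum(a * b for a, b in zip(p, v))
        bits = (bits << 1) | (dot > 0)
    return bits

# index 1,000 category "filters" (templates) by hash code once, up front
filters = {f"cat_{i}": [random.gauss(0, 1) for _ in range(DIM)]
           for i in range(1000)}
table = {}
for name, f in filters.items():
    table.setdefault(lsh(f), []).append(name)

# at detection time: hash the patch, then score only its bucket's filters
patch = filters["cat_42"]          # a patch matching template 42 exactly
candidates = table[lsh(patch)]     # tiny candidate set, not all 1,000
```

Because only the candidates in one bucket get fully scored, the per-patch cost grows with bucket size rather than with the total number of categories, which is what lets the category count climb by orders of magnitude on a single commodity computer.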