
MISC Monday: MLK media literacy, social media stress, the attention economy, and more


Examine the life and legacy of Dr. Martin Luther King Jr. and the Civil Rights Movement with hundreds of PBS LearningMedia resources. Here is a sampling from that extensive collection; use these resources to explore media literacy, from historical documentaries to media coverage of social movements.

Among the survey’s major findings is that women are much more likely than men to feel stressed after becoming aware of stressful events in the lives of others in their networks.

“Stress is kind of contagious in that way,” said Keith Hampton, an associate professor at Rutgers University and the chief author of the report. “There’s a circle of sharing and caring and stress.”

In a survey of 1,801 adults, Pew found that frequent engagement with digital services wasn’t directly correlated with increased stress. Women who used social media heavily even recorded lower stress. The survey relied on the Perceived Stress Scale, a widely used stress-measurement tool developed in the early 1980s.
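For context, the Perceived Stress Scale is a short self-report questionnaire rather than a physiological measure: respondents rate ten items on a 0–4 frequency scale, the positively worded items are reverse-scored, and the item scores are summed. A minimal sketch of how the standard PSS-10 is scored (my illustration; this is not Pew’s analysis code):

```python
# Illustrative sketch of scoring the 10-item Perceived Stress Scale (PSS-10).
# Not Pew's analysis code; the reverse-scored positions (items 4, 5, 7, 8)
# follow the standard published instrument.

# Response coding: 0 = never, 1 = almost never, 2 = sometimes,
# 3 = fairly often, 4 = very often.
REVERSE_SCORED = {3, 4, 6, 7}  # zero-based indices of items 4, 5, 7, 8

def pss10_score(responses):
    """Return a total PSS-10 score (0-40); higher means more perceived stress."""
    if len(responses) != 10 or any(not 0 <= r <= 4 for r in responses):
        raise ValueError("expected 10 responses, each coded 0-4")
    return sum(4 - r if i in REVERSE_SCORED else r
               for i, r in enumerate(responses))

# Example: a respondent with mostly moderate answers.
print(pss10_score([2, 1, 3, 2, 1, 2, 3, 2, 1, 2]))  # -> 19
```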

“We began to work fully expecting that the conventional wisdom was right, that these technologies add to stress,” said Lee Rainie, the director of Internet, science, and technology research at Pew. “So it was a real shock when [we] first looked at the data and … there was no association between technology use, especially heavy technology use, and stress.”

The higher incidence of stress among the subset of technology users who are aware of stressful events in the lives of others is something that Hampton and his colleagues call “the cost of caring.”

“You can use these technologies and, as a woman, it’s probably going to be beneficial for your level of stress. But every now and then, bad things are going to happen to people you know, and there’s going to be a cost for that,” Hampton said.

The real danger we face from computer automation is dependency. Our inclination to assume that computers provide a sufficient substitute for our own intelligence has made us all too eager to hand important work over to software and accept a subservient role for ourselves. In designing automated systems, engineers and programmers also tend to put the interests of technology ahead of the interests of people. They transfer as much work as possible to the software, leaving us humans with passive and routine tasks, such as entering data and monitoring readouts. Recent studies of the effects of automation on work reveal how easily even very skilled people can develop a deadening reliance on computers. Trusting the software to handle any challenges that may arise, the workers fall victim to a phenomenon called “automation complacency”.

Should we be scared of the future?
I think we should be worried about the future. We are putting ourselves passively into the hands of those who design the systems. We need to think critically about that, even as we maintain our enthusiasm for the great inventions that are happening. I’m not a Luddite. I’m not saying we should trash our laptops and run off to the woods.

We’re basically living out Freud’s death drive, trying our best to turn ourselves into inorganic lumps.
Even before Freud, Marx made the point that the underlying desire of technology seemed to be to create animate technology and inanimate humans. If you look at the original radios, they were transmission as well as reception devices, but before long most people just stopped transmitting and started listening.

From an educational perspective, what we must understand is the relationship between information and meaning. Meaning is not an inevitable outcome of access to information; rather, it emerges slowly as one cultivates the ability to incorporate that information in purposeful and ethical ways. Very often this process requires a slowdown rather than a speedup, the latter being a primary bias of many digital technologies. The most powerful educational experiences stem from the relationships formed between teacher and student, peer and peer. A smart classroom isn’t necessarily one that includes the latest technologies, but one that facilitates greater interaction among teachers and students, and responsibility for the environment within which one learns. A smart classroom is thus spatially, not primarily technologically, smart. While the two are certainly not mutually exclusive (and much has been written on both), we do ourselves a disservice when privileging the latter over the former.

  • Dowd’s argument here is similar to Carr’s thoughts on MOOCs:

In education, computers are also falling short of expectations. Just a couple of years ago, everyone thought that massive open online courses – Moocs – would revolutionise universities. Classrooms and teachers seemed horribly outdated when compared to the precision and efficiency of computerised lessons. And yet Moocs have largely been a flop. We seem to have underestimated the intangible benefits of bringing students together with a real teacher in a real place. Inspiration and learning don’t flow so well through fibre-optic cables.

  • MediaPost editor Steve Smith writes about his relationship with his iPhone, calling it life’s new remote:

The idea that the cell phone is an extension of the self is about as old as the device itself. We all recall the hackneyed “pass your phone to the person next to you” thought experiment at trade shows four or five years ago. It was designed to make the point of how “personally” we take these devices.

And now the extraordinary and unprecedented intimacy of these media devices is a part of legal precedent. The recent Supreme Court ruling limiting searches of cell phone contents grounded the unanimous opinion on an extraordinary observation. Chief Justice John Roberts described these devices as being “such a pervasive and insistent part of daily life that the proverbial visitor from Mars might conclude they were an important feature of human anatomy.”

We are only beginning to understand the extent to which these devices are blending the functionality of media with that of real-world tools. And it is in line with one of Marshall McLuhan’s core observations in his book Understanding Media decades ago.

As early as 1971, Herbert Simon observed that “what information consumes is rather obvious: it consumes the attention of its recipients. Hence a wealth of information creates a poverty of attention, and a need to allocate that attention efficiently among the overabundance of information sources that might consume it”. Thus, instead of reaping the benefits of the digital revolution, we are intellectually deprived by our inability to filter out sensory junk in order to translate information into knowledge. As a result, we are collectively wiser, in that we can retrieve all the wisdom of the world in a few minutes, but individually more ignorant, because we lack the time, self-control, or curiosity to do it.
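Simon’s point is, at bottom, a budgeting problem: a fixed stock of attention allocated across competing information sources. As a toy illustration (my own, not Simon’s), a greedy allocation by value per minute of attention looks like this:

```python
# Toy model of Simon's allocation problem (my illustration, not Simon's):
# spend a fixed attention budget greedily on the sources that return the
# most value per minute of attention (effectively a fractional knapsack).

def allocate_attention(sources, budget_minutes):
    """sources: list of (name, minutes_demanded, value). Returns {name: minutes}."""
    # Rank sources by value density (value per minute of attention).
    ranked = sorted(sources, key=lambda s: s[2] / s[1], reverse=True)
    allocation = {}
    for name, minutes, value in ranked:
        if budget_minutes <= 0:
            break
        spend = min(minutes, budget_minutes)
        allocation[name] = spend
        budget_minutes -= spend
    return allocation

feeds = [("news feed", 30, 12.0), ("email", 45, 9.0), ("longform essay", 60, 30.0)]
print(allocate_attention(feeds, 90))
# -> {'longform essay': 60, 'news feed': 30}; the email never gets read.
```

Even this crude model reproduces the intuition behind a “poverty of attention”: once the budget runs out, everything below the cut simply never gets read.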

There are also psychological consequences of the distraction economy. Although it is too soon to observe any significant effects from technology on our brains, it is plausible to imagine that long-term effects will occur. As Nicholas Carr noted in The Shallows: What the Internet Is Doing to Our Brains, repeated exposure to online media demands a cognitive change from deeper intellectual processing, such as focused and critical thinking, to fast autopilot processes, such as skimming and scanning, shifting neural activity from the hippocampus (the area of the brain involved in deep thinking) to the prefrontal cortex (the part of the brain engaged in rapid, subconscious transactions). In other words, we are trading accuracy for speed and prioritising impulsive decision-making over deliberate judgment. In the words of Carr: “The internet is an interruption system. It seizes our attention only to scramble it”.

The research, carried out at Harvard Medical School and published in the journal Proceedings of the National Academy of Sciences, studied the sleeping patterns of 12 volunteers over a two-week period. Each individual read a book before their strict 10PM bedtime — spending five days with an iPad and five days with a paper book. The scientists found that when reading on a lit screen, volunteers took an average of 10 minutes longer to fall asleep and received 10 minutes less REM sleep. Regular blood samples showed they also had lower levels of the sleep hormone melatonin, consistent with a circadian cycle delayed by one and a half hours.

Ever since the frequent cocaine user and hater of sleep Thomas Edison flicked on the first commercially viable electric lightbulb, a process has taken hold through which the darkness of sleep time has been systematically deconstructed and illuminated.

Most of us now live in insomniac cities with starless skies, full of twinkling neon signage and flickering gadgets that beg us to stay awake longer and longer. But for all this technological innovation, we still must submit to our diurnal rhythm if we want to stay alive.

And even though sleep may “frustrate and confound strategies to exploit and reshape it,” as Crary says, it, like anything, remains a target of exploitation and reshaping – and in some cases, all-out elimination.

What is striking about this corporate monopolization of the internet is that all the wealth and power has gone to a small number of absolutely enormous firms. As we enter 2015, 13 of the 33 most valuable corporations in the United States are internet firms, and nearly all of them enjoy monopolistic market power as economists have traditionally used the term. If you continue to scan down the list, there are precious few internet firms to be found; there is not much of a middle class or even an upper-middle class of internet corporations.

This poses a fundamental problem for democracy, though it is one that mainstream commentators and scholars appear reluctant to acknowledge: If economic power is concentrated in a few powerful hands you have the political economy for feudalism, or authoritarianism, not democracy. Concentrated economic power invariably overwhelms the political equality democracy requires, leading to routinized corruption and an end of the rule of law. That is where we are today in the United States.

The short answer is technology. Yes, Facebook really did ruin everything. The explosion in communication technologies over the past decades has re-oriented society and put more psychological strain on us all to find our identities and meaning. For some people, the way to ease this strain is to actually reject complexity and ambiguity for absolutist beliefs and traditional ideals.

Philosopher Charles Taylor wrote that it would be just as difficult to not believe in God in 1500 as it is to believe in God in the year 2000. Obviously, most of humanity believes in God today, but it’s certainly become a much more complicated endeavor. With the emergence of modern science, evolution, liberal democracy, and worldwide 24-hour news coverage of corruption, atrocities, war and religious hypocrisy, today a person of faith has their beliefs challenged more in a week than a person a few generations ago would have in half a lifetime.

In Medias Res: Chomsky Occupied, lolcats invade aca-meme-ia, the intention economy and more…

  • Microsoft is opening a research lab in New York City staffed by A-list sociologists, computational scientists, and network theorists, among others.

The NYC lab recruits bring mathematical and computational tools that could work magic with the social media research already underway at Microsoft Research, led by folks like Gen-fluxer danah boyd. “I would say that the highly simplified version of what happens is that data scientists do patterns and ethnographers tell stories,” boyd tells Fast Company. While Microsoft Research New England has strengths in qualitative social science, empirical economics, machine learning, and mathematics, “We’ve long noted the need for data science types who can bridge between us,” boyd explained in a blog post announcing the NYC labs.

So the world is now indeed splitting into a plutonomy and a precariat — in the imagery of the Occupy movement, the 1 percent and the 99 percent. Not literal numbers, but the right picture. Now, the plutonomy is where the action is and it could continue like this.

If it does, the historic reversal that began in the 1970s could become irreversible. That’s where we’re heading. And the Occupy movement is the first real, major, popular reaction that could avert this. But it’s going to be necessary to face the fact that it’s a long, hard struggle. You don’t win victories tomorrow. You have to form the structures that will be sustained, that will go on through hard times and can win major victories. And there are a lot of things that can be done.

  • An article at The Atlantic poses the question: are LOLCats making us smart? The article quotes Kate Miltner, who wrote her dissertation on LOLCat memes:

According to Miltner, “When it came to LOLCats, sharing and creating were often different means to the same end: making meaningful connections with others.” At their core LOLCats weren’t about those funny captions, the weird grammar, or the cute kitties, but people employed those qualities in service of that primary goal of human connection.

A newer outgrowth of this idea is that information is so omnipresent, and consumers face so much of it, that businesses now operate in a completely different economic model, fighting to get people’s attention. This attention economy has new rules based on how much time people are willing to spend paying attention to a piece of information and, advertisers hope, to the advertisements that may surround it. New tools are emerging to analyze not just what is talked about but also sentiment, audience demographics, and how quickly it spreads.
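Under the hood, tools like these tend to reduce to a few aggregates computed over a stream of posts. A minimal sketch, assuming a hypothetical data shape and using a crude word-list sentiment score as a stand-in for any real service’s model:

```python
# Minimal sketch of attention-economy analytics over a hypothetical feed.
# The word lists below are a crude stand-in for a real sentiment model.
from datetime import datetime

POSITIVE = {"love", "great", "win"}
NEGATIVE = {"hate", "awful", "fail"}

def sentiment(text):
    """Positive-minus-negative word hits: >0 leans positive, <0 negative."""
    words = text.lower().split()
    return sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)

def spread_rate(share_times):
    """Shares per hour between the first and last observed share."""
    if len(share_times) < 2:
        return 0.0
    hours = (max(share_times) - min(share_times)).total_seconds() / 3600
    return len(share_times) / hours if hours else float("inf")

shares = [datetime(2015, 1, 5, 9), datetime(2015, 1, 5, 10), datetime(2015, 1, 5, 11)]
print(sentiment("Love this great campaign"))  # -> 2
print(round(spread_rate(shares), 1))          # -> 1.5 shares per hour
```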

To push efficiency, the better way would be to craft the message more accurately for specific people, not just a demographic: for me personally, not just for ‘people who live in that part of the city’. How would that be possible? It starts with trying to understand the intention behind what people want, rather than just trying to grab their attention as they walk away. If we knew, or better yet, if consumers each told us what they wanted, and we could craft the message for each person as well as target exactly who would be interested, then the efficiency of that message suddenly shoots way up. It hinges on that dialogue with the consumer.

Scott Merrill at TechCrunch also covered Searls’s book:

Another substantial topic of the book is just how incorrect most of the information collected about us actually is. And still this factually wrong data is used to select which advertisements are presented to you, in the hope that you’ll click through. Aside from how intrusive advertising is, is it any surprise that click-through rates are so low when the data used to target ads to viewers is so wildly off-base?

Searls also advocates strongly for Vendor Relationship Management (VRM) solutions to give consumers the same kind of tracking and information collection about vendors that vendors use against us. The point of VRM is not adversarial, according to Searls. Instead, it restores balance to the overall market and seeks to actively reward those companies that pay attention to individual intentions.

In medias res: end-of-the-semester reading list

Due to end-of-the-semester activities, posting has been slow the last couple of weeks. But my exams are finished and I’ve submitted grades, so here’s a celebratory news roundup:

In an interview published Sunday, Google’s co-founder cited a wide range of attacks on “the open internet,” including government censorship and interception of data, overzealous attempts to protect intellectual property, and new communication portals that use web technologies and the internet, but under restrictive corporate control.

There are “very powerful forces that have lined up against the open internet on all sides and around the world,” says Brin. “I thought there was no way to put the genie back in the bottle, but now it seems in certain areas the genie has been put back in the bottle.”

The post-social world is an “attention economy.” If you don’t have engagement, you don’t have attention, and if you don’t have attention – well, you don’t have anything, really.

In the 1970s, the scholar Herbert Simon argued that “in an information-rich world, the wealth of information means a dearth of something else: a scarcity of whatever it is that information consumes. What information consumes is rather obvious: it consumes the attention of its recipients.”

His arguments give rise both to the notion of “information overload” and to the “attention economy”. In the attention economy, people’s willingness to distribute their attention to various information stimuli creates value for said stimuli. Indeed, the economic importance of advertisements is predicated on the notion that getting people to pay attention to something has value.
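That value is priced quite literally: display advertising sells attention by the thousand impressions (CPM), so converting attention into revenue is simple arithmetic. A toy calculation with made-up numbers:

```python
# Toy illustration of attention priced as ad inventory (made-up numbers):
# display ads are sold per thousand impressions (CPM).
def ad_revenue(impressions, cpm_dollars):
    """Revenue in dollars for a number of ad impressions sold at a CPM rate."""
    return impressions / 1000 * cpm_dollars

# Two million page views, one ad each, sold at a $3.50 CPM.
print(f"${ad_revenue(2_000_000, 3.50):,.2f}")  # -> $7,000.00
```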

If one wanted to track the three trends likely to have the most impact on international relations over the next decade, which ones could help us anticipate global political crises? At the top of my news feed are items about who is in jail and why, rigged elections, and social media.

School shootings and domestic terrorism have proliferated on a global level. In recent months there have been school shootings in Finland, Germany, Greece, and other countries as well as the United States. Although there may be stylistic differences, in all cases young men act out their rage through the use of guns and violence to create media spectacles and become celebrities-of-the-moment.

Class dismissed, have a great summer!

In medias res: Semiology of Batman, economics of attention, hypodermic needles, magic bullets and more

So I’ve decided to headline these posts of interesting (to me) media-related content from around the web “In medias res”. Not very original, I know, but “in the middle of things” seems appropriate.

Following the semiotics goals I defined earlier, we will explore the complex network of signs in AAA games, comic books, the Batman universe, and related pop culture; explore the narrative themes behind it all; and examine how Rocksteady implemented that sign language using semiotic principles.

Schiller elaborates on the ways in which “Corporate speech has become the dominant discourse… While the corporate voice booms across the land, individual expression, at best, trickles through tiny constricted public circuits. This has allowed the effective right to free speech to be transferred from individuals to billion-dollar companies which, in effect, monopolize public communication” (p. 45). Privatization, deregulation, and the expansion of market relationships are cited by Schiller as the environment in which the national information infrastructure has been eroded (p. 46).

  • Tomi Ahonen, apparently the person who declared mobile technology the 7th mass medium (who knew?), has declared augmented reality the 8th mass medium. The list of media, in order of appearance:

1st mass media PRINT – from 1400s (books, pamphlets, newspapers, magazines, billboards)

2nd mass media RECORDINGS – from 1890s (records, tapes, cartridges, videocassettes, CDs, DVDs)

3rd mass media CINEMA – from 1900s

4th mass media RADIO – from 1920s

5th mass media TELEVISION – from 1940s

6th mass media INTERNET – from 1992

7th mass media MOBILE – from 1998

8th mass media AUGMENTED REALITY – from 2010

The return to the “magic bullet” theory has led many Arab and Western media scholars to focus on the study of the role of social media in developing popular movements. Little or no attention is paid to folk and traditional communication outlets such as Friday sermons, coffeehouse storytellers (“hakawati”), and mourning gatherings of women (“subhieh”). These face-to-face folk communication vehicles play an important role in developing the Arab public sphere as well as in introducing change.

And this piece about a new sex-advice show on MTV mentions the “hypodermic needle” theory:

When you talk about “young viewers” as helpless victims who are targeted by a message and instantly fall prey to it, you are positing a pre-World War II mass communications theory known as the hypodermic model.

This model saw mass media as a giant hypodermic needle that “injected” messages into our brains. And no brains were more susceptible to the injections than those of children.