- Almetria Vaba of PBS LearningMedia has posted a collection of resources for exploring media literacy through the legacy of Dr. Martin Luther King Jr.:
Examine the life and legacy of Dr. Martin Luther King Jr. and the Civil Rights Movement with hundreds of PBS LearningMedia resources. Here is a sampling of resources from the extensive offering in PBS LearningMedia. Use these resources to explore media literacy from historical documentaries to media coverage of social movements.
- Sonia Paul at PBS MediaShift reported on a recent Pew Research study on social media, stress, and the “cost of caring”:
Among the survey’s major findings is that women are much more likely than men to feel stressed after becoming aware of stressful events in the lives of others in their networks.
“Stress is kind of contagious in that way,” said Keith Hampton, an associate professor at Rutgers University and the chief author of the report. “There’s a circle of sharing and caring and stress.”
- Lily Hay Newman reported on the survey for Slate:
In a survey of 1,801 adults, Pew found that frequent engagement with digital services wasn’t directly correlated to increased stress. Women who used social media heavily even recorded lower stress. The survey relied on the Perceived Stress Scale, a widely used stress-measurement tool developed in the early 1980s.
“We began to work fully expecting that the conventional wisdom was right, that these technologies add to stress,” said Lee Rainie, the director of Internet, science, and technology research at Pew. “So it was a real shock when [we] first looked at the data and … there was no association between technology use, especially heavy technology use, and stress.”
- LiveScience writer Elizabeth Palermo looked at the gendered differences found by the study:
The higher incidence of stress among the subset of technology users who are aware of stressful events in the lives of others is something that Hampton and his colleagues call “the cost of caring.”
“You can use these technologies and, as a woman, it’s probably going to be beneficial for your level of stress. But every now and then, bad things are going to happen to people you know, and there’s going to be a cost for that,” Hampton said.
- Nicholas Carr recently penned an editorial for The Guardian considering whether we are becoming too reliant on computers:
The real danger we face from computer automation is dependency. Our inclination to assume that computers provide a sufficient substitute for our own intelligence has made us all too eager to hand important work over to software and accept a subservient role for ourselves. In designing automated systems, engineers and programmers also tend to put the interests of technology ahead of the interests of people. They transfer as much work as possible to the software, leaving us humans with passive and routine tasks, such as entering data and monitoring readouts. Recent studies of the effects of automation on work reveal how easily even very skilled people can develop a deadening reliance on computers. Trusting the software to handle any challenges that may arise, the workers fall victim to a phenomenon called “automation complacency”.
- David Whelan at Vice interviewed Carr on the issue of technology dependency:
Should we be scared of the future?
I think we should be worried about the future. We are putting ourselves passively into the hands of those who design the systems. We need to think critically about that, even as we maintain our enthusiasm for the great inventions that are happening. I’m not a Luddite. I’m not saying we should trash our laptops and run off to the woods.
We’re basically living out Freud’s death drive, trying our best to turn ourselves into inorganic lumps.
Even before Freud, Marx made the point that the underlying desire of technology seemed to be to create animate technology and inanimate humans. If you look at the original radios, they were transmission as well as reception devices, but before long most people just stopped transmitting and started listening.
- Writing at Figure/Ground, John Dowd argues that being there still matters for teaching and learning in the digital age:
From an educational perspective, what we must understand is the relationship between information and meaning. Meaning is not an inevitable outcome of access to information but rather emerges slowly, as one cultivates his or her abilities to incorporate that information in purposeful and ethical ways. Very often this process requires a slowdown rather than a speedup, the latter of which is a primary bias of many digital technologies. The most powerful educational experiences stem from the relationships formed between teacher and student, peer and peer. A smart classroom isn’t necessarily one that includes the latest technologies, but one that facilitates greater interaction among teachers and students, and responsibility for the environment within which one learns. A smart classroom is thus spatially, not primarily technologically, smart. While the two are certainly not mutually exclusive (and much has been written on both), we do ourselves a disservice when privileging the latter over the former.
- Dowd’s argument here is similar to Carr’s thoughts on MOOCs:
In education, computers are also falling short of expectations. Just a couple of years ago, everyone thought that massive open online courses – Moocs – would revolutionise universities. Classrooms and teachers seemed horribly outdated when compared to the precision and efficiency of computerised lessons. And yet Moocs have largely been a flop. We seem to have underestimated the intangible benefits of bringing students together with a real teacher in a real place. Inspiration and learning don’t flow so well through fibre-optic cables.
- MediaPost editor Steve Smith writes about his relationship with his iPhone, calling it life’s new remote:
The idea that the cell phone is an extension of the self is about as old as the device itself. We all recall the hackneyed “pass your phone to the person next to you” thought experiment at trade shows four or five years ago. It was designed to make the point of how “personally” we take these devices.
And now the extraordinary and unprecedented intimacy of these media devices is a part of legal precedent. The recent Supreme Court ruling limiting searches of cell phone contents grounded the unanimous opinion on an extraordinary observation. Chief Justice John Roberts described these devices as being “such a pervasive and insistent part of daily life that the proverbial visitor from Mars might conclude they were an important feature of human anatomy.”
We are only beginning to understand the extent to which these devices are blending the functionality of media with that of real world tools. And it is in line with one of Marshall McLuhan’s core observations in his “Understanding Media” book decades ago.
- Tomas Chamorro-Premuzic contributed a piece to The Guardian referencing Carr to consider how technology has downgraded attention:
As early as 1971 Herbert Simon observed that “what information consumes is rather obvious: it consumes the attention of its recipients. Hence a wealth of information creates a poverty of attention, and a need to allocate that attention efficiently among the overabundance of information sources that might consume it”. Thus instead of reaping the benefits of the digital revolution we are intellectually deprived by our inability to filter out sensory junk in order to translate information into knowledge. As a result, we are collectively wiser, in that we can retrieve all the wisdom of the world in a few minutes, but individually more ignorant, because we lack the time, self-control, or curiosity to do it.
There are also psychological consequences of the distraction economy. Although it is too soon to observe any significant effects from technology on our brains, it is plausible to imagine that long-term effects will occur. As Nicholas Carr noted in The Shallows: What the internet is doing to our brains, repeated exposure to online media demands a cognitive change from deeper intellectual processing, such as focused and critical thinking, to fast autopilot processes, such as skimming and scanning, shifting neural activity from the hippocampus (the area of the brain involved in deep thinking) to the prefrontal cortex (the part of the brain engaged in rapid, subconscious transactions). In other words, we are trading speed for accuracy and prioritise impulsive decision-making over deliberate judgment. In the words of Carr: “The internet is an interruption system. It seizes our attention only to scramble it”.
- James Vincent at The Verge covered a recent study that links nighttime screen use with less REM sleep:
The research, carried out by Harvard Medical School and published in the journal Proceedings of the National Academy of Sciences, studied the sleeping patterns of 12 volunteers over a two-week period. Each individual read a book before their strict 10PM bedtime — spending five days with an iPad and five days with a paper book. The scientists found that when reading on a lit screen, volunteers took an average of 10 minutes longer to fall asleep and received 10 minutes less REM sleep. Regular blood samples showed they also had lower levels of the sleep hormone melatonin, consistent with a circadian cycle delayed by one and a half hours.
- At AdBusters, Douglas Haddow writes that sleep is the enemy of capital:
Ever since the frequent cocaine user and hater of sleep Thomas Edison flicked on the first commercially-viable electric lightbulb, a process has taken hold through which the darkness of sleep time has been systematically deconstructed and illuminated.
Most of us now live in insomniac cities with starless skies, full of twinkling neon signage and flickering gadgets that beg us to stay awake longer and longer. But for all this technological innovation, we still must submit to our diurnal rhythm if we want to stay alive.
And even though sleep may “frustrate and confound strategies to exploit and reshape it,” as Crary says, it, like anything, remains a target of exploitation and reshaping – and in some cases, all-out elimination.
- In an interview with TruthOut to discuss his latest book, Robert McChesney addresses telecommunications monopolies and net neutrality, and advocates radical solutions to systemic problems:
What is striking about this corporate monopolization of the internet is that all the wealth and power has gone to a small number of absolutely enormous firms. As we enter 2015, 13 of the 33 most valuable corporations in the United States are internet firms, and nearly all of them enjoy monopolistic market power as economists have traditionally used the term. If you continue to scan down the list there are precious few internet firms to be found. There is not much of a middle class or even an upper-middle class of internet corporations to be found.
This poses a fundamental problem for democracy, though it is one that mainstream commentators and scholars appear reluctant to acknowledge: If economic power is concentrated in a few powerful hands you have the political economy for feudalism, or authoritarianism, not democracy. Concentrated economic power invariably overwhelms the political equality democracy requires, leading to routinized corruption and an end of the rule of law. That is where we are today in the United States.
- In light of recent terrorist attacks and renewed hysteria about fundamentalist ideologies, I revisited Mark Manson’s essay probing why there seems to be more fundamentalism in the world today:
The short answer is technology. Yes, Facebook really did ruin everything. The explosion in communication technologies over the past decades has re-oriented society and put more psychological strain on us all to find our identities and meaning. For some people, the way to ease this strain is to actually reject complexity and ambiguity for absolutist beliefs and traditional ideals.
Philosopher Charles Taylor wrote that it would be just as difficult to not believe in God in 1500 as it is to believe in God in the year 2000. Obviously, most of humanity believes in God today, but it’s certainly become a much more complicated endeavor. With the emergence of modern science, evolution, liberal democracy, and worldwide 24-hour news coverage of corruption, atrocities, war and religious hypocrisy, today a person of faith has their beliefs challenged more in a week than a person a few generations ago would have in half a lifetime.
I’ve written about the media ecology tradition, attended the Media Ecology Association’s conferences, and had an article published in their journal, but up to now Marshall McLuhan’s Understanding Media and Neil Postman’s Amusing Ourselves to Death are the only primary texts associated with the tradition that I’ve read. To broaden my knowledge of the tradition I’m reading some of the books considered foundational in the media ecology canon, beginning with Lewis Mumford’s Technics & Civilization. I paid special attention to Mumford’s references to capitalism in Technics & Civilization because I have an abiding interest in the marriage of critical/Marxian analysis and media ecological perspectives. One of the most common criticisms of McLuhan’s writings on media is the charge of technological determinism: that his media theory focuses on wide-reaching social and psychological effects while ignoring the historical, political, and economic factors involved in the development and dissemination of technologies. Although this is a valid criticism, as McLuhan’s approach did not emphasize the political economy of media, a number of scholars have re-evaluated McLuhan and other media ecologists to identify parallels in their work with critical theory and the Marxian critique of capitalism. The same criticisms cannot be legitimately levied against Mumford, whose account of successive technological complexes demonstrates careful consideration of the historical, political, and economic situations in which these complexes developed. Technics & Civilization makes clear that a media ecology perspective can incorporate a pronounced political orientation and an analysis of political economy.
Reading through Mumford’s account of the phases of technological complexes, I noted how heavily the capitalist mode of economics depends on technology. The interdependence seemed so crucial to both that the history of capitalism almost appeared to be the history of technological development, though Mumford does distinguish technics and capitalism as separate but interrelated forces. In the conclusion of the final chapter, “Orientation,” Mumford writes, “we have advanced as far as seems possible in considering mechanical civilization as an isolated system” (p. 434). Technics & Civilization was first published in 1934; a contemporary reader will likely extend Mumford’s analysis to account for the last 80 years of technological progress, particularly in consideration of the information and telecommunications revolutions (an editorial note before the main text states that Mumford “would have loved” the Internet). Such an extension must account for the associated developments in capitalism. Scholars have used terms like “hypercapitalism” and “network and informational capitalism” to describe the new outlets of capital accumulation made possible by the global telecommunications infrastructure. Mumford wrote that “we are now entering a phase of dissociation between capitalism and technics” (p. 366), due in part to the over-working of “the machine”. Hypercapitalism has seen new forms of over-exploitation, and the continued commodification of intangibles such as information and attention, calling into question the dissociation of capitalism and technics. Mumford’s warning of the capitalist threat to physical resources, however, remains pertinent today.
The attention Mumford gives to the psychological effects of technics is a fascinating component of his analysis that prefigures McLuhan’s observations on technology as extensions of the human organism. The introduction of introspection and self-reflection instigated by the mirror’s effect on the individual ego; the metamorphosis of thought from flowing and organic to verbal and categorical brought on by print and paper; the shift from self-examination to self-exposure ushered in by the introduction of cameras; these are just some of the examples cited by Mumford to establish that the technological complexes built up from every individual innovation are not constrained to the obvious external manifestations but involve dramatic internal changes as well. In fact, the psychological and material transformations are not distinct processes, but are necessarily interlinked, two sides of the same coin.
- In case you haven’t already heard, scientists have implanted false memories into the brains of mice.
Scientists have created a false memory in mice by manipulating neurons that bear the memory of a place. The work further demonstrates just how unreliable memory can be. It also lays new ground for understanding the cell behavior and circuitry that controls memory, and could one day help researchers discover new ways to treat mental illnesses influenced by memory.
- The inevitable Total Recall references have already appeared. Others have gone with Inception as the pop culture touchstone.
- I recently discovered the Augmented Reality Trends website. Some noteworthy posts: How augmented reality aids advertising.
Augmented reality blurs the line between the virtual and real-world environment. This capability of augmented reality often confuses users, making them unable to determine the difference between the real-world experience and the computer-generated experience. It creates an interactive world in real time, and using this technology, businesses can give customers the opportunity to experience their products and services as if they were real, right from their current dwelling.
AR technology overlays the real-world view with computer-generated sensory input, changing what we see. It can use any kind of object to alter our senses. The enhancements usually include sound, video, graphics and GPS data. And its potential is tremendous, as developers have only just started exploring the world of augmented reality. However, you must not confuse virtual reality with augmented reality, as there is a stark difference between them. Virtual reality, as the name suggests, is not real. It is just a made-up world. On the other hand, augmented reality enhances the real world, providing an augmented view of reality. The enhancements can be minor or major, but AR technology only changes how the real world around the user looks.
And a profile of SeeMore Interactive and their work on augmented reality shopping:
Augmentedrealitytrends.com: Why augmented reality and why your prime focus is on retail industry?
SeeMore Interactive: We recognize the importance of merging brick-and-mortar retail with cloud-based technology to create the ultimate dynamic shopping experience. It’s simply a matter of tailoring a consumer’s shopping experience based on how he or she wants to shop; the ability to research reviews, compare prices, receive new merchandise recommendations, share photos and make purchases while shopping in-store or from the comfort of their home.
- Brian Matchick at Geek Exchange writes about how deep learning brings A.I. one step closer to HAL, Skynet, and the Matrix:
Deep learning is based on neural networks, simplified models of the way clusters of neurons act within the brain that were first proposed in the 1950s. The difference now is that new programming techniques combined with the incredible computing power we have today are allowing these neural networks to learn on their own, just as humans do. The computer is given a huge pile of data and asked to sort the information into categories on its own, with no specific instruction. This is in contrast to previous systems that had to be programmed by hand. By learning incrementally, the machine can grasp the low-level stuff before the high-level stuff. For example, sorting through 10,000 handwritten letters and grouping them into like categories, the machine can then move on to entire words, sentences, signage, etc. This is called “unsupervised learning,” and deep learning systems are very good at it.
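The core idea in that passage — a program sorting unlabeled data into like categories on its own — can be illustrated with something far simpler than a deep neural network. Below is a minimal sketch of unsupervised learning using k-means clustering, written with only the Python standard library. The data points, the choice of k, and the naive initialization are all illustrative; this is not the deep learning systems the article describes, just the smallest working example of learning categories without labels.

```python
import math

def kmeans(points, k, iterations=20):
    """Group 2-D points into k clusters by iteratively refining centroids."""
    # Naive initialization for simplicity: start from the first k points.
    centroids = list(points[:k])
    clusters = [[] for _ in range(k)]
    for _ in range(iterations):
        # Assignment step: attach each point to its nearest centroid.
        clusters = [[] for _ in range(k)]
        for p in points:
            nearest = min(range(k), key=lambda i: math.dist(p, centroids[i]))
            clusters[nearest].append(p)
        # Update step: move each centroid to the mean of its cluster.
        for i, cluster in enumerate(clusters):
            if cluster:
                centroids[i] = (
                    sum(p[0] for p in cluster) / len(cluster),
                    sum(p[1] for p in cluster) / len(cluster),
                )
    return centroids, clusters

# Two obvious groups of points; the algorithm separates them with no labels.
data = [(0.0, 0.1), (0.2, 0.0), (0.1, 0.2), (9.0, 9.1), (9.2, 9.0), (9.1, 9.2)]
centroids, clusters = kmeans(data, k=2)
```

Deep learning systems apply the same unsupervised principle at vastly greater scale and with learned feature hierarchies, which is what lets them progress from letters to words to sentences as described above.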
- This Economist article looks at predictive policing and the American company PredPol (amazingly, a sly sub-section heading is the only reference to the book or film Minority Report and its pre-crime unit and pre-cog mutants):
Intelligent policing can convert these modest gains into significant reductions in crime. Cops working with predictive systems respond to call-outs as usual, but when they are free they return to the spots which the computer suggests. Officers may talk to locals or report problems, like broken lights or unsecured properties, that could encourage crime. Within six months of introducing predictive techniques in the Foothill area of Los Angeles, in late 2011, property crimes had fallen 12% compared with the previous year; in neighbouring districts they rose 0.5% (see chart). Police in Trafford, a suburb of Manchester in north-west England, say relatively simple and sometimes cost-free techniques, including routing police driving instructors through high-risk areas, helped them cut burglaries 26.6% in the year to May 2011, compared with a decline of 9.8% in the rest of the city.
- The BBC web site has published an article on the cities of the future:
Although they may all look very different, the cities of the future share a new way of doing things, from sustainable buildings to walkable streets to energy-efficient infrastructure. While some are not yet complete – or even built – these five locations showcase the cutting edge of urban planning, both in developing new parts of an existing metropolitan area and building entirely new towns. By 2050, it is forecast that 70% of the world’s population will live in cities. These endeavours may help determine the way we will live then, and in decades beyond.
- This piece from Vice’s Motherboard examines Bill Gates’ nuclear power company TerraPower and alternative nuclear fuel thorium:
Mention thorium—an alternative fuel for nuclear power—to the right crowd, and faces will alight with the same look of spirited devotion you might see in, say, Twin Peaks and Chicago Cubs fans. People love thorium against the odds. And now Bill Gates has given them a new reason to keep rooting for the underdog element.
TerraPower, the Gates-chaired nuclear power company, has garnered attention for pursuing traveling wave reactor tech, which runs entirely on spent uranium and would rarely need to be refueled. But the concern just quietly announced that it’s going to start seriously exploring thorium power, too.
- Unsurprisingly, a porno movie filmed using Google Glass has already wrapped:
Google might have put the kibosh on allowing x-rated apps onto Glass (for now) but that hasn’t stopped the porn industry from doing what they do best: using new technology to enhance the, um, adult experience. The not yet titled film stars James Deen and Andy San Dimas.
- Speaking of Google, the company’s research department recently announced details about its machine vision program for large-scale visual recognition:
There has always been a basic split in machine vision work. The engineering approach tries to solve the problem by treating it as a signal detection task using standard engineering techniques. The more “soft” approach has been to try to build systems that are more like the way humans do things. Recently it has been this human approach that seems to have been on top, with DNNs managing to learn to recognize important features in sample videos. This is very impressive and very important, but as is often the case the engineering approach also has a trick or two up its sleeve.
- From Google Research:
We demonstrate the advantages of our approach by scaling object detection from the current state of the art involving several hundred or at most a few thousand of object categories to 100,000 categories requiring what would amount to more than a million convolutions. Moreover, our demonstration was carried out on a single commodity computer requiring only a few seconds for each image. The basic technology is used in several pieces of Google infrastructure and can be applied to problems outside of computer vision such as auditory signal processing.