Monthly Archives: January 2014

American gun use is out of control

(Shouldn’t the world intervene, asks the Guardian)

Concepts/issues  – civil rights, the 2nd Amendment, freedom to bear arms, polarisation of opinion, gun violence, fear/peril

The death toll from firearms in the US suggests that the country is gripped by civil war
Henry Porter

A man on a rifle range: ‘More Americans lost their lives from firearms in the past 45 years than in all wars involving the US.’ Photograph: Scott Olson/Getty Images

Last week, Starbucks asked its American customers to please not bring their guns into the coffee shop. This is part of the company’s concern about customer safety and follows a ban in the summer on smoking within 25 feet of a coffee shop entrance and an earlier ruling about scalding hot coffee. After the celebrated Liebeck v McDonald’s case in 1994, involving a woman who suffered third-degree burns to her thighs, Starbucks complies with the Specialty Coffee Association of America‘s recommendation that drinks should be served at a maximum temperature of 82C.

Although it was brave of Howard Schultz, the company’s chief executive, to go even this far in a country where people are better armed and only slightly less nervy than rebel fighters in Syria, we should note that dealing with the risks of scalding and secondary smoke came well before addressing the problem of people who go armed to buy a latte. There can be no weirder order of priorities on this planet.

That’s America, we say, as news of the latest massacre breaks – last week it was the slaughter of 12 people by Aaron Alexis at Washington DC’s navy yard – and move on. But what if we no longer thought of this as just a problem for America and, instead, viewed it as an international humanitarian crisis – a quasi civil war, if you like, that calls for outside intervention? As citizens of the world, perhaps we should demand an end to the unimaginable suffering of victims and their families – the maiming and killing of children – just as America does in every new civil conflict around the globe.

The annual toll from firearms in the US is running at 32,000 deaths and climbing, even though the general crime rate is on a downward path (it is 40% lower than in 1980). If this perennial slaughter doesn’t qualify for intercession by the UN and all relevant NGOs, it is hard to know what does.

To absorb the scale of the mayhem, it’s worth trying to guess the death toll of all the wars in American history since the War of Independence began in 1775, and follow that by estimating the number killed by firearms in the US since the day that Robert F. Kennedy was shot in 1968 with a .22 Iver Johnson handgun wielded by Sirhan Sirhan. The figures from the Congressional Research Service, plus recent statistics from icasualties.org, tell us that from the first casualties in the battle of Lexington to recent operations in Afghanistan, the toll is 1,171,177. By contrast, the number killed by firearms, including suicides, since 1968, according to the Centers for Disease Control and Prevention and the FBI, is 1,384,171.

That 212,994 more Americans lost their lives from firearms in the last 45 years than in all wars involving the US is a staggering fact, particularly when you place it in the context of the safety-conscious, “secondary smoke” obsessions that characterise so much of American life.
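The arithmetic behind that comparison can be checked directly from the figures quoted above; here is a minimal sketch (the totals themselves are the article’s, attributed to the CRS, icasualties.org, the CDC and the FBI, and are not independently verified here):

```python
# Figures as quoted in the article above.
war_deaths_since_1775 = 1_171_177      # all US war deaths, Lexington to Afghanistan
firearm_deaths_since_1968 = 1_384_171  # US firearm deaths, including suicides

# Difference cited in the paragraph above.
print(firearm_deaths_since_1968 - war_deaths_since_1775)  # 212994
```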

Everywhere you look in America, people are trying to make life safer. On roads, for example, there has been a huge effort in the past 50 years to enforce speed limits, crack down on drink/drug driving and build safety features into highways, as well as vehicles. The result is a steadily improving record; by 2015, forecasters predict that for the first time road deaths will be fewer than those caused by firearms (32,036 to 32,929).

Plainly, there’s no equivalent effort in the area of privately owned firearms. Indeed, most politicians do everything they can to make the country less safe. Recently, a Democrat senator from Arkansas named Mark Pryor ran a TV ad against the gun-control campaign funded by NY mayor Michael Bloomberg – one of the few politicians to stand up to the NRA lobby – explaining why he was against enhanced background checks on gun owners yet was committed to “finding real solutions to violence”.

About their own safety, Americans often have an unusual ability to hold two utterly opposed ideas in their heads simultaneously. That is the only way to explain the past decade, in which the fear of terror has cost the country hundreds of billions of dollars in wars, surveillance and intelligence programmes and homeland security. Ten years after 9/11, homeland security spending had doubled to $69bn. The total bill since the attacks is more than $649bn.

One more figure. There have been fewer than 20 terror-related deaths on American soil since 9/11 and about 364,000 deaths caused by privately owned firearms. If any European nation had such a record and persisted in addressing only the first figure, while ignoring the second, you can bet your last pound that the State Department would be warning against travel to that country and no American would set foot in it without body armour.

But no nation sees itself as outsiders do. Half the country is sane and rational while the other half simply doesn’t grasp the inconsistencies and historic lunacy of its position, which springs from the second amendment right to keep and bear arms, and is derived from English common law and our 1689 Bill of Rights. We dispensed with these rights long ago, but American gun owners cleave to them with the same tenacity with which previous generations fought to preserve slavery. Astonishingly, when owning a gun is not about ludicrous macho fantasy, it is mostly seen as a matter of personal safety, like the airbag in the new Ford pick-up or avoiding secondary smoke, despite conclusive evidence that people become less safe as gun ownership rises.

Last week, I happened to be in New York for the 9/11 anniversary: it occurs to me now that the city that suffered most dreadfully in the attacks and has the greatest reason for jumpiness is also among the places where you find the most sense on the gun issue in America. New Yorkers understand that fear breeds peril and that, regardless of tragedies such as Sandy Hook and the DC navy yard, the NRA, the gun manufacturers, conservative-inclined politicians and parts of the media will continue to advocate a right that, at base, is as archaic as a witch trial.

Talking to American friends, I always sense a kind of despair that the gun lobby is too powerful to challenge and that nothing will ever change. The same resignation was evident in President Obama’s rather lifeless reaction to the Washington shooting last week. There is absolutely nothing he can do, which underscores the fact that America is in a jam and that international pressure may be one way of reducing the slaughter over the next generation. This has reached the point where it has ceased to be a domestic issue. The world cannot stand idly by.

(Here’s another perspective – intervention other than legislation: viewing gun crime as a public health problem)

More: A year in mass shootings


What’s new in Big Brother Technology?

Does an increase in surveillance correlate with safety in cities?

Concepts: Orwellian societies, Big Brother (from George Orwell’s 1984), Surveillance, Rights/Freedoms, control, power, dehumanisation

Using surveillance to make cities safer

Thursday, 14 Nov 2013 | 7:41 PM ET

In the final episode of our theme week on ‘Innovation Cities,’ Tom Mackenzie takes a look at the new technology designed to help make us all more secure.

Singapore may already have one of the lowest crime rates in the world, but this has not stopped authorities there from pushing for further innovations in security. Its one-year Safe City pilot program is testing out a range of advanced technologies in the hope of improving public safety.

Singapore’s government has worked with Accenture, the management and technology consultancy, using video analytics to help stop crime. Facial recognition technology is used on streets to identify suspects, suspected gang members, and those on wanted lists.

(Read more: Minority report: Predicting where to put your policeman)

“We’re still in the early stages of that program, but we’re hoping that there will be results showing that this technology can make a real difference to public safety in a more cost-effective way, but also in a more effective way,” Ger Daly, Global Managing Director of Defense and Public Safety, Accenture, said in a report for CNBC’s Innovation Cities.

As well as using facial recognition software, the technology developed by Accenture can scan the way we move and assess whether a crime is being committed. “Now the technology can even detect patterns, so for example people fighting or break-ins at shops,” Daly added.

“It can pick up that behavior just from the shapes and the movement of the people and generate an alert even if nobody is watching that television feed.”

(Read more: Basque Country reaches out to the elderly)

For some, the use of this kind of technology conjures visions of Orwell’s Big Brother, in which our every movement and action is surveyed by the state. But Daly argues this kind of technology is already in use. “Biometric data has become a very important part of identifying people, it can be your fingerprints, it can be your iris, but it could also be your face,” he said.

“The technology is there, today…to really uniquely identify you and me and other people just by the patterns and shape of our face,” he added. “What maybe people don’t realize is that they’re actually using it already today. All new passports have… the chip symbol. It has your personal information on it, same as what’s printed on the passport. But it also has your face as a biometric identifier.”

Tracking illegal activity and criminals using facial recognition is one thing. Being alerted to offences by sounds is another. ShotSpotter, a U.S. company, has developed innovative acoustic surveillance technology that can detect gunfire and alert authorities to its location.


(Read more: Cities take some decongestant)

In 2011, the Minneapolis Police Department used ShotSpotter to great effect. “The system that we have of integrating our public safety cameras with ShotSpotter… [was] able to help us identify and catch the suspects in a homicide,” Commander Scott Gerlicher said this week.

“When we had a drive-by shooting at a local convenience store, the ShotSpotter audio system captured the shots and our camera, which happened to be located at the corner where this incident occurred, also captured videotape of the vehicle actually doing the shooting,” he added.

With rapid advancements in security technology, are we nearing a future when police officers on the ground become redundant? “Technology has its limitations and it’s not meant to replace police officers,” Gerlicher said.

“But I think it’s been proven time and time again that technology, if used appropriately, can greatly improve the effectiveness and efficiency of delivering police services to a community.”

 


What’s ahead for 2014?

The Atlantic’s Future Trend series is a pretty good read – the only thing that remains constant is the elusive effect tried-and-tested

http://www.theatlantic.com/special-report/2014-preview/


How People in Muslim Countries Think Women Should Dress

Concepts/Issues: Cultural perceptions, discrimination/prejudice, rights, gender equality, freedoms.

One of the region’s most liberal societies prefers one of the more conservative head coverings
JAN 9 2014, 10:14 AM ET (The Atlantic) 
How respondents in various countries said women should dress. (Pew Research Center)

Wearing some form of head covering in public is an important sign of Islamic identity in many Muslim-majority countries, but there is considerable variation in the extent to which women are expected (and sometimes mandated) to cover up.

A recent Pew report, based on a survey conducted by the University of Michigan’s Institute for Social Research from 2011 to 2013 in seven majority-Muslim nations, reveals just how widely opinions about female attire differ in the region.

The researchers asked the respondents in each country, “Which one of these women is dressed most appropriately for public places?” while showing them a panel of six images of women in different styles of dress:

In the full paper, the study’s authors explain that “style #1 is en vogue in Afghanistan; #2 is popular among both conservatives and fundamentalists in Saudi Arabia and other Persian Gulf Arab countries; #3 is the style vigorously promoted by Shi’i fundamentalism and conservatives in Iran, Iraq, and Lebanon; #4 and #5 are considered most appropriate by modern Muslim women in Iran and Turkey; and #6 is preferred by secular women in the region.”

The fourth style, a white hijab that fully covers the hair, ears, and neck, was the most popular across all of the nations on average, while a fully uncovered look (#6) was only embraced among the comparatively liberal Lebanese.

The authors also asked participants if women should be able to choose how they dress, and majorities in only two countries—Turkey and Tunisia—agreed.

A country’s economic development, it seems, had little correlation with preferences for a less-conservative veil. One of the richest countries of the lot, Saudi Arabia, also had the most people saying they preferred the black niqab, which covers all of the face except the eyes.

Instead, the authors found that dress preferences tracked most strongly with each country’s level of gender equality and social freedoms.

That makes Tunisia’s preference for a relatively conservative hijab particularly interesting, since Tunisians hold otherwise relatively liberal values: The country showed tepid support for an Islamic government, it had the most respondents who were supportive of a woman’s right to dress as she wishes, and it also had the largest percentage of people disagreeing with the idea that university education is more important for boys than for girls.

And while respondents in all of the countries rated their own country as more moral than the U.S., Tunisians were the most likely to say they’d want Americans as neighbors.

(Chart: Mevs.org)

At the very least, the survey shows that there’s more to wearing a veil than conservative Islamic values.

Legality and effectiveness of extensive surveillance?

The Al Qaeda Switchboard – The New Yorker, January 13, 2014

Concepts: Privacy, control, rights, duties, responsibilities

Background: Surveillance, post 9/11 world, 2013 NSA/Edward Snowden leaks 

Edward Snowden has started a critical debate about the legality and the effectiveness of the N.S.A.’s practice of collecting unlimited records of telephone calls made to, from, and within the United States. Last month, two federal judges came to opposing conclusions about these issues. On December 16th, Judge Richard J. Leon, in Washington, D.C., ruled that the indiscriminate hoarding violates the Fourth Amendment right to privacy and its prohibition of unreasonable searches. Two weeks later, in New York, Judge William H. Pauley III ruled that the metadata-collection program was lawful and effective.

Judge Pauley invoked the example of Khalid al-Mihdhar, a Saudi jihadist who worked for Al Qaeda. On 9/11, he was one of the five hijackers of American Airlines Flight 77, which crashed into the Pentagon. In early 2000, Mihdhar made seven calls from San Diego to an Al Qaeda safe house in Yemen. According to Pauley, the N.S.A. intercepted the calls, but couldn’t identify where Mihdhar was calling from. Relying on testimony by Robert Mueller, the former director of the F.B.I., Pauley concluded that metadata collection could have allowed the bureau to discover that the calls were being made from the U.S., in which case the bureau could have stopped 9/11.

If he is right, advocates of extensive monitoring by the government have a strong case. But the Mihdhar calls tell a different story about why the bureau failed to prevent the catastrophe. The C.I.A. withheld crucial intelligence from the F.B.I., which has the ultimate authority to investigate terrorism in the U.S. and attacks on Americans abroad. Continue reading

Self-Defeating Soft Power – Revisiting the ghost of Yasukuni

Only one step could have made conditions worse among Japan, China, and South Korea, with spillover effects on America. That is the step Japan’s prime minister has just taken.

 DEC 25 2013, 8:56 PM ET, The Atlantic

Main hall of Yasukuni Shrine, via Wikipedia.                 

At first I didn’t believe the news this evening that Japanese prime minister Shinzo Abe had visited Yasukuni Shrine in Tokyo. I didn’t believe it, because such a move would be guaranteed to make a delicate situation in East Asia far, far worse. So Abe wouldn’t actually do it, right?

It turns out that he has. For a Japanese leader to visit Yasukuni, in the midst of tensions with China, is not quite equivalent to a German chancellor visiting Auschwitz or Buchenwald in the midst of some disagreement with Israel. Or a white American politician visiting some lynching site knowing that the NAACP is watching. But it’s close. Continue reading

Writers, technology and the future


These are hard times for those who live by the pen. But technology will not decide their fate. The future of writers—and the articles, novels, and nonfiction books they create—ultimately rests with those who read them.

Writing for a living is a unique profession. It’s also a relatively young one, dating essentially from the 18th century; the literary historian Alvin Kernan has called Samuel Johnson’s 1755 letter to Lord Chesterfield, in which Johnson proudly declared his independence of aristocratic patronage, “the Magna Carta of the modern author.” There’s a kaleidoscope of genres and a scale of incomes from effectively subminimum wages to seven figures. Most of all, writing is a profession that millions of people would like to join, at least part-time. To the alarm of critics such as the essayist Joseph Epstein, one survey revealed that more than 80 percent of Americans believe they have a book in them.

Today, many worry that technology, an ally of authorship since 19th-century innovations slashed the cost of printing, may no longer be so healthy for Samuel Johnson’s ideal of writing supported by the purchases of a growing literate public. Fifty years ago, almost a generation before the introduction of personal computing, the prospects for authorship looked bright. The New York Times reported in 1966 that publishing executives were concerned that their industry’s profitability might make them the target of hostile corporate takeovers. The next year, CBS paid a premium price of $280 million in a friendly acquisition of the venerable imprint Holt, Rinehart, and Winston. IBM and RCA had already bought into the burgeoning publishing industry, believing that the growth of college enrollments promised an expansion of the book market.

The Great Society era seemed a bonanza for publishers and authors, the vanguard of the new “knowledge workers” celebrated by the popular management guru Peter Drucker. Trade book publishers saw revenues grow 10 to 12 percent annually in those golden years, including an 18 percent jump in 1966 alone. Textbook publishers did even better. Books of all kinds were in high demand.

Sadly, the idyll was short lived. In 1969, when President Richard Nixon called for a large increase in federal support for the arts and humanities, he noted that many cultural institutions found themselves in “acute financial crisis.” By 1971, publishers were struggling with inflation and stagnant markets. Not only was the Great Society’s plan for leveling upward in trouble; the New Frontier’s notion of diffusing high culture downward to the masses was also losing ground. Campus protests and countercultural lifestyles had alienated many in the middle class from the universities and what they represented. It did not help that the style of youthful rebellion had changed, with early activists such as Mario Savio, leader of the mid-1960s Free Speech Movement at the University of California, Berkeley, and a serious graduate student who went on to a physics scholarship at Oxford University, giving way to the likes of the Yippie pranksters Abbie Hoffman (author of Steal This Book) and Jerry Rubin.

Today, publishing is the weakest link in the old media-entertainment-education nexus. Rupert Murdoch’s giant News Corporation is spinning off its lagging newspaper and book publishing operations from its Fox entertainment business. Houghton Mifflin Harcourt, a venerable book publisher, filed for Chapter 11 bankruptcy earlier this year, laden with $3 billion in debts.

There are many other gloomy signs for the future of reading and writing. The plight of newspapers is well known, summed up in the Pew Research Center’s report State of the News Media 2012: The papers’ print advertising revenues dropped by $2.1 billion in 2011, while online revenues increased by only $207 million—a 10:1 differential, even larger than in the previous year. Magazines have also been losing circulation and advertising, reaching what New York Times media correspondent David Carr has called, with some exaggeration, “the edge of the cliff.”
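The “10:1 differential” follows directly from the two dollar figures quoted above; a minimal arithmetic check (using only the amounts as reported from the Pew study, which are not verified independently here):

```python
# Figures quoted above from Pew's State of the News Media 2012.
print_ad_revenue_lost_2011 = 2_100_000_000    # decline in newspaper print advertising
online_ad_revenue_gained_2011 = 207_000_000   # gain in newspaper online advertising

ratio = print_ad_revenue_lost_2011 / online_ad_revenue_gained_2011
print(round(ratio, 1))  # ~10.1, i.e. roughly a 10:1 differential
```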

Most authors consider retail bookstores a cornerstone of their effort to build an audience for their books—places where the personal recommendations of staff members and readers’ accidental discoveries can work wonders. (John Kennedy Jr., who once startled me with a telephone call inviting me to write for his magazine George, explained that he had come across my book while looking for another in a store.) But bricks-and-mortar booksellers are reeling. The bankruptcy of the Borders chain last year shuttered almost 400 stores. The other major chain, Barnes & Noble, is struggling. The news is worse among independently owned bookstores. Their leading trade group lost more than half its membership between 1993 and 2008.

No wonder even some of the most commercially successful authors see the heavens darkening. In February, the popular novelist Jodi Picoult (50 million copies in print) told a reporter from The Times of London that the trend toward electronic publishing, with its lower royalties, has been reducing her income. “If you sell the same number of books now as you did a year ago you will make a third less money,” she said. “In America my sales are now just shy of 50-50 print to e-books this year.”

Some detractors of the publishing industry, such as the author and marketing specialist Seth Godin, foresee a totally new world: “Who said you have a right to cash money from writing? . . . .  The future is going to be filled with amateurs, and the truly talented and persistent will make a great living. But the days of journeyman writers who make a good living by the word [is] over.”

Such dire predictions are hypnotic. Cultural pessimism was a growth industry even in what we think of as print’s golden age a century and more ago, when a burgeoning literate public was not distracted by radio or Hollywood, let alone television. The taste for gloom is so strong that it even brings old books back to life. The philosopher Allan Bloom’s Closing of the American Mind (1987) became a surprise million-copy popular hit and was recently reissued in a 25th-anniversary edition. The critic Sven Birkerts’s Gutenberg Elegies (1995) has likewise been reissued. These and other gloomy tomes have recently been one-upped (one-downed?) in curmudgeonly provocation by the science writer and cultural critic Nicholas Carr’s The Shallows and the English professor Mark Bauerlein’s Dumbest Generation. No wonder some psychological researchers believe that negativity bias is an innate feature of the human mind.

Yet despair is not universal. When I spoke with him by telephone, David Fenza, executive director of the Association of Writers and Writing Programs, the largest academic organization in creative writing, argued that publishing is more vigorous, and more open to a diversity of voices, than ever. He rejected the idea that it’s harder for writers to succeed, observing that greater numbers of prose writers than ever before are able to sell 100,000 copies of a book, and greater numbers of poets to sell 10,000 copies. In many universities, the creative writing major has become an alternative to pursuit of the conventional English degree, attracting many students who love reading but not necessarily the latest hyper-specialized scholarly trends in the humanities.

That is only one reason to hope that a more vigorous and participatory culture is arising among at least some young people. The short story, which once flourished in popular magazines, has found a modest revival in One Story, a nonprofit print magazine that now has 15,000 subscribers and will soon be complemented by a new publication for teenagers. Book industry statistics also argue against cultural despair. American book publishers reported small but notable gains in the number of books sold (print and digital) and in net revenue during the difficult years from 2008 to 2010, according to The New York Times. (The numbers have since remained essentially flat.) Children’s books have been a particular bright spot, thanks partly to continuing enthusiasm for the Harry Potter stories. Of course, pinched revenues are disappointing, and newspaper and magazine closings hurt writers and readers. But is today’s hyperangst justified, especially at a time when many industries would be happy to be in steady state? After all, as the Atlantic blogger Derek Thompson points out, the revolution in digital music slashed recording industry revenues by 57 percent in just a 10-year stretch after 1999.

However, there are two sets of pressures that rightly concern authors: the squeeze and the crush. The squeeze is the result of technology’s dilution of attention time and spending power; the crush is the product of overreach by oligarchic intermediaries and insurgent information consumers.

The squeeze, a growing supply of words competing for limited amounts of reader time, is partly a reflection of the popularity of writing as a career. Technological change has lowered what economists call the “barriers to entry” in writing. This phenomenon helped push the number of books published in the United States from 240,000 in 2003 to more than 347,000 in 2011. Technology has also allowed the already prolific to become more so. The invention of the typewriter in the 1860s made editors’ lives easier, but hardly changed the pace of writing itself. (Think of the literary output of Dickens and Thackeray, or the nearly 20,000 letters Thomas Jefferson is known to have written.) Computers have been a different story, as the experience of the masterly British historian Roy Porter shows. “The steady stream of books,” the Guardian said in Porter’s 2002 obituary, “became an avalanche once he had mastered the computer.”

There is also more pressure on established writers and editors to generate content. Newspaper staff must now blog, tweet, and write Facebook posts in addition to doing their primary jobs, an existence Dean Starkman of The Columbia Journalism Review characterizes as a kind of journalistic hamster wheel. The quest for Web traffic, he argues, has been diverting precious resources from the core mission of journalism. In 2010, Demand Media, operating sites such as eHow.com and employing thousands of minimally paid freelancers, published 4,500 articles per day, mainly on practical topics from health and careers to home repair, and drew more Web traffic than The New York Times. The early assumption that high-quality professional writing would prevail on the Web has proved too optimistic.

If the squeeze is putting pressure on writers’ income, the crush is threatening it more radically. The crush is not the direct result of electronic publishing, which is not inherently good or bad for writing as a business. Indirectly, though, the electronic book brings with it two opposing but equally disturbing trends, monopoly and piracy.

Today, the challenge to writers is not so much oligopoly as the prospect of hegemony by a single company, Amazon. Until recently, authors could regard it as one of their best friends. It has let large and small publishers alike find readers, especially for backlist titles and other slow-selling books few retailers would stock. It has encouraged discussion of books among its customers, let authors set up personal pages on its site, and made it easier for customers to discover other books by favorite authors.

With the advent of Amazon’s aggressively promoted Kindle readers, the picture has darkened. The long-tailed, friendly underdog has been turning alpha Rottweiler. Unlike vendors of competing readers and tablets, including Apple and Barnes & Noble, Amazon wants to do more than sell platforms for reading the electronic content it sells. It appears to be promoting self-publication through its site as an alternative to—indeed, a replacement for—conventional publishing. (Seth Godin briefly worked with Amazon in one such effort to supplant traditional publishers.) In his annual letter to shareholders this year, Amazon CEO and founder Jeff Bezos argued that “even well-meaning gatekeepers slow innovation,” a jab at publishers. He was surely cheered when the U.S. Department of Justice filed an antitrust suit charging Apple and five major publishers with colluding to keep the prices of e-books high and prevent price cutting competition between Amazon and its rivals. (Three of the publishers recently settled the claim.) Critics of Amazon argue that its ability to market bestsellers at a loss threatens publishers’ ability to promote new authors. They fear that the company will make nightmares of downward-spiraling compensation come true. Senator Charles Schumer (D-N.Y.) has criticized the Justice Department, arguing that “the suit could wipe out the publishing industry as we know it.”

The novelist Scott Turow, president of the Authors Guild, acknowledging that Amazon has been good for him personally and calling the Kindle “a great innovation,” has nonetheless warned, “It’s only rational to fear what they’re going to do with this accumulation of power.” Steve Wasserman, writing in The Nation, cites what Amazon has already done: When the 500-member Independent Publishers Group refused to accept its demand for deeper discounts on IPG members’ products, it deleted almost 5,000 of the publishers’ digital titles from its site. One independent publisher in Texas declared what many publishers and writers have come to believe: “Amazon seemingly wants to kill off the distributors, then kill off the independent publishers and bookstores, and become the only link between the reader and the author.” At that point, writers could be almost completely at its mercy.

Piracy is the inverse of monopoly. Though there is disagreement about its extent, illegal e-book sharing hasn’t reached the levels of theft that plague film studios and music labels. For some writers, the real threat is not piracy itself but pressure to reduce prices to discourage illegal copying. As the novelist Ewan Morrison has suggested, “In every digital industry the attempt to combat piracy has led to a massive reduction in cover price: the slippery slope towards free digital content.”

Frightening as they are, the squeeze and the crush do not portend an unavoidably dark future. Previous economic and technological crises have been crucibles of innovation, spurring the emergence of new genres and drawing in new writers. Edgar Allan Poe’s puzzle-based mystery stories such as “The Murders in the Rue Morgue” and “The Gold-Bug” were, as the Poe scholar Terence Whalen has argued, a commercially minded response to the Panic of 1837 that introduced the scientific detective to literature. The Panic of 1893 hurt traditional subscription-based magazines but gave rise to a new breed of inexpensive, mass-circulation counterparts that placed heavier reliance on advertisers for revenues. Some of the greatest writing successes of the 1930s were businessmen who had been bloodied by the Crash of 1929: Yip Harburg, who wrote the lyrics of “Brother, Can You Spare a Dime?” and the songs in The Wizard of Oz, and Benjamin Graham, who distilled the hard financial lessons he had learned in Security Analysis, now considered a canonical work on “value” investing.

It’s no less true for being a cliché that problems are opportunities. The travails of newspapers are due in part to the public’s impatience with chronic formulaic similarity. As the historian and director of the Harvard University Library, Robert Darnton, a onetime police reporter, observed in a classic 1975 ethnographic study of journalists’ tribal ways, “Nothing could be less competitive than a group of reporters on the same story.” Technology has exposed mercilessly what critics and insiders have long acknowledged.

The structural problems of journalism leave room for innovation, as they did more than a century ago, when the 38-year-old Chattanooga newspaper publishing prodigy Adolph S. Ochs, nearly bankrupt after the Panic of 1893, somehow found backers for a takeover of the struggling New York Times, turning it into the first elite newspaper priced for the masses. Are there new Ochses in our midst? The greatest disciple of Benjamin Graham, Warren Buffett, has been acquiring newspapers even as Rupert Murdoch has been spinning them off.

Like newspapers, the print encyclopedia business had a chronic problem, in its case the impossibility of keeping many entries up to date. Yet the nemesis of commercial encyclopedias, Wikipedia, has its own structural limitations. Open editing may correct errors and pile up references and images, but it’s not suited to creating the kind of intellectual synthesis that the classic 11th edition of the Encyclopedia Britannica achieved more than a century ago. Could a 21st-century counterpart of that landmark work be the future of the encyclopedia?

What of the average writer? Nobody ever aspired to be an average writer. Apart from technical and contract writing, the profession has always been what economists call a tournament, a competitive environment with only a few big winners, whose successes motivate the rest. It’s very possible that the solid middle of the profession will erode further, and a few favored authors will pull farther ahead. The median may decline, but the glittering prizes will remain.

The future depends more on writers themselves than on technology. If they accept the proletarianization thesis, it will become a self-fulfilling prophecy. If they can show how copyright and good compensation are in the long-term interest of the reading public, if they can mobilize readers to help defeat would-be monopolists of various kinds, if they can use social media to enhance relations with readers, there will still be many disappointed writers, but there will also be new kinds of opportunity. Optimism may fail; pessimism can’t succeed. As the sociologist Erving Goffman, whose first book, The Presentation of Self in Everyday Life (1959), has sold 500,000 copies, put it when his Marxist colleague Alvin Gouldner complained of being treated like a commodity by the publisher they shared: There’s nothing wrong with being treated like a commodity as long as you’re an expensive commodity.

About the Author

Edward Tenner, author of Our Own Devices: How Technology Remakes Humanity (2003) and Why Things Bite Back: Technology and the Revenge of Unintended Consequences (1996), is a research affiliate of the Princeton Center for Arts and Cultural Policy Studies and a WQ