Reinventing the Newsroom

Why the Spat Over Murdoch, Bing and Google Doesn’t Matter

Posted in Communities, Social Media, Social Search, Twitter by reinventingthenewsroom on November 25, 2009

I tried to resist the thought, but I couldn’t talk myself out of it: None of this furor over Bing and Google and Rupert Murdoch will matter very much, or for very long.

An astonishing number of pixels have been spilled over social media, with the usual digital mix of interesting insights and wild claims of revolution. But even amid the hype, what’s definitely true is that social media is remaking how we live our lives online. And in some vital ways, social media is back-to-the-future Web stuff, fulfilling the long-deferred promise of Web publishing and search.

The idea that the Web makes everybody a publisher has been around for more than a decade, but for a long time the technology platforms didn’t support the possibilities well enough for Web publishing to be a truly democratic phenomenon. Sure, you could be an online diarist or cataloger or critic in 1995, but practically speaking you needed coding chops that were beyond most people. Blogging changed that, simplifying the process of creating and maintaining Web pages so that a much larger group of people could become publishers. But even then, setting up a blog was a technological bridge too far for most people — being a Web publisher was still a relatively techie endeavor. MySpace and Facebook and other social-media platforms were what finally married the technology with its possibilities. Setting up a social-media account is dead easy, as is answering the question “What’s on your mind?” with a bit of typing and clicking SHARE. Finally, the idea that we can all be publishers sounds less like rhetoric and more like a description of reality.

With social media, we’re not just publishers — we’re sharers. And this is back to the future, too. Google’s search algorithms were created to replicate something that literally dates back to the Stone Age: our finely honed sense of trust and social relationships. All things considered, even socially inept people are born with really good algorithms for figuring out social rank, influence and trustworthiness. Google did a remarkably clever job copying those — and it has earned billions upon billions from that foundation — but Google was needed because in the early days of the Web people’s natural social structures didn’t scale. There was too little participation for the Web to be truly representative, and truly participating — by creating information, assessing it and sharing it — was too technically difficult. Most of our meaningful social interactions took place in settings that were simpler — email, then IM and text-messaging. But that was primarily a one-to-one world that stood apart from the Web, which was a vast sea of information crying out for order. Few people had the technical chops to tackle that ordering (recall that Yahoo supposedly stands for Yet Another Hierarchical Officious Oracle), the task was too big for humans to handle anyway, and the results addressed the world in its vast entirety, not the fairly local world with which we naturally engage. Seen from this perspective, a lot of the problems and shortcomings of the Web feel like variations on this scaling problem: For years Google was a great tool for discovering weather patterns in Mongolia but a terrible way to find decent take-out within a couple of miles.
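
To make that copying concrete: the canonical instance is PageRank, the algorithm Google was built on, which treats a hyperlink as a vote of trust and weights each vote by the standing of the page casting it. In rough notation (this is the textbook formula, not Google’s production system):

    PR(p) = (1 - d)/N + d · Σ PR(q)/L(q), summed over the pages q that link to p

where d is a damping factor (classically about 0.85), N is the number of pages and L(q) is the number of outbound links on page q. A link from a trusted page confers trust, just as a recommendation from a trusted friend does: a machine-scale stand-in for social signals the early Web couldn’t gather directly.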

That’s now changing. The Web is not, of course, truly representative yet — too much of the world is still left out because of economic inequity, illiteracy, the repression of women and other ills. But within vast swathes of societies such as ours, we’re beginning to at least be able to make the claim that it is, and to glimpse a Web that’s accessible from everywhere, not just desks. (Which taken together will really just be the starting gun for what the Web will become — it’s still so early!)

And with participation in social media increasingly becoming the norm, we are reclaiming some of the old ways we naturally sort ourselves out into peer groups and social hierarchies. The nature of these peer groups is changing, of course — we seek out like-minded folks world-wide and build communities of interest instead of geography, we maintain weak ties instead of severing connections, and we leverage friend-of-a-friend situations in ways that were once reserved for people with a natural gift for social connections. But the trend is to return to something much closer to the social ties for which we are hard-wired.

This is why search is changing. With the ability to create strong peer groups online, and to create and share within those groups, we increasingly can use our own innate algorithms for trust and influence instead of turning to Google’s replicas of them. And we are discovering — or, really, rediscovering — that we have an unconscious knack for assembling peer groups that are as good as or better than industrial search at delivering a reliable “feed” of news, covering not only the subjects we’re most interested in but also the subjects that cross peer-group lines. Peer groups chop the Web down to size, and make the old human ways of finding and exchanging information scalable again. If we have them, we have much less need for industrial search.

My A-Ha moment with Twitter was realizing that without even meaning to, by following people on Twitter I’d created a feed of information that was an excellent substitute not just for the sites I habitually visit about various subjects, but also for the aggregated home page I maintain for general news. I now routinely get my news from Twitter or Facebook, and reflexively turn to Twitter when news breaks. The combined efforts of all those people I follow add up to something that’s faster than news Web sites, covers more territory and is as reliable as RSS feeds and mechanized aggregation, if not more so. The college kid who told a focus group that “if news is that important, it will find me” wasn’t being breezy or lazy — he was describing what social media has increasingly made reality.

That same effect is being seen elsewhere, as people replace algorithms. “Do what you do best and link to the rest” is a strategy based around people, not search — it would work perfectly well without Google or Bing. Curation is about people, not search. Done right, aggregation is about people, not search. Email This and Digg and Share on Facebook are about people, not search.

This isn’t an unalloyed good — whether they’re centered on common interests or geography, our peer groups encourage us to create echo chambers of common creed and aligned opinion. We are correct to see this as a drawback, and to wonder which thin slice of news will find us — and if it will be news at all. But our dislike of the idea isn’t enough to prevent it from happening. We will vote — consciously or not, for good or for ill — for social search over mechanized search. It’s already starting to happen. And that means Rupert Murdoch and Dean Singleton and the AP and Microsoft and Google and everybody else are staking out positions in the last war. Theirs is a sideshow and a distraction. Whether we realize it or not, we’re already moving on.

The news will find me, because my peers will find it. It doesn’t matter whether the news gets indexed by Google or Bing or something else. My peers will find it, either through one of those search engines or more likely without visiting either. Murdoch may squeeze some millions out of Microsoft and wound Google and spark a million arguments about the civic value of how to index information, but none of that is going to make any difference to me. The news will find me.

The Hand-to-Forehead Sound

Posted in Cultural Change, Digital Experiments by reinventingthenewsroom on November 19, 2009

I’ve always thought that one way you know you’re learning is you’re surprised a lot. If I’m right about that, I’ve learned quite a bit in the last few weeks.

There was the moment I got Twitter lists by seeing what the Texas Tribune was up to with Tweetwire — a moment that was equal parts “A-ha!” and whatever sound your hand makes when it hits your forehead in frustration. I’d gotten used to Twitter as a way of delivering news and as a way of promoting a brand (whether personal or corporate), but it hadn’t occurred to me that a news organization could use Twitter lists as simple but powerful aggregation, putting together a mix of sources from its own ranks, other news organizations, bloggers, readers and public figures/organizations and sharing that as a real-time news feed. That was exactly what I did with Twitter as a user, but I missed the simple idea that a news organization could do it, too. Hand to forehead.

Then there was the moment the light went on while I was reading the Abernathy/Foster report, with their note that print papers defined community (and prepared content accordingly) based largely on geographical and political boundaries, while the Web’s aggregators define the boundaries by special interests. That was interesting; soon after that came their advice that newspapers rebuild around specialized audiences and communities, including hyperlocal. It was that last part that really made me sit up. I’d been talking up hyperlocal because I’m keenly interested in the increasing intersection of the global Web with real-time information and locations, and in what newspapers can do to reclaim a more vital role in civic life. But in focusing on hyperlocal so specifically, I’d lost sight of the fact that it’s a kind of specialized audience. Hyperlocal’s very important — we all live somewhere — but it’s not necessarily the only way to build community, and in some situations it might not be the best way. Hand to forehead once again.

I think this is why I had such a strong reaction to the dust-up over the Columbia Journalism Review’s critique of the Spot.Us garbage-patch story: I thought some of the early criticism was defensive and dogmatic. That’s never good, and it’s particularly unfortunate given how new all the digital-journalism initiatives are. We can’t be closing ranks behind the merits of alternating current or direct current when we’re still just trying to keep a fragile carbon filament lit. We’re experimenting, and that means ruthlessly poking and prodding and questioning and critiquing, iterating and borrowing and discarding. Strong opinions are productive and essential; orthodoxies are counterproductive and distracting.

I’ve learned an enormous amount, and it’s embarrassing to look back and realize how stuck I was in a certain well-worn groove when I wrote something, or how I didn’t see you could do something slightly differently at the start and get a very different result. But given the tumult all around us, it would be worse to look back and find my opinions are exactly the same as they were when I started writing this blog, or three months ago, or even a month ago. So I hope I keep hearing that hand-to-forehead sound, even if the slap sometimes hurts.

* * *

I conduct periodic Web chats with my friend David Baker of EidosMedia. Here’s our latest back-and-forth, looking at Web metrics and publishers’ changing expectations about audiences and scale.


An Example of Searching for the News Decoder Ring

Posted in Cultural Change by reinventingthenewsroom on November 18, 2009

In discussing why Wikipedia was beating newspapers as an information source when news breaks, I used the example of health care to illustrate how upside-down storytelling leaves readers struggling to put new developments into context, something Wikipedia handles much better by giving you the basics of the narrative. (Though as David Gerard pointed out in the comments, Wikipedia does draw on “a certain amount” of inverted pyramid — “The first sentence should be good standalone, the first paragraph should be good standalone, the lead section should be good standalone. Then you can get into a structured article. That way you’ve got something useful for everyone who comes by.”)

Here’s a more-specific example of what’s so frustrating, from this morning’s New York Times. The news is that a federal appeals court panel upheld the conviction of Lynne F. Stewart, a defense lawyer found guilty in 2005 of assisting terrorism by smuggling information from an imprisoned client, Sheik Omar Abdel Rahman, to his violent followers in Egypt. You can read it here.

I remembered this case, partly because Ms. Stewart is from my Brooklyn neighborhood, but mostly because of the controversy over what she’d done and whether she’d done something clearly wrong, or run afoul of post-9/11 terrorism fears. That was all I remembered. The news that her conviction had been upheld wasn’t particularly interesting, but I was interested in revisiting what exactly she’d done, and what the arguments were on both sides.

After reading the Times story, I still didn’t know.

The Times story is 23 paragraphs long. Here’s what those paragraphs contain (my apologies for where my frustration shows through):

  1. News — Conviction is upheld, general reminder of who Stewart is, legalese that just confuses me (what’s a federal appeals court panel?), and a location (“in Manhattan”) that I don’t care about.
  2. News — Stewart’s bond is ordered revoked and she must begin serving her sentence. More baffling legalese — it’s a three-judge panel of the United States Court of Appeals for the Second Circuit. As a reader I tripped over that and still don’t understand what it means.
  3. News — Trial judge must consider whether she deserves a longer sentence.
  4. Reaction from Stewart.
  5. News — Trial judge orders Stewart and co-defendant to prepare to surrender when their bond is revoked.
  6. What’s Next — It’s not clear when they’ll have to do that.
  7. Analysis/Context — The judge who wrote the ruling rejected her claim. I’m told the ruling is 125 pages. (There’s no link to it.) Her client is named. I’m told she “passed messages for him” and that she “has denied seeking to incite violence among his militant followers.”
  8. Quote from judge.
  9. More reaction from Stewart, making reference to prisoners at Guantanamo.
  10. Context — A note that Guantanamo detainees will be tried in New York.
  11. More Stewart reaction.
  12. Ditto.
  13. Ditto.
  14. What’s Next — Her lawyer says they’ll keep fighting. Spokeswoman for other side has no comment.
  15. Context — I’m told that prosecutors charged Stewart with conspiring “with two others to break strict rules that barred Mr. Abdel Rahman … from communicating with outsiders.”
  16. Context — I’m told that prosecutors charged Stewart, a translator and a third man with helping the sheik pass messages to the Islamic Group, an Egyptian terrorist organization.
  17. News — Two other judges joined the ruling.
  18. Reaction from translator.
  19. No comment from lawyer for third man.
  20. Quotes by judge from ruling explaining decision.
  21. More quotes from judge.
  22. Ditto.
  23. Quote from another judge who partially dissented from ruling.

The upheld conviction and the imminent revocation of Stewart’s bond are the news, and the context for that news is what the judges on the panel said. I get that, and while I’m not qualified to judge, I’ll assume the Times reporters did a good job with that. (Though why can’t I read the ruling?) But that’s going to be of interest to a fairly small subset of legal-minded readers. The interesting news for most readers will be what I wanted to know — what did Stewart do, and was it wrong?

I’m told that in the lead, but the description is so general that it doesn’t help me. I’m not told about it again until the seventh paragraph, which is the first time I learn who her client was. And then I get nothing until the 15th and 16th grafs, in which I learn the name of the terrorist group, and that (according to prosecutors) Stewart and two other men helped the sheik pass messages. This is what I want to know — but again, I’m only given cursory information that’s of no help to me in forming an opinion.

I know what the institutional reasons for the lack of explanation are — it’s old news, and was covered by the Times at an earlier date. Stewart’s name was hyperlinked, so I followed that to a Times topic page, hoping for at-a-glance background information on the case. This wouldn’t have eased my frustration about upside-down storytelling, but it would at least have answered my question. What I found was an automated archive of articles about Stewart — and, eventually, the explanation I’d been searching for. It was on the second page of the 25th article linked, on the third page of search results. (By the way, Wikipedia’s page for Stewart wasn’t much more help — it’s slapdash and vague, though if the case were more high-profile I’m sure it would have attracted more editors. I went to Wikipedia in frustration halfway through the Times article, when it was obvious I wouldn’t be told what I really wanted to know.)

The Times article, and the approach behind it, is broken. It’s broken for print readers who only have that day’s Times article available to them. It’s broken online, where ferreting out the information I wanted to know turned into a frustrating scavenger hunt that I stuck with only to prove a point. As news it misjudges the audience for the story and ignores what that audience wants to know, and as storytelling it’s incoherent. And this is coverage from one of the world’s best newspapers, and one of online news’ best innovators.

We can do better than this. We have to do better than this.

This Is Broken: From Game Stories to, Well, Everything

Posted in Cultural Change by reinventingthenewsroom on November 16, 2009

Update: You might be interested in the follow-up to this post: An Example of Searching for the News Decoder Ring.

Maybe I’m just getting cranky, but over the weekend and into today I’ve found myself thinking about some building blocks of journalism and thinking, “You know, this is broken.” Not broken as in “this really needs to be recast for the Web” or “some kind of digital adjunct would help here,” but broken as in “this no longer works, and we need to stop doing it.”

My latest sportswriting column for Indiana University’s National Sports Journalism Center looks at ways to reinvent game stories — the day-after accounts of sporting events that tell you who won, how they won and (hopefully) why they won. In discussing how the game story could be re-prioritized, reimagined or reinvigorated, I talked with four very smart sportswriters (Buster Olney, Joe Posnanski, Chico Harlan and Jason McIntyre), and kept in mind the opinion of a fifth, my co-columnist Dave Kindred, whose plea for game stories can be found here.

I hope I surveyed the potential alternatives fairly, but re-reading my own column this morning, I realized I’d made up my own mind on the question: The game story is broken. Its time has passed, and it is an anachronism in a world of Web-first journalism. We should stop writing them. Now. (I wish I’d come to this realization a day earlier, but sometimes you’ve got to take the journey to figure out where you’ve ended up.)

The sportswriters I talked to discussed the terrible deadline pressures of game stories — pressures that can result in the familiar, tired game-story formula of lots of play-by-play and some paint-by-numbers quotes. They discussed how game stories get in the way of old-fashioned reporting — building relationships with players and coaches and other sources, allowing for more interesting reactions and sharper analysis. Their love for the form came through loud and clear, yes — but so did their enumeration of its flaws.

The question to ask about game stories is the same question to ask about everything we do in journalism: If we were starting today, would we do this? That’s the question. Not whether we’ve spent a lot of money on the infrastructure of producing something a certain way, or whether a journalistic form is a cherished tradition, or whether it still works for a niche audience, or whether it can still be done very well by the best practitioners of the craft. All of those questions are distractions from the real business at hand.

If we were starting today, would we do this?

So: If I were starting a sports site (or a sports section on a general-news Web site), would I pay a reporter or some third-party source for a summary of yesterday’s game, knowing that today my audience is much more likely to have watched the game, can get a recap on SportsCenter once an hour during the morning, can see the highlights on demand from a team or league site, and can watch a condensed game on the iPhone?

Absolutely not.

Depending on what budget you gave me, I would pay for the best box score I could get, get a graph of win probability or some other interesting visual metric, and try to offer a slideshow of key photos and/or video highlights. But I wouldn’t run game stories. Instead, I would tell my reporters to write something that a reader who knows what happened would still want to read the next morning. I would work with my reporters to find a new starting point. Maybe that starting point is this idea from Chico Harlan, a quote that wound up on the cutting-room floor of my column: “Maybe there’s a way to interpret [game stories] not as the story about the game, but as being about the most interesting thing to happen to the team that day.”

Maybe that conclusion alone wouldn’t qualify as an enormous epiphany, but this morning I read Steve Myers’ interview with Jimmy Wales of Wikipedia, which Jay Rosen described aptly as “a lesson in how the Web works, disguised as a Q & A about topic pages and such.”

Asked if he sees Wikipedia as a news destination, Wales replied that “people do often come to Wikipedia when major news is breaking. This is not our primary intention, but of course it happens. The reason that it happens is that the traditional news organizations are not doing a good job of filling people in on background information. People come to us because we do a better job at meeting their informational needs.”

It’s a quietly devastating indictment of journalism. And Wales is absolutely right, for reasons explored very capably a couple of months back by Matt Thompson. Arrive at the latest newspaper story about, say, the health-care debate and you’ll be told what’s new at the top, then given various snippets of background that you’re supposed to use to orient yourself. Which is serviceable if you’ve been following the story (though in that case you’ll know the background and stop reading), but if you’re new you’ll be utterly lost — you’ll need, to quote Thompson, “a decoder ring, attainable only through years of reading news stories and looking for patterns”. On Wikipedia, breaking news gets put into context — and not in some upside-down format that tells you the very latest development that may or may not affect the larger narrative before it gives you the basics of that narrative so you can understand what that news means.

There are historical reasons for this upside-down storytelling in print, but it makes no sense online. The form is broken. Yet our Web newspapers have largely kept shoveling it into pixels — if you’re lucky there will be a link (if you can find it) to a topic page that’s built along Wikipedia’s lines. But odds are you already went off to Wikipedia before you saw that page.

Why didn’t we change? Journalists are masters at filtering, synthesizing and presenting information, yet we’ve spent more than a decade repurposing a 19th-century form of specialized storytelling instead of starting fresh with the possibilities of a new medium. Newspapers could have been Wikipedia, instead of being left to try and learn from it. And what are we learning? The news article is in some fundamental ways just as broken as the game story — if it weren’t, Jimmy Wales wouldn’t see a surge of traffic to Wikipedia in the wake of any big news event. We need to rethink the basics: If we were starting today, would we do this? But when will we unshackle ourselves from print and really ask the question? And at what point will the answer come too late to matter?

A follow-up to this post is here.

Digging Into the Abernathy/Foster Report

Posted in Communities, Cultural Change, Going Local by reinventingthenewsroom on November 13, 2009

The latest attempt to summarize the challenges facing newspapers and recommend a course of action is out, with the alarm bells being sounded this time by veteran media executive Penelope Muse Abernathy and former McKinsey director Richard Foster.

The study (linked as a PDF from Bill Mitchell’s overview) covers fairly familiar ground, though the writer/editor in me appreciated that it’s admirably succinct, and written with a welcome bite. (And I laughed out loud at the examination of Hindu and Judeo-Christian demises.) Certainly Abernathy and Foster find the right targets and hit them hard.

For instance, they nail the industry’s major disadvantages in the digital era:

  • the high cost of printing and distribution
  • the loss of geographically protected market dominance
  • the loss of high-margin advertising to online competitors

And their proposed plan of action seems sound as well:

  • shed legacy costs as quickly as possible
  • recreate community online in an effort to regain pricing leverage
  • build new online ad revenue streams

For me the best section of the plan is the one concerned with community, particularly how it’s defined and how it should be approached. A theme of the report is that news organizations keep using new digital tools in an effort to repurpose old models, when they ought to be reinventing things from the ground up. For instance, Abernathy and Foster note that pre-digital newspapers aggregated content and defined community largely based on geographical and political boundaries, but the new aggregators — search engines and commerce sites — do so around special interests. That simple, essential shift may be obvious to Web-business types, but I think it’s a blind spot for newspaper veterans.

Their advice: Rebuild newspapers around specialized audiences and communities (including hyperlocal), instead of continuing to try and reach a single mass audience or community. Start with niche audiences that papers are already serving. Become their aggregators, and customize stories for them — for example, instead of writing one big story about the health-care debate, write different versions tailored for those different specialty audiences. Such reinvented papers, they say, might be able to charge advertisers a premium to reach those communities, and charge customers for unique information.

An interesting point I hadn’t encountered before is that Abernathy and Foster say there’s a precedent for this — magazines responded to the threat posed by television by migrating to serve specialized niches or interest groups and charging advertisers a premium to reach them. Newspapers, on the other hand, have largely reached for eyeballs, putting themselves in competition with better aggregators such as Google.

There are some rather searing quotes in the report. Here’s one: “Unless news organizations simultaneously invest in re-imagining and re-inventing the online edition, there is no transformation of the traditional newspaper and the industry dies with its aging loyal readers, who pay an ever-increasing price to receive the ‘last’ printed copy of the newspaper.”

Ouch. And the report is nicely short on Pollyanna-ism, as this warning makes plain: “[a]n enterprising executive may accomplish all three goals … and not achieve the operating margins typical of news companies in the last quarter of the 20th century, since those profit levels were largely the result of being de facto geographic monopolies.”

Abernathy and Foster are sympathetic to companies that know they need to change, but find those changes difficult to implement. As an example of how to escape that trap, they cite Intel and its shift from making DRAMs to making microprocessors. That difficult transition was finally made, they write, when Gordon Moore and Andy Grove asked themselves a brutally simple question: “If we got kicked out and the board brought in a new CEO, what do you think he would do?”

It’s a good question. Here’s hoping it gets newspaper executives nodding, and causes them to take action.

Spot.Us, the Times, the Garbage Patch and the Critics

Posted in Digital Experiments by reinventingthenewsroom on November 12, 2009

Update: Lindsey Hoshaw has published a wise and gracious blog post about her Spot.Us story, the blog vs. the Times, and the CJR criticism. Recommended. Regarding the Times story, she writes that “I wrote what I believed the Times wanted though they never specified the type of article they expected.”

If so, that takes the Times off the hook somewhat, though I still think a potentially rich story was made very flat. Whatever the reason, that’s a shame.

Original post is below.

* * *

I’m late to this party, and something tells me I’m going to regret weighing in, but the furor over Megan Garber’s Columbia Journalism Review critique of the New York Times/Spot.Us garbage-patch story keeps bothering me, and maybe getting some thoughts about it down here will help with that.

To briefly review, on Tuesday the Times ran a story about the Pacific garbage patch written by Lindsey Hoshaw and funded in part through the Spot.Us model. That afternoon, CJR’s Garber offered a critique of the Times story, which she found disappointing. Garber’s chief criticism was that other than some color and some nice photography by Hoshaw, the Times story leveraged little of Hoshaw’s experience spending a month at sea. The idea of a garbage patch that may be twice the size of Texas is a difficult one to get your arms around, and the Times story doesn’t capture that — Garber notes that much of the reporting is of the “could-be-done-from-anywhere variety: reporting, in other words, that could have been done over the phone or via email”.

Part of Garber’s frustration is that there’s a vehicle that delivers that: Hoshaw’s own blog (linked above) delves into the trip, the garbage patch and more. It does a better job of giving you a sense of the problem, and the dropoff from it to the stolid, by-the-numbers Times take is unfortunate.

Garber offered her criticism and promptly got pilloried for it. Spot.Us founder David Cohn didn’t even read the entire article (it’s only 1,300 words) before ripping into Garber and asking how many Pulitzers she’d won. Others piled on, criticizing Garber for burying her lead, for using a “standard journalistic frame,” and all but demanding that she do a wholesale rewrite, complete with a condescending lesson about the use of strikethrough and italics. The tone of the early criticism ranged from thin-skinned and defensive to bullying and insulting.

Cooler heads have since prevailed, and as most involved have noted, the conversation is well worth having even with some bumps and bruises. But as it unfolded, it sure left a nasty taste in the mouth.

I did agree with a couple of criticisms of Garber’s critique. Her take was improved by adding a note about Hoshaw’s blog higher in the piece, though griping that it originally came “after the jump” was an oddly printy criticism — my brain doesn’t shut down if I have to click on 2 or “single page.” And her summary — “The NYT’s ‘Pacific garbage patch’ story: a Spot.us ‘deliverable’ that doesn’t quite deliver” — puts the onus on Spot.Us in a way that the critique itself does not. It’s often thus — in my columnist career I suffered far more agita as a result of headlines, summaries and subheds that were slightly off the mark than I did because of missteps in the actual reporting or writing. This stuff gets left for last and done when you’re tired, and it can undermine everything else you’ve tried to do.

But Spot.Us and its partisans seemed to want to have it both ways, starting out by claiming the story for the group (Cohn first referred to it as “our NYT story” in tweets) and then backing away from it (later it’s “the NYT piece”) in favor of Hoshaw’s blog and the overall effort. Cohn emphasized that Spot.Us is a platform, not a news organization, but that emphasis came after criticisms of the Times story — as Chris Anderson notes lower in the comments, if Hoshaw’s story won a Pulitzer the group would certainly take credit for it as if it were a news organization.

The excitement and the muddied message are understandable given the circumstances — a Spot.Us story in the New York Times is big news for the model, and it’s great to find the Times as part of an innovative experiment in funding and producing stories. I think it’s safe to say that everybody wants Cohn and Spot.Us to succeed. Certainly I do. But excitement can’t lead to closing ranks against anybody who dares to be critical of the final product, and interest in experimentation can’t harden into dogma about the outcome.

And now I’m going to risk getting told that I buried my own lead. (I’m not writing this as an inverted pyramid, but whatever.) The real problem here seems to lie with the New York Times — and it feels like nobody wants to talk about that.

It’s great that the Times worked with Spot.Us. But reading Hoshaw’s blog and looking at her photographs, you get the feeling that the Paper of Record took an interesting square peg of a story and made it fit into a rather dull round hole. The only interactive component is the slideshow, and it’s lame — as Times slideshows too often are. (I want to throw things every time I find captions that are just bits plucked from the story.) The paper’s interactive wizards do wonderful things, but none of them are visible here. The sheer scope of the garbage-patch problem cries out for a different way of approaching the narrative, for the personality and shifting point of view evident on Hoshaw’s blog. The Times story doesn’t even offer a link to that blog, which would at least help readers unacquainted with the inside baseball of new media uncover this rich material. That’s not the fault of Hoshaw or Cohn or Garber.

The Times gets well-deserved credit for an enormous amount of Web innovation, from its open APIs to its rich, addicting interactives. But it doesn’t get a free ride either. Hoshaw’s final product shows that the basics of how stories are produced and executed could really use an infusion of that same spirit of Web innovation.

This isn’t to say that the Times should have approached the garbage-patch story differently just because Spot.Us was involved. That would be a different way to make the mistake of conflating the journalism with the business model. Rather, it’s to wish that the Times had taken a different approach, because a journalist had a richer story to tell. From what I can see, Hoshaw gave the paper a lot, and fairly little was made of it.

Of Search and Social Search

Posted in Communities, Social Media, Twitter by reinventingthenewsroom on November 10, 2009

Regarding Rupert Murdoch, we can all agree on one thing: He sure knows how to get people’s attention.

The News Corp. chief executive (who was briefly my ultimate boss at the tail end of my WSJ.com tenure) sparked a furor by saying, in response to an interviewer’s suggestion that News Corp. choose not to have its content indexed by Google, that “I think we will.” Added Mr. Murdoch: “We do it already, with The Wall Street Journal. We have a wall, but it’s not right to the ceiling. You can get the first paragraph of any story, but if you’re not a paying subscriber to WSJ.com, you get a paragraph and a subscription form.” (As many have gleefully noted, that’s not correct. I felt for my old colleague Andrew LaVallee, who got the unenviable task of setting the record straight on WSJ.com.)

The general reaction has been to wonder about Murdoch’s motives and argue that he would be crazy to entertain such a notion, with the strongest evidence for that position coming from Bill Tancer at Experian Hitwise, in graphical form. According to Mr. Tancer, Google and Google News account for more than 25% of WSJ.com’s traffic on a weekly basis, and more than 44% of visitors from Google haven’t visited the domain in the previous 30 days. That suggests that being erased from Google would cost WSJ.com an enormous chunk of traffic and a valuable source of potential new customers.

Enter Mark Cuban, who took delight in goring a sacred cow or two.

Cuban argues that Twitter is surpassing Google as a destination for finding breaking news. But more than that, he notes that Twitter doesn’t threaten destination news sites the way Google does — its 140-character limit short-circuits the rancorous link-economy debate about excerpts standing in for entire articles and “parasitic” aggregators. Finally, Cuban notes, Twitter and Facebook are platforms that news organizations can make their own, while Google is a news destination they see as competing with them.

Cuban’s conclusion: “Having to search for and find news in search engines is so 2008. … News sites blocking Google ain’t what it used to be.  Rupert is right. Deal with it.”

To which the general reaction has been that none of this will matter if sites put up paywalls, shutting people out regardless of the avenue that brings them to a news site.

But what if news organizations didn’t do that? What if they left their paywalls open for stories accessed via social media and link shorteners, but removed themselves from search-engine indexing, as Murdoch has threatened? (WSJ.com already has such a leaky paywall — leakier than Murdoch seemed to think, in fact.)

What would happen then?
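
It’s worth pausing on how mechanically simple such a selectively leaky wall would be: shutting out the crawlers is a few lines of robots.txt, and waving through social-media arrivals is a referrer check. Here is a minimal sketch in Python, with the domain list, names and details as illustrative assumptions rather than any publisher’s actual implementation:

    # A sketch of a selectively "leaky" paywall, in two pieces.
    # 1. robots.txt at the site root shuts out the search crawlers:
    #      User-agent: Googlebot
    #      Disallow: /
    #      User-agent: bingbot
    #      Disallow: /
    # 2. The server waves through readers arriving from social media.
    # The domains and names below are illustrative assumptions.

    from urllib.parse import urlparse

    SOCIAL_REFERRERS = {"twitter.com", "t.co", "facebook.com", "bit.ly"}

    def allow_full_article(referer, is_subscriber):
        """Subscribers and social-media arrivals get the whole story."""
        if is_subscriber:
            return True
        if not referer:
            return False  # no referrer: excerpt plus subscription form
        host = urlparse(referer).netloc.lower()
        if host.startswith("www."):
            host = host[4:]  # treat www.twitter.com like twitter.com
        return host in SOCIAL_REFERRERS

    # A shortened link tweeted by a friend gets the full story...
    print(allow_full_article("http://t.co/abc123", False))  # True
    # ...while the same URL typed cold hits the wall.
    print(allow_full_article("", False))                    # False

Referer headers are trivial to spoof, of course, which is one reason such a wall stays leaky by design: the point is to set a default for casual readers, not to build a fortress.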

Certainly there’d be a significant loss of traffic. But how valuable is that traffic? There’s growing debate about that within the news industry — witness my recent discussion with Greg Harmon, who contends that publishers have sabotaged their own ad revenues by chasing empty “reach,” and would do better by pursuing loyal, repeat users who are much more valuable to advertisers. See also MinnPost’s Joel Kramer and Slate’s David Plotz on the value of core loyalists. Murdoch himself told his interviewer that those arriving via search “don’t suddenly become loyal readers of our content. We’d rather have fewer people coming to our Web site but paying.”

Could social media replace search as a way of discovering news? At least for me, to a startling extent it already has. Like many others, I’ve been surprised to realize that I now get an enormous amount of my news via Twitter and Facebook, and now reflexively turn to Twitter — not Google News — as my first stop when news breaks. A year ago, I would have scoffed at that, to say nothing of the idea that I could craft a satisfactory news feed out of links passed along by my peers. But even a cursory look at my own habits shows me that both these things have come to pass.

Now, let’s add a new wrinkle: Could social-media traffic be more valuable than search traffic? I’d argue that the answer is quite possibly “yes” — or will soon prove to be as social media matures, and “if the news is important, it will find me” goes from novel concept to routine truth. Finding information from a search engine feels different from finding it from one’s friends and peers — the latter is already a social situation. Particularly as papers further pursue social-media integration (as the Huffington Post is doing with Facebook Connect), isn’t that a better starting point than search for converting first-time visitors to repeat viewers and members of a community? (And if it isn’t, Cuban’s point about papers not seeing Facebook and Twitter as competitors still stands.)

Murdoch and Cuban are both showmen with a gift for provocation. But they’re also smart guys. And I think they’re on to something that shouldn’t be dismissed so readily.

The Real Obstacles to Paying for Content

Posted in Paid Content by reinventingthenewsroom on November 6, 2009

With talk of payment plans and paywalls intensifying (see this take on Journalism Online, and Steve Outing’s question about what works as premium content), I found an interesting bit about last month’s World Media Summit in Beijing that I hadn’t encountered before.

According to TVNewsLab’s Deborah Potter, one of the speakers at the summit was Jeff Gralnick, a special consultant to NBC News for Internet and New Technology. Here’s what Mr. Gralnick had to say: “I am convinced the Web has become so democratized that its users expect that content and access to it will be free.  And when faced with charges, those users will [find] another source that is free. And if you engineer a workaround, some smart 12-year-old will find a way to work around that.”

To put Gralnick’s remarks into a larger context, his position is that it’s wiser to take a page from MSNBC, which he says has had great success with “a multi-platform push to achieve scale that turns pennies into dollars.” I have no quarrel with that, but I disagree with him — and many others — about the obstacles to charging for content.

I don’t think that the democratization of the Web has reached some tipping point beyond which charging for content is impossible. It’s really early in the evolution of the consumer Web, and we’ve seen before that consumer behavior is not immutable. I’m always reminded of the late 1990s, when it was accepted as a law of Web physics that e-commerce sites couldn’t charge for shipping — because if any of them tried it, consumers would simply jump to a competitor that still shipped for free. Because of this, e-commerce sites were doomed, shackled to impossibly low margins.

As it turned out, the insanity wasn’t to think that consumers might pay for shipping — it was to think there was a viable business model in shipping giant bags of dog food across the country for free. Consumers eventually accepted that the days of free shipping were no more, and the world kept turning. And note that this same drama has played out in recent years with assessing sales tax on online purchases: It’s gone from perceived impossibility to annoyance to accepted part of life.

The other issue I have is with the idea that clever 12-year-olds getting around technological barriers dooms such safeguards, for the simple reason that not everybody online behaves like a technologically inclined 12-year-old. There’s a blind spot here common to a lot of people who think and write about digital journalism — we’re technologically inclined and like to tinker with things, and so we miss that lots and lots of people don’t act like we do. I struggle with this myself — it never fails to amaze me that people enter Web addresses and straightforward site names in search boxes, rather than using bookmarks or sticking a .com on the end. It’s backward and inefficient and frankly weird. But people do it.

Will some technically adept people evade payment mechanisms? You bet they will. But as long as the price for content is set at a level people find reasonable, many won’t even if they can. And you can see that today. You can read most everything on the Wall Street Journal Online for free if you play around with URLs, but WSJ.com’s subscription model remains solid. You can steal all the music your hard drive can hold, but Apple and Amazon and others have still built pretty good businesses selling songs for $1 or less, because to many people a buck a song seems fair and the hassle of getting something for nothing just isn’t worth it. (Note that I’m not arguing that the iTunes model works for news — the equation of songs with articles is one of the more risible arguments out there, in fact.)

I don’t think pay schemes are a slam dunk. Far from it, in fact. There are two huge problems here, as I see things. The first is that with geographic isolation no longer protecting newspapers from competition, readers are awash in a glut of commoditized news, driving the price for a lot of that content to zero. Gralnick’s Web democratization strikes me less as some kind of social truth than as sound economic judgment. The second problem is that many newspapers have been cut so deeply that they may lack the resources to produce unique, compelling content that people would pay for.

The first problem is being solved in part by the relentless downsizing of the industry — journalists are being laid off by the tens of thousands, and the ocean of commoditized news will slowly dry up as papers replace me-too coverage with aggregation, the repurposing arm of the AP withers, and more papers go under. I wish there were a less brutal way of solving the problem (I was one of the tens of thousands of journalists sent packing), but wishing won’t make it so.

As for the second problem, frankly it keeps me lying awake at night.

These are the real obstacles to paid content, not social attitudes or smart preteens. Unfortunately, they’re harder to solve. And until they’re solved, I fear payment schemes will do little or nothing to help news organizations.

The Push and Pull of Twitter Lists

Posted in Social Media, The Journalist as Brand, Twitter by reinventingthenewsroom on November 4, 2009

When I started using Twitter, one of the things I liked best about it was that I had no idea what to do with it.

It felt like a throwback to earlier days of the Web, when you might toss up a Web page or a blog and see what happened, when you got ideas from looking around and seeing somebody had done something you hadn’t thought of, and immediately thought, “Hey, I could do that too.”

Eventually, as I’ve written before, the light bulb above my head went on and I got what made Twitter useful — as well as the realization that without my noticing, Twitter had changed the way I searched for and received news in fundamental ways. When Twitter lists arrived, I wasn’t quite sure what to do with them either — but my initial Twitter experiences had taught me not to stress about it. Relax. It’s Twitter — it’ll come to you.

This week the a-ha moment for lists came — it was what the new Texas Tribune was doing with Tweetwire. (Hat tip: I found Tweetwire through Martin Langeveld’s very fine guided tour of the Tribune at Nieman.) In thinking about Twitter’s usefulness for newspapers and other publishers, I’d gotten stuck thinking of Twitter as primarily a method of delivering news and secondarily a way of enhancing one’s brand, with the two ideally working hand-in-hand. The idea of using it as a way of aggregating news from a multitude of sources hadn’t occurred to me — even though that was exactly the way I was using Twitter as a reader. Making lists and presenting them to readers was an easy way for publishers to play curator and aggregator. Lists would let them create a dynamic, real-time news feed — with some welcome personality in the mix and the ability to include user-generated content. Think of a New Orleans Saints beat writer setting up a tweetwire for his paper that includes his own tweets mixed in with those of other beat writers, the best independent Saints bloggers and some consistently wise and/or entertaining fans. Or think what a reporter covering global warming or campaign-finance reform could do with it. (On Mashable, Vadim Lavrusik explores that idea and others in more detail.)
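
For a publisher’s developers, the plumbing side of this is trivially light: pulling a curated list’s timeline is a single API call. Below is a minimal sketch in Python, where the list owner, slug and token are hypothetical placeholders and the v1.1 lists/statuses endpoint stands in for whatever Twitter currently offers:

    # Minimal sketch of a "tweetwire": fetch the latest tweets from a
    # curated Twitter list and print them as a simple real-time feed.
    # The owner, slug and token are hypothetical placeholders, and the
    # v1.1 lists/statuses endpoint may differ from Twitter's current API.

    import requests

    API_URL = "https://api.twitter.com/1.1/lists/statuses.json"
    BEARER_TOKEN = "YOUR-APP-TOKEN"  # from Twitter's application OAuth flow

    def fetch_tweetwire(owner, slug, count=20):
        """Return the most recent tweets from the given curated list."""
        resp = requests.get(
            API_URL,
            params={"owner_screen_name": owner, "slug": slug, "count": count},
            headers={"Authorization": "Bearer " + BEARER_TOKEN},
            timeout=10,
        )
        resp.raise_for_status()
        return resp.json()

    # Render the feed the way a sidebar widget might: author, then text.
    for tweet in fetch_tweetwire("saintsbeatwriter", "saints-news"):
        print("@%s: %s" % (tweet["user"]["screen_name"], tweet["text"]))

The hard part stays editorial, not technical: deciding whose voices belong on the list is exactly the curation job described above, and the code is just delivery.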

So, yes, lists are a valuable tool that publishers will hopefully make use of. But there’s another side to the change, one explored very nicely by Megan Garber on CJR. (Though goodness, somebody give that headline another try.)

Garber puts her finger on something that’s bothered me, too:

Lists are limiting not only in the physical sense, but also in the definitional: in categorizing Twitter users—‘Funny People,’ ‘Smart People,’ ‘People Interested in the Mating Habits of Short-Nosed Fruit Bats,’ etc.—they generally highlight only one aspect of a user’s personality and then define the user according to it. But that narrowness could, in turn, encourage people to conform their tweets to the lists they belong to: for ‘Funny People’ to limit their tweets to funny things, ‘Smart People’ to smart things, etc. Rather than a hodgepodgy amalgam of people’s thoughts about whatever they happen to come across in their jumbled, chaotic, and category-resistant daily lives, we may soon start to see stratification.

My tweets are generally about one of three things — digital journalism, the New York Mets, or Star Wars. (I’m acutely aware that I may be the only person in the world interested in all three.) When I first started tweeting, I wondered if I should create three separate Twitter personae. But it seemed like a lot of work, and I couldn’t figure out which persona I would use for the occasional tweet that was just random or personal. Would I cross-post those to all three? Rotate them? Worrying about this, all of a sudden I felt like a candidate’s image handler, and that was no fun at all. It’s just Twitter. Oh, and get over yourself.

So I let my Twitter account reflect who I really was (well, as much as any online persona truly does), and trusted anyone who cared to sort out my various interests and contradictions. (Speaking of which, you can follow me if you like.) But this worry returned with lists. Every time I got added to a list, my first reaction was to be happy, in an invited-to-a-7th-grade-party way that I wish motivated me less than it does. But my next reaction was always: Oh no, this person’s expecting tweets about journalism/Star Wars/the Mets, and my tweets about something else will screw up their list.

Garber’s thought about the same thing, and her worry is that she’ll censor herself to better fit lists’ expectations, with other people doing the same. She notes that she’s found she likes the off-topic stuff — “the little quirks of people whose ideas I admire, whose work I follow.” As do I.

It’s impossible to say how this will sort out, other than that it will be determined through the kind of ongoing, uncontrolled social experiment that’s shaped almost everything else about Twitter practices. My first thought was a technical solution: Let users modify their inclusion in lists with an internal hashtag. But that seems both complicated and like it would eat up even more of our already-precious 140 characters. No, the answer will be a social one, worked out tacitly over time. I just hope that the minutiae remain in the mix. Twitter’s a wonderful information source, but it comes with a welcome seasoning of personality and a gleeful sense of being just slightly out of control. It would be a better information source without those things, perhaps, but it would also be a less interesting place.

Spider-Man and Social Media, and Other Monday Reads

Posted in Communities, Cultural Change, Social Media by reinventingthenewsroom on November 2, 2009

My list of interesting Monday reads begins with an article I clicked on only because I found the headline amusing: “Everything I Needed to Know About Social Media I Learned From Spider-Man.” But lurking behind that teaser is a very smart article looking at how Stan Lee built Marvel Comics into a powerhouse by interacting with his readers in a way any blogger or forum regular will recognize. Lee was protoblogging in print a good three decades before the digital boom. All the hallmarks of blogging and community are there — the direct, colloquial, personal writing style; encouraging readers to engage each other as well as the person providing the forum; acknowledging smart comments and building on them; and rewarding frequent writers with ranks. A very smart take by Sven Larsen, of Zemoga.com.

Judy Sims of SimsBlog (which has one of the more awesome taglines I’ve seen on a blog) passes along six hunches about the future of journalism. I agree with them all, particularly her hunch about journalists becoming their own brands (the subject of my very first post here), but what really jumped out at me was something I hadn’t encountered before: Reuters CEO Tom Glocer’s division of new-media companies into three categories. To Glocer, those categories are seeders of clouds (generate high-value content for links/comments), providers of tools (along the lines of the Guardian’s work) and editors and filterers. That’s an interesting way to think about the challenges facing newspapers as they transform themselves. Should they make sure they have editors and products that cover all three of those missions? Or are they better off redefining themselves as one of those things? And how do you decide which path is the right one?

Then there are two stories I read, found fascinating and need to think about some more.

The New York Times’s Bill Carter writes that everything you probably think about DVRs and their effect on how many ads consumers watch is wrong. It turns out that many more people than networks expected watch the ads rather than fast-forwarding through them — according to Nielsen, 46% of viewers 18 to 49 years old do so. Carter explores what this means for free TV’s business model, and what might be behind such counterintuitive behavior.

Finally, Robert Scoble examines why chat rooms and forums get less interesting over time, while blogs get more interesting. Required reading for anyone thinking about Facebook, Twitter and any other form of community — which today means all of us.