I’ve been doing some work with my old friends and colleagues at Poynter, and wound up pitching in with their coverage of yesterday’s terrible events in Boston. Which got me thinking about breaking news and how it’s changing with readers seeing each step of the newsgathering process. My take is here.
For anyone who stumbles across this, this blog is now updated fairly rarely. You can keep up with my adventures writing, editing and (occasionally) consulting over at my Tumblr, Jason Fry’s Dorkery. Or follow me on Twitter.
The folks at Nieman Labs were kind enough to ask me to contribute some predictions for what 2011 has in store for digital journalism. I mostly played it straight, which means I predicted a muddle. But I couldn’t resist a prediction — or perhaps it’s a bit of wishful thinking — about content farms. Where Demand Media and their ilk are concerned, I think Google is on the horns of a dilemma. They’re not happy about the content farms, which they view as gaming their algorithms, yet it’s a basic part of the Google ethos to leave qualitative judgments to users in aggregate. If Google starts making qualitative judgments about content, where will that end?
Hence my prediction: that Google will do something to drive content farms’ results way down in search results, but will be stubbornly quiet about what exactly that was, which will cause all sorts of kerfuffle about secrecy and power and Don’t Be Evil. We’ll see. (Meanwhile, it’s interesting to see Yahoo looking to remake Associated Content as an engine for hyperlocal contributions.)
Still, the Nieman prediction that was dearest to my heart was about Facebook and social media. I think the most promising efforts to make hyperlocal scale will be based on extracting relevant information from the likes of Facebook, Twitter, Foursquare, Yelp and other services. As I told Nieman, "the most promising startups and efforts from established social media companies will center around creating quiet water that draws from the river of news without leaving us overwhelmed by the current."
That prediction emerged from an exchange I had back in March with my old friend and mentor Roy Peter Clark about Facebook and refrigerator journalism — local articles and photos about kids’ sports, school recitals and so forth that get cut out and put on the fridge and then saved to be unearthed years later. Our conversation made me realize something that I’ve thought about a lot since: There’s an impermanence to social media that undermines its sense of connection.
As I wrote back then:
[W]hile Facebook is wonderful for sharing, it’s lacking something: The sacramental aspect Roy talks about isn’t there. The things we share on Facebook are soon swept away by newer things and lost from view. They’re part of a rich stream of shared experience, but with the exception of photo albums, most of that shared experience is carried off into the realm of “older posts” and effectively lost. Our real-world fridge is like a lot of people’s — magnetic letters hold down a mess of to-do lists, old notes, amusing junk-mail misfires, cartoons, drawings by our son and of course photos, some of which date back to 1990. It’s a rich record of our family. So is Facebook, but there the richness can only be seen over time. It’s like everything gets cleared off the fridge and replaced every 18 hours.
Fast-forward to December and my prediction that Facebook (or maybe someone else) might offer users a way to preserve the sacramental. So I was intrigued when a few days after I sent off that prediction (and before it appeared), Facebook accidentally gave users a peek at something called Memories. Memories wasn’t available long enough for anyone to fully grasp what it was, but I found myself hoping that it might be a scrapbook service — a way to preserve social-media bits so they can be easily retrieved later. Why hasn’t Facebook done this? My pet theory is it’s because their key product-development folks are young — they live happily in the ceaseless river of news, and don’t yet grasp that the passage of time will come to seem bittersweet. That’s where the sacramental aspect comes in — with being aware of that whirl, and wanting to be able to stop it and steal back a few moments.
I don’t know if content farms will really have a day of reckoning in 2011. But I do think that social-media scrapbooking will emerge — if not from Facebook, then from somebody else. It’s become too big a part of our lives for that not to happen. And once it does, all sorts of intriguing possibilities will emerge.
Like most everybody else concerned with digital journalism, I spent a good chunk of Tuesday morning reading James Fallows’ cover article in the Atlantic Monthly. (I read it very happily on the iPad’s browser — but that’s another column.)
I thought the big-picture appraisal of journalism’s prospects felt right — I agree that technology will continue to improve online display ads, and that more publishers will have success asking readers to subscribe. (I should disclose here that I recently began consulting for Journalism Online.) But at the same time, I wasn’t particularly convinced by Fallows’ thesis that a lot of journalism’s current business-model woes will be solved by Google because it’s in Google’s interests to solve them. Even granting the premise that Google wants to solve them, I think Fallows put too much faith in Google’s ability to do so through its own devices. I think he isn’t sufficiently worried about how uncertain publishers are about what path to take, or how much freedom they have to maneuver. It seems to me that this fragmentation and uncertainty will make finding and implementing a solution slow and difficult. (All Things Digital’s Peter Kafka dissects what he sees as Fallows’ leaps of faith in a brief commentary.)
Stowe Boyd offers an interesting take on the evolution of social media, from blogs (which he sees as more personal publishing than social media) to social networks and real-time streams. The idea that really grabbed me is that real-time streams such as Twitter appeal to us because they’re conversational, and use personal publishing — as well as the old-fashioned institutional kind — as raw material for the commentary and reactions that make up that conversation. I found that an intriguing new way of thinking about social media and its symbiotic relationship with blogging and traditional content.
Finally, I found the startup Newsy interesting, as explored here by ReadWriteWeb. Newsy is basically a curator/aggregator that stitches video from different sources together (with careful attribution) into brief narratives about trending stories. It’s done on the cheap but still pretty slickly produced. Curation and aggregation isn’t new, of course, but extending it into video is a wrinkle that might yield good results for news organizations.
This post originally appeared at Nieman Journalism Lab.
InformationWeek posted an interesting account of an academic paper presented at the International World Wide Web Conference last month. The paper, written by four Korean researchers, analyzed 41.7 million Twitter user profiles, 1.47 billion social relations, 4,262 trending topics and 106 million tweets to examine the relationships between tweeters and the distribution of information across the microblogging network. (The paper is available here as a PDF.)
Their conclusions: Twitter isn’t a social network, but something more akin to traditional news media.
Why isn’t Twitter a social network? The researchers noted that Twitter relationships don’t have to be reciprocal — there’s no need to follow someone back who is following you, while Facebook relationships are two-way “friendships.” (Though that’s changing with the capability to “like” something.) Only 22.1% of Twitter user pairs follow each other, the researchers said. Moreover, they noted that most follower-followed relationships on Twitter are more akin to traditional-media relationships between subscribers to information and distributors of that information, with subscribers consuming information but having little contact with distributors. A relatively small number of users are the primary sources of news, with others redistributing that news; most tweets are related to timely topics; and retweets typically come very quickly — 35% in the first 10 minutes.
The researchers’ rationale for saying Twitter isn’t a social network strikes me as more a question of definitions than anything else, but it’s an intriguing discussion nonetheless — one I think touches on deeper questions. Is a social network still a social network if reciprocity is largely theoretical? Does that untapped reciprocity undermine its value? Will any large grouping of people exchanging information settle into a pattern akin to that of traditional media?
I have nearly 700 Facebook friends, but I’ve never communicated with the majority of them. They’re folks who walk in the same digital-journalism circles, or know my writing about sportswriting, baseball or even Star Wars. I’m happy to have them as friends (I could never get past the squick factor of setting up a fan page for myself) and I respond to their messages and comments. But I don’t get very many of those — those relationships are largely one-way. I’ve initiated such relationships myself, reaching out to be Facebook friends with people whose activities interest me, but whom I’ve never contacted. Yes, we’re linked in a social network. But in many cases the friendship is really just a vehicle that allows information to flow, and that flow is largely one-way.
Moreover, that’s a pattern on all social networks — and probably all networks, period. Last summer, a Harvard Business Review study found that 10% of Twitter users accounted for more than 90% of tweets. The researchers noted that was a more concentrated level of activity than is typical for social networks, in which the top 10% of users produce 30% of content. But again, the difference strikes me as one of degree, not kind: When you put people together in a network and let them create information, you get a few producers and a lot of consumers, just as discussions get a handful of engaged commenters and a lot of silent (but interested) lurkers. Social networks may move the percentage needle this way or that way depending on their parameters, but the pattern holds.
This can strike us as a shame: Why should two-way media produce mostly one-way interactions? But I don’t think it’s anything of the sort — because “mostly” is not the same as “entirely.” Social media carries with it the potential for reciprocation, replies, conversation and connection. That potential lies fallow, waiting to be used — but it can be used instantly. And social media carries with it the expectation of response or at least acknowledgment — perhaps not to everybody, but to enough people to demonstrate that one is listening and not just talking. That’s a sea change from traditional-media information flows, even though they may look the same when transposed to social networks.
I’m still amazed at how thoroughly 140 characters and an @ sign level the playing field on Twitter, erasing relative status and power. When I think of my Twitter and Facebook experiences, I think not of the many relationships that haven’t yielded conversation and connection, but of the few that have — and I know that those other relationships have that potential too.
My first post for Nieman Journalism Lab appears today. Starting now, selected posts from Reinventing the Newsroom will appear at Nieman before they run here, and I’ll provide links to them when they do. Other posts will remain unique to Reinventing the Newsroom.
I’ve been an admirer of the Nieman folks and their work for some time, so I’m honored to be considered worthy of joining their ranks, and look forward to working with them.
My latest column for the National Sports Journalism Center looks at the question of Twitter and whether personal tweets are a welcome bit of color in a news feed or noise that threatens to crowd out signal.
The genesis of the column was something odd that happened in Major League Baseball last week: A number of beat writers for MLB.com tweeted that they’d been told to limit their tweets to baseball. Those tweets were then deleted — as were tweets by some of the writers pointing out that they’d created personal accounts. That touched off a row about heavy-handed control, with MLB officials insisting that an email reminder had been mistaken for a change in policy.
Whatever the case, the furor did get at an issue that journalists and news organizations will have to grapple with: How much personality is too much in someone’s Twitter stream? (Particularly now that tweets are often funneled into news feeds based on lists or hashtags, exposing them to people who don’t necessarily follow a given journalist.) I wish I had answers, but I don’t: Twitter is so new that there isn’t broad agreement about best practices. It will be fascinating to see what accepted standards emerge, and why.
As a companion to my own initial, admittedly still-disjointed thoughts on Facebook’s announcements, here’s a roundup of articles I found particularly insightful or helpful in starting to make sense of things.
Chris O’Brien of the San Jose Mercury News has an overall take that’s a great place to start.
Robert Scoble captures what Facebook’s ambitions amount to here, notes that Facebook just built the social Web’s version of the transcontinental railroad, and digs into some technical aspects of the plan. Very smart, not instantly paranoid, and he talked to Zuck.
This is interesting about Facebook Credits, Facebook’s embryonic payment system. Remember Dave McClure’s prediction that Facebook would become the default payment system for (among many other things) subscriptions because of the lack of sign-on friction? Here’s how it starts.
Paul Gillin has a great take on why the new Facebook features will be good for average Web users.
ReadWriteWeb explains the dangers of one company having this much power. Like everything else, this will be explored extensively in the coming weeks and months. Privacy advocates need time to sift through everything that got announced yesterday, too.
On Mediaite, Philip Bump puts the privacy concerns in perspective, connecting them back to Facebook’s ill-fated Beacon. Valuable context for thinking along with Facebook as they plot their strategy.
For a roundup of articles I found helpful and insightful about Facebook’s announcements, go here.
So yesterday was one of the more interesting days to be a Web guy in a long, long time. Reading about what Facebook had rolled out, I had to fight a sense of frustration that there was no way to take it all in and reach even a tentative conclusion — things are going to change so thoroughly that it will take weeks or probably months to even start sorting through all this.
I know what Facebook is planning worries a lot of really smart people, but at first glance it doesn’t particularly bother me. If anything, I was excited to see it.
As I’ve dived deeper and deeper into social media I’ve seen a curious split in how I think of my Web habits: There’s time I spend on the wild, ever-changing, dynamic social Web and time I spend on the essentially static non-social Web. I know on some level it’s ridiculous to talk about Web sites as static, particularly in comparison with newspapers or books, but after Twitter that’s the way they feel. I increasingly find myself spending more time in the social precincts of the Web, and have eagerly adopted most everything that pushes the static Web that way. (When I’m searching, it’s now practically second nature for me to set “Latest” or “Past 24 Hours” instead of doing a default search.) This is a logical and welcome next step. (And in case you didn’t think Facebook is poised to eat the consumer Web, it has the infrastructure in place for location-based services and a payments system. This is going to get bigger.)
As a Web user, I love the idea of “socializing” the static Web so that my friends and peers essentially ride along with me. I’d be happy to see which of my friends and peers liked or have read something on the Washington Post or the New Yorker or most anywhere else. I didn’t get a lot of use out of that functionality on Huffington Post, which pioneered it, but that’s because I’m not a HuffPo loyalist. Though to that I’ll add that I would demand the ability to manicure my history — I remember the moment when I realized I’d just broadcast reading some HuffPo slideshow about half-dressed starlets and immediately went looking for how to make that vanish. (Dishonest? Sure. But it should no longer be news to anybody that our social-media selves aren’t the same as our real selves.)
A jolt of Facebook makes a service such as Yelp much more valuable — like anybody else, I find recommendations much more useful (or at least more compelling) when they come from a smaller group of friends and peers. Ditto for music — Pandora is a great music-discovery engine, but feeding my friends’ listening habits into it makes it far better. I don’t shop socially, but I see how a lot of people would have the same reaction to socializing their shopping habits.
Privacy is obviously a big concern here — I was amused by EPIC counsel Ginger McCall’s reaction, as given to TechNewsWorld: “this gives me lots of interesting work in the days ahead.” Perhaps this is naive, but I don’t get up in arms about being the target of behavioral advertising. Or rather, what I dislike is when that targeting is done badly or dishonestly: The former wastes my attention, and the latter angers me. Truly well-targeted ads would be fine with me — a band I’ve been listening to a lot just announced a gig in Brooklyn, there’s a new throwback Mets jersey I might like, and so on. More complete information would help that along, as — again — would the ability to prune things from my history, as I do with TiVo and Amazon recommendations. (Yes, this is tending an idealized self-image once more. I know that. Marketers are learning it too.) If the targeting doesn’t work, I’ll just tune it out — the last decade has given us all superb filters for marketing bullshit.
As for the perils of centralizing so much of this information, acknowledged. But to that, I’ll note two things. One is that Facebook has been responsive to users’ complaints — it pushes users, yes, but it also can be pushed back. Another is that I’d rather have a single place where I can see what’s being shared and with whom than have to monitor that across hundreds or thousands of websites and services, similar to how there’s now one site for accessing the various credit-report agencies.
As a publisher, meanwhile, I’m eager to use all this stuff here and on my baseball blog. I want to add Like buttons and Facebook sidebars and all these things. (And I’m sure publishers big and small feel the same way — this stuff is going to spread really quickly.) Partially that’s because I of course want to see more about who’s reading and better understand how things are connecting. But it’s also because I like the idea as a reader and think a lot of readers will feel the same way. There’s a lot more to understand about how Facebook will use this information, how accessible and visible it will be to us and how much control we’ll have over how it’s shared and used. It may be that I don’t like the answers as I begin to discover them. But for right now, I’m excited about all this as a publisher and as a reader. That seems like an encouraging sign.
David Eaves has a terrific post up about what he sees as myths held up by “old media” about new media. It’s well worth reading for seeing all the different places we’re talking past each other and the intersections where fear, uncertainty and doubt are choking off brighter possibilities.
What really jumped out at me was the first myth Eaves tackled — the myth of the “average blogger.” As he sees it, print journalists think they are competing against the average quality of online content, and when they see that most of that content is frankly poor, they are lulled into a false sense of security. In Missing the Link, a collaboration with Taylor Owen, Eaves wrote that “those in the print media who dismiss online writing because of its low average quality are missing an important point. No one reads the average blog.”
This is a critical insight, and I agree with Eaves that it leads to all sorts of misapprehensions. He cites a false sense of security. To that, I would add a retinue of sins imagined and overblown: Bloggers making errors that go uncorrected, Web writers driven by antisocial behavior and personal animosity, and the idea that these peddlers of hateful, subpar content are leading legions of readers astray.
The world has profoundly changed. Not so long ago, gatekeepers determined what would be published in newspapers, magazines and books, and if you didn’t pass muster with those gatekeepers, it was very difficult to reach an audience of any real size. This didn’t ensure that all content was excellent, or even good — insert the name of whatever trashy novel you thought foretold the death of literature here — but it did have the effect of confining a lot of dreck to fliers and mimeographs. Today, it’s child’s play to publish, and anyone who publishes has a huge potential audience awaiting them.
But the key word there isn’t huge or audience — it’s potential. As Eaves notes and many an eager new blogger has discovered to his or her dismay, no one reads the average blog. Print media aren’t competing against the average blogs, but against the best ones. Writers of average blogs have discovered a hard truth: Publication does not guarantee an audience, and the existence of something online does not mean anyone is reading it. The failure to grasp this drives a lot of the hand-wringing about blogs and the Web.
One reason this misapprehension is hard to shake is basic human nature: We always think our enemies are united, powerful and implacable, when in fact most of them are every bit as divided, inefficient and careworn as we are. But as I’ve written before, another reason has to do with the way search works online.
In the physical world, commonly accepted information that a lot of people consume is easy to find, while obscure or problematic information is hard to find. But online, it’s all one. If what you search for is out there, you will find it very quickly, no matter how wrongheaded or cruel or otherwise flawed it is. And this instant response leads us to an error borrowed from our real-world experience: Because we quickly find what we’re looking for, we assume many other people are looking for that information too, and are reading it. (Particularly when it’s something erroneous or cruel about ourselves or something we care about.) But this isn’t necessarily true: It’s often the case that no one is reading that information at all. It is only our search that fit the lock and plucked it momentarily out of obscurity.
You can now do an end run around the old gatekeepers, but people don’t have substantially more time to consume information than they ever did. And so there are new gatekeepers springing up everywhere, to reduce the torrent of information to a manageable flow. Readers make use of technological tools to help them filter information. With the rise of the social Web, they are increasingly able to make the collective judgment of their peers serve as filters and gatekeepers. And journalists and other experts have an invaluable role to play here as well, curating information and bringing the good stuff to a wider audience. The gatekeepers now operate downstream of publication, but they still exist. If anything, their roles are more important than ever.