Reinventing the Newsroom

Algorithms Aren’t Evil

Posted in Content Farms, Cultural Change by reinventingthenewsroom on July 21, 2010

Recently I’ve been quoted a fair amount about Demand Media and other so-called content farms, and come to accept that my initial description of Demand Media as “how our profession ends” will follow me around forever. (The Web is the end of forgetting, after all.) My views of content farms and what they do have become more nuanced since then, but so be it — I wrote it, after all.

That said, there’s one thing about content farms I don’t object to in the least, and that’s their use of algorithms to create story ideas. News organizations are beginning to use the same tools and are taking flak for it, and that criticism is reactionary and silly.

The “algorithm-as-editor” meme started with this article by the New York Times’ Jeremy W. Peters about how Yahoo is using search queries to help guide its writing and reporting for The Upshot. Peters’ take on what Yahoo is doing and how such efforts are shaping editorial agendas is pretty nuanced overall, but that got lost because of his rather breathless lede: “Welcome to the era of the algorithm as editor. For as long as hot lead has been used to make metal type, the model for generating news has been top-down: editors determined what information was important and then shared it with the masses. But with the advent of technology that allows media companies to identify what kind of content readers want, that model is becoming inverted.”

Beyond the fact that that’s not true — the article explores algorithms as a tool used side-by-side with traditional brainstorming — the emphasis angered Upshot editor Andrew Golis and writer Michael Calderone, who took their grievances to Twitter, as chronicled by Business Insider’s Joe Pompeo. (Sample tweet from Golis: “OMG, online journos periodically use data to figure out what readers are actually interested in! PANIC! HANDWRING!”) Today, Lacey Rose of Forbes chatted with Yahoo’s Jimmy Pitaro about the issue. “First off, the algorithm and the automated approach are one component of how we’re identifying topics and programming sites,” Pitaro told her. “We’re sitting on all of this [audience] data where our users are telling us specifically what they want and we need to take all of it into consideration as we program both video and text on our site. The way I look at it is we need to be feeding our users both what they want and what they need. If you cover both then I think users will be kept well informed.”

Pitaro is exactly right. That top-down model of news discussed by Peters — “editors determined what information was important and then shared it with the masses” — is obsolete, a product of an age in which we couldn’t know what readers thought was important in any timely way. And good riddance to it. Seeing that top-down model as a hallmark of journalism instead of as a technological limitation was a trap, as the implicit arrogance of that model won us few friends among readers and obscured plenty of the good work we do.

Journalists worry themselves, sometimes to distraction, about the idea that traffic data and search queries will wind up driving editorial priorities and assignments, squeezing out substantive stories in favor of, I don’t know, lurid tales, gadget reviews and pet pictures. Peters quotes Perfect Market’s Robertson Barrett as saying that “there’s obviously an embedded negative view [in newsrooms] toward using any type of outside information to influence coverage.” Which there is. But which possibility is more likely, and thus more damaging to journalism: that a news organization given search queries will run with them and devolve into Funny Cat Pictures Daily Times, or that worrying about that will lead editors to reject valuable insights into what their readers are looking for? It may strike an editor as noble to stop his ears to the guy from some outside service who’s crunched the numbers on search terms, but what that editor is really doing is refusing to listen to his own readers.

Pitaro tells Peters of one Yahoo story that emerged from looking at traffic: Why do Olympic divers shower after they get out of the water? You know what? It’s a good question.

In his Wired portrait of Demand Media, Daniel Roth asked Byron Reese, creator of the algorithm the company uses to generate story ideas, what Demand’s most valuable query was. Reese’s answer: “Where can I donate a car in Dallas?” Reese didn’t know why so many people in Dallas were looking to donate cars, but if I were a Dallas-area editor, I’d sure want to know — and I’d be glad for whatever mechanism had brought a good story idea to my attention.

In the spring, I spoke on a panel at the MIT Sloan Sports Analytics Conference, and ESPN.com Editor-in-Chief Rob King told a funny story about one of his first days on the job. (You can listen in here, starting at 47:50 in the video.) Listening to the radio in the car on his way to work, King heard that India and Pakistan were playing a big cricket match. So he asked about it in the morning meeting — where it hadn’t come up — and said ESPN ought to do something. Later in the meeting, a staffer ran through a list of sports search terms that were trending, and something called “20 20” was at the top of the list. What was 20 20? Nobody knew. It turned out to be the India-Pakistan match — in fact, King said, cricket-mad ESPN employees were holding a big party elsewhere on campus to watch. King’s reaction? “That was the moment where I thought, ‘We really need to pay attention to what our audiences are into, because they’re telling us where the traffic is.’ ”

My issue with content farms has to do with their business model, which I think all but ensures the production of low-quality content when they stray from generating simple, straightforward tutorials. But it doesn’t bother me that they comb through search terms or make use of algorithms that point out potential story ideas. Why would it? Those tools work for any newsroom, and using them will help us serve our readers better.

6 Responses

  1. […] This post was mentioned on Twitter by stevesilberman, Chanders, Brian Baresch, Jason Fry, Robert Janelle and others. Robert Janelle said: RT @stevesilberman: "Algorithms aren’t evil" – OK [http://bit.ly/9kARUz ], but underpaying writers is. Poor pay yields shallow content. That algorithm matters. […]

  2. Michael Tippett said, on July 22, 2010 at 4:26 pm

    Totally agree with you. I recently wrote a piece about how these sites should not be seen as ‘content farms’ but rather as ‘attention mines’.

    http://www.tippett.org/?p=1926

    Michael.

  3. […] Read the rest of this post on the original site. Tagged: Voices, digital, media, software, content farms, Demand Media, Jason Fry, Reinventing the Newsroom […]

  4. mankenlik ajansı said, on July 30, 2010 at 1:54 pm

    King heard India and Pakistan were playing a big cricket match. So he asked about it in the morning meeting

  5. […] 1) “Algorithms aren’t evil,” Jason Fry reminds us on his blog. Knowing readers’ needs and requests is not a bad thing in itself. Answering the questions readers are asking is one way to improve the quality of editorial output by coming closer to the audience’s expectations. Everything depends, of course, on what is done with that knowledge. […]

  6. Lyn Headley said, on August 9, 2010 at 3:12 pm

    You separate the issue into one of business models that ensure low-quality production versus the use of search and other demand metrics as a component of the production process, but the two issues are fundamentally related in my opinion and, I assume, in the opinion of those who fear that “algorithms are evil.” The relevant question is not whether data about user preferences should be used at all (after all, even the most cloistered old-school journalist cares about what the public hungers to know), but how it should be used, where it should come from, and how it should be balanced with other factors. The fear is that a culture may (gradually) emerge in which the ability to quantify user preferences leads to an imbalance in these factors, one that skews news judgment toward coverage of things that will rank highly according to user demand metrics. Coupled with the apprehension that users/readers/audiences as a whole often manifest emotional and shallow demands, and that this habit leaves them vulnerable to easy manipulation away from the public interest, the fear that “algorithms may be evil” is in fact a fear that a business model ensuring, or at least prioritizing, low-quality production practices may achieve an enduring vitality in the internet age.

