:: By Jeffrey Broesche, Mediander ::
Content aggregators collect, analyze and repackage vast amounts of information and data, and there are many ways in which they do it. Sites such as The Huffington Post employ editors to review content and decide which pieces to republish. Digg and sites like it rely on editors’ choices but overlay a community vetting process that raises or lowers an article’s profile through user voting. Reddit is user contributed and user vetted. What all these sites share is human intervention somewhere in the aggregation process. Be it via an editor or the crowd, these sites depend on human evaluation to select and prioritize what their users see.
Automated aggregation, although less labor-intensive day to day, is in many ways more difficult to pull off. Whereas HuffPo and even Reddit can rely on human intelligence to prevent, spot and correct simple factual errors or typos, machine aggregators must be designed with adequate foresight to ensure quality results. Techmeme, a site that aggregates articles about technology, is an interesting case in point. The site began in 2005 with automated aggregation but in 2008 added editors to handle the mix of headlines. Techmeme founder Gabe Rivera wrote, “Automation does indeed bring a lot to the table—humans can't possibly discover and organize news as fast as computers can. But too often the lack of real intelligence leads to really unintelligent results. Only an algorithm would feature news about Anna Nicole Smith’s hospitalization after she’s already been declared dead, as our automated celeb news site WeSmirch did last year.” Today the site’s nine editors collaborate with the machines in a hybrid process that, according to Rivera, serves their particular users better than either methodology could independently.
And then there’s Google, which recently even removed the human oversight role from its driverless car concept. The bottom of any Google News page reads, “The selection and placement of stories on this page were determined automatically by a computer program.” This is true for both the standard and personalized versions of Google News. Knowing that this content is automatically aggregated, users have different expectations of it than they do of the aggregated (and original) content on HuffPo, Techmeme or Reddit. Although we may actively look for insight into who edits HuffPo or who contributes to Reddit, we don’t ponder the character of the Google computer program—and we don’t seek a personal connection with it either. For better or worse, human involvement in aggregation draws a more emotional reaction from users. People—fallible and inattentive as we may be—add something to the content aggregation process that pulls us in and makes us care.
Like HuffPo, my company publishes both original and aggregated content. Here is what we’ve learned about how automated aggregation can satisfy users in the way human aggregation does:
1. Results may vary. Acknowledge, as Google News does, that your information is aggregated automatically and that automatic processes may produce some less-than-ideal results. Your users will appreciate your honesty, and you will have a baseline of expectation from which to build a good impression.
2. Embrace transparency. In bringing under one roof disparate sources and types of information, make sure the collected pieces fit together into a sensible organization that is perceptible to the user. Google News has categories; some content sites have topics and connections. Both organizational frameworks indicate that people are implicitly managing the content through the algorithms.
3. Too much of a good thing is a bad thing. Don’t present ALL the information you find just because you can. Think of your aggregation algorithms as curators. The curator’s job is to weed out the unnecessary; your job is to exercise control over your aggregation and to keep its output comfortably digestible.
4. Be transformative. Recontextualize your aggregated information by performing a unique operation on it. Consider how WolframAlpha transforms its aggregated data by focusing not on the data itself as much as the computational processes the company has developed to reorganize it into new presentations.
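To make the curation advice in point 3 concrete, here is a minimal, purely illustrative sketch of what an algorithmic “curator” pass might look like. Nothing here comes from any particular aggregator’s actual code; the item fields (`headline`, `score`) and the function name are hypothetical, standing in for whatever relevance signals your pipeline produces.

```python
# Illustrative sketch only: a toy "curator" pass over aggregated items.
# It applies the advice above in two steps: collapse near-duplicate
# headlines, then cap the output at a comfortably digestible size.
# The "headline"/"score" fields and the limit are hypothetical stand-ins.

def curate(items, limit=10):
    """Keep the highest-scoring item per normalized headline, capped at `limit`."""
    best = {}
    for item in items:  # each item: dict with "headline" and "score" keys
        # Normalize whitespace and case so trivially different headlines collide
        key = " ".join(item["headline"].lower().split())
        if key not in best or item["score"] > best[key]["score"]:
            best[key] = item
    # Rank survivors by score and return only the top slice
    ranked = sorted(best.values(), key=lambda i: i["score"], reverse=True)
    return ranked[:limit]
```

The point of the sketch is the discipline, not the specifics: whatever signals you score on, the curator’s final act is to discard, so the user sees a short, ranked list rather than everything the crawler found.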
Getting your aggregation’s presentation right is just as important as building and maintaining the nuts and bolts of its data pipelines. Ultimately, to connect with humans you have to deliver a human product: one that saves the user time and transforms existing information into something helpful, and ideally compelling enough to build user trust and generate repeat visits. Our increasingly cloud-based future presents new challenges. With “everything in the cloud,” what is true for computing services is also true for information: as aggregators mine and curate an ever more comprehensive data set, they will only become more indispensable. Respecting the user’s need for a human touch will earn your aggregation site emotional legitimacy and success.
Jeffrey Broesche, editorial director for Mediander, provides direction for a staff of talented editors and designers. Jeffrey has more than 15 years of publishing experience and was managing editor for the Barnes & Noble Classics series. He is an original member of the team that conceived Mediander, and he holds an MA in Philosophy from Columbia University.