The iPad and the False Distinction Between Consumption and Creation

Image of a colorful Japanese manhole cover on an iPad

Listening yesterday to Leo Laporte’s podcast, This Week in Tech, I was reminded how technology is constantly befuddling those who believe in a clear distinction between content consumption and creation.

Midway through the show (at about 1:09), discussion turned to how Adobe will soon be releasing Photoshop for the iPad, and how Microsoft is expected to do the same for its Office suite. As Laporte and guest Dan Patterson noted, it’s remarkable how this small device that was once pigeonholed as “just a content consumption device” has opened up new creative outlets.

But this achievement is not unique to the iPad or even to other mobile computing devices. Think, for instance, of how turntables, which might seem pure consumption devices, become creative tools in the hands of hip hop DJs.

The important thing here is that technology is not changing the nature of content consumption, but revealing it. The technology simply reminds us that the act of “consuming” content—a bad metaphor really—can in fact be creative.

Thus, it’s unwise for anyone engaged in content creation—whether a journalist, creative writer, or artist—to think of their audience as mere consumers. They are not passive vessels waiting to be filled up with the creator’s content. Rather, they are active collaborators, interpreting, responding to, and mashing up that content—just, in fact, as the creator is doing.

Are there differences between what you do as a content consumer and what you do as a creator? Of course. But these activities are the two ends of a continuum, and there is no clear dividing line between them.

As audience, we have not just the freedom but the responsibility to creatively respond to content. And as creators, we do not absolutely own or control our content—we’re simply leasing it, and owe a debt both to those who contributed to it in the past as well as those who will do so in the future. If we understand this, we will be better consumers and creators of content alike.

(The image of a Japanese manhole cover on an iPad above, courtesy of Tokyo Japan Times, refers to a phenomenon known as drainspotting, or collecting and sharing pictures of colorful manhole covers, popularized by artist/content consumer Remo Camerota.)

Lytro Photography and the Advance of Data Journalism

The Lytro camera

Until this weekend, when I came across Rob Walker’s brief article about it in the December Atlantic, I had figured the new Lytro camera was more cool gimmick than serious game changer. You’ve probably heard about the technology already. Rather than focusing when you take the picture, you let others focus it later, when they view the image, by clicking on the area they want to see clearly. (Confused? See Lytro’s examples.)

This effect is made possible by capturing far more data than a typical camera. One way to achieve it, Walker writes, is to use “hundreds of cameras to capture all the visual information in a scene,” then use a computer to process the results “into a many-layered digital object.” Another is what the Lytro does: squeeze “hundreds of micro lenses into a single device.”
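
One way to picture click-to-refocus is as a focal stack: several exposures of the same scene, each focused at a different depth, with the viewer’s click selecting whichever layer is sharpest at that spot. Here is a minimal sketch of that idea in Python; the toy images and the variance-based sharpness measure are my own illustrative assumptions, not how Lytro’s light-field hardware actually works:

```python
# Toy sketch of click-to-refocus over a focal stack: pick the layer
# that is sharpest (highest local contrast) at the clicked point.
# A simplified stand-in for what a light-field camera makes possible.

def local_sharpness(image, x, y, radius=1):
    """Estimate sharpness near (x, y) as the variance of nearby pixels."""
    window = [
        image[j][i]
        for j in range(max(0, y - radius), min(len(image), y + radius + 1))
        for i in range(max(0, x - radius), min(len(image[0]), x + radius + 1))
    ]
    mean = sum(window) / len(window)
    return sum((p - mean) ** 2 for p in window) / len(window)

def refocus(focal_stack, x, y):
    """Return the layer of the stack that is sharpest at the clicked point."""
    return max(focal_stack, key=lambda layer: local_sharpness(layer, x, y))

# Two 3x3 grayscale "layers": the first has high contrast (is in focus)
# at the upper left, the second at the lower right.
near = [[200, 10, 90], [30, 180, 90], [90, 90, 90]]
far  = [[90, 90, 90], [90, 180, 30], [90, 10, 200]]

print(refocus([near, far], 0, 0) is near)  # clicking upper left selects `near`
print(refocus([near, far], 2, 2) is far)   # clicking lower right selects `far`
```

The point of the sketch is only that once all the focal information is captured, choosing the focus becomes a cheap computation the viewer can perform, rather than a decision the photographer makes once.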

As technological advances go, it’s impressive. But to a photographer, it’s not a big deal: autofocus usually works just fine.

But Walker’s article made me realize who really benefits from the Lytro: not the photographer, but the viewer. The technology takes part of the artistic decision away from the artist and gives it to the audience. Likewise in journalism, the technology may help shift control of content from the producer to the consumer, as UC Berkeley new-media professor Richard Koci Hernandez told Walker:

Imagine, he suggests, a photojournalist covering a presidential speech whose audience includes a clutch of protesters. Using a traditional camera, he says, “I could easily set my controls so that what’s in focus is just the president, with the background blurred. Or I could do the opposite, and focus on the protesters.” A Lytro capture, by contrast, will include both focal points, and many others. Distribute that image, he continues, and “the viewer can choose—I don’t want to sound professorial—but can choose the truth.”

I’m still not convinced that the Lytro technology by itself is, as Walker says, revolutionary. But it is yet another development that hands more power to the consumers of journalism by giving them more data.

Journalism, of course, has always involved data. Even when you tell a story about an event, as in narrative journalism or photojournalism, you’re presenting the viewer with data. But those data are limited and selective, in the service of a particular point of view about the reality you’re describing. If you choose to focus on the president, that’s what your audience sees. With the Lytro, however, you give them access to far more data; now they, not you, choose what to focus on.

If you don’t think data journalism is going to be a big deal, consider the Lytro and the trend it represents. Technology will not stop here. As it evolves, it will enable everyone to capture and distribute increasingly large amounts of data. And in response, journalism’s role will correspondingly shift from telling stories to giving its audience the data they need to tell their own.

Swabbing the Decks of the Titanic: Why You Should Learn Programming

Image of the Titanic sinking

Last week journalism professor Matt Waite wrote a blog post worrying about the typical defeatist reaction of journalism students when faced with a coding challenge, whether in HTML, JavaScript, or another language: “I can’t do this,” they tell him. “This is impossible. I’ll never get this.” When I tweeted a link to the article, I wrote, “Journos: If you fear coding, you fear the future.”

That prompted a response from a practicing trade journalist and former colleague, who asked, “I can see why knowing things like HTML and CSS can be helpful, but do most journos need more than that?”

His question wasn’t one I could answer easily on Twitter, because for me, at least, there’s no clear and simple answer. Does a typical mid-career editor on a print publication today need to learn software programming? From that perspective, it’s hard to come up with a compelling argument for it, though I’ve certainly tried.

Waite’s blog post, however, wasn’t about veteran editors but about the journalists of the future. Those journalists, he says, must be able to “construct, manipulate, and advance digital distribution of content and information.” If they don’t have a positive, can-do attitude towards programming, they won’t succeed.

Does this mean that most journalists will need to be experts in one or more specific programming languages? I don’t think so. My guess is that while the ranks of programmer-journalists like Jonathan Stray, Michelle Minkoff, and Lisa Williams will continue to swell, most journalists won’t become similarly hyphenated. There will always be some degree of specialization in journalism. But in the new-media era, to be a good journalist, to master your craft, you must at the very least learn enough about programming to understand it.

As my former colleague implied, even for veteran journalists there’s a benefit to understanding code like HTML and CSS if they do any work online. There’s nothing new about needing to comprehend the means of your production in order to perfect your message.

As an analog example, consider how easily in the traditional print world you can lose control of your editorial content if you don’t understand at least the basics of what your art director and production manager do. The decisions they make can strongly influence your content, and if you don’t know what to ask for, or how to explain why you’re asking, your content will suffer.

Likewise, in the digital medium, studying what’s under the hood gives you greater flexibility in presenting and distributing your content. If you work with web developers and programmers, you’ll have a better idea of what to ask for, and better chances of getting it. And if it’s just you and WordPress, you’ll be better able to customize the code yourself to get the result you want.

But there’s another reason that journalists of the future should want to get their hands dirty in code. The value of learning how to program is not just in better understanding their jobs, but also in better understanding the world they write about. As Roland Legrand puts it,

“Every year, the digital universe around us becomes deeper and more complex. Companies, governments, organizations and individuals are constantly putting more data online: Text, videos, audio files, animations, statistics, news reports, chatter on social networks. . . . Can professional communicators such as journalists really do their job without learning how the digital world works?”

This trend toward digitization in all human endeavors has given rise to another journalistic specialty: computer-assisted reporting, or data journalism. Though it may never account for the bulk of what most journalists do, the ability to extract, manipulate, and present data will be increasingly valuable. Even today, it’s possible that you’re sitting on a rich lode of data that, if you just knew a little programming, you could help mine.
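
To make that concrete, here is the kind of small script a reporter with a little programming can write. The scenario, the CSV data, and the column names are invented for illustration; the pattern, reading records, aggregating them, and presenting the result, is the everyday core of data journalism:

```python
import csv
import io

# Hypothetical example: a reporter sitting on a spreadsheet of city
# contracts wants to know which vendor received the most money.
# The data and column names here are invented for illustration.
data = io.StringIO(
    "vendor,amount\n"
    "Acme Paving,120000\n"
    "Acme Paving,80000\n"
    "Metro Signs,45000\n"
)

# Extract and manipulate: total the contract amounts per vendor.
totals = {}
for row in csv.DictReader(data):
    totals[row["vendor"]] = totals.get(row["vendor"], 0) + int(row["amount"])

# Present: list the totals largest-first, ready for a story or chart.
for vendor, total in sorted(totals.items(), key=lambda kv: -kv[1]):
    print(f"{vendor}: ${total:,}")
```

A dozen lines like these can surface a lead that would take hours to find by reading the spreadsheet row by row.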

If you are well advanced in your career as a journalist, maybe you don’t need to learn anything about programming. You’re set, right? But that’s probably what the crew thought as they swabbed the decks and polished the brightwork of the Titanic.

Why not play it safe? Your job as a journalist may not require any familiarity with programming today. But one day, perhaps sooner than you think, it will. Why not prepare yourself by finding out more about data journalism, by learning some programming basics at a site like Codecademy, or by joining a cross-disciplinary group like Hacks/Hackers?

As I’ve noted recently on this blog, some journalists are worried that their role will one day be eclipsed by software. If you don’t want to become an algorithm’s slave, you have only one choice. You must become its master.

Connecting the Dots from Steve Jobs to Me

“You can’t connect the dots looking forward; you can only connect them looking backwards.” —Steve Jobs, Stanford Commencement Speech, 2005

Steve Jobs and I were classmates at Reed College—we both matriculated in September 1972. I didn’t know him: he lived across the campus in the Old Dorm Block, while I was in one of the newer and more institutional cross-canyon dorms (later wisely demolished).

Steven Jobs Freshman Picture

If you were to poll Reedies from that era, how many could honestly say they remember Steve? Probably just a few. Even then, it seems, Steve was a private person, not even submitting his photo for the freshman directory.

Yet we must have crossed paths at some point in his short career at Reed (he dropped out after six months, but crashed in spare dorm rooms and audited classes for the next year and a half). It’s a small college—just a thousand students or so then—and a compact campus. Did I share a table with him in the campus dining hall? Or take him on in a game of pool in the Reed rec room? In his 1991 convocation speech at Reed he mentioned taking a Shakespeare class from Professor Svitavsky. So did I. Was it the same one?

I can only speculate. Most likely, I will never be able to connect those particular dots. Yet even so, I feel a powerful connection with Steve.

That connection first clicked into place a decade or so later, assisted by another Reed classmate.

My Freshman Photo

When I entered college, computers were still huge, whirring, alien, and, to me at least, slightly terrifying devices. In my ill-advised chemistry course, when we had the choice of solving a particularly complex series of computations by either going to the computer lab or sticking with our slide rules, I chose the slide rule. I bollixed up the calculations, but avoided the computer. It would be nearly 10 years before I touched one.

By 1982, I was a fifth-year graduate student in English at Cornell University. When the department announced that it had acquired a minicomputer for the use of dissertation writers like myself, I was once again faced with a choice between old and new technologies. Computer or typewriter? This time, despite the ungainly 8-inch floppy disks and the unfriendly green glow of the CRT, I chose the computer.

Though less menacing, the computer was still alien. My involvement with computers was a relationship of convenience for me, I felt, not a long-term affair.

But then one day, Joe, a Reed classmate who had recently entered Cornell’s MFA program, changed my view. He walked into our computer lab (actually a small, dark closet in the upper reaches of Goldwin Smith Hall) hefting a box the size of a carry-on suitcase. As he snapped it open, he explained that it was a personal computer, an Osborne 1.

It was an epochal moment for me. The idea that someone like me could actually own and use a computer had never before occurred to me. But suddenly I realized that I not only could own a computer, but probably should.

Though he had in fact known Steve at Reed, Joe had chosen the Osborne over an Apple II. After some research, I likewise spurned Apple, buying a Kaypro II instead. Not until Steve’s second act at Apple and the introduction of the iMac would I begin my transformation into a full-blown Apple fanboy.

But make no mistake. The only reason Joe and I ended up owning computers, the only way that artsy, literary types like us would consider it advisable, was that Steve Jobs made it possible. It was he who made computers personal.

That may be why nearly 40 years later, as I connect the dots from him through his brilliant products to me, his death seems so personal as well. Like so many others, I will miss him, the friend I never quite met.

Worried That Journalist Robots Will Replace You? Say “I”

Angry Writing Robot by Brittstift/Flickr

They are not going away. After a flurry of attention last year, we hadn’t heard much in the interim about the robots that were going to displace humans as content creators. Then last month, Steve Lohr of the New York Times revived the issue. Although the natural reaction of writers and editors might be fear, I think that’s the wrong one. The robots aren’t going to replace us; they’re going to free us.

Both Lohr’s article and a more recent series by Farhad Manjoo in Slate, “Will Robots Steal Your Job?,” examine the efforts of IT startups to develop software that performs skilled, creative work such as writing. Two of those companies, Narrative Science and Automated Insights, are developing programs that churn through computerized data about sports and other topics and spit out news stories. Though I suspect it’s partly for entertainingly hyperbolic effect, Manjoo claims to be “terrified” that his livelihood as a writer is in peril.
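
At its simplest, this kind of data-to-story generation is a template filled in from a box score. The toy sketch below, with its invented game data and phrasing rules, is my own simplification, not how Narrative Science’s or Automated Insights’ software actually works, but it shows why routine recaps are so automatable:

```python
# Toy sketch of data-to-story generation: turn a box score into a sentence.
# The game data and phrasing rules are invented for illustration.

def recap(game):
    """Generate a one-sentence game recap from a box score."""
    margin = abs(game["home_score"] - game["away_score"])
    if game["home_score"] > game["away_score"]:
        winner, loser = game["home"], game["away"]
        score = f'{game["home_score"]}-{game["away_score"]}'
    else:
        winner, loser = game["away"], game["home"]
        score = f'{game["away_score"]}-{game["home_score"]}'
    # A close game gets a livelier verb -- the kind of rule such
    # systems encode by the thousands.
    verb = "edged" if margin <= 3 else "beat"
    return f"The {winner} {verb} the {loser}, {score}."

game = {"home": "Bears", "away": "Lions", "home_score": 24, "away_score": 23}
print(recap(game))  # The Bears edged the Lions, 24-23.
```

Everything in that sentence follows mechanically from the data, which is exactly why this slice of journalism is the first to be automated, and why the value a human adds must lie elsewhere.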

In her reflections on the topic yesterday, and despite an opening feint at the “scary” job-threatening Internet, freelance writer Tam Harbert took a more optimistic approach than Manjoo. She’s skeptical of claims that software can win Pulitzers or successfully mimic the human element in journalism. Moreover, she sees some benefit in using software to replace those deadwood journalists who “don’t add any value” through their work:

“Writers, for example, who simply gather information, get a few comments from people and then regurgitate it onto the page, should probably start looking for another profession. As James W. Michaels, former editor of Forbes, was known to bellow: That is ‘not reporting, it’s stenography!’”

Though Harbert might not go this far, I’d put it this way: Computer-generated journalism is not terrifying, it’s liberating.

This is especially true in the world of trade journalism, where much of the work entry-level journalists are asked to do could be handled just as well by an algorithm. It doesn’t take long for rewriting new-product press releases to evolve from an informative introduction to an industry into stultifying drudgery. The fact that trade publisher Hanley Wood is one of the companies working with Narrative Science is, to me at least, encouraging.

The way forward for journalists is not commodity content but uniquely personal content. You can already see this direction developing in the field. Though it wasn’t her intent, Stefanie Botelho stated as much last month in a Folio: article on “The New ‘I’ in Journalism.”

Botelho’s aim was to critique journalists who let their subjects be overshadowed by their own self-regard. But “ego preening,” as she put it, is a problem in all walks of life, not just journalism. That doesn’t mean journalism shouldn’t be conversational or personal. Why would we want to avoid the one thing that computers can’t convincingly do? That’s one reason, I’d guess, that Manjoo’s articles about robot job thieves are written so relentlessly in the first person and rely so extensively on himself and his family for examples.

As Harbert argues, what gives the journalist’s work true value is the human, personal perspective. Without the I, there’s no you. Without the I, there’s no conversation, no meaningful interaction. Without the I, journalism is just an exchange of data.