I'm disturbed by a report in the New York Times about a Chicago company, Narrative Science, and its efforts to use computer-based artificial intelligence to replace human journalists and commentators.
The company’s software takes data, like that from sports statistics, company financial reports and housing starts and sales, and turns it into articles. For years, programmers have experimented with software that wrote such articles, typically for sports events, but these efforts had a formulaic, fill-in-the-blank style. They read as if a machine wrote them.
But Narrative Science is based on more than a decade of research, led by two of the company’s founders, Kris Hammond and Larry Birnbaum, co-directors of the Intelligent Information Laboratory at Northwestern University, which holds a stake in the company. And the articles produced by Narrative Science are different.
“I thought it was magic,” says Roger Lee, a general partner of Battery Ventures, which led a $6 million investment in the company earlier this year. “It’s as if a human wrote it.”
. . .
The innovative work at Narrative Science raises the broader issue of whether such applications of artificial intelligence will mainly assist human workers or replace them. Technology is already undermining the economics of traditional journalism. Online advertising, while on the rise, has not offset the decline in print advertising. But will “robot journalists” replace flesh-and-blood journalists in newsrooms?
. . .
Hanley Wood, a trade publisher for the construction industry, began using the program in August to provide monthly reports on more than 350 local housing markets, posted on its site, builderonline.com. The company had long collected the data, but hiring people to write trend articles would have been too costly, says Andrew Reid, president of Hanley Wood’s digital media and market intelligence unit.
Mr. Reid says Hanley Wood worked with Narrative Science for months to fine-tune the software for construction. A former executive at Thomson Reuters, he says he was struck by the high quality of the articles.
“They got over a big linguistic hurdle,” he observes. “The stories are not duplicates by any means.”
He was also impressed by the cost. Hanley Wood pays Narrative Science less than $10 for each article of about 500 words — and the price will very likely decline over time. Even at $10, the cost is far less, by industry estimates, than the average cost per article of local online news ventures like AOL’s Patch or answer sites, like those run by Demand Media.
. . .
Mr. Hammond cited a media maven’s prediction that a computer program might win a Pulitzer Prize in journalism in 20 years — and he begged to differ.
“In five years,” he says, “a computer program will win a Pulitzer Prize — and I’ll be damned if it’s not our technology.”
Should it happen, the prize, of course, would not be awarded to abstract code, but to its human creators.
There's more at the link.
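For what it's worth, the "formulaic, fill-in-the-blank" generators the article contrasts with Narrative Science's work amount to little more than a fixed template with the game statistics dropped into the blanks. Here's a minimal sketch of that older approach (the teams, scores and wording below are invented purely for illustration):

```python
# Hypothetical illustration of a "fill-in-the-blank" sports recap generator:
# a fixed sentence template with the game's statistics slotted in.
# The teams, names and numbers are made up for the example.

TEMPLATE = ("{winner} defeated {loser} {win_score}-{lose_score} on {day}. "
            "{star} led {winner} with {points} points.")

game = {
    "winner": "Springfield", "loser": "Shelbyville",
    "win_score": 78, "lose_score": 64,
    "day": "Saturday", "star": "J. Smith", "points": 27,
}

print(TEMPLATE.format(**game))
# -> Springfield defeated Shelbyville 78-64 on Saturday. J. Smith led Springfield with 27 points.
```

Every story produced this way shares the same skeleton, which is exactly why such articles "read as if a machine wrote them."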
I don't mind the use of AI in this way, of course. I've used early-generation AI software for systems development and other tasks, and I'm comfortable with allowing such artificial assistance to improve one's product. The question I have is how far this can be allowed to go without informing the user, or listener, or reader, that the 'person' at the other end isn't a person at all. For example:
- How will I know whether a specialist is interpreting my X-rays or blood tests, or whether my doctor is simply feeding the test results into a program that spits out an automated, 'generic' report? What about anomalies in the test results? What about second opinions? What if surgery or other invasive treatment is prescribed on the basis of such a report, only for the report to turn out to have been in error? Who's responsible: the doctor who used the software, or the programmers who designed it? How will I ever know what went wrong?
- What's to stop an insurance company from feeding an assessor's information into a program that automatically spits out a form letter telling me how much I'll be paid on my claim, with no possibility of human intervention in the calculation?
- If I'm paying a subscription to a financial services company for a newsletter upon which I depend to make investment decisions, will I be told that the newsletter is, in fact, not written by the well-known analyst whose published opinions I've learned to trust, but by a computer program using his name?
Troubling questions. We'll have to see how this technology develops.
Peter
All I can think of is Max Headroom... sigh
Your comments about your doctor interpreting your X-rays are interesting. Once technology enables an X-ray to be assessed by a machine, logically there is no need for a doctor to be involved until the treatment. A tech working for the insurance company would be the only person involved. A second opinion would be possible, but if they used the same software the same result would be almost certain.
"A second opinion would be possible, but if they used the same software the same result would be almost certain."
The potential exists for that even now. Medical education teaches the same thing to every student, at least in this country. So every doctor will give the same diagnosis given the same symptoms and test results. (Not completely, but doctors may be reluctant to "ad lib" a diagnosis because of the malpractice liability if they're wrong.) That's fine if they're right, but if they're wrong, they're all wrong.
That said, I like the way the National Weather Service does it. They have a number of different software models that all interpret meteorological data differently to make a forecast. When the majority of models agree, they have high confidence in the forecast.
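In software terms, that National Weather Service approach is simple consensus: run several independently designed models over the same data and treat agreement as confidence. A rough sketch of the idea (the "models" below are toy stand-ins, not real meteorological code):

```python
# Rough sketch of confidence-by-consensus: run several independent models on
# the same observations and trust the forecast more when most of them agree.
# The three "models" here are trivial stand-ins invented for illustration.
from collections import Counter

def model_a(obs): return "rain" if obs["humidity"] > 70 else "clear"
def model_b(obs): return "rain" if obs["pressure"] < 1005 else "clear"
def model_c(obs): return "rain" if obs["cloud_cover"] > 60 else "clear"

def forecast(obs, models=(model_a, model_b, model_c)):
    votes = Counter(m(obs) for m in models)
    prediction, agreeing = votes.most_common(1)[0]
    confidence = round(agreeing / len(models), 2)  # share of models that agree
    return prediction, confidence

print(forecast({"humidity": 85, "pressure": 998, "cloud_cover": 40}))
# -> ('rain', 0.67): two of three models agree, so only moderate confidence
```

The same principle bears on the "second opinion" worry above: independently built models can disagree in ways that identical copies of one program never will.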