Monday, November 2, 2020

A sobering and scary look at where artificial intelligence is going

 

Bruce Bethke is a well-known American science fiction author and publisher.  He also develops software for supercomputers.  He was recently interviewed by Chris Morton, and had some worrying comments on how he sees artificial intelligence developing from this point onward.  (A tip o' the hat to Vox for providing a link to the interview.)


CM: ... In your ‘day job’ you work as a developer of supercomputer software. Any cool (or scary) ideas of what further supercomputer development could lead to?


BB: Most people misunderstand how supercomputers work and what supercomputers really do. We hit peak CPU speed about 15 years ago. More processing speed equals greater power consumption equals formidable heat dissipation problems, so unless there’s some kind of radical breakthrough in processor technology—quantum processing has been coming “next year” for as long as I’ve been in the industry; I’m not holding my breath—the way we increase computer power now is by building ever more massively parallel processor architectures.
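
(To make that concrete: the sketch below is my own illustration, not anything from Bethke's systems. It shows in miniature what "add more parallelism instead of faster clocks" means: the same job is split across however many cores the machine has, rather than waiting on a single faster processor. Real supercomputer codes do this with MPI or OpenMP, usually in Fortran or C; this is just toy Python.)

    # Toy illustration only: scale by using more workers, not a faster one.
    from multiprocessing import Pool, cpu_count

    def simulate_chunk(chunk_id):
        """Stand-in for one slice of a big numerical job, e.g. a CFD sub-domain."""
        total = 0.0
        for i in range(1, 1_000_000):
            total += 1.0 / (i * i)   # arbitrary arithmetic to keep a core busy
        return total

    if __name__ == "__main__":
        workers = cpu_count()        # more compute = more workers running in parallel
        with Pool(workers) as pool:
            results = pool.map(simulate_chunk, range(workers))
        print(f"{workers} workers, combined result: {sum(results):.6f}")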

The result is that the majority of the work being done on supercomputer systems now is just plain old computational fluid dynamics. Admittedly, we’re talking here about crunching through data sets measured in petabytes or exabytes, but deep down, it’s still just engineering. Most of these programs are written in Fortran. You may think that’s a dead language, but while Fortran 2018 ain’t your grandaddy’s Fortran, it’s still Fortran.

There is interesting work being done in artificial intelligence and machine learning on supercomputers now, but it’s more in line with pattern recognition and data mining. For now, most AI work doesn’t need the kind of brute force a modern supercomputer brings to the table.

Ergo, for me, the most frightening possibilities are those that involve the misuse by unscrupulous politicians or corporations of the kinds of insights and inferences that can be drawn from such extensive data mining. The things that are being done right now, and that will be coming online in the next few years, should scare the Hell out of any civil libertarian.

AIs on their own seem to be best at finding flaws in their developers’ assumptions. I’ve seen AIs tasked with solving problems come up with hilariously unworkable solutions, because their developers made assumptions based on physical realities that did not apply in the virtual world in which the AI worked.


CM: Could you elaborate on your comments about data mining?


BB: Sure. What we’re talking about here is a field generally called “big data”. It’s the science of extracting presumably meaningful information from the enormous amount of data that’s being collected—well, everywhere, all the time. “Big data” tries to take information from disparate sources—structured and unstructured databases, credit bureaus, utility records, “the cloud”, pretty much everything—then mashes it together, finds coherences and correlations, and then tries to turn it into meaningful and actionable intelligence—for who? To do what with it? Those are the questions.
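
(A toy illustration of what "mash it together and find correlations" looks like in code: the people, columns and numbers below are entirely invented, but the pattern of joining unrelated data sources on a common identifier and then hunting for correlations is the essence of it.)

    # Invented data: fuse two unrelated sources on a shared identifier,
    # then look for a correlation to sell as "actionable intelligence".
    import pandas as pd

    card_spend = pd.DataFrame({
        "person_id": [1, 2, 3, 4, 5],
        "monthly_pub_spend": [40.0, 250.0, 10.0, 180.0, 95.0],  # card records
    })
    phone_location = pd.DataFrame({
        "person_id": [1, 2, 3, 4, 5],
        "hours_at_pub": [2, 14, 0, 11, 6],                      # phone pings
    })

    merged = card_spend.merge(phone_location, on="person_id")   # "mash it together"
    corr = merged["monthly_pub_spend"].corr(merged["hours_at_pub"])
    print(f"spend vs. time-at-pub correlation: {corr:.2f}")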

For just a small example: do you really want an AI bot to report to your medical clinic—or worse, to make medical treatment decisions—based on your credit card and cell phone dutifully reporting exactly when and for how long you were in the pub and exactly what you ate and drank? Or how about having it phone the police if you pay for a few pints and then get into the driver’s seat of your car?

That’s coming. As a fellow I met from a government agency whose name or even acronym I am forbidden to say out loud said, “Go ahead and imagine that you have privacy, if it makes you feel better.”


CM: Scary stuff.


There's more at the link.

Scary stuff indeed, if you value your privacy (as I do).  What can any of us do about it?  Probably not a lot . . . unless you stop carrying your cellphone with you at all times, pay in cash rather than with credit or debit cards, and take other measures to give the data-gatherers less data to gather in the first place.  As a matter of principle, I'll be doing that even more in future.

Peter


7 comments:

  1. AIs on their own seem to be best at finding flaws in their developers’ assumptions.

    This one made me laugh out loud.

    35 years ago, I had a physics professor say something about AI and image recognition that is still absolutely true. As best as I can recall, the quote is, "They train AI in image recognition by showing it millions of images, and then you still have the best AI in the world mistake a stool for a dog. No dog ever makes that mistake."

    What he was saying is that the patterns the AI sees and reports can be just as spurious as mistaking the stool for the dog. Yet people will believe them, because any pattern that comes out of a computer is somehow imbued with a credibility far beyond what it deserves.

  2. You can train a neural network to perfectly recognize cats, but that system can't tell you what they eat or what happens if you rub their belly once too often :-)

    We have the same problem with "self-driving" cars: the computer codes are only as good as the training.

    HPE Labs has published a bunch of videos with Dr. Eng Lim Goh, and he is very good at making the technology understandable.

  3. Whenever people speak of "artificial intelligence", I encourage them to replace the wording with "machine learning", which is the actual truth. There's no intelligence there in the slightest.

  4. One of the things I would be interested in is whether anyone has looked for strategies to invalidate privacy-violating technology by seeking ways and means to massively introduce junk data into the valid data being gathered, rendering it useless to anyone. GIGO. Perhaps even using technology creatively to automate the entering of bogus activity alongside whatever is accurately being gathered. Is this nuts? It could be possible to filter out the false data, but if the bad data were cleverly managed to be subtle rather than obvious ... countermeasures.

  5. Just tripped over this excellent example of why AI is merely "machine learning" and not intelligence at all: a TV camera bot kept mistaking the shiny bald head of a referee for the ball it was assigned to auto-track. Cue the hilarity.

    See https://futurism.com/robot-camera-mistakes-soccer-refs-bald-head-ball

    It includes video highlights from the soccer match.

  6. Current gen AI is good for abstruse data management and pattern recognition, but there's a reason the trope is called 'Artificial Stupidity'.

  7. On the assumption that a major use of AI is for companies to make more money, I won't be worried until Google and Amazon start showing me ads for what I want today rather than what I searched for yesterday.

    And the pattern recognition comments are quite true. I still don't see better than 80-90% accuracy on voice recognition, making it usable for notes assuming I can edit and verify them a short time later.

