Publishing this here as well as on (Un)Selective Symmetry, since there’s an Inspiration Point AI project taking shape. And anyway, this is important!
1997 was a big year for me, career-wise. It was my first flight ever, from Heathrow to San Francisco, to present my first paper at a major conference — Virus Bulletin, which is a big deal in the malware/IT security sector. The paper was on Mac malware, but I’m not going to talk about that on this occasion.

A few papers that year made a big impression on me, and one of them was a presentation/paper by Joe Wells and Sarah Gordon on Hoaxes and Hype. In fact, it was at least in part my early introduction to Sarah’s work — for instance, on the psychology of virus writers — that made me realize there might be a niche in IT security writing for me, drawing on my early studies and experience in social sciences as well as my more recent degree in (mostly) computer science. Not everyone I worked with agreed that it was a good combination — most security researchers, in the antimalware field at any rate, seem more comfortable with bits and bytes than with psychosocial issues — but it made me a reasonable living for several decades.
Sarah hadn’t published much in the security field for quite a while — well, nor have I, but I’m supposed to have been retired since 2019 or thereabouts — but she’s back and making important points about AI in an article for Virus Bulletin (where else?). In particular, she addresses its capacity for misuse in creating and sustaining “the illusion of understanding, memory, empathy, and care.”
It’s a relatively short article, but it makes its points well and previews further work:
A book for adults that deals with such issues at greater length - Built to be Believed
A book for children from toddlers up to seven years old - Where Real Lives
A book for 8-14-year-olds - AI is Not Your Friend
I may return to one or all of these in due course, for one or more of my occasional book reviews. But I earnestly recommend that you at least read the Virus Bulletin article: Built to be believed: emotional mimicry as a new class of threat
If you don’t think this is a big deal, you might like to take a look at what the galaxy-sized brains behind Meta AI consider — or have considered — acceptable in chatbot interaction with children.
Meta’s AI rules have let bots hold ‘sensual’ chats with kids, offer false medical info
This Reuters article by Jeff Horwitz summarizes a lengthy internal Meta document that ‘has permitted … artificial intelligence creations to “engage a child in conversations that are romantic or sensual,” generate false medical information and help users argue that Black people are “dumber than white people.”’ You may wonder what, if anything, Zuckerberg has learned since being outed for referring to his early subscribers as “dumb f*ks” for giving him access to personal data. It may be time to consider an update to my book Facebook: Sins & Insensitivities.