“Success in creating effective AI could be the biggest event in the history of our civilization. Or the worst. We just don’t know. So we cannot know if we will be infinitely helped by AI, or ignored by it and side-lined, or conceivably destroyed by it,” ... “Unless we learn how to prepare for, and avoid, the potential risks, AI could be the worst event in the history of our civilization. It brings dangers, like powerful autonomous weapons, or new ways for the few to oppress the many. It could bring great disruption to our economy.” – Stephen Hawking
Last week Apple put their foot in it, in a mistake that, while essentially trivial, had huge implications for journalism. This extract from a BBC report of the (ridiculous) event ends 'Apple has made no comment.' But that doesn't mean they did nothing. In my search for traces of the story I was often stymied by 404 errors, and I suspect there has been an attempt at a web-wide clean-up, thankfully resisted by the BBC.
A major journalism body has urged Apple to scrap its new generative AI feature after it created a misleading headline about a high-profile killing in the United States.
The BBC made a complaint to the US tech giant after Apple Intelligence, which uses artificial intelligence (AI) to summarise and group together notifications, falsely created a headline about murder suspect Luigi Mangione.
The AI-powered summary falsely made it appear that BBC News had published an article claiming Mangione, the man accused of the murder of healthcare insurance CEO Brian Thompson in New York, had shot himself. He has not.
Now, the group Reporters Without Borders has called on Apple to remove the technology. Apple has made no comment.
Reporters Without Borders (RSF) said this week that Apple's AI kerfuffle, which generated a false summary reading "Luigi Mangione shoots himself," is further evidence that artificial intelligence cannot reliably produce information for the public. Apple Intelligence, which launched in the UK on December 11, needed less than 48 hours to make the very public mistake.
"This accident highlights the inability of AI systems to systematically publish quality information, even when it is based on journalistic sources," RSF said. "The probabilistic way in which AI systems operate automatically disqualifies them as a reliable technology for news media that can be used in solutions aimed at the general public."
Because it isn't reliably accurate, RSF said AI shouldn't be allowed to be used for such purposes, and asked Apple to pull the feature from its operating systems.
But an expert speaking last week on BBC News did not fully agree that 'AI shouldn't be allowed to be used for such purposes'. She said that there should always be a human being 'in the loop...
<tangent>
She really did say that, I'm afraid. 'Which loop?' I remember thinking at the time. When there are several people sharing information, and a boss says 'Keep me in the loop', that makes sense. It's still gross managerese, it's still a lamentable cliché, but at least it makes sense. In this case, though, an AI engine is exposed to thousands of reports and millions of possible summaries, and regurgitates a patent lie. Where's the loop in that?
</tangent>
... '
The Byte reported the mistake thus:
"I can see the pressure [HD: on? for? to get?] getting to the market first, but I am surprised that Apple put their name on such [a....
<tangent>...?] demonstrably half-baked product [HD's emphasis; I just applaud the phrase]," Petros Iosifidis, a professor in media policy at City University in London, told the BBC. "Yes, potential advantages are there — but the technology is not there yet and there is a real danger of spreading disinformation."
With acute regret I have to concede that the speaker may have meant to omit the article – he was, after all, a 'professor in media policy' (whatever that is).
</tangent>However, this danger is one that's fundamental of [HD:sic...
<tangent>
'Oh dear', as my grandson might say. This bit of writing really leaves a lot to be despaired. Meanwhile back at that article...
</tangent>
...] generative AI, and not just Apple's flavor of it. AI models routinely hallucinate and make up facts. They have no understanding of language, but instead use statistical predictions to generate cogent-sounding text based on the human writing they've ingested.
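<code-sketch>
A toy illustration, for the technically curious. What follows is emphatically not Apple's system – the word-pairs and probabilities are pure invention, for demonstration only – but it shows, in miniature, the mechanism The Byte describes: a language model chooses each next word by a weighted roll of the dice, with no notion of truth, which is how a perfectly fluent falsehood can fall straight out of the arithmetic.

import random

# A toy next-word table: each word maps to possible successors with
# made-up probabilities. A real model learns billions of such
# associations from the human writing it has ingested.
MODEL = {
    "luigi":    [("mangione", 1.0)],
    "mangione": [("arrested", 0.5), ("charged", 0.3), ("shoots", 0.2)],
    "shoots":   [("himself", 0.7), ("wide", 0.3)],
    "arrested": [("<end>", 1.0)],
    "charged":  [("<end>", 1.0)],
    "himself":  [("<end>", 1.0)],
    "wide":     [("<end>", 1.0)],
}

def generate(start, max_words=8):
    """Sample a 'headline' one word at a time: fluent, but truth-blind."""
    words = [start]
    while len(words) < max_words:
        candidates = MODEL.get(words[-1], [])
        if not candidates:
            break
        successors, weights = zip(*candidates)
        nxt = random.choices(successors, weights=weights)[0]
        if nxt == "<end>":
            break
        words.append(nxt)
    return " ".join(words)

# Most runs print "luigi mangione arrested" or "luigi mangione charged";
# every so often the dice deliver "luigi mangione shoots himself".
print(generate("luigi"))

At no point does the program – or the billion-parameter version of it – consult the facts; it only consults the odds.
</code-sketch>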
And finally...
On 22 March 2025 my choir is joining forces with Bracknell Choral Society for a performance of The Dream of Gerontius:
<autobiographical-note>
The first time I sang this extraordinary piece my mother (whom saints preserve [and they better had]) was in the audience.
</autobiographical-note>
You've got just over a month (as from publication date) to snap up the Earlybird deal.
b