1 August 2024
[food] ‘One of the most disgusting meals I’ve ever eaten’: AI recipes tested… A look at the unwelcome rise of the AI Cookbook. ‘I have an even better time with Teresa’s The Ultimate Anti-Inflammatory Cookbook for Beginners. Here I am reminded why proofreaders exist. Something in the AI processing for this book took objection to the word “and”, turning it into “&” in every instance. It inadvertently leads to beautiful phrases such as “h&ful cori&der” and “using an immersion blender or even by h&”. We know that AI struggles with hands, but this is ridiculous. The Japanese hotpot I attempt – not obviously anti-inflammatory, like all the other recipes – is one of the most disgusting meals I have ever eaten.’
23 July 2024
[ai] GANksy — A.I. street artist … ‘We trained a StyleGAN2 neural network using the portfolio of a certain street artist to create GANksy, a twisted visual genius whose work reflects our unsettled times.’
17 July 2024
[morris] Errol Morris on whether you should be afraid of generative AI in documentaries… Errol Morris interviewed. ‘Film isn’t reality, no matter how it’s shot. You could follow some strict set of documentary rules…it’s still a film. It’s not reality. I have this problem endlessly with Richard Brody, who writes reviews for The New Yorker, and who is a kind of a documentary purist. I guess the idea is that if you follow certain rules, the veridical nature of what you’re shooting will be guaranteed. But that’s nonsense, total nonsense. Truth, I like to remind people — whether we’re talking about filmmaking, or film journalism, or journalism, whatever — it’s a quest.’
26 March 2024
[tube] TfL’s AI Tube Station experiment is amazing and slightly terrifying … A good look at TfL’s recent use of AI with CCTV at Willesden Green tube station. ‘In total, the system could apparently identify up to 77 different ‘use cases’ – though only eleven were used during the trial. This ranges from significant incidents, like fare evasion, crime and anti-social behaviour, all the way down to more trivial matters, like spilled drinks or even discarded newspapers.’
14 March 2024
[internet] Are We Watching The Internet Die? … A look at how LLMs might lead to a homogenization of online content. ‘As more internet content is created, either partially or entirely through generative AI, the models themselves will find themselves increasingly inbred, training themselves on content written by their own models which are, on some level, permanently locked in 2023, before the advent of a tool that is specifically intended to replace content created by human beings. This is a phenomenon that Jathan Sadowski calls “Habsburg AI,” where “a system that is so heavily trained on the outputs of other generative AIs that it becomes an inbred mutant, likely with exaggerated, grotesque features.” In reality, a Habsburg AI will be one that is increasingly more generic and empty, normalized into a slop of anodyne business-speak as its models are trained on increasingly-identical content.’
5 June 2023
[ai] Superintelligence: The Idea That Eats Smart People … This talk about AI and much more from 2016 by Maciej Cegłowski seems worth revisiting. ‘What I find particularly suspect is the idea that “intelligence” is like CPU speed, in that any sufficiently smart entity can emulate less intelligent beings (like its human creators) no matter how different their mental architecture.
With no way to define intelligence (except just pointing to ourselves), we don’t even know if it’s a quantity that can be maximized. For all we know, human-level intelligence could be a tradeoff. Maybe any entity significantly smarter than a human being would be crippled by existential despair, or spend all its time in Buddha-like contemplation.
Or maybe it would become obsessed with the risk of hyperintelligence, and spend all its time blogging about that.’