What I find particularly suspect is the idea that “intelligence” is like CPU speed, in that any sufficiently smart entity can emulate less intelligent beings (like its human creators) no matter how different their mental architecture.
With no way to define intelligence (except just pointing to ourselves), we don’t even know if it’s a quantity that can be maximized. For all we know, human-level intelligence could be a tradeoff. Maybe any entity significantly smarter than a human being would be crippled by existential despair, or spend all its time in Buddha-like contemplation.
Or maybe it would become obsessed with the risk of hyperintelligence, and spend all its time blogging about that.
Maciej Cegłowski on AIs and Much More…