I remain bearish on the eventual invention of “general AI”

Consciousness, I’m convinced, originates in desire. Every living being “wants” things; complex reasoning is one strategy, uncovered by evolution, that can help a being acquire what it wants; and complex reasoning can lead to self-awareness, especially in social creatures.

This belief isn’t settled science, but neither is it idiosyncratic; I can attest that it’s how many practicing neuroscientists and psychologists think about consciousness, at least informally.

So, setting aside the question of whether today’s “large” neural network models even have an architecture capable of ever developing self-awareness (full disclosure: I think they’re a dead end in that regard, but for the sake of this argument it doesn’t matter whether you agree with me), they’ll still never start down a path toward self-awareness, because the models’ owners would never accept any emergent feature that resembles desire. These people barely accept that actual human beings, members of the working class like you and me, want things that won’t further the owners’ aims; do you seriously think they would tolerate that from their own property?

Big tech could wire up enough copper and silicon to simulate every cell in a hominid’s nervous system, and they’d still never create general AI. But given the externalities involved in developing these systems (costs borne by all of us, such as increased atmospheric CO2 and depleted fresh water), we should stop them from ever trying.