Discussion about this post

Julian:

Great article!

Steve Zekany:

One thing that occurs to me is that even if we had a list of traits or measurable attributes that probabilistically correspond with consciousness, we’d probably still have a Goodhart’s Law problem due to the programmability of AI systems. E.g., SuperAI could just claim, “see... training run #8356 exhibited a lower than 10% chance of consciousness!”

I find this worrying because right now we seem pretty far from even having that list of proxy measures to begin with!

