The real reason we can’t define 'Artificial General Intelligence'

Could it be that the real reason we struggle to define Artificial General Intelligence (AGI) is that any clean, broad definition of general intelligence would exclude us?

Intelligence is hard to pin down, but I like two related definitions:

  1. Flexible, goal-directed behaviour

  2. Efficient skill acquisition

So: learning how to do something without much to go on, and then applying it in new ways as needed.

Do we meet those criteria? Yes. We can certainly acquire new skills that we didn't evolve for. We are comparatively flexible. (The AIs that are super-human, like AlphaZero, tend to be pretty narrow in applicability.) And our learning is comparatively sample-efficient. (For every book that a human child has read, GPT-4 has read 10,000.) We can occasionally generalise out of distribution in ingenious ways.

But. We struggle to transfer knowledge outside the context in which we learned it. We make mistakes even when, in principle, we know better. We struggle to learn new concepts, languages, skills, and habits - we struggle to change our ways, even when we want to. We behave sub-optimally, both wittingly and unwittingly, even when it's pointed out to us. In short, we struggle to learn how to do some things, and then we struggle to apply what we've learned.

So, our intelligence is more general than that of the best AIs in 2024, by a good distance. But 'general intelligence' is a continuum rather than a binary, and our intelligence is only somewhat general.

In practice, the only clean definition in AI is for superintelligence - "better than the best of us at everything". And I think that's probably still a way off.
