Tuesday, August 30, 2022

The Bitter Lesson

The Bitter Lesson by Richard Sutton

One thing that should be learned from the bitter lesson is the great power of general purpose methods, of methods that continue to scale with increased computation even as the available computation becomes very great. The two methods that seem to scale arbitrarily in this way are search and learning.

The second general point to be learned from the bitter lesson is that the actual contents of minds are tremendously, irredeemably complex; we should stop trying to find simple ways to think about the contents of minds, such as simple ways to think about space, objects, multiple agents, or symmetries. All these are part of the arbitrary, intrinsically-complex, outside world. They are not what should be built in, as their complexity is endless; instead we should build in only the meta-methods that can find and capture this arbitrary complexity. Essential to these methods is that they can find good approximations, but the search for them should be by our methods, not by us. We want AI agents that can discover like we can, not which contain what we have discovered. Building in our discoveries only makes it harder to see how the discovering process can be done.

Are there any lessons for the social sciences here? E.g., should economists give up on trying to find general laws and instead build systems that find such patterns themselves, perhaps in incomprehensible form? Of course, a machine-found model doesn't need to be incomprehensible... even with an ordinary least squares regression model, the parameters are still "found by the machine". Also, Sutton is speaking to AI researchers, so the "bitter lesson" may not be for everyone. Even if speech recognition based on linguistic expertise wasn't competitive, linguistics is still useful, right?
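
To make that OLS point concrete, here is a minimal sketch (Python, with made-up synthetic data) of what "found by the machine" means even for a fully interpretable model: the slope and intercept are chosen by a least-squares solver rather than specified by the analyst, yet the resulting coefficients are perfectly comprehensible.

import numpy as np

# Synthetic data with a known relationship: y = 2x + 1 plus noise (assumed values for illustration).
rng = np.random.default_rng(0)
x = rng.uniform(0, 10, size=100)
y = 2.0 * x + 1.0 + rng.normal(scale=0.5, size=100)

# Design matrix [x, 1]; the solver "finds" the parameters by minimizing squared error.
X = np.column_stack([x, np.ones_like(x)])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
print("machine-found slope and intercept:", beta)  # should be close to (2.0, 1.0)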
