Four years ago I tried to catch up with deep learning, and this summer I aimed to catch up again. Who would've thought 2017 would be ancient history?
I watched the lectures in the latest MIT course, played with GPT-3 and Codex, and read the new Stanford manifesto on what they call foundation models: models trained to perform well across a wide range of tasks instead of toward a single goal. We've seen machine learning solve protein folding, detect cancer from x-rays better than radiologists, and effectively solve many of the traditional AI problems (language translation, voice and face recognition, game playing, etc.). Watching Codex generate computer code from plain English is a game changer. It's far from perfect, but this technology is in its infancy.
From what I can tell, the main technological advances focus on learning when we don't have labeled data, such as new techniques for transferring knowledge to new domains and using generative models to provide more data for training ML algorithms.
The trend that worries us all is that deep learning algorithms in the long run seem to do better if we limit or eliminate previous human knowledge from the equation. The game-playing algorithms now train from just the rules alone (or even just the outcomes). We do use some knowledge (words come in a linear order, faces have hierarchical features), but not much more than that. Human expertise can help as we start solving a problem, even just to know what good solutions look like, but then it often gets in the way.
When we no longer use a skill we tend to lose it, like my ability to navigate from a good map. If we eliminate expertise, we may find it very difficult to get it back.
There's a more personal issue: people spend their entire careers building their expertise in some area, and that expertise is often a source of pride and a source of income. If someone comes along and tells you that expertise is no longer needed, or worse, that it's irrelevant or gets in the way, you might feel protective and, under the guise of that expertise, tear down the learning algorithm. That attitude will just make us more irrelevant; better to use our expertise to guide the machine learning models to overcome their limitations.
You can't stop progress, but you can shape it.
Nice honest post
Lance, I always enjoy your crisp narrative.
Somewhat reminiscent, on another level, of "The End of History" by Francis Fukuyama.
I wonder if there are significant examples of AI or machine learning solving previously unsolved mathematical or TCS problems. For example, has anyone ever collapsed or separated complexity classes using AI/ML? In my view, ML tends to work best in situations where the problem is imprecisely defined, such as face recognition. If this is indeed so, then human knowledge and expertise in certain areas will always remain relevant.