The scientific answer to whether AI will become smarter than us


In the Deep Questions podcast episode Defusing AI Panic, Cal Newport gets nerdy and puts on his computer science professor hat to explain how large language models (LLMs) work, dissecting which parts of these AI models would need to become so sophisticated that they turn human-like, or better than us.

As he explained, chatbots like ChatGPT are, at a high level, composed of an LLM and control logic. The LLM by itself is just a smart word spitter trained on large amounts of natural language. It can only predict the next word based on the input phrase it has been fed. It's the control logic that puts the LLM to work, and there's no way the LLM can become sentient without super-smart control logic. While listening to Cal Newport's explanation, I envisioned a system with the control logic as the brain and the LLM as the body it puts into motion.
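To make that split concrete, here is a minimal sketch in Python. The predict_next_word function is a toy stand-in for a real LLM (a hard-coded lookup, not an actual model); everything else, the loop that feeds the output back in and decides when to stop, is the kind of control logic humans write around it.

def predict_next_word(phrase: str) -> str:
    """Toy 'LLM': maps an input phrase to the next word (hypothetical stand-in)."""
    toy_model = {
        "the cat": "sat",
        "the cat sat": "on",
        "the cat sat on": "the",
        "the cat sat on the": "mat",
    }
    return toy_model.get(phrase, "<end>")


def generate(prompt: str, max_words: int = 10) -> str:
    """Control logic: repeatedly ask the 'LLM' for the next word and append it."""
    phrase = prompt
    for _ in range(max_words):
        word = predict_next_word(phrase)
        if word == "<end>":
            break
        phrase = f"{phrase} {word}"
    return phrase


print(generate("the cat"))  # -> "the cat sat on the mat"

The word predictor never decides anything on its own here; the loop, the stopping rule, and the word budget are all ordinary code.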

Cal further explained the layered nature of control logic: Layer 0 is the most basic (plain auto-regression, the stuff used in early versions of ChatGPT), Layer 1 can make web searches to pull in the latest information, and Layer 2 is the state-of-the-art version, a complex and smart actuator that takes actions on your behalf.
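A rough sketch of those three layers, again in Python. The function names here (llm_complete, web_search, book_flight) are placeholders I made up, not real APIs; the point is only that each layer is ordinary human-written code wrapped around the same word predictor.

def llm_complete(prompt: str) -> str:
    """Placeholder for a call to the underlying LLM."""
    return f"[completion for: {prompt}]"


# Layer 0: plain auto-regression -- just hand the prompt to the model.
def layer0(prompt: str) -> str:
    return llm_complete(prompt)


# Layer 1: fetch fresh information first, then let the model answer with it.
def web_search(query: str) -> str:
    return f"[latest results for: {query}]"  # hypothetical search call


def layer1(prompt: str) -> str:
    context = web_search(prompt)
    return llm_complete(f"{context}\n{prompt}")


# Layer 2: an 'actuator' that acts on the model's output in the real world.
def book_flight(plan: str) -> str:
    return f"[booked according to: {plan}]"  # hypothetical booking call


def layer2(prompt: str) -> str:
    plan = llm_complete(f"Plan the steps for: {prompt}")
    return book_flight(plan)

Notice that going from Layer 0 to Layer 2 adds capability around the model, not inside it: the LLM call is the same in every layer.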

The one thing common to all layers is that they are programmed by humans. Unlike an LLM, which may generate content it was never trained on, the control logic will always behave in expected ways, unless its programmers forget to put limits and constraints in place to keep it from producing "excessive" results (e.g. booking a first-class airline ticket when you were expecting an economy seat).
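That kind of limit is itself just code. A small sketch of what such a guardrail might look like, assuming a hypothetical booking step with a price field; the constraint is written by a human, not learned by the model.

MAX_TICKET_PRICE = 500  # limit chosen by a human programmer, in dollars


def book_flight_safely(option: dict) -> str:
    """Reject any booking the Layer-2 actuator proposes above the price limit."""
    if option["price"] > MAX_TICKET_PRICE:
        return f"Rejected: {option['fare_class']} at ${option['price']} exceeds the limit"
    return f"Booked: {option['fare_class']} at ${option['price']}"


print(book_flight_safely({"fare_class": "first", "price": 3200}))   # rejected
print(book_flight_safely({"fare_class": "economy", "price": 420}))  # booked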

So, unless a control program is smart enough to write another control program, it will not have a mind of its own. That kind of artificial general intelligence is still in the realm of the impossible.
