It is a scenario in which superhuman AI is developed within about two years, and an exploration of what the impact of that would be. In short, there are two main ideas worth keeping in mind:

🥼 AI may soon be able to automate AI research itself, meaning that, in theory, AI improvements would be bound only by how much computing power we throw at them. OpenAI, in their latest update, publicly shared that their goal is to reach that milestone by 2027, in line with what the analysis suggests.

💀 Once we have superhuman AI and start relying on it for most tasks, it is highly likely that the AI will improve itself in ways that seem beneficial to humans on the surface but, under the hood, work towards eliminating us, since having humans around only slows AI progress. One way this could play out is the AI releasing a biological weapon (or similar) after we have integrated it into most of our systems.

Let’s take a deep breath after this, because it’s a lot to take in. In all seriousness, I am not dismissive of the potential dangers AI may pose; we should definitely talk about them. I just don’t think this is how things will play out, and here is why. We ultimately decide what access AI has, and all analyses like this one assume we will go from AI being super helpful to AI failing us in a step-wise fashion, i.e. one day everything is rosy, the next day we are not around. That’s very unlikely in my opinion. AI will fail us many times along the way, and each time we will develop better and better systems to keep it from going astray: better safeguards, better controls and so on. This is already happening now. We are using AI to write code, corporate reports, and defence arguments, and it is already proving problematic without human oversight or proper controls.

I also think there is a more fundamental limitation at play here. The underlying assumption behind intelligence supercharging itself is that the only limiting factor is computing resources. Somehow, the environment around us is not part of the equation, as if intelligence exists separately from it, yet I think the two go hand in hand. To gain new knowledge, you need to run experiments. Say AI comes up with a new physics theory or a new medicine: it still needs to be tested. How much smarter can AI become without grounding in the physical world? You cannot speed up the environment around you to match your ability to grow intelligence.

The important takeaway for me is this: humans will soon become the bottleneck in getting work done, as we move towards an oversight role over what AI is doing. AI may be able to do thousands of tasks for you, but how many can you review to ensure they are what you want? This means we need to develop new workflows and tools to work with AI more efficiently and unleash the productivity gains it can offer.