RSA 2016 Developing massively intelligent computer systems is going to happen, says Professor Nick Bostrom, and they could be the last invention humans ever make. Finding ways to control these super-brains, however, is still on the to-do list.
Speaking at the RSA 2016 conference, Prof Bostrom, director of the University of Oxford's Future of Humanity Institute in the UK, said artificial intelligence at or above human level is highly likely within our lifetimes.
A survey of academics in the field produced a median guesstimate of 2040 to 2050 for that milestone, and superhuman AI systems likely won't be far behind.
"I think it will swoosh past human intelligence," he said. "Our basic biological machinery in humans doesn't change; it's the same 100 billion neurons housed in a few pounds of cheesy matter. It could well turn out that once you achieve a human level of machine intelligence then it's not much harder to go beyond that to super intelligence."
Once we reach superhuman AI, methods to control this power become critical, lest our invention see humanity for what we truly are and turn on us or crush us – a theme that has been a staple of science fiction since Olaf Stapledon's Last and First Men. But this isn't something scientists had given serious thought to until recently.
Over the past 18 months that has changed, and there is now a parallel effort to create control systems for such an advanced AI. Bostrom said that in his opinion this was as important as building the AI in the first place.
One possibility is developing a reward system for AIs that can be used to exert control, but in his view that simply isn't scalable once an AI reaches a certain level of capability. Presumably a sufficiently advanced machine could figure out a way to either disable or seize control of such a mechanism.
Another idea is to raise the AI to want what we want, within a suitable moral framework. It's a promising area, but very difficult, since the machine might not want to hold the same values as flawed and compromised mortals.
In a more extreme case, we could ensure AIs are developed in isolation, giving us the ability to simply pull the plug on compartmentalized systems and prevent a larger intelligence from emerging. Such a design could limit the AI's potential, though.
"To me the controls are the most important intellectual challenges of our time," he said. "That is one of the most important things to invent."
Despite the risk, Bostrom opined that AI systems are essential if mankind is to move on to greater levels of consciousness. Biological or biomechanical enhancements could weld machine AI hardware to our fleshy brains.
In any case, if we do get high-level cyber-minds walking among us, this hack likes William Gibson's approach in Neuromancer – wiring an electromagnetic shotgun to every AI's forehead, just in case. ®