
Planning for AGI and beyond


There are several things we think are important to do now to prepare for AGI.

First, as we create successively more powerful systems, we want to deploy them and gain experience with operating them in the real world. We believe this is the best way to carefully steward AGI into existence: a gradual transition to a world with AGI is better than a sudden one. We expect powerful AI to make the rate of progress in the world much faster, and we think it's better to adjust to this incrementally.

A gradual transition gives people, policymakers, and institutions time to understand what's happening, personally experience the benefits and downsides of these systems, adapt our economy, and put regulation in place. It also allows society and AI to co-evolve, and for people collectively to figure out what they want while the stakes are relatively low.

We currently believe the best way to successfully navigate AI deployment challenges is with a tight feedback loop of rapid learning and careful iteration. Society will face major questions about what AI systems are allowed to do, how to combat bias, how to deal with job displacement, and more. The optimal decisions will depend on the path the technology takes, and like any new field, most expert predictions have been wrong so far. This makes planning in a vacuum very difficult.[^planning]

Generally speaking, we think more usage of AI in the world will lead to good, and we want to promote it (by putting models in our API, open-sourcing them, etc.). We believe that democratized access will also lead to more and better research, decentralized power, more benefits, and a broader set of people contributing new ideas.

As our systems get closer to AGI, we are becoming increasingly cautious with the creation and deployment of our models. Our decisions will require much more caution than society usually applies to new technologies, and more caution than many users would like. Some people in the AI field think the risks of AGI (and successor systems) are fictitious; we would be delighted if they turn out to be right, but we are going to operate as if these risks are existential.

At some point, the balance between the upsides and downsides of deployments (such as empowering malicious actors, creating social and economic disruptions, and accelerating an unsafe race) could shift, in which case we would significantly change our plans around continuous deployment.
