
But the governance of the most powerful systems, as well as decisions regarding their deployment, must have strong public oversight. We believe people around the world should democratically decide on the bounds and defaults for AI systems. We don't yet know how to design such a mechanism, but we plan to experiment with its development. We continue to think that, within these wide bounds, individual users should have a lot of control over how the AI they use behaves.
Given the risks and difficulties, it's worth considering why we are building this technology at all.
At OpenAI, we have two fundamental reasons. First, we believe it's going to lead to a much better world than what we can imagine today (we are already seeing early examples of this in areas like education, creative work, and personal productivity). The world faces a lot of problems that we will need much more help to solve; this technology can improve our societies, and the creative ability of everyone to use these new tools is certain to astonish us. The economic growth and increase in quality of life will be astonishing.
Second, we believe it would be unintuitively risky and difficult to stop the creation of superintelligence. Because the upside is so tremendous, the cost to build it decreases every year, the number of actors building it is rapidly increasing, and it's inherently part of the technological path we are on, stopping it would require something like a global surveillance regime, and even that isn't guaranteed to work. So we have to get it right.