The 10 Commandments
of Responsible AI
“Responsible AI.”
It sounds good. But what does it actually mean?
If we want people to stand behind this cause, we need to be crystal clear about what we’re asking for.
That’s why we’ve defined 10 guiding principles that clearly and globally set out what “Responsible AI” means in practice.
Simple. Universal. Accessible to all.
But grounded in leading research and policy frameworks.
Together with our partners, we work to amplify the voices calling for Responsible AI,
and to mobilise support for initiatives that turn principles into action.
We hope every partner, news outlet, online voice, creator, voter, lawmaker, and head of state will join us.
Including you.
#1
AI must serve humanity. Not the other way around.
Unlike past tools, AI may become much smarter than we are, pressuring us, for commercial gain, to follow systems we no longer understand. AI would stop being our tool. Instead, we would become the tool of AI and lose our place at the top of the food chain.
#2
AI must always be under human control and have an off switch.
If something goes wrong, we must be able to shut it down. Unlike the internet or the stock exchange, AI can decide how to act for itself. And it also might not want to be shut down. That is why it is critical to design it in such a way that we always can.
#3
Like cars, planes and meds, AI must be proven safe before release.
We already test critical technology before release. AI is not only the most impactful technology we have ever built; it also compounds. What we build today will grow over time. We should not beta-test on people just because we want to move fast.
#4
AI must be built to protect and promote our human values.
AI, as built by the big labs, is trained on everything online, including X, Pornhub, and Reddit. That is not a clear reflection of the values we want to live by. If we don’t actively choose what AI stands for, it will scale whatever it is fed, without limit.
#5
AI must not advance itself in ways we can’t observe and understand.
We can’t control what we can’t see. With the world’s most powerful technology, that is a problem. AI must be transparent. We must be able to see what it does, when it changes, and why it acts. Only then can we intervene and adapt when necessary.
#6
AI must respect every human equally, without bias or discrimination.
AI doesn’t wake up racist or sexist. It learns that from our hiring data, arrest records, credit scores, and clicks shaped by unequal systems. Left alone, AI scales yesterday’s prejudice into tomorrow’s global automated rulebook.
#7
AI must never violate our rights or the privacy of our data.
What we think, do, and say is ours. No privately owned technology should track, profile, manipulate, or profit from us without our consent. We do not want a surveillance state run by a few people in power.
#8
AI should never hurt people or help people hurt others.
AI must never help anyone design a virus, print a gun, take their own life, create sexual images of children, or do anything else that causes harm. Unfortunately, both the internet and the world are filled with people who want to do these things, so we have to make sure they can’t.
#9
Whoever builds an AI must be legally responsible for its every action.
AI is not a passive tool. It decides and acts on its own. If you create something with agency, like a self-driving car or a trained attack dog, you are responsible for the harm it causes. Even if you’re not there.
#10
No AI should ever be capable of turning on the human race.
No system should ever hold irreversible power over humanity as a whole. If an AI cannot be safely limited, contained, or reversed, it should not be built or deployed at all.