The way society is implementing advanced automation today is often broken.
We don’t need to dig very deep to realize this; signals of it are everywhere. For example:
- We are implementing Big Data and AI systems that are flawed and biased in multiple ways, and which often end up amplifying their learned bias in the real world. Numerous books have been written on this topic; there is now a whole genre of books and articles about the unintended consequences of AI and Big Data, kicked off by Cathy O’Neil’s *Weapons of Math Destruction: How Big Data Increases Inequality and Threatens Democracy*.
- In the name of efficiency, we are seeking to automate first tasks and then entire jobs without taking into account what that automation is doing to the people whose jobs are being transformed or replaced, or the broader societal implications of such efforts.
- In many cases, automation is deployed in a manner that takes the responsibility for a task or a job away from a human while formally or informally keeping that human accountable for it. In other words, when something goes wrong, we tend to blame the closest individual human, irrespective of whether they had anything to do with what went wrong. This is the moral crumple zone: using humans to take the fall for something that isn’t their fault, with the effect of protecting the system – analogous to the way the crumple zone in a vehicle absorbs the energy of a crash in order to protect the occupants. One of the key tenets of Mindful Automation is avoiding moral crumple zones.
- We are also implementing automation in a way that erodes the core elements of human motivation: autonomy, mastery, and purpose. Impacts on autonomy, mastery, and purpose, the introduction of moral crumple zones, and skill degradation are rarely considered by Ethical AI or Responsible AI frameworks; even broad-reaching societal consequences are seldom taken into account.
These realizations led me to start Transition Level in 2021, out of a desire to aid a better transition into a more automated society; the background of Transition Level explains this reasoning in more detail.
The cost of getting it wrong is rapidly becoming too high; we need to do better.
Mindful Automation is the initiative that starts to answer the how of that.
How, because there are benefits to be had from automation; this is not about stopping change, or going back to some mythical perfect past that never existed in the first place.
It is about recognizing that we are moving into a more automated society, and asking: how, exactly, should we implement automation better?
It is an initiative that seeks to inform, educate, and influence the development and deployment of automation.
The Mindful Automation manifesto sets the scene and goes both broader and deeper than what the plethora of ethical AI frameworks have outlined. It serves as a starting point for embarking on the inevitably disruptive journey of automation in a more mindful manner, and provides guardrails for the recommended actions at the lower implementation levels.
How to get involved?
The Mindful Automation movement is in its early stages; this means you can have an impact in spreading the word and shaping how we advocate for change. A few things you can do immediately are:
1. Spread the word
The manifesto is an easy place to start sharing the concept with others.
2. Provide feedback
Got any comments or criticism about the manifesto? An idea for better, more concise wording? Other proposals? Let me know via email, or join this discussion page, where you can comment so others can see it, too.
3. Implement it
Are you working with AI or automated decision-making systems, whether in design or implementation? Why not try to take the principles into account and then share how the process went?