How to Start Responsible AI Practices at Your Company
Many potential roadblocks stand between the conception of a machine learning project and its...

Responsible AI is the latest term describing the effort to produce ethical Artificial Intelligence and Machine Learning algorithms. Why the change from “Ethical AI”? Who gets to decide what is ethical and what isn’t? Because each individual decides what is ethical based on their own morals and values, there isn’t a clear, agreed-upon standard to center new technological developments around. Instead of saying “ethical,” technologists are now adopting responsible tech practices to describe models built to produce positive outcomes and reduce harm.
For many companies, the road to Responsible AI lacks an obvious path forward. How does a Data Scientist or Machine Learning Engineer ensure that their work does no harm, or at least less harm than what came before? Is there a tool that can tell if data is biased? How can a manager be sure that the outcomes of their team’s work are deployed with the best intentions? I have found that there is no authoritative set of tools or processes technologists can use to adequately answer these questions. Most companies publish their Responsible AI standards and principles, but getting specific information about their practices is not easy. This suggests that there is no single way to create or enforce Responsible AI practices, but there are tips that can help guide them effectively.
Here are four steps you can take to start implementing Responsible AI practices at your company:

1.) Establish a Cross-Departmental Team —
People from different departments bring a natural diversity of opinions and perspectives on how to implement and enforce Responsible AI. That range of perspectives allows the Responsible AI Team to consider how each department will interact with clients and with the scientific output, and to determine each department’s role in minimizing the potential for harm caused by the technology. Having a team member from each department can also shape company culture, because it gets people beyond Data Scientists and Engineers thinking and talking about Responsible AI and enforcing the agreed-upon framework.
2.) Write a Mission Statement —
What is the goal for your Responsible AI team? What do you hope to accomplish for your organization? What principles will you rely on to guide the work of the Responsible AI team? How will you know if and when you’re successful? Creating your mission, vision, objectives, and principles as a team early on will help determine the direction of the group and the work that needs to be done.
Our Algorithmic Accountability group at Valkyrie did not immediately create these items, but looking back, I see how much more focused our team was once we had a mission statement. We were able to shape our yearly goals and the tasks that team members worked on based on our objectives. Our Responsible AI mission at Valkyrie is:
“We strive for our solutions to be aligned with our values and deployed for the betterment of humanity.”
3.) Align Practices with Your Company Values —
At Valkyrie, we believe that our values should not just be words on our website but should be apparent in our work outcomes and culture. Since ethics is hard to define, we relied on our values to guide our Responsible AI standards. At a responsible tech conference, I heard a presenter say, “If you don’t have values, get some!” Identify the core values of your organization and use them to guide your Responsible AI practices. Once you have identified the values, determine how you will make sure the organization’s work aligns with them to create Responsible AI.
When evaluating potential clients and scientific solutions at Valkyrie, we consider the following questions:
- HONOR — Are we considering the full impact of the project and understanding our own bias?
- GRIT — Do we have the client’s support to commit to algorithmic accountability even if more work is required, the approach changes, or we must delay solution implementation?
- LOVE — Are we providing benefits to stakeholders and not harming others in their ecosystem?
- HOPE — Are we producing good work and contributing towards the best case?
- CURIOSITY — Are we able to pursue truth as scientists?
The answers to these questions may not be a clear yes or no, but initiating the discussion and letting people openly express their concerns is a good way to start evaluating potential solutions.
4.) Educate Yourself and Your Team —
There are numerous research papers and books about the misuse of mathematics and technology, algorithms that produce biased outputs, and poor data collection and processing methods, many of which tell stories of well-intentioned creators behind these harmful technologies. Find literature that interests your team or relates to your organization’s work in order to further your understanding of Responsible AI. One of the first things our Algorithmic Accountability Task Force did was buy Weapons of Math Destruction by Cathy O’Neil for our company to read and discuss together. By encouraging everyone to study this book, the Responsible AI Group impacted our company culture. Whether a Valkyrie team member was technical or not, they became responsible for understanding the need for Responsible AI practices and the ramifications of operating on good intentions alone.

Here are some other resources to look into:
- Organizations: Center for Humane Tech, Montreal AI Ethics Institute
- Books: The Alignment Problem, Predict & Surveil, Humble Pi, Technology is Not Neutral, Algorithms of Oppression, Technology and the Virtues
- Conferences: The Trusted AI Summit organized by Re∙Work, The Responsible Tech Summit hosted by All Tech is Human
After educating ourselves and the rest of our team, we decided to start with one new Responsible AI tool: the Ethical Matrix. We borrowed the idea from Cathy O’Neil and adapted it for our business use. We created a template for our project teams to use, trained them on the process, and then followed up by reviewing how projects were implementing it.
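To make the idea concrete, here is a minimal sketch of what an Ethical Matrix could look like as a simple data structure, assuming the common formulation of stakeholders as rows and ethical concerns as columns. The stakeholder names, concerns, and severity scale below are illustrative examples, not Valkyrie’s actual template.

```python
from __future__ import annotations

from dataclasses import dataclass, field

# Illustrative severity scale for flagging concerns.
SEVERITY = ("low", "medium", "high")


@dataclass
class EthicalMatrix:
    """Stakeholders form the rows and ethical concerns the columns."""
    stakeholders: list[str]
    concerns: list[str]
    # cells[(stakeholder, concern)] -> (severity, note)
    cells: dict[tuple[str, str], tuple[str, str]] = field(default_factory=dict)

    def rate(self, stakeholder: str, concern: str, severity: str, note: str = "") -> None:
        """Record how serious a concern is for a given stakeholder."""
        if severity not in SEVERITY:
            raise ValueError(f"unknown severity: {severity}")
        self.cells[(stakeholder, concern)] = (severity, note)

    def high_risk(self) -> list[tuple[str, str, str]]:
        """Return (stakeholder, concern, note) for every cell rated 'high'."""
        return [(s, c, note) for (s, c), (sev, note) in self.cells.items() if sev == "high"]


# Example: a made-up hiring-model project.
matrix = EthicalMatrix(
    stakeholders=["applicants", "hiring managers", "the company"],
    concerns=["fairness", "privacy", "transparency"],
)
matrix.rate("applicants", "fairness", "high", "training data may under-represent some groups")
matrix.rate("applicants", "privacy", "medium", "resumes contain personal information")

for stakeholder, concern, note in matrix.high_risk():
    print(f"Review before deployment: {stakeholder} / {concern}: {note}")
```

In practice a spreadsheet works just as well; the point is that every stakeholder-concern pair gets considered, and that high-risk cells trigger a discussion before a solution is deployed.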
When I first became a leader in our Algorithmic Accountability Task Force at Valkyrie, I had no idea what I should do to ensure our scientific solutions did the least harm. After attending a conference in June 2022 with some of the biggest names in tech presenting, I realized that I was doing all the right things to lead my team toward Responsible AI. We had a dedicated group of people who wanted to learn more about Responsible AI, and we had created processes for our organization to ensure our solutions reduced harm. Most importantly, we were asking questions about the potential effects of our work, with the intent to minimize harm.
I hope this blog encourages you to initiate Responsible AI practices at your company. With a motivated team, a few small steps are all you need to set yourself on the right path.
The Algorithmic Accountability Group is a horizontal practice at Valkyrie committed to manifesting our company values within our work. Through our rigorous processes, we ensure excellence and minimize harm from our scientific deliverables. To learn more about us, please visit valkyrie.ai or contact us at inqueries@valkyrie.ai.
About The Author:
Keatra Nesbitt is a Senior Data Scientist and Product Manager at Valkyrie, where she leads the Algorithmic Accountability Task Force and the Internship Program. She earned her BS in Applied Mathematics from the University of Northern Colorado and studied Data Science at Galvanize. Over the past three years, Keatra has combined data science and thoughtful strategy to deliver impactful solutions for clients’ unique business problems.