Machine learning and artificial intelligence are by no means perfect, and it takes constant human intervention to tweak algorithms.
These applications are essentially based on math problems and may never be 100% accurate, so companies and software developers should think carefully before going down that road.
At a recent conference, TWIMLcon: AI Platforms, panelists spoke about the ethics of artificial intelligence and the need for its human developers to take painstaking steps to ensure these applications work for everybody.
Three panelists gave several recommendations on incorporating ethical standards into the application development process:
Forge a Company Culture That Puts the User First
Writing code and fixing fairness should not fall to any one group or central team; it is the responsibility of the whole company. To do this, companies must have a diverse group of people working on these applications.
The lack of diversity in the sector has been well documented, including in an April 2019 study from researchers at New York University that found:
- Only 18% of authors at leading AI conferences are women, and more than 80% of artificial intelligence professors are men.
- Women are by far the minority of AI researchers at Facebook (15%) and Google (10%).
- Only 2.5% of Google’s workforce is black, while Facebook and Microsoft are both at 4%.
The study’s other findings include:
- The AI sector needs a profound shift in how it addresses the field's diversity crisis.
- The push for women in technology is likely to favor white women.
- A larger workforce pipeline hasn't fixed these issues.
- The use of technology for the classification, detection and prediction of race and gender needs to be re-evaluated.
How to Root Out AI Bias
Despite its workforce shortcomings, Silicon Valley is beginning to pay attention to bias in algorithms.
Accenture, Microsoft, Google and IBM have all announced new programs intended to help data scientists mitigate inherent biases in AI algorithms.
Facebook said in June that it was following suit with similar systems to ensure its computer vision systems work for all genders and ethnicities.
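To make the idea of "mitigating bias" concrete, here is a minimal sketch of one check such programs commonly automate: demographic parity, i.e. comparing a model's positive-prediction rate across demographic groups. This is an illustrative example, not the actual implementation used by any of the vendors named above; the function name and data are hypothetical.

```python
from collections import defaultdict

def demographic_parity_gap(predictions, groups):
    """Return the largest gap in positive-prediction rate between groups.

    predictions: iterable of 0/1 model outputs
    groups: iterable of group labels, aligned with predictions
    """
    totals = defaultdict(int)      # predictions seen per group
    positives = defaultdict(int)   # positive predictions per group
    for pred, group in zip(predictions, groups):
        totals[group] += 1
        positives[group] += pred
    rates = {g: positives[g] / totals[g] for g in totals}
    return max(rates.values()) - min(rates.values())

# A model that approves 75% of group "a" but only 25% of group "b"
preds  = [1, 1, 1, 0, 1, 0, 0, 0]
groups = ["a", "a", "a", "a", "b", "b", "b", "b"]
print(demographic_parity_gap(preds, groups))  # 0.5
```

A gap near zero suggests the model treats the groups similarly on this one metric; in practice, toolkits track several such metrics, since no single number captures fairness.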
Algorithm Watch also maintains a running list of AI ethics and guidelines from dozens of tech companies, government entities, research institutes and other organizations.
Along with those tools and guidelines, the AI Now Institute recommends:
- Tracking and publicizing where AI systems are used and for what purposes.
- Rigorous testing across the lifecycle of AI systems in sensitive domains.
- Expanding research to a wider social analysis of how AI is used in context.
- Ensuring methods for addressing bias and discrimination in AI include conversations about whether certain systems should be designed at all.