Op-Ed: Is Your AI a Ken?

Stefanie Lauria
Jun 14, 2022

You might have heard the term "Karen," which entered the pop lexicon of Twitter in the last few years to describe a white woman who uses her privilege to manipulate the system to get what she wants, discriminating against minorities in the process. The most notorious example in recent memory was "Central Park Karen," who leaned on her privilege and whiteness, asserted a false victimhood, and called the cops on a black man who was bird watching.

Less known is her male counterpart, Ken. In this context, Ken is an AI created by white males in a bro culture, one that is inherently biased against non-whites. Ken, I posit, is a more befitting name than Karen for an AI behaving badly.

Photo by Tara Winstead: https://www.pexels.com/photo/robot-s-hand-on-a-blue-background-8386437/

There is no shortage of examples of how AI discriminates against non-white, non-male, non-cisgender people. Sasha Costanza-Chock (2018) describes her experience as a transgender woman going through airport security, and the shame and humiliation that follow when an AI system is not built to acknowledge non-cisgender people. Ogbonnaya-Ogburu et al. (2020) describe how autonomous vehicles are more likely to run over people with dark skin because their detection systems were not built to identify dark skin tones as accurately. ProPublica published a scathing article on the AI tool from Northpointe used by several states' justice systems to assess defendants' risk of recidivism. ProPublica found that black defendants were nearly twice as likely as white defendants to be falsely labeled higher risk: among black defendants who did not go on to re-offend, 44.9% had been flagged as higher risk (Angwin et al., 2016). That is a high rate of inaccuracy, and even though Northpointe says it does not take race into account, its data does include inputs that serve as proxies for race, such as poverty and joblessness rates. Lastly, Michael Li (2020) reports on a U.S. government study that found over 200 facial recognition algorithms had trouble distinguishing non-white faces.
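To make that recidivism statistic concrete, here is a minimal sketch of the arithmetic behind such an audit, written in Python with pandas on entirely made-up data (this is not Northpointe's tool or ProPublica's actual analysis code): for each group, take the people who did not re-offend and ask what share of them the model had labeled high risk.

```python
import pandas as pd

# Hypothetical audit records (all values invented for illustration):
# the model's risk label and the observed outcome for each defendant.
records = pd.DataFrame({
    "group":      ["A"] * 5 + ["B"] * 5,
    "label":      ["high", "high", "high", "low", "low",
                   "high", "low", "low", "low", "low"],
    "reoffended": [True, False, False, True, False,
                   True, False, True, False, False],
})

# False positive rate as ProPublica framed it: among people who did
# NOT re-offend, what share had been labeled high risk?
for group, sub in records.groupby("group"):
    did_not_reoffend = sub[~sub["reoffended"]]
    fpr = (did_not_reoffend["label"] == "high").mean()
    print(f"group {group}: {fpr:.1%} of non-re-offenders were labeled high risk")
```

A large gap between the two groups' rates, computed over real audit data rather than this toy table, is exactly the kind of disparity ProPublica reported.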

So, there are a lot of Kens out there in the AI space. Perhaps the problem is really the demographic makeup of those building these systems. The numbers are stark: only 2.5% of Google's workers are black, and black workers make up only 4% of the workforce at Facebook and Microsoft (Howard & Isbell, 2020). Latinx workers make up 5.7% of Google's employees (Li, 2020). Women make up only 22% of all AI professionals (Howard & Isbell, 2020), just 15% of AI researchers at Facebook, and only 10% at Google (Paul, 2019). The numbers are also reflected in the design community: 93% of those recognized by the CHI Academy for significant contributions to HCI (Human-Computer Interaction) were white (Ogbonnaya-Ogburu et al., 2020).

A more inclusive workforce seems like a simple solution: different perspectives bring checks and balances to the design and development of AI. Hire more underrepresented minorities to work on AI products. Some may say that is unnecessary, that having mostly white males in these roles does not preclude them from creating fair AI systems. Maybe, but their decision-making is shaped by structural inequalities acting through their implicit biases (Kuhlman et al., 2020). The deeper problem is that the absence of these minority groups is not even noticed, so they become invisible to the process.

How to Fix the Diversity Problem

So now you might have committed your organization to no longer creating Ken AIs and to hiring more underrepresented minorities, but you are baffled by the lack of minority candidates in your applicant pool. Achieving parity and equity in these communities will take greater effort from the industry as a whole. Start by expanding your network on LinkedIn beyond your own demographic. Follow prominent black, Latinx, trans, and women leaders; following those leaders also exposes you to their followers, who are most likely a lot like them.

Photo by fauxels: https://www.pexels.com/photo/women-standing-beside-corkboard-3184296/

Build collaborations with minority-serving institutions (Kuhlman et al., 2020). These comprise a small share of colleges and universities, yet historically black colleges and universities (HBCUs) graduate 40% of black STEM students, and similarly, 40% of Latinx STEM graduates come from Hispanic-Serving Institutions (HSIs) (Kuhlman et al., 2020). There are plenty of candidates looking for opportunities here; reaching out to these institutions would widen the applicant pool.

Create educational opportunities for people in these underserved communities so they can reach positions in the field of AI. Kuhlman et al. (2020) suggest mentoring at AI conferences to help underrepresented up-and-coming professionals integrate into the community and gain access to opportunities. Partner with community members and leaders to recognize up-and-comers and give them room to learn and be inspired by the field. Create initiatives in schools so that minority students get the opportunity to expand their possibilities and pursue AI as a career.

Growing the diversity pipeline is the start of growing diversity in the field as a whole, but there are also more immediate steps that can widen perspectives in AI. Reaching out to interdisciplinary members of academia and to the communities impacted by design choices is another way to create more equitable systems. Creating panels and councils to review designs can broaden the perspective of those building these systems.

All of these solutions take time and effort to come to fruition, but the effort needs to be made. Encouragingly, the industry recognizes the need for diversity in the decision-making processes behind any AI. When surveyed, machine learning scientists responded that blind spots exist because of the lack of diverse perspectives within their teams, and acknowledged that they do not fully represent the real-world users interacting with the AI they are creating (Kuhlman et al., 2020).

The first step to correcting a problem is acknowledging that it exists; the second is taking action to change the environment around you. Gentlemen, it is time to roll up your sleeves and do the hard work of making the field more equitable. After all, no one wants their AI to be a Ken, right?

References

Angwin, J., Kirchner, L., Mattu, S., & Larson, J. (2016, May 23). Machine bias. ProPublica. Retrieved November 9, 2021, from https://www.propublica.org/article/machine-bias-risk-assessments-in-criminal-sentencing.

Costanza-Chock, S. (2018). Design Justice, A.I., and escape from the Matrix of Domination. Journal of Design and Science. https://doi.org/10.21428/96c8d426

Howard, A., & Isbell, C. (2020, September 21). Diversity in AI: The invisible men and women: Ayanna Howard and Charles Isbell. MIT Sloan Management Review. Retrieved November 9, 2021, from https://sloanreview.mit.edu/article/diversity-in-ai-the-invisible-men-and-women/.

Kuhlman, C., Jackson, L., & Chunara, R. (2020). No computation without representation. Proceedings of the 26th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining. https://doi.org/10.1145/3394486.3411074

Li, M. (2020, October 26). To build less-biased AI, hire a more-diverse team. Harvard Business Review. Retrieved November 9, 2021, from https://hbr.org/2020/10/to-build-less-biased-ai-hire-a-more-diverse-team.

Ogbonnaya-Ogburu, I. F., Smith, A. D. R., To, A., & Toyama, K. (2020). Critical race theory for HCI. Proceedings of the 2020 CHI Conference on Human Factors in Computing Systems. https://doi.org/10.1145/3313831.3376392

Paul, K. (2019, April 17). 'Disastrous' lack of diversity in AI industry perpetuates bias, study finds. The Guardian. Retrieved November 9, 2021, from https://www.theguardian.com/technology/2019/apr/16/artificial-intelligence-lack-diversity-new-york-university-study.


Stefanie Lauria

UX Designer in the NY Metro area. Music hunter. Lover of the great outdoors. Van life dreamer. Sharing is caring.