Can AI eliminate biases at the workplace?

Dr Elijah Wee, Assistant Professor of Management at the University of Washington, examines the limits and possibilities of AI in promoting inclusion.

By Dr Elijah Wee | 23 Jun 2023

Image: Harvard Business Review

In my work on organisational behaviour, I study how individuals with lower power or status may overcome the numerous obstacles in their attempts to challenge the status quo, improve their work relationships, and contribute meaningfully to the workplace. My research identifies unique strategies for these individuals to make their voices heard and their opinions count. 

Despite important strides made in diversity and inclusion, many challenges remain. Token hires for diversity have proven ineffectual, and appointing DEI specialists may not deliver the hoped-for miracle of dispelling intercultural tensions or unconscious biases.

Within the last three to four years, there has been surging interest in the use of AI to combat unconscious biases within hiring practices and widen the gates for more diverse talent to enter the workforce. AI is a potential catalyst for fairer workplaces, yet it is also vital to be mindful of the limits and risks of its usage.

4 ways that AI can eliminate biases

Levelling the playing field for hiring

Firstly, AI can help with making fairer hiring decisions. Already, early-stage interviews are often conducted with AI-powered assessment centres, where candidates are tested on problem-solving skills. In filtering CVs, AI is capable of selecting candidates with less interference from human biases.

AI is also excellent at pattern recognition. As such, it is able to scan job listings and applications and clean up wording that creates a narrow self-selection bias. It can also analyse trends among applicants to detect if they skew in a certain direction.
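As a concrete illustration of that last point, one simple way to detect whether applicant outcomes skew in a certain direction is the "four-fifths rule" used in US adverse-impact analysis: compare selection rates across groups and flag any group selected at less than 80% of the highest group's rate. Here is a minimal sketch, assuming toy data; the group labels and counts are purely illustrative:

```python
from collections import Counter

def selection_rates(applicants):
    """applicants: list of (group, selected) pairs; returns rate per group."""
    totals, hires = Counter(), Counter()
    for group, selected in applicants:
        totals[group] += 1
        if selected:
            hires[group] += 1
    return {g: hires[g] / totals[g] for g in totals}

def four_fifths_flags(rates, threshold=0.8):
    """Flag groups whose selection rate falls below threshold * top rate."""
    top = max(rates.values())
    return [g for g, r in rates.items() if r < threshold * top]

# Illustrative data: (group, was_shortlisted)
data = [("A", True)] * 30 + [("A", False)] * 20 + \
       [("B", True)] * 10 + [("B", False)] * 40

rates = selection_rates(data)    # A: 0.6, B: 0.2
print(four_fifths_flags(rates))  # ['B'], since 0.2 is below 80% of 0.6
```

A real screening system would of course need far more care, such as statistical significance tests and legal review, but the underlying arithmetic of a skew check is this simple.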

Weeding out undesirable language

Given that AI is suitable for language processing, internal communication may be analysed to capture problem areas in our interactions. In so doing, discriminatory or demeaning expressions used without our full awareness may be flagged for further action.

Triangulating this data with interviews and psychometric surveys is a good way to take the temperature of an organisation’s DEI practices and check whether there’s any form of potential hypocrisy or misalignment between aspiration and reality. One good thing about such data is that it can pick up on general trends without pinpointing any one offender.
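That property, surfacing general trends without pinpointing any one offender, can be sketched in a few lines. The watchlist below is a hypothetical stand-in; a real deployment would rely on a vetted, context-aware language model rather than exact word matching:

```python
import re
from collections import Counter

# Hypothetical watchlist of terms worth reviewing; illustrative only.
WATCHLIST = {"aggressive", "bossy", "abrasive"}

def aggregate_flags(messages):
    """Count watchlist terms across all messages, returning only
    organisation-level totals: no message or sender is identified."""
    counts = Counter()
    for text in messages:
        for word in re.findall(r"[a-z']+", text.lower()):
            if word in WATCHLIST:
                counts[word] += 1
    return dict(counts)

messages = [
    "She was a bit bossy in the meeting.",
    "Great, thorough analysis, thanks!",
    "His tone came across as aggressive.",
]
print(aggregate_flags(messages))  # {'bossy': 1, 'aggressive': 1}
```

Because only aggregate counts leave the function, the report can flag a cultural trend for HR to address without turning the exercise into surveillance of individuals.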

Assessing performance holistically

Companies have started using AI to aid in performance management. Because AI doesn’t have personal likes or dislikes, it won’t be swayed by favouritism or other factors irrelevant to performance.

Rather than base performance reviews or salary and promotion decisions on vague impressions and arbitrary judgements, AI-led appraisals are grounded in data. This provides a clearer view of how well an employee is achieving objectives and contributing to the organisation, as well as generates reliable feedback and developmental recommendations for professional growth.

A data-based approach gives the employee greater assurance in the process, which makes them more open to feedback. In terms of compensation and benefits, AI is also useful for equitably calculating salary increments, circumventing pay discrimination, or distributing benefits among employees at varying life stages.

Measuring downstream effects

An indirect way of promoting inclusion with AI is to measure or simulate the downstream effects of diversity initiatives, such as brand perception or company performance. This would incentivise and encourage organisations to become more invested in promoting diversity, rather than participating half-heartedly for the sake of creating favourable optics.


3 caveats on the limits of AI

Biases in, biases out

The most obvious issue with AI is its utter dependency on the bank of data it is fed. Programmers are likely imprinting their own worldviews onto AI. If the data is biased, then it's garbage in, garbage out, and we end up with systems that have biases built in.
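The garbage-in, garbage-out problem can be made concrete with a toy model. Suppose past hiring decisions correlated with a proxy feature such as postcode rather than merit; a naive model "trained" on those decisions will faithfully reproduce the historical skew. The data and the postcode labels below are invented for illustration:

```python
from collections import Counter

# Illustrative history: outcomes correlated with a proxy feature (postcode),
# not with merit. 80% of "north" applicants were hired vs 10% of "south".
history = [("north", True)] * 40 + [("north", False)] * 10 + \
          [("south", True)] * 5 + [("south", False)] * 45

def train(history):
    """'Learn' the historical hire rate per proxy value."""
    totals, hires = Counter(), Counter()
    for proxy, hired in history:
        totals[proxy] += 1
        hires[proxy] += hired
    return {p: hires[p] / totals[p] for p in totals}

def predict(model, proxy):
    # Recommend a hire whenever the learned rate exceeds 50%.
    return model[proxy] > 0.5

model = train(history)
print(predict(model, "north"))  # True
print(predict(model, "south"))  # False: the historical bias is reproduced
```

The model never sees an applicant's skill at all, yet its recommendations look confident and data-driven, which is exactly why biased training data is so insidious.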

When it comes to technology, we sometimes focus so much on the breakthrough that DEI-related issues come in as an afterthought. This was precisely how colour film came to be calibrated for light-skinned people. Likewise, the select groups pursuing AI innovation and racing to be the leader lack a diversity of people at the table to discuss the social implications of the technology.

The experts working on AI shouldn’t just come from a computing or machine learning background, but should include biologists, social scientists, ethicists, and more. Hopefully, ongoing conversations about ethics and accountability will drive the development of AI in a more inclusive direction.

Not everything can be automated

When implementing AI, it is critical to distinguish between its two fundamental modes: augmentation and automation. 

Automation is perfectly suitable for routinised processes like onboarding, scheduling, or payroll processing. But when it comes to decision-making in hiring or appraisals, the responsibility of the final call should still be left in the lap of the manager. That is, AI should be used to augment human decision-making, not automate or replace it.  

AI excels at crunching data to generate insights and recommendations, but humans are in a better position to make nuanced judgements on things like culture fit, and performance reviews benefit from the authenticity of a face-to-face, human-to-human conversation. As we’ve heard from stories of bizarre interactions with chatbots, the emotional intelligence of AI simply isn’t as evolved as its cognitive ability.

Access must be inclusive

AI has become widely accessible to a global audience. The technology is advanced yet user-friendly. You don’t have to understand its mechanics to unlock its power.

Still, widespread doesn’t necessarily mean inclusive, and care should be taken to ensure that AI isn’t at the exclusive disposal of specific beneficiaries. While we apply AI in workplaces for white-collar jobs, we should also be figuring out how it can improve conditions for blue-collar workers.

Similarly, AI apps should be customised for inclusivity, such that users of different language proficiencies and levels of digital savvy can gain access. Both the CEO and the entry-level employee should have an understanding of what AI is and how they can use it to their advantage.

Challenging the status quo

As with most technologies, AI can be wielded to create a great positive impact in society, yet remains susceptible to flawed design or implementation.

It has emerged as a powerful tool in the elimination of workplace biases, thanks to its strengths in analysing large piles of data and identifying problem areas. Ceding all responsibility of decision-making to AI, however, would be detrimental. Ultimately, it will take genuine interactions and conversations with people different from ourselves to illuminate our blind spots and erase our ingrained prejudices.

Let us now strive toward a future where AI helps us challenge the status quo and facilitate fairer practices at workplaces – one where it is used with clarity, oversight, and diverse voices participating in its evolution. 

About Elijah

Elijah is an Assistant Professor of Management at the University of Washington’s Foster School of Business. A Singaporean based in Seattle with his wife, Betty, and daughter, Charlotte, he holds a PhD in organisational behaviour from the University of Maryland.

