Executive Interviews

Meet Vatsal Ghiya – CEO of Shaip, Shares Insights on How to Eliminate Bias in Conversational AI

Today we interview Vatsal Ghiya, a serial entrepreneur with more than 20 years of experience in healthcare AI software and services. He is the CEO and co-founder of Shaip, which enables on-demand scaling of its platform, processes, and people for companies with the most demanding machine learning and artificial intelligence initiatives.

What is bias in voice technology?

Voice search, unlike text-based search, is very complex: it must cope with variations in accent, pronunciation, emotion, sentiment, sarcasm, and more.

Statistics reveal that the accuracy rate of voice search results for a white American male is 92%. That figure drops to 79% for a white American female and 69% for a mixed-race American female.

This is a classic example of bias in voice search: the mixed-race American female must repeat her commands, or phrase her questions differently, until the algorithm understands her query.
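The disparity described above can be quantified with a simple per-group comparison. This is an illustrative sketch only; the group labels and figures mirror the statistics quoted in this interview, not any real dataset:

```python
# Sketch: quantify the voice-search accuracy gap across demographic groups.
# The figures below mirror the statistics quoted above; illustrative only.
accuracy_by_group = {
    "white_male": 0.92,
    "white_female": 0.79,
    "mixed_race_female": 0.69,
}

def accuracy_gap(scores):
    """Return the spread between the best- and worst-served groups."""
    return max(scores.values()) - min(scores.values())

gap = accuracy_gap(accuracy_by_group)
print(f"Accuracy gap: {gap:.2f}")  # 0.92 - 0.69 = 0.23
```

Tracking a single gap metric like this makes it easy to see, at a glance, how far the worst-served group lags behind the best-served one.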

Real-world Examples of Bias

  • Amazon’s ambitious attempt to automate its recruitment process backfired when its AI system started favoring male candidates over female ones. The historical data showed more men than women in the organization, so the model learned that male candidates were better suited and even penalized female candidates.
  • Facebook also fell prey to bias in 2019, when the company allowed marketers and brands to target customers based on religion, race, and gender. As a result, its AI models showed job ads for nurses and secretaries to women, and taxi-driver jobs to men, particularly men from weaker financial backgrounds.
  • Princeton University’s word-analysis tool also picked up bias in its interpretations. After analyzing over 2.2 million words, researchers found that the system rated European names as more pleasant than African-American names. Gender bias also led the model to associate women with the arts rather than math or science.

What Causes Bias In AI?

Bias is involuntary, and most of it stems from natural inclinations and beliefs. It is difficult for a professional to identify and acknowledge bias, because what is actually subjective may feel like fact or truth to them. That is why bias gets introduced into a system in several intricate ways:

  • Data
  • People
  • Algorithms

Data: The quality of datasets is crucial in developing voice search models. If datasets are sourced from portals or avenues with an inherent bias, the same bias will show up in the results, since the AI model keeps training on those datasets.

People: People become involuntary contributors to bias when they annotate datasets or manually collect data from diverse sources.

Algorithms: Algorithms don’t have an innate ability to introduce bias. What they do have is the power to amplify existing biases to exponential proportions. For instance, if a model is trained only on images showing women in kitchens and men in garages, it will skew future results accordingly.
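The kitchen/garage example above can be sketched with a toy model. This is a deliberately simplified, hypothetical illustration (a majority-label "classifier"), not any production system, but it shows how a model trained only on skewed data can do nothing except echo that skew:

```python
# Sketch: how an algorithm amplifies skew already present in training data.
# Hypothetical toy data: every "kitchen" image is labeled "woman" and every
# "garage" image "man" -- so the model can only learn and repeat that skew.
from collections import Counter, defaultdict

def train_majority(examples):
    """Learn the most frequent label seen for each scene type."""
    counts = defaultdict(Counter)
    for scene, label in examples:
        counts[scene][label] += 1
    return {scene: c.most_common(1)[0][0] for scene, c in counts.items()}

training = [("kitchen", "woman")] * 50 + [("garage", "man")] * 50
model = train_majority(training)

# The model now predicts "woman" for *every* kitchen image -- a 100%/0%
# split learned from a biased sample, regardless of reality.
print(model["kitchen"])  # woman
print(model["garage"])   # man
```

Real models are far more sophisticated, but the failure mode is the same: the algorithm faithfully optimizes against whatever distribution it is fed.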

How To Counter Bias in Conversational AI

Detecting bias is the toughest part of the process. Once the touchpoints are uncovered, bias can be eliminated in several ways.

Data Sources

One of the first steps is to extensively check and classify data sources and assess historic data for pre-existing bias and prejudices.

Diversity

Compile data that is as representative as possible. An inclusive dataset is far less prone to bias and more objective, producing more impactful results.
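A first pass at checking representativeness can be automated. The sketch below is a hypothetical helper, assuming each sample carries a demographic label and that target population shares are known; the group names, shares, and tolerance are illustrative assumptions:

```python
# Sketch: flag groups that are under-represented in a dataset relative to
# known target population shares. All names and numbers are hypothetical.
from collections import Counter

def underrepresented(labels, target_shares, tolerance=0.05):
    """Return groups whose dataset share falls short of the target share
    by more than `tolerance`."""
    total = len(labels)
    counts = Counter(labels)
    return sorted(
        group for group, target in target_shares.items()
        if counts.get(group, 0) / total < target - tolerance
    )

samples = ["male"] * 80 + ["female"] * 20
flags = underrepresented(samples, {"male": 0.5, "female": 0.5})
print(flags)  # ['female']
```

A check like this can run as part of a data-ingestion pipeline, so skewed batches are caught before they ever reach model training.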

Quality Check

Recruit a dedicated team of experts to implement multiple rounds of quality checks.

Real-time Monitoring of Models

Monitor how your AI models perform in real time as they process datasets and churn out results.
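One way to make this monitoring concrete is to track per-group accuracy over a rolling window and flag any group that drifts below a threshold. This is a minimal sketch; the class name, group labels, window size, and 0.80 threshold are all illustrative assumptions:

```python
# Sketch: monitor per-group accuracy of a deployed model over a rolling
# window and flag groups that fall below a threshold. Hypothetical example.
from collections import defaultdict, deque

class GroupAccuracyMonitor:
    def __init__(self, window=100, threshold=0.80):
        self.threshold = threshold
        # One fixed-size window of 0/1 outcomes per demographic group.
        self.results = defaultdict(lambda: deque(maxlen=window))

    def record(self, group, correct):
        self.results[group].append(1 if correct else 0)

    def alerts(self):
        """Groups whose rolling accuracy is below the threshold."""
        return sorted(
            g for g, r in self.results.items()
            if sum(r) / len(r) < self.threshold
        )

monitor = GroupAccuracyMonitor(window=10)
for _ in range(10):
    monitor.record("group_a", correct=True)
    monitor.record("group_b", correct=False)
print(monitor.alerts())  # ['group_b']
```

Surfacing per-group metrics like this, rather than a single aggregate accuracy, is what makes disparities visible early enough to act on.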

How Do Market Players Tackle Bias in Their Systems?

Big tech companies are striving to be responsible, from launching proprietary tools to rolling out open-source applications:

  • IBM deployed a cloud service that automatically detects bias and helps eliminate it.
  • Google launched an open-source tool, the What-If Tool, to assess ML models for inherent biases. It helps developers conduct analyses and better understand their models by probing scenarios for inclusivity and refinement.
  • Facebook is working on a Fairness Flow tool to check how its algorithms treat diverse groups of people.

How Shaip Is Helping Build Bias-Free AI Models

Shaip’s efforts are directed at the grassroots level. Remember I mentioned eliminating bias at the source? That’s what we strive to achieve. As a pioneer in AI training data, we take airtight measures to source datasets from diverse, representative sources. We work not with tunnel vision but with peripheral vision to source, compile, and annotate datasets that are as diverse as possible.

We also have an extensive team of veteran SMEs, project managers, developers, and quality assurance associates who follow protocols for detecting and eliminating bias from datasets.

Can Smaller Companies Working on Conversational AI Achieve Unbiased AI?

Yes. Achieving zero bias is not impossible, just quite intricate. Smaller companies can focus on three areas:

  1. Get a specialist on your team and let them take charge of rolling out unbiased AI.
  2. Prioritize quality above everything else.
  3. Develop knowledge banks and resources on bias, including a list of industry- or market-specific biases, to help your associates quickly spot one while working on datasets.

Few Predictions for Voice Tech of the Future

  • Individualized, personalized experiences.
  • Touch interaction. Voice and visual displays will merge into a seamless experience, as is already visible with Google’s showcase of the E Ink screen.
  • Security-focused. Voice payments will become more secure and more convenient for users.
  • Mobile App Integration. Integrating voice-tech into mobile apps has become the hottest trend, as voice is a natural user interface (NUI). Voice-powered apps increase functionality and save users from complicated app navigation.
  • Voice Cloning. Advances in machine learning and GPU power are commoditizing custom voice creation and making synthesized speech more emotional, rendering computer-generated voices nearly indistinguishable from real ones.
