There are many unsavory facts associated with AI-powered tools and applications.

One of the most common is algorithmic discrimination based on race, gender, and ethnicity. Organizations around the world need to work in sync to get rid of biases in AI.

By definition, AI is meant to mimic the workings of the human brain in order to optimize organizational activities. Unfortunately, while we have come close to artificially recreating human intelligence, AI also displays another distinctly human trait: prejudice against people based on their race, ethnicity, or gender. Bias in AI is not a new concept. Biased algorithms have been uncovered in healthcare, law enforcement, and recruiting, both recently and in the past. As a result, the intolerance and discrimination of past centuries continue to resurface in one form or another, even as the world seems to be moving towards inclusiveness for all. To understand how AI reinforces long-standing human biases, we need to examine the ways in which bias creeps into AI models and the data that feeds them.

Biased datasets used to train AI models

The decisions of an AI-powered system reflect the input data used to train the model. If the datasets ingested by an AI model are discriminatory, its recommendations and decisions will follow the same pattern. Biased AI models are generally created in two ways during the initial phase of machine learning. First, as already stated, the data used to train a model is, overall, narrow and unrepresentative. Second, discriminatory algorithms emerge from biased samples within a given dataset. Input data can be limited either through negligence or because the data scientists overseeing the training process are themselves prejudiced. A minimal sketch of how bias in training data propagates into a model's predictions follows below.
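
The sketch below uses entirely synthetic, hypothetical data to illustrate the point: if the historical labels used for training already encode discrimination, a model trained on them reproduces that discrimination, even though nothing in the code is explicitly "racist" or "sexist".

```python
# Minimal sketch (synthetic, hypothetical data): a model trained on
# historically biased labels reproduces that bias in its predictions.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000

# Hypothetical protected attribute (0 = group A, 1 = group B) and a
# "legitimate" feature with the same distribution in both groups.
group = rng.integers(0, 2, size=n)
skill = rng.normal(0.0, 1.0, size=n)

# Historical labels encode the bias: equally skilled candidates from
# group B were hired less often.
p_hire = 1.0 / (1.0 + np.exp(-(skill - 1.0 * group)))
hired = rng.binomial(1, p_hire)

X = np.column_stack([skill, group])
model = LogisticRegression().fit(X, hired)

# Same skill, different group: the trained model reproduces the gap.
probe = np.array([[0.0, 0.0], [0.0, 1.0]])
print(model.predict_proba(probe)[:, 1])  # roughly [0.5, 0.27]
```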

One of the best-known examples of discriminatory AI is the infamous COMPAS system, used in several states in the United States. The system used historical prison records and a regression model to predict whether released offenders were likely to reoffend. Its predictions flagged African-American defendants as likely reoffenders at nearly twice the rate of Caucasian defendants. One of the main reasons this bias went unchecked was that the system's operators never examined its predictions for discriminatory patterns. The victims of prejudice in AI are often women, members of racial or ethnic minorities in a given region, or people with an immigrant background. As the saying goes, AI models do not introduce new biases; they simply reflect those already present in society.
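
This is the kind of per-group audit that the system's operators could have run. The sketch below is a simplified illustration, not COMPAS's actual data or code; the column names are hypothetical.

```python
# Compare false positive rates across groups: how often are people who did
# NOT reoffend still flagged as high risk? Column names are hypothetical.
import pandas as pd

def false_positive_rate(df: pd.DataFrame, race: str) -> float:
    """Share of people in `race` who did not reoffend but were still
    flagged as high risk by the model."""
    g = df[(df["race"] == race) & (df["reoffended"] == 0)]
    return float((g["predicted_high_risk"] == 1).mean())

def audit_fpr(df: pd.DataFrame) -> dict:
    return {r: false_positive_rate(df, r) for r in df["race"].unique()}

# Toy illustration; a real audit would use the system's own predictions.
toy = pd.DataFrame({
    "race":                ["A", "A", "A", "B", "B", "B"],
    "reoffended":          [0,   0,   1,   0,   0,   1],
    "predicted_high_risk": [0,   1,   1,   1,   1,   1],
})
print(audit_fpr(toy))  # {'A': 0.5, 'B': 1.0} -- a wide gap is a red flag
```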

As noted above, the procedure for collecting machine learning data can itself be biased. In this case, designated AI governance officials may be aware of the bias in the collected data but choose to ignore it. For example, when collecting data about applicant pathways for admissions, a school might select only white applicants and simply refuse to offer learning opportunities to other children. An AI model trained on those admissions records observes this pattern up close. Later, the model perpetuates the racist tradition, because its training data suggests that this is the correct course of action during admissions. In this way, racism is repeatedly reinforced despite the presence of advanced technology managing the process.

Besides racial or gender discrimination, bias in AI can also take the form of preferential treatment for the wealthy, with poorer people underrepresented in AI datasets. A hypothetical example can be imagined in the current era of COVID-19. Several countries have developed mobile apps to track people infected with the virus and alert others nearby to keep their distance. While the initiative serves a noble purpose, people without a smartphone are simply invisible to the app. Although this kind of bias is not any one person's fault in particular, it defeats the purpose of designing such apps in the first place. In summary, discriminatory training data and operational practices can directly lead to bias in AI systems and models.

Bias in AI due to proxy-related reasons

Another way bias can infiltrate AI models is through proxies. Some of the details used for model training may correspond to protected characteristics. This bias can be unintentional, because data used to make seemingly rational, calibrated decisions can end up serving as a proxy for membership in a protected class. For example, imagine a financial institution that uses an AI-powered system to predict which loan applicants might have difficulty repaying. The datasets used to train the system contain historical information spanning more than three decades. This input data does not include details about the skin color or gender of the applicants. However, suppose the system predicts that people residing in a particular locality (linked to a certain zip code) will default on their loan repayments. That prediction is generated solely from historical records. People living in that area may rightly feel discriminated against when the bank declines their loan applications because of where they live. This type of bias can be mitigated by involving human officials who can override the AI system's decisions based on current facts rather than historical records alone.
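
One rough way to surface proxy features is to check how well the supposedly neutral inputs predict the protected attribute itself. The sketch below assumes a hypothetical loan dataset with columns such as "zip_code" and "ethnicity"; it is an illustration of the idea, not the bank's actual pipeline.

```python
# Proxy check: even after dropping protected attributes, features like zip
# code may still encode them. Data and column names are hypothetical.
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

def proxy_score(df: pd.DataFrame, feature_cols: list, protected_col: str) -> float:
    """Cross-validated accuracy of predicting the protected attribute from
    the supposedly neutral features. A score well above the majority-class
    baseline suggests those features act as a proxy."""
    X = pd.get_dummies(df[feature_cols].astype(str))  # one-hot encode zip codes, etc.
    y = df[protected_col]
    return cross_val_score(LogisticRegression(max_iter=1000), X, y, cv=5).mean()

# Usage with a hypothetical loan dataset:
# score = proxy_score(loans, ["zip_code", "income_band"], "ethnicity")
# baseline = loans["ethnicity"].value_counts(normalize=True).max()
# A score well above `baseline` means zip code deserves a closer look.
```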

Apart from these, there are several other ways in which biased AI emerges and goes on to reinforce centuries-old prejudices today. Fortunately, there are also several ways to eliminate bias in AI, or at least reduce it to a large extent.


1. Selection of more representative datasets

Every individual in an organization should work to reduce the likelihood that its AI systems are biased. As we have seen, AI bias stems from the data a system receives for training or day-to-day operations. Data scientists and other experts who assemble large volumes of training and operational data must use diverse data that includes people of all ethnicities and racial minorities, and women should be represented as much as men in these datasets. Additionally, segmentation should exist in AI models only if data scientists deliberately provide the model with input data that is segmented in the same way.

Additionally, organizations using AI-based applications should not use different models for different racial or ethnic groups. If there is insufficient data for a particular group, organizations can use techniques such as reweighting to balance that group's importance against the others, as sketched below. Biases can creep into an AI model if each group in the data is not treated with care and given equal weight.
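
The sketch below shows one simple form of such reweighting: each sample receives a weight inversely proportional to the size of its group, so that every group contributes equally during training. The column name "ethnicity" and the variables in the usage comment are hypothetical.

```python
# Group reweighting sketch: under-represented groups get larger per-sample
# weights so that each group carries equal total weight during training.
import numpy as np
import pandas as pd

def group_balanced_weights(groups: pd.Series) -> np.ndarray:
    """Weight each sample so that every group contributes equally overall,
    mirroring the 'balanced' scheme scikit-learn uses for classes."""
    counts = groups.value_counts()
    return (len(groups) / (len(counts) * groups.map(counts))).to_numpy()

# Usage with a hypothetical training set:
# weights = group_balanced_weights(train_df["ethnicity"])
# model.fit(X_train, y_train, sample_weight=weights)
```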

2. Identification of potential triggers or sources of bias

Detecting the areas and operations where particular types of bias can enter an AI system is a primary responsibility of AI governance teams and executive-level employees in any organization. Ideally, this process should take place before AI is incorporated into the organization. Organizations can mitigate bias by examining datasets and checking whether they will give AI models a narrow 'point of view'. After a thorough examination, an organization should test whether its AI systems show any signs of bias in their work; one simple pre-deployment check is sketched below. Most importantly, organizations should compile a list of questions covering every area where their AI could be biased, and then work through the answers to each of those questions one by one.
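
As one concrete example of such testing, an organization might compare favorable-outcome rates across groups using the "four-fifths rule" commonly cited in employment contexts. The sketch below is a minimal illustration with hypothetical column names, not a complete fairness audit.

```python
# Selection-rate check: flag any group whose favorable-outcome rate falls
# below 80% of the most favored group's rate. Column names are hypothetical.
import pandas as pd

def disparate_impact_check(df: pd.DataFrame, group_col: str,
                           outcome_col: str, threshold: float = 0.8):
    """Return per-group favorable-outcome rates and the groups whose rate
    falls below `threshold` times the best group's rate."""
    rates = df.groupby(group_col)[outcome_col].mean()
    flagged = rates[rates / rates.max() < threshold]
    return rates, flagged

# Usage on a hypothetical batch of loan or hiring decisions:
# rates, flagged = disparate_impact_check(decisions, "gender", "approved")
# A non-empty `flagged` result should be escalated to the governance team.
```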

3. Implement strict guidelines for AI governance

The AI governance team plays a key role in preventing AI systems from becoming discriminatory over time. To keep bias out of the equation, the governance team needs to update AI models regularly. More importantly, the team must define non-negotiable regulations and guidelines to detect and eliminate, or at least mitigate, bias in the datasets used to train models. In addition, clear communication channels should be established so that employees at all levels of an organization can notify the governance team when they receive customer complaints about AI discrimination. These channels should also be available when employees themselves discover that their organization's AI may be biased.

Bias in AI can be extremely distressing for the individuals or groups affected by its decisions. More troublingly, biased AI systems are new-generation symbols of the centuries of marginalization and discrimination that their victims have faced throughout human history. Therefore, we need to ensure that algorithmic bias is nipped in the bud through diverse input datasets for model training and competent AI governance. Discriminatory AI is everyone's problem, and all parties involved in the design, implementation, and maintenance of AI models in organizations must come together to solve it.


