Dhiraj Singha’s experience with GPT-5 highlighted the dangerous implications of caste bias embedded in AI models. When his application was processed through ChatGPT, it altered his surname to ‘Sharma,’ indicating he belonged to a privileged caste rather than reflecting his actual identity as a Dalit. This incident not only underscores the microaggressions faced by marginalized communities but also reveals the systemic discrimination that AI perpetuates.
OpenAI’s products, according to the investigation, exhibit significant bias, which risks entrenching discriminatory views in contexts such as hiring and education. Tests conducted on GPT-5 showed an alarming tendency to choose stereotypical associations, suggesting that these models, trained on uncurated data, inherit and reinforce casteist attitudes. Experts warn that without intervention, the adoption of such technology could amplify existing inequalities within society.
While some strides have been made in addressing biases related to race and gender, caste bias remains largely overlooked. Researchers are calling for enhanced evaluation methods to include caste and challenge the AI industry to confront and rectify these biases. The implications are profound, as unchecked biases in AI threaten to further marginalize communities that are already vulnerable, making the development of more inclusive AI essential for a fair digital future.
👉 Pročitaj original: MIT Technology Review Security