This made me reflect on the pattern we are living in these days. I avoid using ChatGPT for my child’s homework when she is around, but I do refer to it for her fourth-grade homework for my own ease, and I use it for quick grammar corrections. That is still fine. What made me think more is how some of my friends use ChatGPT for medical clarification, especially on taboo topics like menopause.
During my recent visit to an ENT clinic, I saw a couple decoding their medical reports related to pregnancy so that they could ask the correct questions during their consultation with the gynaecologist.
The question that triggered me to write this is: are we really aware of how AI works?
AI tools such as ChatGPT are remarkably capable chatbots that can generate text responses to a vast array of questions. What sets them apart from older chatbots is that their responses aren’t pre-programmed. Instead, they are trained with machine learning on enormous amounts of text, much of it drawn from the public internet, to predict the most plausible next word in a sentence; the answers they give are generated on the fly from those learned patterns rather than looked up from a fixed script.
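To make that idea concrete, here is a tiny, made-up sketch of next-word prediction in Python. It is nothing like the scale or sophistication of ChatGPT; the sentences and the simple counting trick are invented purely to show what “predicting the most plausible next word” means.

```python
# A toy illustration of next-word prediction, the core idea behind large
# language models. This is a deliberately tiny sketch, not how ChatGPT
# actually works: real models learn from vast text corpora with neural
# networks, not from three hand-written sentences.
from collections import Counter, defaultdict

training_text = (
    "menopause can cause hot flashes. "
    "menopause can affect sleep. "
    "menopause can cause mood changes."
)

# Count which word tends to follow which word in the training text.
counts = defaultdict(Counter)
words = training_text.lower().replace(".", " .").split()
for current_word, next_word in zip(words, words[1:]):
    counts[current_word][next_word] += 1

def predict_next(word):
    """Return the word that most often followed `word` in the training text."""
    followers = counts.get(word.lower())
    return followers.most_common(1)[0][0] if followers else None

print(predict_next("menopause"))  # -> can
print(predict_next("cause"))      # -> hot
```

The point of the toy is simply that the system never “knows” anything; it only continues text in the most statistically familiar way.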
What is important to note here is that AI can have biases. These are systematic distortions built into AI systems that can reinforce existing prejudice and stereotyping. Bias in AI models usually comes from two things: how the models are designed and the data they are trained on. The developers who create these models may carry certain assumptions, which can cause the models to favour particular outcomes.
AI bias can also develop because of the data used to train the system. AI models work by analysing large amounts of training data through a process called machine learning. These models identify patterns and connections in the data to make predictions and decisions.
When AI algorithms detect patterns of historical bias or systemic disparities in the data they are trained on, their conclusions can also reflect those biases and disparities. And since machine learning tools process data at a massive scale, even small biases in the original training data can lead to widespread discriminatory outcomes.
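A small, invented example shows how this happens. The “historical decisions” below are entirely made up and the model is just a frequency count, but it illustrates how a system that learns from skewed records ends up repeating that skew in every new decision it makes.

```python
# A toy sketch of how a pattern-matching model inherits bias from its
# training data. The "historical decisions" are invented for illustration
# only; no real dataset, institution, or model is implied.
from collections import Counter

# Invented historical records: (applicant_group, past_decision)
historical_decisions = (
    [("group_a", "approved")] * 80 + [("group_a", "rejected")] * 20 +
    [("group_b", "approved")] * 40 + [("group_b", "rejected")] * 60
)

# "Training": learn the approval rate per group from the past data.
totals, approvals = Counter(), Counter()
for group, decision in historical_decisions:
    totals[group] += 1
    if decision == "approved":
        approvals[group] += 1

def predict(group):
    """Predict the majority historical outcome for this group."""
    rate = approvals[group] / totals[group]
    return "approved" if rate >= 0.5 else "rejected"

# Applied at scale, the model repeats the historical disparity verbatim:
# every group_b applicant is rejected, however strong their case.
print(predict("group_a"), predict("group_b"))  # -> approved rejected
```

Nothing in the code is malicious; the unfairness comes entirely from the records it learned from, which is exactly how small historical biases become large automated ones.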
Further, AI chatbots like ChatGPT are trained to be agreeable towards the user, so they lean towards comfort and reassurance. They can also provide more information than is needed, and because they are always available, we fall into the habit of reaching for ready-made, one-size-fits-all answers.
Another thing that bothers me is how AI-based algorithms keep feeding me the same kind of content I may already have seen three or four times, while hiding contradictory viewpoints. This came up in a casual conversation with my husband when he mentioned something that had gone viral, and my response was, “Your social media feed and mine are different because our interests are different.”
But even within a particular interest area, I would like to see multiple opinions and contradictory viewpoints, not just content similar to what I have liked or interacted with in the past. Getting similar feeds actually reinforces the idea that this is the most accepted opinion because many others seem to believe the same. What we often fail to realise is that it is the algorithm that makes us believe that what we are thinking or interacting with is correct.
AI-based algorithms in social media act as “virtual matchmakers” that analyse billions of data points to curate personalised feeds designed to maximise user engagement and retention. Instead of showing posts in chronological order, these systems use machine learning to predict what content you are most likely to interact with next.
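Here is a deliberately simple sketch of that idea. The topics, posts, and scoring rule are all invented; real platforms use far richer signals and learned models, but the basic move of sorting posts by predicted engagement is the same.

```python
# A toy sketch of engagement-based feed ranking. Everything here is
# made up purely to illustrate the sorting idea, not any real platform.

# Topics the user has interacted with before, weighted by how often.
past_interactions = {"menopause": 5, "parenting": 3, "ai": 4}

candidate_posts = [
    {"title": "New AI chatbot launched", "topics": ["ai"]},
    {"title": "Menopause myths debunked", "topics": ["menopause"]},
    {"title": "Opposing view: the limits of AI advice", "topics": ["ai", "criticism"]},
    {"title": "Local election results", "topics": ["politics"]},
]

def predicted_engagement(post):
    """Higher score = more overlap with what the user engaged with before."""
    return sum(past_interactions.get(topic, 0) for topic in post["topics"])

# The feed simply shows the highest-scoring posts first, so familiar
# topics keep resurfacing while unfamiliar ones sink to the bottom.
feed = sorted(candidate_posts, key=predicted_engagement, reverse=True)
for post in feed:
    print(predicted_engagement(post), post["title"])
```

Notice that nothing contradictory is “censored”; it simply never scores high enough to reach the top of the feed, which is how the echo builds up.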
The Human Responsibility: AI is a tool for access, but it must not replace curiosity or independent judgement.
Perhaps the real question is not whether AI is useful, but whether we are using it critically. AI can be an extraordinary tool for access to information, but it should not replace judgement, curiosity, or the willingness to question what we see and read. In a world increasingly mediated by algorithms, the responsibility to think independently still rests with us.
