AI is Power and with Power Comes Responsibility

In this digital era, computers have eliminated many jobs while creating numerous new opportunities. Amidst this dynamic environment, experts often emphasise the value of smart work over hard work.

Artificial Intelligence (AI) has seeped into the corporate consciousness over the last decade to such an extent that we cannot conceive of a world without it now.

‘Knowledge is Power’ is an adage that we are all familiar with. With AI systems holding the key to a treasure-house of data (a.k.a. knowledge), AI has not only become synonymous with Power but has also become all-pervasive.

Many interesting use cases have started emerging from the possibilities demonstrated by AI. In fact, use cases continue to evolve in tune with the new challenges and realities we face in everyday life. Generative AI (Gen AI) – the latest evolution of AI – has only upped the ante.

I recently came across an interesting article about how Natural Language Processing (NLP) is being used in Kenya to predict election violence, based on sentiment analysis of speeches by influential people and leaders in the country. The model predicts both increases and decreases in average fatalities for look-ahead periods between 50 and 150 days, with overall accuracy approaching 85%.
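
To give a flavour of the underlying idea – and only a flavour, since this is not the Kenyan researchers’ actual model – a first pass at scoring speech transcripts for sentiment might look like the sketch below. The off-the-shelf model and the sample speeches are assumptions made purely for illustration.

    # Minimal sketch: scoring speech transcripts for negative sentiment.
    # The default pipeline model and the sample speeches are illustrative
    # assumptions, not the system described in the article.
    from transformers import pipeline

    # A general-purpose English sentiment model; a real early-warning system
    # would need models trained on local-language political speech.
    classifier = pipeline("sentiment-analysis")

    speeches = [
        "We will not accept these results under any circumstances.",
        "Let us all remain calm and respect the outcome of the vote.",
    ]

    for text in speeches:
        result = classifier(text)[0]  # e.g. {'label': 'NEGATIVE', 'score': 0.99}
        print(f"{result['label']:8} {result['score']:.2f}  {text}")

In a real early-warning system, per-speech scores like these would be aggregated over time and fed into a forecasting model for fatalities – which is where the 50-to-150-day look-ahead windows come in.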

Elections are a reality we face from time to time, and the use case above is a typical example of how AI is being used to tackle a real challenge: the violence that can erupt around election time.

The above example highlights the positive side of AI, where the system predicts violence thereby providing an opportunity to prevent it.

Another positive application of AI across the world these days is in the recruiting process. For instance, Unilever, which processes over 1.8 million job applications each year, has partnered with Pymetrics to build an online platform that can assess candidates over video. In the second stage of interviews, candidates answer questions for 30 minutes while the software analyses their body language, facial expressions and word choice using natural language processing and body-language analysis technology.
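
As a rough illustration of just the word-choice component – the keyword lists and transcript below are invented for the sketch, and the actual commercial scoring is proprietary and far richer – one could extract simple lexical features from an interview transcript like this:

    # Rough sketch: simple word-choice features from an interview transcript.
    # The keyword sets and the sample transcript are illustrative inventions.
    from collections import Counter

    COLLABORATIVE = {"we", "team", "together", "our"}
    SELF_FOCUSED = {"i", "me", "my", "mine"}

    def word_choice_features(transcript: str) -> dict:
        words = transcript.lower().split()
        counts = Counter(words)
        total = max(len(words), 1)
        return {
            "collaborative_ratio": sum(counts[w] for w in COLLABORATIVE) / total,
            "self_focused_ratio": sum(counts[w] for w in SELF_FOCUSED) / total,
            "total_words": total,
        }

    print(word_choice_features("We worked together and I led my part of the project"))

Even a toy feature set like this hints at why such systems deserve scrutiny: seemingly neutral lexical signals can encode cultural and linguistic bias – a concern that leads directly to the next point.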

While AI can be put to many such good uses, it pays to remember that AI is also powerful enough to generate widespread chaos if not modelled in a responsible manner.

Here are a few examples where AI models went awry.

  • In Feb 2024, Air Canada was ordered to pay damages to a passenger after its chatbot misled the passenger about the airline’s policy.
  • In Nov 2023, Sports Illustrated made headlines for publishing articles written by fake AI-generated authors, whose biographies, along with their photos, were created by AI.
  • In Aug 2023, a Black woman based in Detroit, who was eight months pregnant, was falsely accused and arrested as a suspect in a robbery and carjacking case. The incident, caused by a facial-recognition error, resulted in the woman being jailed for 11 hours – a traumatic experience.
  • In Jul 2023, it was discovered that ChatGPT could create phishing templates that a scammer could use to craft a convincing scam email with ease.

Air Canada: A Case Study

Let us zoom in on the Air Canada case to understand what exactly happened.

In November 2022, Air Canada’s chatbot promised a discount that wasn’t available to passenger Jake Moffatt, who was assured that he could book a full-fare flight for his grandmother’s funeral and then apply for a bereavement fare after the fact.

According to the civil resolution tribunal that heard the case, when Moffatt applied for the discount, the airline said the chatbot had been wrong – the request needed to be submitted before the flight – and it wouldn’t offer the discount.

Instead, the airline said the chatbot was a “separate legal entity that is responsible for its own actions”. Air Canada argued that Moffatt should have gone to the link provided by the chatbot, where he would have seen the correct policy.

In February 2024, the British Columbia Civil Resolution Tribunal rejected that argument, ruling that Air Canada had to pay Moffatt $812.02 (£642.64) in damages and tribunal fees.

Not only did this incident lead to a financial penalty for Air Canada, but it also caused reputational damage, which usually takes time to fade from public memory.

These examples drive home the message that AI systems under development need to be closely scrutinised and regulated. It is imperative to create a governance mechanism around AI models that ensures all ethical, legal and safety dimensions are examined. In other words, the need of the hour is a ‘Responsible AI’ framework.

Ensuring Ethical and Responsible AI Implementation

AI systems work based on the data fed to them by humans; by themselves, they cannot behave responsibly. Hence it is the responsibility of the humans developing and assisting the AI to ensure fairness and transparency in the predictions given by AI systems.

According to recent Accenture research: “Only 35% of global consumers trust how AI technology is being implemented by organisations. And 77% think organisations must be held accountable for their misuse of AI.”

Since the consequences of AI misuse can be severely damaging, all organisations should resolve to adopt ‘Responsible AI’ practices that ensure their AI systems are explainable, monitorable, reproducible, secure, human-centred, unbiased and justifiable.

All organisations undertaking AI initiatives should therefore prioritise these practices and incorporate them into the entire AI implementation lifecycle – from AI strategy to deployment. Here are the practices that organisations should strive to adopt.

  • Use a human-centred design approach: Engage with a diverse set of users and use-case scenarios and incorporate feedback before and throughout project development.
  • Identify multiple metrics to assess training and monitoring: Ensure that your metrics are appropriately aligned with the context and goals of your system.
  • Examine your raw data closely: Analyse your raw data carefully to ensure that you understand it in its entirety – including but not limited to user sensitivity and privacy.
  • Understand the limitations of your dataset and model: It is important to communicate the scope and coverage of the training data, thereby clarifying the capabilities and limitations of the models. For example, a shoe detector trained on stock photos may work well on stock photos but have limited capability when tested against user-generated photos taken on mobile phones.
  • Test, test, test: The importance of testing cannot be over-emphasised. Conduct iterative user testing to incorporate diverse sets of user needs into the development cycle. It pays to apply the quality-engineering principle of poka-yoke and build quality checks into a system so that unintended failures either cannot happen or trigger an immediate response – for example, if an important feature is unexpectedly missing, the AI system should decline to output a prediction (see the sketch after this list).
  • Continue to monitor and update the system after deployment: Continued monitoring will ensure your model takes real-world performance and user feedback into account. Before updating a deployed model, analyse how the candidate and deployed models differ and how the update will affect the overall system quality and user experience.
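
To make the poka-yoke idea concrete, here is a minimal sketch – the feature names and the stub model are placeholders, not any particular production system – of a prediction wrapper that withholds its output when an important feature is unexpectedly missing, instead of silently guessing:

    # Minimal poka-yoke sketch: refuse to predict when a required feature
    # is missing, instead of silently producing an unreliable output.
    # REQUIRED_FEATURES and the stub model are illustrative placeholders.
    from typing import Any, Dict, Optional

    REQUIRED_FEATURES = ("age", "income", "tenure_months")

    class _StubModel:
        """Stand-in for a trained model (e.g. a scikit-learn estimator)."""
        def predict(self, rows):
            return [sum(row) / len(row) for row in rows]

    def guarded_predict(model: Any, features: Dict[str, float]) -> Optional[float]:
        """Return a prediction only if every required feature is present."""
        missing = [name for name in REQUIRED_FEATURES if features.get(name) is None]
        if missing:
            # Fail fast and visibly: surface an alert instead of guessing.
            print(f"Prediction withheld; missing features: {missing}")
            return None
        row = [features[name] for name in REQUIRED_FEATURES]
        return model.predict([row])[0]

    model = _StubModel()
    print(guarded_predict(model, {"age": 41.0, "income": 52000.0, "tenure_months": 18.0}))
    print(guarded_predict(model, {"age": 41.0, "tenure_months": 18.0}))  # 'income' missing

The same discipline extends to the monitoring practice above: before promoting a candidate model, evaluate it and the currently deployed model on the same held-out data and metrics, and block the rollout if quality regresses.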

Last but not least, in any AI endeavour undertaken anywhere in the world, it is important to remind ourselves often of the adage popularised by the Marvel Comics-inspired Spider-Man films: “With great power comes great responsibility.”

Kesavan Hariharasubramanian

Author Bio

Kesavan Hariharasubramanian is an IT professional with 20+ years of experience spanning stints in consulting, IT services, fintech and start-up firms. As an Associate Director at LTIMindtree, he currently handles delivery management for a critical project in the NEXI account under the BFS vertical. Functionally, he is a specialist in delivery management; technically, he is a specialist in data analytics.

His past employers include iNautix Technologies, PricewaterhouseCoopers, HCL Technologies, Cognizant, Western Union and Kritilabs Technologies. By qualification, he is a Computer Science engineer from the College of Engineering, Trivandrum, with an MBA from IIT Kharagpur. He is also an interview panellist for MBA admissions at IIT Kharagpur and a mentor for IIT Kharagpur MBA students.

He is an avid reader – predominantly of non-fiction – and is passionate about continuous self-improvement. The additional qualifications and certifications he holds bear testimony to exactly that.

Disclaimer: The information, statements and opinions contained in this content are of a general nature only and do not take into account your individual circumstances including any laws, policies, procedures or practices you or your employer or businesses may have or be subject to. Although the statements of fact on this page have been obtained from and are based upon sources that L&T EduTech believes to be reliable, it does not guarantee their accuracy or completeness.