Mastering Decision Excellence: Pitfalls to Avoid with Generative AI

Artificial intelligence (AI), particularly generative AI, is rapidly changing the way we make decisions. From healthcare to finance to retail, AI is being used to automate tasks, identify patterns, make predictions, and assist with decision-making when the number of parameters, the size of the data, or the number of variables is beyond human comprehension. However, there are several potential pitfalls to consider when using AI for decision-making. Because the full repercussions of AI failure or malfunction have not yet been experienced, any organization employing AI must put the right guardrails in place.

Data bias is one of the biggest concerns. AI systems are trained on data, and if the data is biased, the AI system will be biased as well. This can lead to unfair or discriminatory decisions. For example, an AI system used to make loan decisions could be biased against certain groups of people, such as women or minorities. Hence, embedded or inserted bias, along with data degradation, requires particular attention.
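As an illustration of why such bias deserves attention, the sketch below shows one simple way a team might screen loan decisions for disparate impact across groups. It is a hedged, minimal example: the records, group names, and the 0.8 threshold (the commonly cited "four-fifths rule" of thumb) are illustrative assumptions, not a reference to any specific system.

```python
# Minimal, illustrative disparate-impact check on loan decisions.
# All data and group names below are hypothetical.
from collections import defaultdict

# Hypothetical records: (applicant_group, approved)
decisions = [
    ("group_a", True), ("group_a", True), ("group_a", False), ("group_a", True),
    ("group_b", True), ("group_b", False), ("group_b", False), ("group_b", False),
]

# Approval rate per group
by_group = defaultdict(list)
for group, approved in decisions:
    by_group[group].append(approved)
rates = {group: sum(outcomes) / len(outcomes) for group, outcomes in by_group.items()}

# Disparate impact ratio: lowest approval rate divided by the highest.
# A common rule of thumb (the "four-fifths rule") flags ratios below 0.8.
ratio = min(rates.values()) / max(rates.values())
print("approval rates:", rates)
print(f"disparate impact ratio: {ratio:.2f}", "(potential bias)" if ratio < 0.8 else "")
```

A check of this kind does not prove or disprove discrimination on its own, but it is a simple, repeatable signal that can trigger a deeper review of the training data and the model.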

The black box problem is an issue as well. The processes AI systems use to arrive at decisions are often complex and opaque, which makes those decisions difficult to trust implicitly. There is also the larger problem of assigning accountability in AI decision systems. For example, an AI system could make a medical diagnosis based on factors that are not transparent to the medical community expected to trust that diagnosis. There have also been cases of missed diagnoses and false positives.
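One family of techniques teams use to mitigate the black box problem is post-hoc explanation. The sketch below, which assumes scikit-learn is available and uses synthetic data rather than real medical records, illustrates permutation importance: each input feature is shuffled in turn to see how strongly the model's accuracy depends on it. It is a minimal illustration of the idea, not a prescription for any particular diagnostic system.

```python
# Minimal, illustrative post-hoc explanation of an opaque model using
# permutation importance. Assumes scikit-learn is installed; data is synthetic.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance

rng = np.random.default_rng(0)

# Synthetic tabular data: three features, with the label driven mainly by feature_0.
X = rng.normal(size=(500, 3))
y = (X[:, 0] + 0.1 * rng.normal(size=500) > 0).astype(int)

model = RandomForestClassifier(n_estimators=50, random_state=0).fit(X, y)

# Shuffle each feature in turn and measure how much the model's accuracy drops;
# a large drop means the model leans heavily on that feature.
result = permutation_importance(model, X, y, n_repeats=10, random_state=0)
for name, importance in zip(["feature_0", "feature_1", "feature_2"], result.importances_mean):
    print(f"{name}: mean importance {importance:.3f}")
```

Surfacing which inputs drive a decision does not fully open the black box, but it gives the people accountable for the decision something concrete to review and challenge.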

Lack of transparency is another challenge. An oft-repeated concern is that people affected by AI decision systems have no clear view of the decision-making process. This can erode trust and lead to concerns about privacy and accountability. For example, an AI surveillance system used to track people's movements could be collecting data without their knowledge or consent. The bias and opacity of AI in surveillance could, in turn, feed into judicial systems that leverage AI and negatively affect outcomes.

Job displacement is another probable pitfall. Humans perform many of the tasks that AI systems would automate, which raises the risk of economic and livelihood disruption as well as job losses. For example, human recruiters could be replaced by an AI system that screens applications.

Security risks are also an area of concern. Hacking or manipulation of AI systems could lead to unauthorized access to sensitive data, biased recommendations, or manipulation of decision-making processes. For example, a cyberattack on critical infrastructure controlled by AI systems could affect large segments of the population that depend on that infrastructure.

These are just some of the potential pitfalls to consider when using AI for decision-making. It is important to carefully weigh the risks and benefits of using AI in each specific situation.

Mitigating the Pitfalls: Steps for Organizations to Safeguard Against AI Risks

Some of the steps an organization can take to mitigate such risks when leveraging AI for its decision systems are:

  • Use high-quality data that is representative of the population affected by the decisions.
  • Provide transparent explanations of the decision-making processes followed by AI systems.
  • Put guardrails and safeguards in place to protect against discrimination and bias.
  • Monitor AI systems for security risks and actively mitigate those risks.
  • Keep humans involved in the decision-making processes.
  • Test and monitor AI systems regularly (a simple monitoring check is sketched after this list).
  • Educate relevant stakeholders about the use of AI in decision-making systems.
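As a concrete example of the monitoring point above, the sketch below shows a simple data-drift check that compares the distribution of one model input at training time with what the system sees in production. It assumes SciPy is available; the feature name, the synthetic numbers, and the p-value threshold are illustrative assumptions only.

```python
# Minimal, illustrative data-drift check for one model input.
# Assumes SciPy is installed; the feature name, numbers, and threshold are hypothetical.
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(42)

training_income = rng.normal(loc=50_000, scale=12_000, size=2_000)   # reference data
production_income = rng.normal(loc=58_000, scale=15_000, size=500)   # recent live data

# Kolmogorov-Smirnov test: a small p-value suggests the two samples come from
# different distributions, i.e. the input feeding the model may have drifted.
statistic, p_value = ks_2samp(training_income, production_income)
if p_value < 0.01:
    print(f"possible drift in 'income' (KS statistic {statistic:.3f}, p-value {p_value:.4f}): review the model")
else:
    print("no significant drift detected for 'income'")
```

Checks like this would typically run on a schedule for each important input, with alerts routed to the team accountable for the decision system.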

By following these guidelines, an organization can leverage AI to make better decisions that are fairer, more transparent, and more secure.

In addition, here are a few more things to consider when using AI for decision-making:

  • The legal and regulatory implications of using AI: In some cases, there may be laws or regulations that govern the use of AI (generative AI in particular) for decision-making. For example, the European Union's General Data Protection Regulation (GDPR) places restrictions on the use of personal data in AI systems, and the California Consumer Privacy Act (CCPA) imposes similar restrictions on personal data. Organizations also need to be careful about personally identifiable information (PII) and sensitive personally identifiable information (SPII) in AI decision systems.
  • The ethical implications of leveraging AI: There are ethical issues that need to be considered when using AI in decision-making. For example, it is important to ensure that AI systems are not used to discriminate against certain classes, races, or groups in the population.
  • The implications of generative AI for creativity and copyright: In 2019, Schubert's unfinished Symphony No. 8 was completed by an AI system. The definition of creativity is therefore changing, which means a fresh look at plagiarism and copyright frameworks is also needed.
  • The social impacts of using AI: AI's potential impact on society is significant, and it could be both positive and negative. For example, leveraging generative AI in contract law is exciting, but its use in the dispensation of justice may need human control. It is important to consider the potential social impacts of AI before using it for decision-making.

Embracing Responsible AI to Forge the Path Ahead in the Age of GenAI

As the saying goes, "Knowledge is power," and with power comes the responsibility to use it for the greater good rather than for selfish or destructive purposes.

Decision systems will go through a period of evolution as they assimilate generative AI, and for decision excellence, organizations must ensure that their controls evolve continuously as well. Generative AI (and AI in general) should form part of the evolving process design of every mature organization so that the right controls and responsibility matrix are put in place. Some decisions will need a human to make the final call every time, while others will need a human to decide only in exceptional cases. Similarly, some areas can let generative AI make all decisions under human supervision, while in others generative AI assists a human who makes the decision (a minimal sketch of how such modes could be encoded follows below). Adopting any of these decision systems will be a continuous and iterative process of improvement for an organization.

The use of AI for decision-making is a complex and challenging issue. There are several potential pitfalls to consider, but there are also several ways to mitigate those risks. By carefully weighing the potential risks and benefits, organizations can use AI to make better decisions that are fairer, more transparent, and more secure.
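To make the idea of a responsibility matrix concrete, here is a minimal, hypothetical sketch of how such decision modes could be encoded. The mode names, confidence threshold, and outcomes are assumptions for illustration; each organization would define its own matrix and controls.

```python
# Minimal, hypothetical sketch of routing decisions according to a responsibility matrix.
# Mode names, thresholds, and outcomes are illustrative assumptions only.
from dataclasses import dataclass

@dataclass
class Decision:
    case_id: str
    ai_recommendation: str
    confidence: float

def route(decision: Decision, mode: str, confidence_threshold: float = 0.9) -> str:
    """Return who finalizes this decision under the given oversight mode."""
    if mode == "human_final_call":        # a human signs off every time
        return "send_to_human"
    if mode == "human_on_exception":      # AI decides unless its confidence is low
        return "auto_approve" if decision.confidence >= confidence_threshold else "send_to_human"
    if mode == "ai_with_supervision":     # AI decides; humans audit a sample afterwards
        return "auto_approve_and_log_for_audit"
    return "send_to_human"                # default to human review for unknown modes

print(route(Decision("case-001", "approve", 0.95), mode="human_on_exception"))  # auto_approve
print(route(Decision("case-002", "approve", 0.62), mode="human_on_exception"))  # send_to_human
```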

About the Author
Dr. Anshu Premchand
Group Function Head – Multicloud and Digital Services

Dr. Anshu is a persuasive thought leader with 23+ years of experience in digital and cloud services, technical solution architecture, research and innovation, agility, and DevSecOps. She heads Multicloud and Digital Services for the enterprise technologies unit of TechM. In her previous role, she was Global Head of Solutions and Architecture for the Google Business Unit of Tata Consultancy Services, where she was responsible for programs across the GCP spectrum, including data modernization, application and infrastructure modernization, and AI.

She has extensive experience in designing large-scale cloud transformation programs and advising customers across domains in areas of breakthrough innovation. Anshu holds a PhD in Computer Science. She has a special interest in simplification programs and has published several papers with international publishers such as IEEE, Springer, and ACM.
