Home » Do you trust AI? Here’s why half of users don’t

Do you trust AI? Here’s why half of users don’t

by Jamal Richaqrds
2 minutes read

In a world where technology reigns supreme, artificial intelligence (AI) stands out as both a marvel and a mystery. A recent study by KPMG and the University of Melbourne sheds light on a concerning trend: half of users lack trust in AI’s ability to provide accurate responses. This skepticism stems from worries about safety and societal impact, especially prevalent in advanced economies where only 39% trust AI, compared to 57% in emerging economies.

Despite these reservations, 72% of individuals acknowledge AI’s utility as a technical tool. However, a significant gap in AI training exists, with only 39% reporting any form of education in this field. This lack of knowledge results in 48% of respondents feeling ill-equipped to understand AI fully. Interestingly, those with AI training experience enhanced efficiency and revenue gains, with managers reaping the most benefits.

Moreover, the study highlights the growing demand for AI regulation, with 70% of people supporting it and 43% deeming current laws inadequate. At work, 58% of employees regularly use AI tools, leading to performance gains but also negative impacts on workload and compliance. Concerns about AI misuse and lack of oversight underscore the importance of governance and training alignment with adoption.

The issue of AI trust extends to data quality, as only 36% of IT leaders regularly trust AI outputs, revealing a critical gap in trust. As AI systems evolve, maintaining reliability becomes a challenge, particularly when training data lacks quality and proper safeguards. The consequences of these shortcomings are evident in AI’s propensity for errors and hallucinations, eroding user confidence.

Recent assessments of AI models, like OpenAI’s ChatGPT, indicate a troubling rise in hallucination rates, reaching as high as 79% in newer reasoning systems. These hallucinations stem from overthinking by AI models, highlighting the need for grounding in accurate and current data sets to minimize errors. The future of AI may lie in small language models (SLMs), which offer faster processing and improved accuracy, addressing concerns about large language models’ reliability.

As businesses navigate the evolving AI landscape, investing in transparent, explainable, and traceable AI models becomes imperative to close the trust gap. Thorough testing before, during, and after deployment, coupled with human or AI red teaming, can enhance AI reliability. By embracing tailored SLMs and prioritizing data quality, organizations can unleash AI’s full potential while mitigating risks associated with hallucinations and inaccuracies.

In conclusion, building trust in AI requires a concerted effort to bridge the knowledge gap, enforce robust governance, and prioritize data quality. By embracing responsible AI practices, businesses can harness the power of artificial intelligence while safeguarding against potential pitfalls, ensuring a future where AI is not just trusted but transformative.

You may also like