The more a technology or concept permeates our daily lives and becomes commonplace, the more we expect of it. Nearly two decades ago, a dial-up internet connection of less than 56 kbps seemed like a miracle. Today, with internet speeds as high as 2,000 Mbps becoming common, a 56 kbps connection would be seen as a failure, at least in the developed world.
This shift in expectations applies to AI as well. After seeing the many practical applications of AI that contribute to human comfort and progress, the general population and the AI research community now expect each new breakthrough in the field to be more earth-shattering than the last. Similarly, what qualifies as an AI failure has also changed dramatically in recent years, especially from the perspective of the problem's owner.
What counts as a failure of AI today
An AI model performing a specific task with the expected level of efficiency is no longer enough for its application to be considered successful. These systems must also deliver meaningful real-world benefits in the form of time saved or revenue earned. For example, an intelligent parking system capable of predicting parking space availability with 99.7% accuracy – undoubtedly effective – cannot be considered successful if its real-world adoption does not yield tangible benefits.
Even with such a system in place, parking managers or smart city administrators may not be able to make optimal use of their parking space, for several reasons.
These can range from simple ones, such as the parking operator being unable to make good use of the software interface, to complex ones, such as customers and drivers finding it difficult, or being unwilling, to adapt to the new system. For these and many other reasons, only a fraction of AI projects succeed. Estimates of the share of AI projects that fail to deliver actual value range from 85% to 90%.
In most of these cases, the lack of concrete results from AI systems has less to do with their technical aspects than with their human aspects. The success or failure of these projects depends on how people interact with the technology to achieve the intended goals.
Why most AI initiatives fail
As researchers continue to enrich the body of AI research, the effectiveness of AI and AI-based systems continues to grow. But no matter how powerful it is, any AI-powered system is just a tool. The success or failure of AI initiatives is, more often than not, determined by how users – primary and secondary – understand, receive, and operate these systems.
Lack of buy-in from management
Business leaders – such as owners, directors, and senior executives – are often only secondary users of AI, or of any other technological application for that matter. However, they are the biggest beneficiaries, as well as the biggest supporters, of these initiatives.
After all, it is usually their will and their resources that drive an AI initiative. Thus, one of the most common reasons AI initiatives fail to deliver real value is a lack of buy-in from business leaders. Buy-in does not simply mean a willingness to allocate funds to AI initiatives. In fact, a growing number of companies are investing in AI, which means the failure of AI is rarely the result of a lack of investment.
Today, buy-in means genuine confidence in the ability of a technology or investment to make an impact. This belief translates into a commitment to make these technological endeavors successful in ways that go beyond the technology itself. For example, a company that is truly committed to the success of its AI initiative will also invest in non-core aspects of that initiative, such as security and privacy. Ultimately, it is this commitment that ensures leaders will take whatever steps are necessary to make AI succeed.
Insufficient user training
More often than not, AI-based applications do not fully automate manual processes; they automate only the most analysis-intensive tasks. Human operators are still needed to run these systems and enhance their data processing capabilities, which makes the role of human users extremely important to these AI applications.
Even the best AI-based business intelligence tools will prove useless if the executives using them are not trained to navigate dashboards or understand data.
This problem becomes even more apparent when AI tools are used at the operational level, such as a computer vision-based handheld vehicle inspection tool, or a mobile parking app that drivers use to find and book parking spaces. When users are not adequately trained to navigate and use these interfaces, the applications may not produce the expected results.