The rise in AI investing parallels a growing need to understand how computers make decisions, writes Lynn Strongin Dodds.

Computer-driven models are nothing new, but as they have grown in sophistication, so too have calls for a brighter light to be shone on their inner workings. This does not mean the end of the ‘black box’ – a system viewed purely in terms of input and output – but greater usage of tools such as explainable artificial intelligence (known in the tech world as XAI) to justify the rationale behind the outcomes spewed out by AI and machine learning.

“There is a shift away from the black box but not completely,” says Steve Cracknell, CEO of Insig AI, a technology and data science firm. “The difference today is that AI and machine learning are so much more accessible. In the past, access to AI and machine learning was too expensive and required specialist skills to write highly bespoke code. But today, with advances in the cloud and the proliferation of open-source libraries, people no longer need to buy hardware or write complex algorithms from scratch. They can lease the computing power, and code packages are more readily available in an open-source world.”

As Julien Molez, group innovation data and AI leader at Societe Generale, puts it: “I do not think we will see a strong and definitive transition from a black to a white box, but explainable AI algorithmic techniques can help the model developer better understand what the model is doing and why.”

In the past, asset managers were also reluctant to relinquish their confidential algos and intellectual property for fear of losing their competitive advantage, according to John Pizzi, senior director, enterprise strategy, capital markets at fintech group FIS. Today, however, the pressure is on companies across the board to be clear about who trains their AI systems, what data was used and, just as importantly, what went into their algorithms’ recommendations.

“In a simple model, it was much easier to extrapolate information, but they have become more complex because there is so much more data to traverse and people want to understand what is driving the decisions,” says Pizzi.

The result is that explainable AI has increasingly become an important component of institutional investing strategies as asset managers turn to technology to navigate the low-yielding investment landscape. Traditionally, AI was the preserve of top-tier managers and hedge funds that had the resources. As data has become more accessible and cheaper, a wider range of fund managers have leveraged AI and machine learning to derive meaningful insights that can be used to generate outperformance.

In fact, a recent survey from the Deloitte Centre for Financial Services found that 85% of the 400 senior investment executives polled were using AI-based solutions in the pre-investment stage to help them generate returns. In addition, 71% planned to increase their budget for these alpha-generating technologies over the next 12-18 months.

AI is also being used to screen and monitor risks more effectively. This has become more significant since the pandemic, when unprecedented levels of volatility and unpredictability rattled markets. Indeed, the ability to predict and manage risks took on new meaning for many managers.

Explainable AI is seen as part of a more responsible approach to the use of the technology. At its heart, it is a set of processes and methods that allow people to comprehend and trust the results and output created by machine learning and AI-driven algorithms. It is an evolving discipline used to describe an AI model, its expected impact and potential biases, and to help characterise model accuracy, fairness, transparency and outcomes. This enables data scientists to make any necessary adjustments to the model.
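To make that definition concrete, here is a minimal sketch of one widely used XAI technique: computing SHAP values, which attribute each prediction to the model’s input features. It is illustrative only – the factor names and data below are invented for the example, and nothing in it is drawn from the firms quoted in this article.

```python
# Illustrative sketch (not from the article): ranking which inputs drive a
# model's predictions with SHAP, a widely used explainable-AI library.
import numpy as np
import pandas as pd
import shap
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(0)

# Hypothetical factor data for 500 instruments; the feature names are
# invented stand-ins, not any manager's actual signals.
X = pd.DataFrame({
    "momentum":   rng.normal(size=500),
    "value":      rng.normal(size=500),
    "volatility": rng.normal(size=500),
    "liquidity":  rng.normal(size=500),
})

# Synthetic "excess return" target with a known structure, so the
# explanations can be sanity-checked against what we built in.
y = 0.6 * X["momentum"] - 0.3 * X["volatility"] + rng.normal(scale=0.1, size=500)

model = GradientBoostingRegressor().fit(X, y)

# TreeExplainer assigns each prediction a per-feature contribution (SHAP
# value): positive values pushed the forecast up, negative pulled it down.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X)

# Mean absolute contribution gives a global ranking of decision drivers.
for name, val in sorted(zip(X.columns, np.abs(shap_values).mean(axis=0)),
                        key=lambda pair: -pair[1]):
    print(f"{name:<12} mean |SHAP| = {val:.3f}")
```

A ranking like this does not turn the black box into a white one, but it shows which inputs are driving the decisions – precisely the visibility Pizzi describes.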
“There are three different reasons that explainable AI is important,” says Roger Burkhardt, capital markets chief technology officer at Broadridge Financial Solutions. “The first is that it is critical for business adoption to explain a recommended action. The second is that it also helps business and data science improve the quality of the model, and the third is that it avoids inadvertently creating models with harmful bias for the consumer.”

That last point – the ethical side of AI – has grabbed regulators’ attention in recent times. A report by the United Nations Educational, Scientific and Cultural Organization (Unesco), ‘Towards an Ethics of Artificial Intelligence’, highlighted the dangers of social scoring, whereby certain AI systems can be susceptible to bias and misinterpretation. The systems in question tend to judge people based on their behaviour and categorise them into clusters, such as by ethnicity or gender.