
This article was paid for by a contributing third party.

How advanced AI threatens banking security systems


The future of payments, exemplified by voice-based banking, promises to transform the fintech world as traditional friction barriers begin to disappear. Amid this innovation, concerns arise, addressed here by Seemanta Patnaik, co-founder and chief technology officer at SecurEyes: could the advent of artificial intelligence (AI) technology give rise to novel forms of fraud that overshadow improvements to the banking experience?

Voice-based banking is revolutionising how customers interact with their banks, offering a heightened level of convenience, speed, accessibility and personalisation. This innovative approach enables individuals who struggle with using computers or mobile devices to access banking services effortlessly through voice commands. By simply speaking (through a smart speaker), customers can readily enquire about their account balances, perform money transfers and make payments. It eliminates the need to navigate complex online interfaces or endure lengthy queues at bank branches.

Seemanta Patnaik, SecurEyes

The advent of digital assistants has further enhanced the capabilities of voice-based banking. Capital One, for instance, enables its customers to leverage the voice assistant Alexa to carry out tasks such as making payments, checking balances and tracking expenses. Similarly, Barclays has empowered Apple’s Siri to accept voice commands for mobile payments. These trends have gained traction in various banks worldwide, signifying the growing prominence of voice-based banking. As a result, industry projections estimate the voice-based banking sector will reach a value of $3.7 billion by 2031.

However, alongside the many advantages, there is rising concern regarding the potential security risks with the emergence of advanced AI technology. These risks give rise to innovative new forms of fraud, which can breach the security of banking systems.


Voice cloning and data poisoning

A novel form of fraudulent activity has surfaced recently, leveraging the capabilities of AI voice technology. This technique, known as ‘voice cloning’, enables cyber criminals to fabricate counterfeit audio clips or voice instructions that closely resemble the authentic voice of an individual. The implications are concerning, as it opens avenues for identity theft, deceptive phone conversations and the proliferation of phishing attacks. Regrettably, this advancement has already claimed its first victim: a UK-based energy firm lost €220,000 in a fraudulent transfer.

The UK-based chief executive officer (CEO) of the energy firm fell victim to a scam in which an AI-powered deepfake impersonated his boss, the chief executive of the firm’s German parent company. The fraudster used AI voice technology to mimic the accent of the CEO’s superior in telephone conversations, convincing him to transfer funds to the account of a Hungarian supplier.

The CEO made the first payment as requested, but suspicion arose when the scammer demanded a follow-up payment. The stolen money was subsequently transferred to a bank account in Mexico and dispersed to various locations. While this is believed to be the first reported case involving AI voice technology in a scam, it warns of similar occurrences. Businesses are advised to remain vigilant and ensure their employees are aware of such scams. This incident also emphasises the importance of implementing robust security measures and providing adequate training to employees to prevent fraud.
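One robust measure against this kind of scam is out-of-band confirmation: a voice or phone instruction alone should never authorise a high-value transfer. The sketch below is purely illustrative — the threshold, function names and channel are assumptions, not a description of any bank's actual controls — but it shows the core idea of requiring a one-time code delivered through a separate channel.

```python
import hmac
import secrets

# Illustrative assumption: amounts above this need out-of-band approval.
HIGH_VALUE_THRESHOLD = 10_000

def issue_challenge() -> str:
    """Generate a one-time code, delivered via a second, independent
    channel (e.g. a push notification to a registered mobile app)."""
    return secrets.token_hex(4)

def approve_transfer(amount: float, issued_code: str, code_from_requester: str) -> bool:
    """A voice instruction alone never authorises a large transfer:
    the requester must echo back the code delivered out of band."""
    if amount < HIGH_VALUE_THRESHOLD:
        return True  # low-value transfer: normal controls apply
    # constant-time comparison avoids leaking the code byte by byte
    return hmac.compare_digest(issued_code, code_from_requester)

code = issue_challenge()
print(approve_transfer(250_000, code, code))    # legitimate confirmation -> True
print(approve_transfer(250_000, code, "0000"))  # cloned voice, no code -> False
```

A cloned voice can reproduce what a CEO sounds like, but it cannot produce a secret that was never spoken — which is why the second channel, not the voice itself, carries the authorisation.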

The prospect of losing one’s voice to an algorithm has become a tangible reality. For example, Microsoft’s AI text-to-speech tool VALL-E gained notoriety recently for its ability to accurately mimic a speaker’s tone and emotions with minimal training. Additionally, ElevenLabs has developed a system that allows individuals to upload recordings and generate artificial versions of their voices. Predictably, misuse of this AI technology quickly ensued, as evidenced by the viral circulation of a sample purportedly featuring Emma Watson reading Mein Kampf and a fabricated announcement by US president Joe Biden of an invasion of Russia.

Another battleground in the banking sector is ‘data poisoning’, an emerging form of cyber attack that aims to deceive AI systems by manipulating the data they process. Since AI relies on extensive data processing, the quality of the input data directly determines the quality of the AI itself. Data poisoning involves deliberately feeding inaccurate or misleading data into a system to compromise the AI’s performance. With the advent of large language models, such as ChatGPT, the risks associated with data poisoning have become increasingly significant. As AI systems continue to grow in complexity and scale, detecting data poisoning attacks is expected to pose considerable challenges. The detection process becomes particularly intricate when dealing with politically sensitive topics, exacerbating the risks associated with such attacks.
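The mechanics can be seen in a deliberately tiny example. The toy model below — every score, label and name is an illustrative assumption, not a real fraud system — learns the average “risk score” per class, then shows how an attacker who injects high-risk samples mislabelled as legitimate (label flipping, one form of data poisoning) drags the model into waving suspicious transactions through.

```python
# Toy fraud detector on 1-D transaction "risk scores" (illustrative only).

def train_centroids(samples):
    """Compute the mean score per label — a minimal nearest-centroid model."""
    sums, counts = {}, {}
    for score, label in samples:
        sums[label] = sums.get(label, 0.0) + score
        counts[label] = counts.get(label, 0) + 1
    return {label: sums[label] / counts[label] for label in sums}

def predict(centroids, score):
    """Assign the label whose centroid is closest to the score."""
    return min(centroids, key=lambda label: abs(score - centroids[label]))

# Clean training data: low scores are legitimate, high scores are fraud.
clean = [(0.1, "ok"), (0.2, "ok"), (0.3, "ok"),
         (0.8, "fraud"), (0.9, "fraud"), (1.0, "fraud")]

# Poisoned data: the attacker injects high-score samples labelled "ok",
# dragging the "ok" centroid towards fraudulent behaviour.
poisoned = clean + [(0.85, "ok"), (0.9, "ok"), (0.95, "ok")]

clean_model = train_centroids(clean)
poisoned_model = train_centroids(poisoned)

# A mildly suspicious transaction with risk score 0.65:
print(predict(clean_model, 0.65))     # fraud
print(predict(poisoned_model, 0.65))  # ok — the poisoned model waves it through
```

Real systems are vastly more complex, but the principle scales: because the model is only as good as its training data, corrupting even a small slice of that data can shift decision boundaries in ways that are hard to spot after the fact.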


The role of central banks

The rise of AI is revolutionising various aspects of our lives, including communication and business practices. However, the success of any emerging technology brings a fresh set of challenges and risks, particularly in terms of security. The online realm is increasingly inundated with deepfake videos and AI-generated articles, making it ever harder to discern authenticity.

Acknowledging concerns about the dark side of cyber risk from AI, the World Economic Forum (WEF) reported in March 2023 that the rapid rise of technology in the financial sector poses systemic risks to the global financial system.

“While continued tech integration into the financial system has many benefits, it’s important that industry leaders, regulators and consumers be aware of emerging tech-driven risks, and take appropriate action to mitigate them,” says Drew Propson, head of technology and innovation in financial services at the WEF. “As the financial system becomes more dependent on technology, new risks are surfacing as a result and it’s essential to apply solutions throughout the financial services ecosystem to ensure resilience and stability in coming years,” she adds. The report, Pushing through undercurrents, highlights many risks driven by the adoption of technology in the financial services sector, including geopolitically motivated cyber attacks.

This alarming situation has prompted the US Federal Reserve to hold “regular discussions” with the banks it supervises about managing the risks associated with AI, as more financial institutions utilise AI for customer service applications, fraud monitoring and underwriting. In April this year, Reuters reported that Fed governor Christopher Waller warned that, although AI could bring new efficiencies to bank processes, it also involves novel risks, including difficulties detecting problems or biases in large datasets.

A massive deployment of AI in banks would come with its share of risks as well as opportunities. Banks increase their investment in AI every year, wary of becoming obsolete if they fall behind. McKinsey & Company estimates the value of AI in the banking sector will soon reach $1 trillion.

It is therefore prudent for central banks to now enhance risk-focused supervision activities by including detailed reviews of security measures relating to AI-linked technologies. Furthermore, central banks can formulate governance frameworks for the secure use and application of AI.


Conclusion

Recent events serve as cautionary tales of the dangers of machine learning falling into the wrong hands. Banks must invest in strong security measures, employee training and additional authentication methods to prevent voice-cloning attacks on sensitive financial information. There is a lack of knowledge surrounding AI and security in the sector, so organisations must adapt their structures to promote collaboration and address unintended biases. Successful integration of AI in finance requires a partnership between humans and machines, with a commitment to transparency and ethics. By taking these steps, banks can leverage the benefits of AI in finance and thrive in the digital era.

 
