
Treasury Report: Managing AI-Specific Cybersecurity Risks In The Financial Sector

Report identifies opportunities and challenges and outlines 10 steps to address AI-related operational risk, cybersecurity, and fraud challenges.

Today, the U.S. Department of the Treasury released a report on Managing Artificial Intelligence-Specific Cybersecurity Risks in the Financial Services Sector.

“Artificial intelligence is redefining cybersecurity and fraud in the financial services sector, and the Biden Administration is committed to working with financial institutions to utilize emerging technologies while safeguarding against threats to operational resiliency and financial stability,” said Under Secretary for Domestic Finance Nellie Liang.

“Treasury’s AI report builds on our successful public-private partnership for secure cloud adoption and lays out a clear vision for how financial institutions can safely map out their business lines and disrupt rapidly evolving AI-driven fraud,” Liang explained.


10 Next Steps

The report identifies significant opportunities and challenges that AI presents to the security and resiliency of the financial services sector. It outlines a series of next steps to address immediate AI-related operational risk, cybersecurity, and fraud challenges:

1. Addressing the growing capability gap. There is a widening gap between large and small financial institutions' capacity to develop in-house AI systems. Large institutions are developing their own AI systems, while smaller institutions may be unable to do so because they lack the internal data resources required to train large models. Financial institutions that have already migrated to the cloud may have an advantage in leveraging AI systems in a safe and secure manner.

2. Narrowing the fraud data divide. As more firms deploy AI, a gap exists in the data available to financial institutions for training models. This gap is significant in the area of fraud prevention, where there is insufficient data sharing among firms. As financial institutions work with their internal data to develop these models, large institutions hold a significant advantage because they have far more historical data. Smaller institutions generally lack sufficient internal data and expertise to build their own anti-fraud AI models.

3. Regulatory coordination. Financial institutions and regulators are collaborating on how best to resolve oversight concerns together in a rapidly changing AI environment. However, there are concerns about regulatory fragmentation, as different financial-sector regulators at the state and federal levels, and internationally, consider regulations for AI.

4. Expanding the NIST AI Risk Management Framework. The National Institute of Standards and Technology (NIST) AI Risk Management Framework could be expanded and tailored to include more applicable content on AI governance and risk management related to the financial services sector.

5. Best practices for data supply chain mapping and “nutrition labels.” Rapid advancements in generative AI have exposed the importance of carefully monitoring data supply chains to ensure that models are using accurate and reliable data, and that privacy and safety are considered. In addition, financial institutions should know where their data is and how it is being used. The financial sector would benefit from the development of best practices for data supply chain mapping. Additionally, the sector would benefit from a standardized description, similar to the food “nutrition label,” for vendor-provided AI systems and data providers. These “nutrition labels” would clearly identify what data was used to train the model, where the data originated, and how any data submitted to the model is being used.
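To make the "nutrition label" concept concrete, the report's three requirements (what data trained the model, where that data originated, and how submitted data is used) could be captured in a simple structured record. The sketch below is purely illustrative; the field names, the `AINutritionLabel` class, and the example values are assumptions, not a format proposed by Treasury or any standards body.

```python
from dataclasses import dataclass, asdict
import json

@dataclass
class AINutritionLabel:
    """Hypothetical 'nutrition label' for a vendor-provided AI system.

    Fields mirror the three disclosures the report calls for:
    training data, data origin, and handling of submitted data.
    """
    model_name: str
    vendor: str
    training_data_sources: list[str]  # what data was used to train the model
    data_origin: str                  # where the training data originated
    submitted_data_usage: str         # how data submitted to the model is used

    def to_json(self) -> str:
        # Serialize to JSON so the label can travel with the model artifact
        return json.dumps(asdict(self), indent=2)

# Example label for a fictional vendor model
label = AINutritionLabel(
    model_name="FraudDetect-1",
    vendor="ExampleVendor",
    training_data_sources=[
        "anonymized transaction records",
        "public fraud typologies",
    ],
    data_origin="vendor-curated datasets, 2019-2023",
    submitted_data_usage="scored in memory; not retained or used for retraining",
)
print(label.to_json())
```

A standardized schema along these lines would let a smaller institution compare vendor AI systems on data provenance at a glance, much as a food label enables quick comparison of ingredients.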

6. Explainability for black box AI solutions. Explainability of advanced machine learning models, particularly generative AI, continues to be a challenge for many financial institutions. The sector would benefit from additional research and development on explainability solutions for black-box systems like generative AI, considering the data used to train the models, their outputs, and robust testing and auditing of these models. In the absence of such solutions, the financial sector should adopt best practices for using generative AI systems that lack explainability.

7. Gaps in human capital. The rapid pace of AI development has exposed a substantial workforce talent gap, both for those skilled in creating and maintaining AI models and for AI users. A set of best practices for less-skilled practitioners on how to use AI systems safely would help manage this talent gap. A technical competency gap also exists in teams managing AI risks, such as in the legal and compliance fields. Role-specific AI training for employees outside of information technology can help educate these critical teams.

8. A need for a common AI lexicon. There is a lack of consistency across the sector in defining what “artificial intelligence” is. Financial institutions, regulators, and consumers would all benefit from a common AI-specific lexicon.

9. Untangling digital identity solutions. Robust digital identity solutions can help financial institutions combat fraud and strengthen cybersecurity. However, these solutions differ in their technology, governance, and security, and offer varying levels of assurance. Development of international, industry, and national digital identity technical standards is underway.

10. International coordination. The path forward for regulation of AI in financial services remains an open question internationally. Treasury will continue to engage with foreign counterparts on the risks and benefits of AI in financial services.

As part of its research for this report, Treasury conducted in-depth interviews with 42 financial services sector and technology-related companies. Financial firms of all sizes provided input on how AI is used within their organizations. Additional stakeholders included major technology companies and data providers, financial sector trade associations, cybersecurity and anti-fraud service providers, and regulatory agencies. The report provides an extensive overview of current AI use cases for cybersecurity and fraud prevention, as well as best practices and recommendations for AI use and adoption.

You can download Treasury’s AI Report here.

Read more about cybersecurity and business continuity issues from Continuity Insights.
