LIBRARY>REPORT>RPT-033
2026.04.06 · 03:09 UTC

Personalized Fraud Prevention via Agentic Content

This report explores the transition from generic security warnings to highly personalized, agentic content systems designed to proactively educate and protect consumer fintech users against sophisticated fraud. By synthesizing AI-driven behavioral modeling, real-time threat detection, and behavioral economics, the research outlines a framework for deploying context-aware interventions that build trust while mitigating cognitive vulnerabilities.

Why you should care: As fraudsters weaponize generative AI to execute hyper-personalized social engineering attacks at scale, design leaders must pivot from static friction to empathetic, agentic interventions that dynamically adapt to individual user vulnerability and context, ultimately protecting the institution's bottom line and the user's financial well-being.
AGENTIC UX · AI & DESIGN · CONSUMER FINTECH · CONTENT DESIGN · BANK FRAUD
~22 MIN READ

Key Points:

  • The Paradigm Shift: Fraud has evolved from technical system breaches to psychological manipulation of legitimate users, rendering static, rule-based warnings obsolete.
  • Agentic AI Interventions: Agentic systems continuously monitor transaction lifecycles, autonomously deploying targeted educational content and adaptive friction exactly when user vulnerability peaks.
  • Behavioral Economics: Effective fraud prevention requires understanding cognitive biases (e.g., authority bias, urgency cues) and designing friction that forces a shift from emotional, fast thinking to rational, slow thinking.
  • Ethical Guardrails: While powerful, agentic AI introduces significant risks surrounding algorithmic bias, data privacy, and "nudge fatigue," demanding rigorous human-in-the-loop oversight and transparent design.

Understanding the Threat Landscape

Research suggests that as artificial intelligence becomes democratized, malicious actors are leveraging it to industrialize social engineering. The evidence leans toward an exponential increase in losses driven by authorized push payment (APP) fraud, where users are manipulated into willingly transferring funds.

The Role of the Design Leader

Design leaders in financial services occupy a critical nexus between security, technology, and user experience. It seems likely that the next generation of fintech platforms will compete not just on convenience, but on the efficacy of their proactive protection mechanisms. Building systems that educate and intervene without alienating the user is paramount.


[1] The Evolution of Fraud and the Failure of Generic Warnings

The financial services sector is currently navigating an unprecedented escalation in both the volume and sophistication of fraudulent activities. Historically, cybersecurity in consumer banking focused heavily on preventing unauthorized system access, such as credential stuffing or brute-force attacks. Today, however, the battlefield has shifted fundamentally from exploiting technical vulnerabilities to exploiting human psychology.

[1.1] The Industrialization of Social Engineering

Fraudsters are increasingly executing Authorized Push Payment (APP) fraud, wherein victims are manipulated into voluntarily transferring money to criminal accounts. This shift is being massively accelerated by the advent of accessible generative artificial intelligence (GenAI) tools. Criminals harness GenAI to rapidly produce convincing deepfakes, synthetic voice clones, and highly targeted phishing emails [1]. According to industry forecasts, the proliferation of generative AI could fuel an estimated $40 billion in U.S. fraud losses by 2027, representing more than a threefold increase from 2023 figures [1].

These sophisticated tactics manifest in various forms, prominently including vishing (voice phishing) and smishing (SMS phishing). Smishing is particularly effective due to the inherent nature of mobile communication: over 90% of SMS messages are opened in under three seconds, creating an immediate sense of urgency that scammers exploit to bypass critical thinking [2]. The deception now occurs at the level of human judgment rather than system breach [3].

[1.2] The Shortcomings of Traditional Fraud Warnings

In response to rising fraud, traditional banking interfaces have relied heavily on static, generic warnings—such as standard banner alerts or pre-transaction pop-ups reminding users not to share their passwords. These mechanisms often fall short for several reasons:

  1. Banner Blindness and Habituation: Users exposed to identical, static warnings repeatedly develop "nudge fatigue," eventually ignoring the alerts entirely as they become integrated into the background noise of the interface [4].
  2. Lack of Context: A generic warning applied to every transaction does not account for the specific risk parameters of a given action. A $50 routine transfer requires different contextual handling than a $5,000 wire transfer to a new overseas beneficiary [5].
  3. Heightened Emotional States: When a user is under the influence of a scammer—often convinced that their account has already been compromised or that a loved one is in danger—they are operating in a state of high emotional arousal. In these critical moments, simple pop-up warning screens are largely ineffective because people in heightened states actively ignore them, viewing the warning as an obstacle to resolving their perceived crisis [6].

To effectively combat modern, AI-powered social engineering, financial institutions must abandon static warnings in favor of dynamic, personalized, and context-aware interventions.

[2] Conceptual Framework for Agentic Personalized Prevention

To counter dynamic threats, fintech platforms must adopt equally dynamic defense mechanisms. This necessitates a shift from passive, rule-based automation to Agentic AI: artificial intelligence systems capable of acting autonomously, reasoning through complex contexts, making independent decisions, and interacting directly with consumers [7, 8, 9].

[2.1] Defining Agentic Content Systems

Unlike traditional machine learning models that assess risk at fixed, discrete points (e.g., exclusively at login or at the moment of a final transaction request), agentic systems observe and assess user behavior continuously throughout the entire transaction lifecycle [3]. Agentic content systems utilize this continuous stream of intelligence to autonomously generate and deliver targeted educational or preventative content in real-time.

These systems move beyond simple "allow/deny" paradigms. Instead, they dynamically adjust the user journey, deciding when and how intervention should occur [3]. If an agentic system detects anomalies indicative of an active scam—such as erratic navigation, extended pauses, or signs of remote desktop manipulation—it can autonomously initiate a specialized, personalized intervention flow [10].
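A minimal sketch of this continuous-assessment loop, assuming hypothetical signal names and hand-picked weights (a production system would learn the weights from labeled fraud outcomes rather than hard-code them):

```python
from dataclasses import dataclass, field

# Hypothetical behavioral signals and illustrative weights.
SIGNAL_WEIGHTS = {
    "erratic_navigation": 0.25,
    "extended_pause": 0.15,
    "remote_desktop_active": 0.45,
    "new_beneficiary": 0.15,
}

@dataclass
class SessionState:
    signals: dict = field(default_factory=dict)  # signal name -> bool
    risk_score: float = 0.0

def update_risk(state: SessionState) -> float:
    """Re-score the session whenever a behavioral signal changes."""
    state.risk_score = sum(
        w for name, w in SIGNAL_WEIGHTS.items() if state.signals.get(name)
    )
    return state.risk_score

def should_intervene(state: SessionState, threshold: float = 0.5) -> bool:
    """Launch a personalized intervention flow once risk crosses the threshold."""
    return update_risk(state) >= threshold

session = SessionState({"remote_desktop_active": True, "extended_pause": True})
print(should_intervene(session))  # 0.45 + 0.15 = 0.60 >= 0.5 -> True
```

Because the score is recomputed on every signal change rather than at fixed checkpoints, the intervention can fire mid-session, at the moment vulnerability peaks.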

[2.2] Moving from Reactive to Proactive Postures

Traditional systems operate reactively, relying on post-transaction analysis and chargebacks. Agentic AI shifts the posture to proactive, pre-authorization intervention [11]. By leveraging real-time data analysis, these systems can intervene mid-transaction, presenting the user with personalized educational content that is highly relevant to the specific threat vector they are currently facing [3].

For example, if a user's behavioral biometrics suggest they are actively being coached over the phone while making a high-value transfer to a cryptocurrency exchange, the agentic system can pause the transaction. It can then deliver specific, context-aware content explaining the mechanics of investment or impersonation scams, directly addressing the user's immediate reality.
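The threat-vector-to-content matching described above can be sketched as a simple lookup; the scam categories, trigger conditions, and copy below are all illustrative assumptions, not any vendor's actual catalog:

```python
from typing import Optional

# Hypothetical library of targeted educational content.
CONTENT_LIBRARY = {
    "investment_scam": "Before you send: how 'guaranteed return' offers really work.",
    "impersonation_scam": "Your bank will never ask you to move money to a 'safe account'.",
    "remote_access_scam": "Is someone else controlling your screen right now?",
}

def pick_intervention(signals: dict) -> Optional[str]:
    """Return targeted educational content for the most likely active scam,
    or None when no known threat vector matches the behavioral signals."""
    if signals.get("screen_share_active"):
        return CONTENT_LIBRARY["remote_access_scam"]
    if signals.get("destination") == "crypto_exchange" and signals.get("on_call"):
        return CONTENT_LIBRARY["investment_scam"]
    if signals.get("claimed_bank_caller"):
        return CONTENT_LIBRARY["impersonation_scam"]
    return None
```

The point of the sketch is the shape of the decision, not the rules themselves: relevance comes from matching content to the user's immediate situation rather than showing one generic warning to everyone.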

[2.3] Comparison of Fraud Prevention Paradigms

Traditional rule-based systems and agentic AI systems differ along five dimensions:

  • Risk assessment timing: fixed points (login, checkout) vs. continuous assessment throughout the transaction lifecycle [3]
  • Action mechanism: binary (allow/decline) vs. nuanced (allow, partial hold, educational intervention) [12]
  • Content delivery: static, generic warning banners vs. dynamic, hyper-personalized, context-aware content [11, 13]
  • Learning capability: manual rule updates required vs. autonomous, continuous adaptation to new patterns [8, 14]
  • UX impact: high friction across all users vs. invisible continuous monitoring with targeted friction only when needed [15]

[3] Psychological Underpinnings: Behavioral Economics in Fraud Education

The efficacy of personalized agentic content is deeply rooted in behavioral science. Because modern fraud attacks human cognition rather than software code, defending against it requires a profound understanding of how users make financial decisions under pressure.

[3.1] Cognitive Biases Exploited by Fraudsters

Scammers meticulously engineer their attacks to exploit specific psychological heuristics and cognitive biases, including:

  • Authority bias: impersonating banks, police, or government officials so that instructions go unquestioned.
  • Urgency cues: manufactured deadlines that suppress deliberation and force snap decisions.
  • Fear and loss aversion: convincing victims that an account is already compromised or that a loved one is in danger.
  • Confirmation bias: once committed to the fraudulent narrative, victims discount evidence that contradicts it.

[3.2] Forcing a Shift in Cognitive Processing

To counteract these manipulative tactics, design leaders are leaning into the work of behavioral economists like Nobel laureate Daniel Kahneman. Kahneman's dual-process theory posits two modes of thinking: System 1 (fast, emotional, automatic) and System 2 (slow, deliberate, analytical).

Scammers force victims into System 1 thinking. The primary goal of an agentic fraud prevention system is to forcefully transition the user back into System 2 thinking. This is achieved by introducing intelligent, context-aware friction [6]. Rather than attempting to eliminate all friction from the digital banking experience—a core tenet of early 2010s UX design—modern fintech platforms must use friction as a protective feature.

By making the transaction process methodical and introducing agentic interventions that specifically remind customers to slow down their decision-making, banks can break the "scam spell" [6, 19]. Charm Security, an AI startup focused on this specific issue, designs interventions around human vulnerability patterns to disrupt scams in progress, proving that psychological insights are just as critical as technological countermeasures [19].

[3.3] The Power of Personalization in Education

Generic education assumes a homogeneous user base, but susceptibility to specific scams varies wildly based on demographic, behavioral, and contextual factors. Personalized educational content is psychologically more effective because it demonstrates system competence and builds trust.

If an AI agent detects that a user is attempting to wire money related to a real estate transaction and subsequently delivers a targeted warning about "Business Email Compromise (BEC) and real estate wire fraud," the user is far more likely to engage with the content. The high relevance of the intervention pierces through confirmation bias, compelling the user to critically evaluate the recipient's credentials.

[4] Technological Infrastructure for Real-Time Personalization

Delivering agentic, personalized fraud prevention requires a robust, highly orchestrated technological infrastructure capable of ingesting massive amounts of unstructured data, generating insights in milliseconds, and autonomously executing UX interventions.

[4.1] Data Ingestion and Behavioral Biometrics

The foundation of agentic AI is continuous data monitoring. Systems must track a holistic array of signals, including:

  • Behavioral biometrics: navigation patterns, hesitation and extended pauses, and typing or interaction rhythms.
  • Session context: device fingerprints, geolocation, and indicators of active screen-sharing or remote-access software.
  • Transaction parameters: amount, destination, beneficiary history, and channel.

[4.2] Machine Learning Models and AI Operations (AIOps)

To process this data, institutions rely heavily on complex machine learning (ML) architectures. Unsupervised learning techniques, such as isolation forests, are particularly effective for rapid anomaly detection, processing vast datasets to identify novel, previously unseen fraud attack methodologies [12].
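As a concrete illustration of unsupervised anomaly detection with an isolation forest, the toy example below trains on routine activity and flags an out-of-pattern transfer; the features and values are invented for demonstration, and scikit-learn is assumed to be available:

```python
import numpy as np
from sklearn.ensemble import IsolationForest

# Toy transaction features: [amount_usd, hour_of_day]. Routine activity
# clusters tightly; feature choice and values are purely illustrative.
rng = np.random.default_rng(0)
routine = np.column_stack([
    rng.normal(60, 15, 500),   # typical amounts around $60
    rng.normal(14, 3, 500),    # typical daytime hours
])

model = IsolationForest(contamination=0.01, random_state=0).fit(routine)

suspicious = np.array([[4800.0, 3.0]])   # a $4,800 transfer at 3 a.m.
print(model.predict(suspicious))          # -1 flags an anomaly, 1 is normal
```

Because isolation forests need no labeled fraud examples, they can surface attack patterns the institution has never seen before, which is exactly the property the text highlights.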

Furthermore, the implementation of AIOps (Artificial Intelligence for IT Operations) centralizes fraud monitoring. For instance, GlobalLogic's integration of AIOps for a UK retail bank automated threat detection, providing a 360-degree view of customer activity across all channels and reducing manual investigation efforts [23].

[4.3] Model Retraining and Adaptive Learning

Fraud vectors are not static; they evolve constantly as criminals probe defenses. Consequently, the models powering agentic content cannot remain static either. Unlike traditional rigid IF-THEN logic, these systems utilize dynamic learning techniques to adapt to emerging tactics [8, 24].

To maintain efficacy, leading financial institutions implement continuous or highly frequent model updates. Studies indicate that banking fraud detection models using agentic AI require weekly retraining protocols to incorporate new threat intelligence into the datasets, thereby maintaining detection accuracy rates above 95% [25].
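The weekly cadence could be orchestrated with logic along these lines; the registry structure and the `retrain_if_due` helper are hypothetical stand-ins for an institution's actual training pipeline:

```python
from datetime import datetime, timedelta

RETRAIN_INTERVAL = timedelta(days=7)   # the weekly cadence cited above

def retrain_if_due(registry: dict, new_batches: list, now: datetime) -> dict:
    """Fold newly labeled fraud outcomes into the model once per week.
    Bumping `version` stands in for the real (expensive) training job."""
    if now - registry["last_trained"] >= RETRAIN_INTERVAL:
        registry["training_data"].extend(new_batches)
        registry["version"] += 1
        registry["last_trained"] = now
    return registry

registry = {"last_trained": datetime(2026, 1, 1), "training_data": [], "version": 1}
registry = retrain_if_due(registry, ["week1_threat_intel"], now=datetime(2026, 1, 8))
print(registry["version"])  # a full week elapsed, so the version advanced to 2
```

Gating retraining on elapsed time rather than ad hoc manual updates is what keeps the model tracking the current threat landscape instead of last quarter's.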

[4.4] Federated Learning for Privacy-Preserving AI

A critical technological advancement supporting this ecosystem is Federated Learning. Traditionally, training robust AI models required centralizing vast amounts of highly sensitive customer transaction data, creating immense privacy and security risks. Federated learning allows AI models to be trained on decentralized datasets across different institutions or devices. The insights and pattern recognition are shared globally to improve the core agentic AI without ever exposing or moving individual users' personally identifiable information (PII) [14].

[5] Design Principles for Agentic Content Systems

Building the technological infrastructure is only half the battle; how the agentic system interfaces with the human user determines its ultimate success or failure. Design leaders must synthesize AI capabilities with empathetic UX design to create impactful prevention strategies.

[5.1] Context-Aware Authentication and Interventions

Friction must be intelligent. Context Aware Authentication solutions consolidate disparate data sources to provide detail and context for each potential fraud scenario, allowing the system to tailor the authentication experience to the specific risk, channel, and user preference [13].

If the system detects a low-risk anomaly (e.g., a customer logging in from a new device but conducting typical transactions), it may deploy a silent verification or a standard biometric prompt. However, if the system detects high-risk context (e.g., a sudden request to transfer a large sum to an unverified crypto-exchange while screen-sharing software is active), the agentic system must deploy a "hard stop" coupled with highly specific educational content detailing remote-access scams.
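The tiered logic just described might be sketched as follows; the tier names, thresholds, and trigger conditions are assumptions for illustration, not a vendor's actual policy:

```python
def choose_intervention(context: dict) -> str:
    """Map transaction context to a proportionate response."""
    high_risk = (
        context.get("screen_share_active")
        and context.get("amount", 0) >= 1000
        and context.get("beneficiary_verified") is False
    )
    if high_risk:
        # Hard stop, paired with targeted education on remote-access scams.
        return "hard_stop_with_scam_education"
    if context.get("new_device") and not context.get("unusual_activity"):
        # Low-risk anomaly: verify silently or with a standard biometric.
        return "silent_or_biometric_check"
    return "no_friction"

print(choose_intervention({
    "screen_share_active": True, "amount": 5000, "beneficiary_verified": False,
}))
```

The design principle the sketch encodes is proportionality: most sessions see no friction at all, and the heaviest intervention is reserved for the narrow contexts where the risk signals stack up.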

[5.2] Empathy and the Tone of Interventions

When designing agentic content, the tone of the communication is paramount. Users who are actively being scammed are often defensive, fearful, or highly committed to the fraudulent narrative. Interventions that are accusatory, overly technical, or dismissive will likely be bypassed or ignored.

Best practices dictate that content should be empathetic, supportive, and distinctly human in its phrasing. Mixed messaging that blends clear warnings with words of encouragement can effectively de-escalate the user's emotional state [18]. The agent should act as a trusted advisor, framing the friction as a protective measure taken out of care for the user's financial security, rather than a punitive bureaucratic hurdle.

[5.3] Designing the "Partial Hold" Paradigm

One of the most effective design strategies enabled by agentic AI is moving away from the binary "approve/decline" model toward selective parameter modification. Implementation data reveals that replacing outright transaction declines with partial holds for suspicious activity drastically improves both security and user experience.

In a study of financial institutions serving 127 million customers, utilizing partial holds reduced false positive friction by 67% while maintaining effective fraud prevention. Crucially, when the transaction was ultimately proven legitimate, customer satisfaction ratings were 3.2 times higher for users who experienced a temporary partial hold compared to those who faced complete transaction declines [12]. This highlights how nuanced, agent-driven design can balance risk mitigation with customer sentiment.
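A minimal sketch of the partial-hold decision, with illustrative risk thresholds and an assumed 80% hold fraction (neither figure comes from the study cited above):

```python
from dataclasses import dataclass
from enum import Enum

class Decision(Enum):
    ALLOW = "allow"
    PARTIAL_HOLD = "partial_hold"
    DECLINE = "decline"

@dataclass
class Outcome:
    decision: Decision
    released_now: float
    held_for_review: float

def assess(amount: float, risk: float, hold_fraction: float = 0.8) -> Outcome:
    """In the grey zone between clearly safe and clearly fraudulent, release
    part of the funds and hold the rest pending review, instead of forcing
    a binary approve/decline."""
    if risk < 0.3:
        return Outcome(Decision.ALLOW, amount, 0.0)
    if risk < 0.8:
        held = round(amount * hold_fraction, 2)
        return Outcome(Decision.PARTIAL_HOLD, amount - held, held)
    return Outcome(Decision.DECLINE, 0.0, 0.0)

print(assess(5000, risk=0.55))  # partial hold: $1,000 released, $4,000 held
```

The middle branch is what distinguishes this from a traditional rules engine: a legitimate customer still sees forward progress while the fraud team reviews, which is consistent with the satisfaction gap the study reports.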

[5.4] Omnichannel Messaging and Pervasive Awareness

Drawing inspiration from public health campaigns and initiatives like the Singapore Police Force's ScamShield, design leaders must treat fraud education as a persistent marketing campaign [6]. The messaging generated by agentic AI must be everywhere, constant, and seamlessly integrated across all digital touchpoints—mobile apps, web portals, push notifications, and SMS.

[6] Ethical Implications: Bias, Privacy, and Nudge Fatigue

The deployment of autonomous agentic systems in consumer finance introduces profound ethical complexities. Institutions must navigate these carefully to avoid inadvertently harming the consumers they intend to protect.

[6.1] Algorithmic Bias and Financial Exclusion

A paramount concern in AI-driven fraud detection is algorithmic bias. If AI models are trained on datasets that reflect historical inequalities—whether related to race, gender, socioeconomic status, or geographic location—they can inadvertently codify and amplify these biases [20, 26].

For example, if an algorithm correlates certain geographic zip codes or spending patterns typical of lower-income demographics with high fraud risk, it may disproportionately subject marginalized users to heavy friction, frozen accounts, or denied transactions [20, 26]. This transforms the algorithm into a hidden enforcer of bias, acting as an unaccountable gatekeeper to financial fairness [20]. To mitigate this, design teams must employ Explainable AI (XAI) frameworks that provide transparent reporting on why specific interventions were triggered, ensuring accountability and allowing human auditors to identify and correct discriminatory patterns [22].

[6.2] Data Privacy and the Boundaries of Observation

Agentic systems require continuous, granular monitoring of user behavior, which inherently raises severe data privacy concerns [24]. Tracking keystrokes, geolocation, and browsing habits to build personalized vulnerability profiles treads a fine line between protective surveillance and corporate overreach. Institutions must establish robust data governance frameworks, clearly communicating to users what data is being collected, how it is used solely for their protection, and providing mechanisms for consent.

[6.3] Nudge Fatigue and Psychological Manipulation

While behavioral nudges are effective, overusing them leads to nudge fatigue. When the same types of rewards, warnings, or frictions are repeated excessively, users become desensitized and annoyed, ultimately ignoring the interventions [4]. To combat this, the agentic system must regularly vary the presentation, timing, and nature of the content, strategically transitioning from external warnings to fostering internal motivation for secure habits [4].

Furthermore, as agentic AI interacts directly with consumers, it faces regulatory scrutiny under federal and state UDAP (Unfair, Deceptive, or Abusive Acts or Practices) laws. Systems that hallucinate inaccurate information, exhibit unwarranted overconfidence, or engage in manipulative behavioral targeting expose the institution to massive legal liabilities. AI is a tool, not a liability shield; institutions remain fully responsible for the autonomous actions of their agents [7].

[7] Case Studies and Empirical Efficacy

The theoretical benefits of agentic AI and behavioral design in fraud prevention are already being validated by leading financial institutions globally.

[7.1] Commonwealth Bank of Australia (CBA)

As a global leader in AI adoption, the Commonwealth Bank of Australia has heavily embedded generative AI into its customer-facing operations. By utilizing AI-powered safety tools like NameCheck, CBA reduced customer scam losses by an impressive 50% [27]. In a parallel example of industry-wide adoption, Capital One's AI assistant Eno conversationally handles inquiries and proactively flags unusual charges, improving overall security and reducing call center volumes by 50% [27].

[7.2] UK Retail Bank & Glassbox Behavioral Analytics

A major UK retail bank partnered with Glassbox to implement AI-powered real-time behavioral analytics. Facing sophisticated remote access scams and hidden DOM manipulations that traditional systems missed, the bank deployed AI to monitor on-screen content and user interactions the moment they changed. The agentic intervention enabled the fraud team to act instantly. The results were immediate: $18 million saved in fraud losses within just seven months, with analyst efficiency skyrocketing from hours per review to mere seconds [10].

[7.3] Krungthai Card PCL (KTC) Thailand

KTC, a leading credit card provider in Southeast Asia, integrated ACI Worldwide's payments intelligence and multilayer AI to combat increasingly sophisticated cross-border threats. By utilizing adaptive machine learning models, KTC successfully distinguished genuine unusual activities from actual fraud at the individual customer level. This nuanced approach achieved a 3:1 false positive ratio, improving the customer experience by reducing friction, and increased initial detection rates for cash-out scams from 33% to 50% [28].

[8] Forward-Looking Perspective: The Future of Defense Mechanisms

The trajectory of fraud prevention points toward a paradigm shift over the coming decade. As generative AI becomes increasingly capable of orchestrating hyper-realistic, multi-channel attacks at scale, traditional authentication mechanisms like passwords, SMS OTPs, and static knowledge-based questions will become obsolete.

[8.1] Continuous Passive Authentication

Future agentic systems will likely rely entirely on passive, continuous authentication. By seamlessly adapting to each user's behavioral patterns in real-time, the system will verify identity invisibly in the background based on biometric factors, eliminating routine logins [29]. Friction will only be introduced when the agentic system detects a deviation that signifies potential social engineering.
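One way to sketch passive, continuous authentication is an exponentially weighted trust score over behavioral similarity samples; the smoothing factor and trust floor below are illustrative assumptions:

```python
def update_trust(trust: float, similarity: float, alpha: float = 0.3) -> float:
    """Exponentially weighted trust score: each behavioral sample (typing
    cadence, swipe pressure, navigation rhythm) nudges the score toward
    its similarity to the user's learned profile."""
    return (1 - alpha) * trust + alpha * similarity

def step_up_required(trust: float, floor: float = 0.6) -> bool:
    """Introduce friction only when accumulated trust falls below the floor."""
    return trust < floor

trust = 0.9
for similarity in [0.85, 0.2, 0.1]:   # behavior suddenly stops matching
    trust = update_trust(trust, similarity)
print(step_up_required(trust))        # trust has decayed below 0.6 -> True
```

The smoothing means a single noisy sample does not lock the user out, while a sustained behavioral deviation (the signature of a coached or hijacked session) steadily erodes trust until the system steps in.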

[8.2] Collaborative Defense Ecosystems

The future of defense also lies in collaborative ecosystems and shared intelligence. Initiatives like the Agentic AI Foundation and the integration of Agent-to-Agent (A2A) protocols will allow different platforms (e.g., a telecom provider's network API and a bank's fraud detection engine) to communicate autonomously [30, 31]. For example, if a telecommunications AI detects a SIM swap or an active, unverified VOIP call, it can instantly communicate this risk to the bank's agentic AI, which can automatically and temporarily restrict high-risk transactions for that user.

[8.3] Empathy as a Core System Metric

Ultimately, the evolution of consumer banking defense mechanisms will be measured not just by financial losses prevented, but by the psychological safety afforded to the user. Design leaders must ensure that as AI systems gain full autonomy, they are hardcoded with empathy, fairness, and transparency. Personalized fraud prevention via agentic content represents the vanguard of this movement—transforming the banking app from a mere transactional utility into an intelligent, vigilant, and trusted financial guardian.


References

[1] Mastercard (2026). "AI is helping banks save millions by transforming payment fraud prevention." Mastercard News.
[2] The Decision Lab (2025). "Smishing." The Decision Lab Reference Guide.
[3] PYMNTS (2026). "Agentic Agents Confront and Combat Fraud as Scams Accelerate." PYMNTS Fraud Prevention.
[4] IMI Kolkata (2025). "ISDSI-Global Conference Proceedings Book: Tech for Net Zero." ISDSI.
[5] ProSight Financial Association (2026). "How Social Media is Fueling Check Fraud." ProSight Insights.
[6] RMAHQ (2025). "Applying Behavioral Science to Fraud Prevention." Risk Management Association.
[7] Consumer Finance Monitor (2026). "Agentic AI in Consumer Financial Services: Opportunities, Risks, and Emerging Legal Frameworks." Consumer Finance Monitor Podcast.
[8] EY (2025). "The rise of agentic AI: transforming fraud risk management." EY Insights.
[9] Finastra (2026). "Agentic AI: From assistance to autonomy - The next chapter in banking." Finastra Viewpoints.
[10] Glassbox (2025). "Retail Bank Fraud Detection Case Study." Glassbox.
[11] NICE Actimize (2025). "The Next Frontier of Fraud Prevention in Commercial Banking." NICE Actimize Blog.
[12] IJARSCT (2025). "Evaluating AI Implementation in Fraud Prevention." IJARSCT Papers.
[13] Entersekt (2025). "Context Aware Authentication." Entersekt Platform.
[14] CyberProof (2025). "AI-Powered Fraud Detection." CyberProof Blog.
[15] ThreatMark (2025). "AI Fraud Detection in Banking." ThreatMark.
[18] Losada, C. (2025). "The Role of Behavioural Economics in Understanding and Countering Fraudulent Tactics." IOSR Journals.
[19] Calcalistech (2025). "Charm Security emerges from stealth with $8M seed to stop AI scams." CTech News.
[20] The Payments Association (2025). "Algorithmic Gatekeepers: The Hidden Bias in AI Payments." Payments Review.
[21] Master of Code (2026). "Generative AI in Banking." Master of Code Blog.
[22] Fintech Weekly (2025). "How AI is Transforming FinTech Fraud Detection." Fintech Weekly Magazine.
[23] GlobalLogic (2025). "Case Study: ML & AIOps Fraud Detection for UK Retail Bank." GlobalLogic Insights.
[24] Emburse (2026). "AI Fraud Detection in Banking." Emburse Resources.
[25] FluxForce (2025). "Agentic AI for Fraud Detection in Real Time." FluxForce Blog.
[26] Finastra (2021). "Algorithmic Bias in Financial Services." Market Insight.
[27] UXDA (2024). "AI Gold Rush: 21 Digital Banking AI Case Studies and CX Transformation." The UXDA Blog.
[28] ACI Worldwide (2025). "How KTC leveraged AI for precise fraud management." ACI Worldwide Case Studies.
[29] Riverty (2025). "Fintech 2040: Global Insights." Riverty Reports.
[30] Consumer Reports (2024). "Peer-to-Peer Services Policies." CR Innovation.
[31] Juniper Research (2026). "Telefónica and Nokia Test AI-Native Network Exposure." The Distillery.
[32] Feedzai (2025). "2025 AI Trends in Fraud and Financial Crime Prevention." Feedzai Press.
[33] University of Illinois (2025). "Artificial Intelligence and Fraud Detection in US Commercial Banks." World Journal of Advanced Research and Reviews.
[34] The Alan Turing Institute (2019). "Artificial Intelligence in Finance." Turing Report.
[35] IRE Journals (2025). "Predictive AI Systems in Cybersecurity." Iconic Research and Engineering Journals.
KwQj7VxciwoxUnR8R0GETTbTg3balVSqsMOZX2-XhwV0t6c-Jfh3gwOGZ6aiS9sPVsmRwm1P3wKHMaAtNDW7ohBBWUu73e5y6oulCnqzzNbnz8QI" class="text-muted hover:text-primary border-b border-dotted border-grid-line" target="_blank" rel="noopener">turing.ac.uk">34: 16, jm8afmEXGJgEfFAtVsGkO-yTMXqytWcVO00HZ6XVfQxYKJ4V2RS2cqBNklK-8KQMBqMwRr8RiOOW6yHpMaGsFs4oob6TXL0q3Sc0jUgDox5hS7EmtJIU5DvBUMNM" class="text-muted hover:text-primary border-b border-dotted border-grid-line" target="_blank" rel="noopener">thedecisionlab.com">17] The Decision Lab (2026). "Vishing and Scamming." The Decision Lab Reference Guide [source]

Sources:

  1. mastercard.com
  2. thedecisionlab.com
  3. pymnts.com
  4. isdsi-global.com
  5. prosightfa.org
  6. rmahq.org
  7. consumerfinancemonitor.com
  8. ey.com
  9. finastra.com
  10. glassbox.com
  11. niceactimize.com
  12. ijarsct.co.in
  13. entersekt.com
  14. cyberproof.com
  15. threatmark.com
  16. thedecisionlab.com
  17. thedecisionlab.com
  18. researchgate.net
  19. calcalistech.com
  20. thepaymentsassociation.org
  21. masterofcode.com
  22. fintechweekly.com
  23. globallogic.com
  24. emburse.com
  25. fluxforce.ai
  26. finastra.com
  27. theuxda.com
  28. aciworldwide.com
  29. riverty.com
  30. consumerreports.org
  31. juniperresearch.com
  32. feedzai.com
  33. wjarr.com
  34. turing.ac.uk
  35. irejournals.com