GF: Alan, describe the dynamic between your R&D team and business units. How do you ensure innovative ideas are adopted?
Sung: On my first day, my boss told me, ‘Alan, we are a cost center, not a profit center. Therefore, we must prioritize development based on our business units’ needs.’ We operate on an 80/20 strategy: 80% of our capacity is dedicated to the specific use-case needs of our business units.
The remaining 20% is for ‘value mode,’ developing new technologies like generative AI. In this mode, we conduct proofs of concept (POCs).
Once we have a minimum viable product, we present it to business units. If they are interested, we conduct a tailored POC to demonstrate the real-world benefit to their business processes. Finally, we scale and expand our algorithms or AI core engines.
GF: We spoke about fear of data and AI. What skills are you prioritizing in new hires, and how are you upskilling existing staff?
Hasson: When hiring for product management, we prioritize candidates with AI experience. Practical AI experience and an open mindset are essential. Second, understanding data is crucial; effective data design improves data lineage and integration with AI tooling. Third, design skills are key. A skilled designer with prototyping abilities can rapidly develop ideas, enabling quick failure on numerous concepts – testing many ideas in a few days and narrowing to two or three for further investigation. This efficiency requires the right mindset and correct application. Practical experience and its application are extremely important.
Schmidt: AI experience is now a necessity for new hires. We provide continuous training for our CGI partners, ensuring they remain current – crucial because AI tools and opportunities constantly evolve. Staying updated with market trends and tools is essential for productivity. Understanding the capabilities of these tools, whether generative, agentic, or for code generation, is vital.
Hasson: Hackathons, common in software development, involve collaborative coding to solve problems. A modern adaptation for general work is a “prompt-a-thon,” which is a good way to enthuse people about using AI. In these sessions, participants use prompts to generate creative solutions in small groups. The ideas are often excellent, and I highly recommend them.
Sung: Firstly, we define who can use AI. Secondly, we need to communicate effectively with our users about how to use these AI tools, as they often perceive AI as a “black box” – incorrectly assuming it’s very simple. Therefore, it’s crucial to equip employees with skills like using Copilot, prompt engineering, and context engineering, so that the full business context is passed into agent mode. Employing people who understand how to use agentic AI in today’s landscape is very important.
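To make “context engineering” concrete, here is a minimal sketch of the idea: relevant business context is retrieved and packed into the request an agent sees, rather than relying on the model’s general knowledge. The function names and the stubbed knowledge-base lookup are illustrative, not any specific bank’s tooling.

```python
# A minimal illustration of "context engineering": assembling role,
# policy, and retrieved business context into a single agent request.
# The retrieval step is stubbed; in practice it would query an
# internal knowledge base.

def retrieve_context(query: str) -> list[str]:
    """Stub for an internal knowledge-base lookup (hypothetical)."""
    return ["KYC refresh is required every 12 months for retail clients."]

def build_agent_prompt(task: str) -> list[dict]:
    context = "\n".join(retrieve_context(task))
    return [
        {"role": "system", "content": (
            "You are a compliance assistant for bank employees. "
            "Answer only from the provided context; say so if it is missing."
        )},
        {"role": "user", "content": f"Context:\n{context}\n\nTask: {task}"},
    ]

if __name__ == "__main__":
    for message in build_agent_prompt("When is a KYC refresh due?"):
        print(message["role"], "->", message["content"][:60])
```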
Panchmatia: We examine this from several angles. First, the functional aspect requires familiarity with technology, especially LLMs and their ecosystem. While this functional knowledge is important and teachable, the greater challenge, given widespread AI adoption, lies in developing core competencies. We’re increasingly focusing on curiosity, tenacity, change management, and adaptability. This is where people need to evolve.
Learning and using prompts is valuable, but AI will profoundly change how work is done, necessitating a re-evaluation of processes, organizational structure, and metrics. This shift is coming soon. The human element of the organization needs to be prepared. People must become curious, ask questions, be adaptable, and possess tenacity, because things will change and it won’t always be easy.
Consider Jeff, who has been doing his job for 25 years. His role won’t disappear, but it will transform significantly. The question is: how do we enable people to make that transition? Soft skills will be incredibly important and likely distinguish those who succeed.
Towards this, we have doubled down on our upskilling efforts to ensure that employees stay relevant even as AI reshapes operating models. We have rolled out bankwide access to generative AI training – workshops, e-learning, and live webinars – covering foundational and technical GenAI topics as well as Responsible Data Use. This year, we have identified more than 12,000 employees for upskilling or reskilling, and nearly all of them have commenced their learning roadmaps, which include skills such as AI and data.
GF: Looking at your own teams, what specific skill has become more valuable now that AI is part of the workflow? Conversely, what skills have become less critical?
Panchmatia: Number one, you have to be curious. It’s interesting because outside of work, everyone uses an AI app, but at work, it’s the opposite. That curiosity applied to work would be amazing. Repetitive tasks are likely to be automated. But remember, AI only knows what we’ve told it. It doesn’t create new stuff. So, human curiosity and creativity are important. Mundane tasks, like data entry or analysts summarizing hundreds of pages, will change. It doesn’t mean the person loses their job; they’ll have the ability to use their creativity and curiosity.
Schmidt: I’d add critical thinking. You’re working with various models and getting feedback. There have been many times I’ve thought, “That’s not right.” So, we tweak it. Being able to refine and question is going to be more important because, for so many jobs, it’s repetitive. You don’t have time to question; you only have time to do. So, with agents doing some of these things, being able to ask, “Are we doing this the right way? Can we revolutionize this?” That’s where bigger breakthroughs will come from.
Hasson: There’s also a point of scrutiny. We use AI to identify software vulnerabilities and recommend corrections. But a senior person must still verify it’s doing the right thing. We assume it’s good and correct, and most of the time it is. But what if it isn’t? Who provides the oversight? You still need someone with that level of scrutiny to ensure it’s truly correct.
GF: Alan, how does the R&D department manage the risk of AI-driven fraud and ensure the security of AI models themselves? Are there specific emerging threats that keep you awake at night?
Sung: Fraud is changing very fast. Traditionally, we used statistical or machine-learning rules, but that’s not enough. At CTBC, we built our AI-powered fraud detection and prevention system, AI Skynet, which learns from cross-channel data, finds hidden patterns, and reduces false positives. Nowadays, fraudsters operate within an ecosystem, so we are building our own anti-fraud ecosystem, connecting with the police and third parties – including the Financial Supervisory Commission and other regulators – through a transaction-profiling project. When money is transferred from account A to account C, an individual bank only sees the direct link. However, third parties like the Financial Information Service (FISC) can track the full transaction path, allowing us to alert the other banks involved to help find the bad guys. Ultimately, preventing scams requires a collaborative ecosystem, not just individual bank efforts.
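Conceptually, the cross-bank tracing Sung describes is a graph problem: each bank holds only its own edges, while a clearing house can traverse the combined graph. A minimal sketch, with made-up account IDs:

```python
from collections import deque

# Each bank sees only its own outgoing edges; a clearing house such as
# FISC sees the union and can reconstruct the full chain of accounts.
transfers = {
    "A": ["B"],        # visible to bank 1
    "B": ["C", "D"],   # visible to bank 2
    "C": ["E"],        # visible to bank 3
}

def downstream_path(source: str) -> list[tuple[str, str]]:
    """Breadth-first traversal over the combined transfer graph."""
    seen, queue, edges = {source}, deque([source]), []
    while queue:
        account = queue.popleft()
        for nxt in transfers.get(account, []):
            if nxt not in seen:
                seen.add(nxt)
                edges.append((account, nxt))
                queue.append(nxt)
    return edges

print(downstream_path("A"))
# [('A', 'B'), ('B', 'C'), ('B', 'D'), ('C', 'E')] – every bank on the
# path can be alerted, not just the bank holding account A.
```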
GF: How can Agentic AI be used to build a financial ecosystem that is efficient, transparent, and auditable?
Panchmatia: Agentic AI is very new. The ideas are fantastic, with great applications in retail and travel. However, the technology needed to run this ecosystem isn’t yet fully available. While promising, current platforms are far from providing the traceability, auditability, and policy management that strict banking processes require. By definition, a human gives an agent agency; it essentially represents a human being. When hiring an employee, policies dictate who they can communicate with and what systems they can access. How will we manage this with an entity that possesses human agency?
Significant thought and technological development are needed. We are achieving good results with agentic technology in straightforward applications like marketing and behavioural science, and complex ones like end-to-end credit processing for large corporations. However, I’m not sure we’ll declare victory within the next 6 or 12 months. There’s significant opportunity, and we continue to innovate. While progress will come in ‘bits and pieces,’ we must avoid ‘pilotitis,’ a problem we encountered with Generative AI. If this happens again with agentic AI, the ‘trough of disillusionment’ will be prolonged. Many aspects are still developing. Our approach should be to fully commit, but with the understanding that not all problems are solved, and we will incur technical debt, which must be managed properly. We are a long way from declaring victory in the agentic space.
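One way to give an agent the kind of “employment policy” Panchmatia describes is an explicit allow-list enforced outside the model, so a disallowed call fails closed and leaves a record. A minimal sketch with hypothetical agent and tool names:

```python
# Treat the agent like a new hire: an explicit policy defines which
# systems it may touch, enforced outside the model itself.
AGENT_POLICY = {
    "marketing-agent": {"read_campaign_stats", "draft_email"},
    "credit-agent": {"read_credit_file", "compute_risk_score"},
}

class PolicyViolation(Exception):
    pass

def invoke_tool(agent_id: str, tool: str, payload: dict) -> None:
    allowed = AGENT_POLICY.get(agent_id, set())
    if tool not in allowed:
        # Deny by default and leave an auditable record of the attempt.
        raise PolicyViolation(f"{agent_id} may not call {tool}")
    print(f"AUDIT: {agent_id} called {tool} with {payload}")
    # ... dispatch to the real system here ...

invoke_tool("marketing-agent", "draft_email", {"segment": "youth"})
```

Deny-by-default is the deliberate design choice here: an agent, like a new employee, starts with no access and gains it policy by policy.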
Schmidt: For any new initiative like this, transparency is paramount. Clearly define objectives and co-design the solution with your financial institution, ideally involving regulators. The design must prioritize transparency, demonstrating underlying work and decision-making. Thorough testing is crucial, with continuous adjustments. Additionally, carefully assess and communicate the risk profile to all partners. Finally, consider not only how to commercialize this offering, but also how to provide ongoing support, identify future directions, and facilitate easy entry into new markets.
Hasson: I love this conversation. Imagine reconciling data, finding a discrepancy, and needing to allocate it for resolution. Traditionally, a human agent figures out whom to allocate it to. Now, think of an agentic system – an automated assistant – employed to allocate this work. How do you know it’s done the right thing? What level of trust do you place in it?
Just as with a human employee, you’d implement checks and balances. At the moment, you need to apply this same principle of scrutiny and oversight to agentic systems. While agentic capabilities can create massive value, what happens when an error goes unnoticed, potentially leading to significant issues? You could have another agent checking the work, like a teacher marking homework. But how do you know they’re working correctly?
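The “teacher marking homework” pattern can be sketched as a second check over each allocation, escalating to a human on disagreement. Both agents below are stubs; in practice they might be separate models or a rules engine:

```python
import random

# A worker agent allocates a discrepancy; a reviewer agent re-checks
# the allocation and routes disagreements to a human queue.
def allocator_agent(discrepancy: str) -> str:
    return random.choice(["ops-team", "finance-team"])

def reviewer_agent(discrepancy: str, assignee: str) -> bool:
    # A real reviewer might be a second model or a deterministic rule set.
    return assignee == "finance-team" or "trade" not in discrepancy

def allocate_with_oversight(discrepancy: str) -> str:
    assignee = allocator_agent(discrepancy)
    if reviewer_agent(discrepancy, assignee):
        return assignee
    return "human-review-queue"   # disagreement -> human in the loop

print(allocate_with_oversight("unmatched trade settlement, $1.2M"))
```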
Hasson: That’s a different problem, but we need to reach a level of maturity where we can trust something. What can we trust? Honestly, not very much at the moment. Generative AI is great for anything that doesn’t have a right answer. It can generate good content, but is it always correct? If you ask it for 2 + 2, it’s probably right. But for almost anything else, is it right? No, it’s not. It’s somewhere between bad and good. Therefore, it’s crucial to implement checks and balances and not give it free rein, which is truly tricky.
GF: Moving on to MCPs. Unlike traditional APIs, which primarily handle static requests, a Model Context Protocol acts as a standardized “language” for AI applications to communicate effectively with external services. How does adopting an MCP enable new AI-driven opportunities for efficiency and personalized customer service, while creating a robust framework for managing data security, regulatory compliance, and model explainability?
Panchmatia: MCP, like APIs in the past, is an industry imperative. The positive development is the rapid establishment of common protocols, preventing fragmentation.
However, MCP introduces new risk management considerations. Unlike strict APIs, MCP incorporates context, allowing for probabilistic outcomes. Consequently, it necessitates robust guardrails. This could involve additional AI models for accuracy verification or human oversight. These aspects require careful thought.
The exciting development is the agreement on protocols for model and agent communication within the industry. This standardization will significantly reduce waste and uncertainty. While MCP adoption isn’t optional for many and brings numerous benefits, it also comes with inherent risks, some not yet fully understood. Therefore, similar to generative AI, it’s crucial to proceed step-by-step: test, evaluate, then gradually expand implementation.
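For readers unfamiliar with the protocol, this is roughly what a tool server looks like using the official MCP Python SDK’s FastMCP helper; the balance-lookup tool is a made-up illustration, not a real banking system:

```python
# Minimal MCP server using the official Python SDK's FastMCP helper.
# The balance-lookup tool is a stub for illustration only.
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("bank-demo")

@mcp.tool()
def get_account_balance(account_id: str) -> str:
    """Return the current balance for an account (stubbed)."""
    return f"Account {account_id}: balance unavailable in this demo"

if __name__ == "__main__":
    mcp.run()   # speaks the Model Context Protocol over stdio by default
```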
Sung: MCP offers a great chance to strengthen our AI governance framework. Before MCP, it was like searching a huge library with each department having its own catalog. MCP is like the Dewey Decimal System. Imagine an assistant helping you find a book and providing extra information.
We are not a technology company, but we can use MCP to build an AI governance framework on top of it, as it provides a single point of standardized control. We can integrate auditing, access checks, and data review directly into the workflow.
Previously, with multiple vendor systems and API frameworks, applying AI governance consistently was hard. If we adopt MCP and ask every bank and vendor to implement an MCP server, we can enforce the same AI governance, perform identity checks, and analyse model interactions in a unified way. This is the direction we should take.
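The “single point of standardized control” Sung describes could be a thin governance layer wrapping every tool invocation with an entitlement check and an audit record. A minimal, SDK-independent sketch, with hypothetical identities and entitlements:

```python
import logging
from datetime import datetime, timezone

logging.basicConfig(level=logging.INFO)
audit_log = logging.getLogger("mcp-governance")

# Hypothetical mapping of user identities to the tools they may call.
ENTITLEMENTS = {"analyst-42": {"get_account_balance"}}

def governed_call(user_id: str, tool_name: str, args: dict, tool_fn):
    """Wrap a tool invocation with an identity check and an audit record."""
    if tool_name not in ENTITLEMENTS.get(user_id, set()):
        audit_log.warning("DENIED %s -> %s", user_id, tool_name)
        raise PermissionError(f"{user_id} is not entitled to {tool_name}")
    audit_log.info("ALLOWED %s -> %s args=%s at %s",
                   user_id, tool_name, args,
                   datetime.now(timezone.utc).isoformat())
    return tool_fn(**args)

# Usage (with the tool from the earlier sketch):
# governed_call("analyst-42", "get_account_balance",
#               {"account_id": "123"}, get_account_balance)
```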
Hasson: I was at a conference recently where one of the people who helped establish the MCP framework expressed a degree of uncertainty about its success, which was interesting. He said its value depends heavily on using it the right way. From my perspective, MCP presents a significant opportunity. Consider a “break” – a situation where a user manually retrieves data to fix a problem.
While an API might exist, budget constraints often prevent the development work needed to connect to it. However, the excitement around MCP could incentivize organizations to publish access to their systems for internal collaboration.
This creates an opening to expose those APIs, allowing for automated connections. The “break” could then be automatically resolved by fetching necessary information, eliminating manual intervention. I believe MCP’s novelty will open doors to such solutions.
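In that scenario, resolving a “break” might look like an MCP client calling a counterparty’s published tool instead of a user retrieving the data by hand. A sketch using the MCP Python SDK; the server script and tool name are hypothetical:

```python
import asyncio
from mcp import ClientSession, StdioServerParameters
from mcp.client.stdio import stdio_client

# Hypothetical MCP server exposing settlement data for reconciliation.
server = StdioServerParameters(command="python", args=["settlement_server.py"])

async def resolve_break(trade_id: str) -> None:
    # Fetch the missing settlement detail automatically rather than
    # having a user retrieve it by hand.
    async with stdio_client(server) as (read, write):
        async with ClientSession(read, write) as session:
            await session.initialize()
            result = await session.call_tool(
                "get_settlement_detail", {"trade_id": trade_id}
            )
            print("Fetched for reconciliation:", result.content)

if __name__ == "__main__":
    asyncio.run(resolve_break("TRADE-001"))
```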
GF: Finally, what is the biggest technological or organizational challenge the financial industry must solve to unlock AI’s full potential in the next five years? And what is the most exciting opportunity you foresee once that challenge is overcome?
Schmidt: As with any opportunity, a lack of daring or imagination gets in the way – particularly in identifying true product value propositions. If we don’t push the envelope, we won’t achieve AI’s full potential. At the same time, I worry about complacency: simply saying a process is working fine. If something disturbs a seemingly stable process – for instance, a data set changes and the system starts making errors that grow exponentially – you have a much bigger problem.
Panchmatia: I’d say the biggest challenge is structural, not technological. Banks have been organized in silos for over 150 years. This means work is thrown across departments, while the customer experiences a horizontal journey. AI will change this, forcing banks to think deeply about their approach. Many consulting firms focus on technology implementation, but I believe the real problem is structural, impacting processes and more.
The biggest opportunity is that if banks can move away from these costly vertical pillars, it could profoundly improve their cost-to-income ratio, making banking an investable stock on the level of tech companies. At DBS, we’re most excited because it will open up markets we couldn’t scale in before because of our size, and allow us into markets that were previously inaccessible due to capital restrictions, capacity, and talent. It opens up many possibilities.
GF: Rounding up: to ensure a successful AI initiative, begin with a clear starting point and rethink existing workflows. Prioritize data quality and robust governance. Focus on augmenting human talent, establishing a strong framework, and implementing effective risk management strategies.
It’s crucial to define clear business value and metrics. When hiring, prioritize candidates with AI experience and adaptability, and foster critical thinking and scrutiny within your team. Overcome any structural challenges.
The future of AI in finance is not a distant concept; it’s already here. Therefore, it’s essential to start experimenting, learning, and adapting now.