Apr 27
Risk Tide
🌊Digitizing the Mess: Are We Actually Innovating TPRM?
Welcome back to The Current, where we cut through the noise to highlight the risk and third-party management updates that actually matter. This edition tackles two connected problems: why TPRM teams are struggling with capacity despite years of hiring conversations, and whether the technology we’re investing in is actually solving the right problems.
Let’s dive in. 🌊
Quick Note: Risk Tide shares educational insights and our take on regulatory changes; this is not legal or compliance advice. Every organization is different, so always check with your legal or compliance teams. Think of us as your "practical" TPRM guide, not your lawyers.
No One Should Read in Silence!
Fair warning: this newsletter pairs best with our office playlist. Press play, then dive in.

The Real TPRM Capacity Problem Isn’t Always Headcount. Could It Be Design?
Everyone knows TPRM teams are stretched. But the staffing conversation misses the deeper issue: whether the program was built to scale in the first place.
What changed:
The industry is finally talking about what practitioners have known for years. TPRM programs are carrying more vendors, more risk types, and more regulatory scrutiny than they were ever originally designed to handle. But the conversation keeps landing on headcount, and that’s the wrong starting point.
The real issue is the constant push and pull between three things: (i) how many third parties are in scope, (ii) what type of service each provides, and (iii) the level of exposure each relationship creates. That combination determines the actual workload, and it shifts constantly. A vendor manager “managing” 100+ relationships has very little time to do anything meaningfully productive with any of them. They’re updating spreadsheets, chasing responses, requesting documentation, and triaging the loudest fires. That’s not risk management. That’s an administrative game of hot potato where, just like a head butt, no one wins…
When monitoring, assessment, and governance processes weren’t built to flex with the portfolio, the program doesn’t scale, and adding headcount just distributes the same problem across more people. You get three people raising their hand instead of one.
What didn’t change:
The foundational interagency guidance issued by the OCC, FDIC, and Federal Reserve in June 2023 still expects banks to involve staff with the requisite knowledge and skills at each stage of the third-party risk management lifecycle: planning, due diligence, contract negotiation, ongoing monitoring, and termination. That expectation is reasonable in principle, but it assumes a program infrastructure that may not have adapted since its creation. The standard hasn’t changed. The vendor count (and type) has. And the gap between the two keeps widening.
Regulators also haven’t backed away from treating capacity as the institution’s problem to solve. The OCC’s November 2025 Request for Information on Community Banks acknowledged that institutions face difficulties attracting and retaining staff with requisite skills, budget limitations, and reduced bargaining power with dominant service providers. But acknowledging the challenge isn’t the same as lowering the bar.
What it means for business:
This is where the conversation needs to shift: from “we need more people” to “we need a program that works at this scale.” The real question is whether your model is built to apply effort where risk actually changes. Can your tiering distinguish a critical cloud provider from a low-risk office supply vendor without forcing the same manual process on both? Are you monitoring for meaningful change, like incidents, control breakdowns, financial deterioration, subcontractor shifts, and service issues, or just repeating the same annual exercise? And does your governance model push ownership into the business where it belongs, while preserving centralized challenge for the relationships that truly matter?
A more scalable answer is not simply adding staff. It is redesigning the operating model: standardize oversight for low-risk vendors, use trigger-based monitoring to detect material changes, and treat critical relationships as part of a broader portfolio of concentration, dependency, and resilience risk. That gives the business speed where risk is low and gives the central team room to focus where failure would actually matter.
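To make "trigger-based monitoring" a bit more concrete, here is a minimal sketch of the idea in Python. Everything in it is illustrative: the signal fields, tier names, and thresholds are hypothetical stand-ins for whatever your program actually tracks, not a prescribed implementation.

```python
from dataclasses import dataclass

# Hypothetical monitoring signals for one vendor; field names are illustrative.
@dataclass
class VendorSignals:
    tier: str                  # "critical", "moderate", or "low"
    had_incident: bool         # security or service incident since last review
    financial_score_drop: int  # points of deterioration on an internal scale
    subcontractor_change: bool # material change in fourth-party dependencies

def needs_reassessment(s: VendorSignals) -> bool:
    """Escalate only when a material change fires, scaled by tier.

    Low-risk vendors stay on a standardized cadence unless something
    meaningful happens; critical vendors escalate on smaller moves.
    """
    if s.had_incident or s.subcontractor_change:
        return True
    # Critical relationships react to smaller financial deterioration.
    threshold = 10 if s.tier == "critical" else 25
    return s.financial_score_drop >= threshold

# A low-risk office-supply vendor with no signals stays on the standard track.
print(needs_reassessment(VendorSignals("low", False, 0, False)))       # False
# A critical cloud provider with modest financial deterioration escalates.
print(needs_reassessment(VendorSignals("critical", False, 12, False))) # True
```

The design point is that nothing escalates on the calendar alone: effort is spent only when a material change fires, and critical relationships react to smaller moves, which is exactly what frees the central team to focus where failure would actually matter.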
Translation:
The question isn’t whether you have enough people. It’s whether the program you built can handle what you’re asking it to do. If a vendor manager is “managing” 100+ relationships, they’re not managing risk. They’re managing a list.
The answer is not to do the same process faster. It is to build a model that scales: one that standardizes the bottom, escalates the top, and reacts when risk actually changes.

Are We Innovating TPRM or Just Digitizing the Same Broken Process Faster?
Questionnaires moved from spreadsheets to portals. Assessments went from email chains to workflows. But has the underlying model actually changed?
What changed:
The TPRM technology market has expanded significantly, and we at Risk Tide love all the new ideas and innovation. Platforms now offer automated vendor intake, continuous monitoring feeds, AI-assisted questionnaire review, integrated contract management, and dashboards that can slice risk data a dozen different ways. Every product page says “AI-powered.” Every demo looks impressive. And to be fair, many of these capabilities represent real progress, particularly in reducing the manual lift on intake, triage, and evidence collection.
But here’s the question that doesn’t get asked enough: did the technology change how your program works or did it just make the existing process run on a better interface? Because those are very different outcomes.
Too many programs took a process that was built around annual spreadsheets, migrated it into a platform, and called it transformation. The questionnaire is the same. The assessment cadence is the same. The tiering logic is the same. The bottlenecks are in the same places. The only thing that changed is the tool, and the licensing cost that came with it.
That’s not innovation. That’s digitization. And there’s a meaningful difference.
What didn’t change:
The core TPRM operating model at most institutions still follows the same basic pattern it did a decade ago: onboard the vendor, send a questionnaire, review the responses, assign a risk rating, revisit annually. Technology has made each of those steps faster, but has it fundamentally challenged whether those are the right steps, or whether that sequence reflects how risk actually moves?
The programs that are actually advancing aren’t just layering technology onto the old model. They’re asking harder questions: What if risk tiering was dynamic instead of set-it-and-forget-it? What if the assessment depth flexed automatically based on what monitoring signals are telling us between cycles? What if business line owners could interact with vendor risk data directly instead of waiting for the TPRM team to translate it into a report? Those are architectural shifts, not feature upgrades.
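As a sketch of what "assessment depth flexing automatically with monitoring signals" could look like, here is a hypothetical mapping in Python. The depth labels, tier names, and scoring scale are all assumptions made for illustration, not an implementation any platform actually ships.

```python
# Hypothetical sketch: assessment depth flexes with between-cycle monitoring
# signals instead of a fixed annual questionnaire. All tiers, scores, and
# depth labels below are illustrative assumptions.

def assessment_depth(base_tier: str, signal_score: int) -> str:
    """Map a static tier plus between-cycle signal activity to a review depth.

    signal_score aggregates monitoring alerts since the last assessment
    (0 = quiet). A quiet low-tier vendor gets a lightweight attestation;
    signal activity can promote any vendor to a deeper review.
    """
    depths = ["attestation", "standard", "enhanced", "full_onsite"]
    base = {"low": 0, "moderate": 1, "critical": 2}[base_tier]
    # Each 10 points of signal activity bumps depth one level, capped at max.
    bump = min(signal_score // 10, len(depths) - 1 - base)
    return depths[base + bump]

print(assessment_depth("low", 0))        # attestation: quiet, stays light
print(assessment_depth("low", 20))       # enhanced: signals promoted it
print(assessment_depth("critical", 15))  # full_onsite
```

The architectural shift is in the second argument: the tier stops being set-it-and-forget-it because monitoring activity between cycles feeds directly into how deep the next review goes.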
What it means for business:
This matters because the technology investment is real and it’s growing. But spending more doesn’t automatically produce better outcomes. If the underlying process is the problem, a better tool just helps you execute the wrong process more efficiently. You get faster questionnaires that still ask the wrong questions. You get real-time dashboards that nobody outside the risk team looks at. You get continuous monitoring feeds that generate alerts but no action, because the escalation process wasn’t redesigned to handle the volume.
The institutions getting real value from technology are the ones that used implementation as an opportunity to challenge the process itself. They retired the parts that existed out of habit. They automated the parts that were genuinely repetitive. And they redirected human attention to the parts where judgment, relationship management, and escalation decisions actually matter. That’s where a person adds value no platform can replace.
Translation:
Moving from a spreadsheet to a platform isn’t innovation. It’s migration. Real progress means asking whether the process deserved to survive the upgrade.

Third-Party Risk Rewired: A Technology Summit for Risk Leaders
Friday, May 8, 2026 | 1:00 PM – 5:00 PM | New York, NY
Above is one of the questions we’ll be exploring in depth at Third-Party Risk Rewired, a half-day summit built for practitioners, not vendor pitches. Three rotating discussion sessions on enablement, innovation, and the business question behind your TPRM operating model. No slides. No panels. Just the conversations that matter with the people doing the work.
Space is limited.
Where in the world is Garit?
Risk Tide co-founder (and frequent flyer) Garit Gemeinhardt is always on the move, so we’ve decided to keep track of his travels.
This week’s destination: Honduras 🏖️

Risk Tide provides educational and training content for informational purposes only and does not provide legal, regulatory, accounting, or professional advice. Participation in Risk Tide training does not create a consulting or advisory relationship. Organizations should consult qualified professionals regarding their specific risk and compliance needs. All third-party trademarks and frameworks are the property of their respective owners.
© 2026 Risk Tide Solutions LLC. All rights reserved.

