
Jake Jacobs-Smith
Product Security Leader
Yext
My story of moving from App Security to Product Security
Current State & Background
How would you describe Product Security at your organization today?
Product Security at Yext is in a transitional phase. While we have established foundational controls, our posture is still primarily reactive: our lean team focuses heavily on auditing 'in-flight' development. We are actively working toward a more proactive model where security is integrated earlier in the process to reduce manual overhead and friction.
Before Product Security became a focus, how was security typically involved in your development lifecycle?
Historically, security operated as a siloed function and was often treated as a final checkpoint rather than a partner. Engagement was purely ad-hoc and dependent on engineering teams self-reporting their needs. This lack of visibility meant we were often catching risks late in the cycle rather than influencing the design phase.
Inflection Point
What started to feel misaligned that made you think the focus on holistic product security is needed? Was there a specific moment, pattern, or type of issue that made it clear code-level security wasn’t enough?
We initially relied on an 'honor system' where engineering teams self-reported high-risk projects. I believed we were maturing until a company all-hands meeting where a major new product feature, one my team hadn't even heard of, was announced. That was the turning point.
It proved that while our code-level security was improving, our architectural visibility was non-existent. We realized that a security model built on 'opt-in' participation is inherently fragile; we needed a holistic system that captures risk by design, not by choice.
How did those issues show up internally - was it delays, friction with engineering, uncertainty, or something else?
It manifested as a 'lose-lose' scenario. Because we were engaging late in the cycle, our security asks were perceived as roadblocks rather than legitimate requirements. We faced a constant tug-of-war between security quality and shipping velocity.
Ultimately, this led to a binary choice we never wanted to make: either delay the launch to fix critical gaps, or accept the 'technical debt' of reviewing features only after they were already live. Neither was sustainable.
Organizational Response
When you first raised the idea of formalizing earlier or design-stage security, how did leadership respond?
Leadership was highly receptive, as they recognized the need for a more scalable model. We were able to present clear metrics demonstrating that our existing manual processes couldn't keep pace with our deployment frequency.
By showing the gap between our current resources and the volume of code being shipped, we made a data-driven case that 'shifting left' wasn't just a security preference, but a necessary evolution for our operational stability.
What concerns did you hear most often from product or engineering teams at the beginning?
The primary concern was the impact on shipping velocity. Because security had historically been an ad-hoc or post-development activity, teams were wary of any new 'gates' that might introduce friction. There was a natural apprehension about the unknown: specifically, how much additional overhead these new controls would add to their existing sprints.
My challenge was to prove that catching issues at the design stage would actually reduce long-term work by preventing emergency late-stage fixes.
Execution & Impact
How do you measure whether the focus on design stage risk management is actually working versus just box-checking?
Our primary KPI is our Security Coverage Rate, the percentage of high-risk initiatives that undergo a review before a single line of code is written. Before we formalized design-stage reviews, we were essentially operating on 'hope' as a strategy.
Now, by tracking the delta between planned initiatives and security engagements, we can quantify our visibility. If we see a feature reaching production without an upstream design sync, we know we have a process gap to close.
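As a rough illustration, here is a minimal sketch of how a coverage rate like this could be computed, assuming each initiative is tracked with a risk tier and a flag for whether a design-stage review happened. The data model and field names are hypothetical, not Yext's actual tracking system.

```python
from dataclasses import dataclass

@dataclass
class Initiative:
    name: str
    risk_tier: str          # e.g. "high", "medium", "low" (illustrative tiers)
    design_reviewed: bool   # True if a security review happened before implementation

def security_coverage_rate(initiatives: list[Initiative]) -> float:
    """Percentage of high-risk initiatives that received a design-stage review."""
    high_risk = [i for i in initiatives if i.risk_tier == "high"]
    if not high_risk:
        return 100.0  # nothing high-risk planned, treat as fully covered
    reviewed = sum(1 for i in high_risk if i.design_reviewed)
    return 100.0 * reviewed / len(high_risk)

# Example: two of three high-risk initiatives were reviewed -> ~66.7% coverage
planned = [
    Initiative("new-auth-flow", "high", True),
    Initiative("partner-api", "high", True),
    Initiative("bulk-export", "high", False),
    Initiative("ui-refresh", "low", False),
]
print(f"Security Coverage Rate: {security_coverage_rate(planned):.1f}%")
```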
What changed once security started engaging earlier - either in planning, design, or architecture discussions?
The most significant change has been a cultural shift toward 'Security by Design.' We’ve moved from being a late-stage 'blocker' to an upstream 'architectural partner.' Engineers have realized that addressing a security requirement during a design doc review takes thirty minutes, whereas fixing that same issue in a nearly finished codebase can take weeks of rework.
As a result, we’ve seen a 'pull' effect: teams are now pulling us into discussions early because they see the direct correlation between early engagement and faster, smoother releases.
Reflection
How long did it take to go from "we're doing this" to "this is how we work now", and what were the milestones in between?
I view this as a continuous evolution rather than a finished project, but the transition to our current operating model took roughly 18 months of concentrated effort. We broke this down into three distinct phases:
Phase 1: The Data & Visibility Audit (Months 1–6): We spent the first half-year just gathering the data, tracking how many features were shipping 'dark' and where the friction points were. This allowed us to move from 'gut feelings' about being late to having a quantified business case for change.
Phase 2: Formalizing the 'Design Gate' (Months 6–12): This was the most difficult stretch. We shifted from the ad-hoc 'honor system' to integrating a mandatory security sync at the architectural level. We had to build the templates, define what a 'high-risk' feature actually looked like, and prove to engineering that this wouldn't stall their sprints. (A rough sketch of what that triage could look like appears after Phase 3 below.)
Phase 3: Cultural Stabilization (Months 12–18): This is the stage we are in now. The 'fight' over whether we should be involved has largely ended. The milestones here weren't new tools, but rather 'organic reach-outs' where developers began tagging us in design docs before we even asked. We’ve moved from chasing the roadmap to being a native part of it.
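To make the 'high-risk' definition from Phase 2 concrete, below is a hypothetical sketch of a lightweight triage rubric. The questions, weights, and threshold are purely illustrative; any real rubric would be tuned to the organization's own risk profile.

```python
# Hypothetical design-review triage: the questions, weights, and threshold are
# illustrative examples, not an actual production rubric.
TRIAGE_QUESTIONS = {
    "handles_customer_data": 3,       # touches PII or customer content
    "new_external_surface": 3,        # new API, webhook, or public endpoint
    "changes_authn_authz": 3,         # login, permissions, tenancy boundaries
    "new_third_party_dependency": 2,
    "processes_untrusted_input": 2,
}

def needs_design_review(answers: dict[str, bool], threshold: int = 3) -> bool:
    """Return True if the feature scores high enough to require a security design sync."""
    score = sum(weight for key, weight in TRIAGE_QUESTIONS.items() if answers.get(key))
    return score >= threshold

# Example: a feature that adds a public endpoint and parses untrusted input
feature = {"new_external_surface": True, "processes_untrusted_input": True}
print(needs_design_review(feature))  # True (score 5 >= threshold 3)
```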
Looking back, what was harder than you expected about making the shift - and what turned out to be easier?
The hardest part was breaking the 'retroactive' habit. Even after we had leadership support, it took a long time to change the muscle memory of the organization. Shifting a culture from 'build first, secure later' to 'security by design' requires constant advocacy. We had to patiently demonstrate, over and over again, that a 30-minute design review saves two weeks of emergency coding later. Overcoming that initial skepticism about 'velocity' was an uphill battle that required building deep trust with individual engineering leads.
On the other hand, securing executive buy-in turned out to be easier than expected. I anticipated having to fight for the importance of Product Security, but once I presented the metrics showing our 'visibility gap', specifically the reality of major features being announced at all-hands that we hadn't seen, the business case was undeniable. Leadership immediately understood that our current pace of growth had outstripped our manual processes, and they were eager for a solution that provided both safety and scalability.
Advice to Peers
For teams considering a similar transition, what would you recommend starting with, and what would you avoid over-optimizing too early?
Start with Visibility. You cannot secure what you can't see. Before you try to fix the code, fix your intake process. My recommendation is to find the 'source of truth' for product work, whether that's a Jira board, a product roadmap, or a specific planning meeting, and insert yourself there. You want to move away from an 'honor system' as quickly as possible and toward a system where security is a default part of project initiation.
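As one illustration of what that default intake could look like, here is a hedged sketch that polls Jira's standard search API for recently created epics that haven't been tagged as reviewed. The project key, label, JQL filter, and credentials are all placeholders; adapt them to wherever your roadmap actually lives.

```python
import os
import requests

# Hypothetical intake poller: base URL, project key, label, and JQL window are
# placeholders. Credentials are read from the environment (Jira Cloud email + API token).
JIRA_BASE = "https://your-org.atlassian.net"
AUTH = (os.environ["JIRA_USER"], os.environ["JIRA_API_TOKEN"])

def new_epics_without_security_review(project_key: str = "PROD") -> list[dict]:
    """Fetch recently created epics that have not yet been labeled as security-reviewed."""
    jql = (
        f"project = {project_key} AND issuetype = Epic "
        "AND created >= -7d "
        'AND (labels IS EMPTY OR labels != "security-reviewed")'
    )
    resp = requests.get(
        f"{JIRA_BASE}/rest/api/2/search",
        params={"jql": jql, "fields": "summary,created"},
        auth=AUTH,
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json().get("issues", [])

if __name__ == "__main__":
    # Feed these into whatever queue or channel your security team actually triages from.
    for issue in new_epics_without_security_review():
        print(issue["key"], issue["fields"]["summary"])
```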
Avoid over-optimizing automated tooling too early. It’s tempting to go out and buy the most expensive tools and try to automate everything on day one. But if your culture hasn't shifted and you don't have a process for handling the results, you’ll just end up with a mountain of alerts and even more friction with engineering. Focus on the human-to-human design reviews first; once you have a predictable process, then find the tools that automate the most painful parts of it.
