Collaboration, Communication, and Culture: The Foundation of Security Programs
The hardest part of building a security program isn’t the technology. It’s getting people to care.
When I moved from traditional IT and security roles into specialized programs, first in SaaS governance and now leading AI security initiatives, I quickly realized that technical skill alone wouldn’t be enough. The challenge wasn’t just identifying vulnerabilities or implementing controls. It was building trust and understanding across the organization so security became everyone’s concern, not just mine.
Building effective security programs isn’t about being the smartest person in the room or collecting certifications. It’s about becoming a translator, an educator, and a collaborator who can connect security requirements to business realities.
From Gatekeeper to Partner: The SaaS Governance Journey
My first major lesson came while working on a SaaS governance team. We were tasked with assessing risk across a rapidly growing portfolio of SaaS applications. On paper, it looked simple: discover the apps, create a risk framework, evaluate each one, and approve or deny it (or provide risk reports and let stakeholders decide whether to accept the risk).
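To show just how simple it looked on paper, here’s a minimal sketch of that kind of scoring rubric. The factors, weights, and tiers are illustrative assumptions for this post, not our actual framework:

```typescript
// Minimal sketch of a SaaS risk rubric. Factors, weights, and
// thresholds are illustrative assumptions, not a real framework.
type DataSensitivity = "public" | "internal" | "confidential" | "regulated";

interface SaaSApp {
  name: string;
  dataSensitivity: DataSensitivity;
  hasSoc2Report: boolean; // vendor attestation, not a substitute for review
  supportsSso: boolean;   // can we enforce our own identity controls?
  userCount: number;      // blast radius if the app is compromised
}

const SENSITIVITY_SCORE: Record<DataSensitivity, number> = {
  public: 0,
  internal: 2,
  confidential: 5,
  regulated: 8,
};

function riskScore(app: SaaSApp): number {
  let score = SENSITIVITY_SCORE[app.dataSensitivity];
  if (!app.hasSoc2Report) score += 3;
  if (!app.supportsSso) score += 2;
  if (app.userCount > 100) score += 2;
  return score;
}

// The tier goes into the risk report; stakeholders decide what to do with it.
function riskTier(app: SaaSApp): "low" | "moderate" | "high" {
  const score = riskScore(app);
  return score >= 8 ? "high" : score >= 4 ? "moderate" : "low";
}
```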
In reality, it was less a scoring problem than a communication problem.
Teams were already using dozens of tools, many of which we didn’t even know about. If we came in with a strict “security says no” approach, people would either go around us or push to have us cut out of the process. Neither outcome made anyone safer.
Instead, the team and I started hosting webinars, joining department calls, and explaining something few had considered: how SaaS security differs from traditional security.
Teaching Shared Responsibility
In traditional on-premises environments, security teams controlled almost everything: the hardware, the network, the OS, and the applications. We could harden systems, manage access, and monitor traffic on our own terms.
SaaS changed that completely.
I spent hours explaining the shared responsibility model. “Yes, Salesforce secures their infrastructure, but we decide who has access, what permissions they have, and how our data is configured. A SOC 2 report doesn’t replace our due diligence.”
The best results came when I asked questions instead of issuing rules: “What problem are you solving with this tool? What data will it touch? Who needs access?” When teams felt heard, they became partners in security rather than trying to avoid it.
The cultural shift was real. Teams began approaching us before signing contracts, not after. Reviews stopped being red tape and started being part of smart business decisions.
Evolving with AI: Security in the Development Lifecycle
Those early lessons in communication became essential when I moved into AI security. The technology moves faster, and the risks are less defined. My role now involves working closely with development teams throughout the lifecycle, and sprint demos have become one of my best tools.
Making Security Relevant in Every Sprint
Developers don’t ignore security because they don’t care. They ignore it when it feels abstract or disconnected from their goals. So I changed my approach.
During sprint planning and demos, I don’t just say “sanitize inputs” or “check authentication.” I explain why it matters.
“This feature lets users upload training data for the model. If we don’t validate file types or scan for malicious content, someone could poison the model or access data from other users. Let’s figure out how to handle that safely without slowing the experience.”
Security becomes part of the feature, not an afterthought (shift left, anyone?).
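To make that conversation concrete, here’s a minimal sketch of the kind of validation I’m describing, assuming uploads arrive as a filename plus a raw buffer. The allow-list, size cap, and magic-byte check are illustrative, not a complete defense:

```typescript
// Minimal sketch of upload validation for model training data.
// The allow-list, size cap, and magic-byte check are illustrative
// assumptions; a real pipeline would also scan content for malware
// and enforce per-tenant isolation on the storage side.
const ALLOWED_EXTENSIONS = new Set([".csv", ".jsonl", ".parquet"]);
const MAX_UPLOAD_BYTES = 100 * 1024 * 1024; // 100 MiB

function validateTrainingUpload(filename: string, data: Buffer): void {
  const dot = filename.lastIndexOf(".");
  const ext = dot >= 0 ? filename.slice(dot).toLowerCase() : "";
  if (!ALLOWED_EXTENSIONS.has(ext)) {
    throw new Error(`Rejected upload: file type "${ext}" is not allowed`);
  }
  if (data.length > MAX_UPLOAD_BYTES) {
    throw new Error("Rejected upload: file exceeds size limit");
  }
  // Don't trust the extension alone: a "CSV" that starts with a ZIP
  // signature (PK\x03\x04) is probably not a CSV.
  if (data.length >= 4 && data.readUInt32BE(0) === 0x504b0304) {
    throw new Error("Rejected upload: content does not match declared type");
  }
}
```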
Collaborative Vulnerability Management
Patching used to feel like a fight. Security would send a list of CVEs, set a deadline, and wait. Developers pushed back. Everyone lost.
Now we work together. When a new CVE affects our stack, I sit down with the developers to review it. We look at the details: How does it work? Does it apply to our setup? What’s the right timeline to fix it? Are there short-term mitigations?
This collaboration does two things. It helps developers understand the real risk, and it shows that security respects their expertise.
Recently, we found a critical vulnerability in a Node package. Instead of forcing an immediate patch that would delay release, we verified that our implementation didn’t expose the vulnerable code path. We documented the decision, added monitoring, and scheduled the update for the next sprint. Everyone understood and supported the plan.
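That review doesn’t have to be heavyweight. Here’s a minimal sketch of the triage logic, using the semver package to test version ranges; the advisory shape and the decision record are illustrative assumptions for this post, not a real tool’s schema:

```typescript
// Minimal triage sketch: is the installed version in the advisory's
// vulnerable range, and what did we decide about it? The shapes here
// are illustrative assumptions, not a real tool's schema.
import * as semver from "semver"; // npm install semver

interface Advisory {
  cve: string;
  pkg: string;
  vulnerableRange: string; // e.g. "<4.17.21"
}

interface TriageDecision {
  affected: boolean;
  action: "patch-now" | "mitigate-and-schedule" | "not-applicable";
  rationale: string;
}

function triage(
  advisory: Advisory,
  installedVersion: string,
  codePathReachable: boolean
): TriageDecision {
  if (!semver.satisfies(installedVersion, advisory.vulnerableRange)) {
    return {
      affected: false,
      action: "not-applicable",
      rationale: "Installed version is outside the vulnerable range.",
    };
  }
  if (!codePathReachable) {
    // Vulnerable version, but our code never reaches the affected API:
    // document the decision, add monitoring, patch next sprint.
    return {
      affected: true,
      action: "mitigate-and-schedule",
      rationale: "Vulnerable code path is not reachable in our implementation.",
    };
  }
  return {
    affected: true,
    action: "patch-now",
    rationale: "Vulnerable code path is reachable.",
  };
}
```

Writing the rationale down is the point: the next person who sees the finding understands why we chose the timeline we did.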
Building a Culture, Not Just a Program
The difference between a security program and a security culture is simple. A program is a set of policies and procedures. A culture is when people make secure choices even when no one is watching.
You can’t enforce culture. You grow it through clear communication, shared ownership, and consistent follow-through.
The Power of “Why”
I’ve made it a habit to always explain the “why” behind every security requirement. Not as a lecture, but as context.
Why do we need MFA on this SaaS app? Because it handles customer data, and passwords are still one of the easiest ways in.
Why are we cautious about AI model data? Because if sensitive information enters training data, it might later be exposed.
Why do we require API rate limits? Because without them, attackers can automate credential stuffing or scrape data at scale.
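To ground that last “why,” here’s a minimal token-bucket sketch. The capacity, refill rate, and keying strategy are illustrative assumptions, not a production limiter:

```typescript
// Minimal token-bucket limiter sketch. Capacity and refill rate are
// illustrative; keying on account + IP is one common way to slow
// credential stuffing without punishing users on shared networks.
interface Bucket {
  tokens: number;
  lastRefill: number;
}

const CAPACITY = 10;      // burst allowance
const REFILL_PER_SEC = 1; // sustained requests per second
const buckets = new Map<string, Bucket>();

function allowRequest(key: string): boolean {
  const now = Date.now();
  const bucket = buckets.get(key) ?? { tokens: CAPACITY, lastRefill: now };
  const elapsedSec = (now - bucket.lastRefill) / 1000;
  bucket.tokens = Math.min(CAPACITY, bucket.tokens + elapsedSec * REFILL_PER_SEC);
  bucket.lastRefill = now;
  buckets.set(key, bucket);
  if (bucket.tokens < 1) return false; // throttle: out of tokens
  bucket.tokens -= 1;
  return true;
}

// e.g. call allowRequest(`login:${username}:${clientIp}`) before
// checking credentials, and return 429 when it comes back false
```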
When people understand the reason, they apply that logic to new situations without needing constant oversight.
Making Security Visible
I also make a point to celebrate security wins. When a developer spots a potential injection risk during code review, I make sure their manager knows. When a product team asks about security implications during design, I call it out in meetings.
Security should feel like a shared achievement, not a burden.
The Long Game
Building security programs through collaboration, communication, and culture takes patience. It’s slower than simply enforcing controls, but it produces results that last.
Today, I spend less time convincing teams to think about security and more time helping them implement it. Our SaaS tools are better vetted not because our policies are stricter, but because teams understand what good security looks like. Our AI products are safer not because I catch every issue, but because developers think about risks as they build.
Technical work matters: risk assessments, architecture reviews, vulnerability management. But the human work matters more. Without communication and collaboration, even the best technical program will struggle.
Security isn’t something you do to an organization. It’s something you build with one. That’s what makes all the difference.