<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:content="http://purl.org/rss/1.0/modules/content/"
    xmlns:wfw="http://wellformedweb.org/CommentAPI/"
    xmlns:dc="http://purl.org/dc/elements/1.1/"
    xmlns:atom="http://www.w3.org/2005/Atom"
    xmlns:sy="http://purl.org/rss/1.0/modules/syndication/"
    xmlns:slash="http://purl.org/rss/1.0/modules/slash/"
    xmlns:itunes="http://www.itunes.com/dtds/podcast-1.0.dtd"
    xmlns:rawvoice="http://www.rawvoice.com/rawvoiceRssModule/"
    xmlns:googleplay="http://www.google.com/schemas/play-podcasts/1.0">

    <channel>
        <title>This is GRC</title>
        <link>https://thisisgrc.com</link>
        <description>For GRC’s next generation of leaders building the skills that actually matter</description>
        <language>en</language>
        <copyright>Copyright 2026 This is GRC</copyright>
        <atom:link href="https://thisisgrc.com/rss/" rel="self" type="application/rss+xml" />
        <lastBuildDate>Tue, 14 Apr 2026 07:40:16 -0400</lastBuildDate>
        <itunes:author>This is GRC</itunes:author>
        <itunes:summary>For GRC’s next generation of leaders building the skills that actually matter</itunes:summary>
        <itunes:owner>
            <itunes:name>Your Name</itunes:name>
            <itunes:email>youremail@example.com</itunes:email>
        </itunes:owner>
        <itunes:explicit>false</itunes:explicit>
        <itunes:image href="https://storage.ghost.io/c/75/2d/752de7e0-940e-42a2-8399-ae9bb4bc1762/content/images/2022/06/icone.jpg" />
        <itunes:category text="Technology"></itunes:category>

                <item>
                    <title>The Truth About Cybersecurity Certifications</title>
                    <link>https://thisisgrc.com/the-truth-about-cybersecurity-certifications/</link>
                    <pubDate>Wed, 17 Sep 2025 08:08:01 -0400</pubDate>
                    <guid isPermaLink="false">68bf99c06ef9990001c0255e</guid>
                    <category>
                        <![CDATA[ Break In GRC ]]>
                    </category>
                    <description>Entry-level cybersecurity jobs are more competitive than ever. Learn why certifications aren’t enough and what really sets you apart.</description>
                    <content:encoded>
                        <![CDATA[ <p>Let's piss off every cybersecurity influencer, bootcamp, and certification body whose marketing strategy relies on the deception that there are "millions of unfilled cyber jobs," shall we?</p><p>I believed certifications mattered. Hell, I even got short-listed for the EC-Council Canada Hall of Fame and made YouTube videos promoting this stuff. </p><figure class="kg-card kg-embed-card kg-card-hascaption"><iframe width="200" height="113" src="https://www.youtube.com/embed/vpUHUTosvLk?feature=oembed" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share" referrerpolicy="strict-origin-when-cross-origin" allowfullscreen="" title="How the C|EH Boosted My Confidence and Empowered My Career| Pierre-Paul Ferland"></iframe><figcaption><p dir="ltr"><span style="white-space: pre-wrap;">I didn't even get a CPE for that effort.</span></p></figcaption></figure><p>I got my CEH, and I'm glad I did. Building that home lab, staying up late figuring out how things broke, sharing war stories with other wannabe hackers: that part was genuinely valuable because it lit the spark.</p><p>But here's the uncomfortable truth someone needs to say, and I'd rather it come from someone who's been exactly where you are than have you discover it eight months into your rejection journey.</p><h2 id="the-marketing-bullshit">The Marketing Bullshit</h2><p><strong>No, Security+ and the Google Cybersecurity Certificate are not enough to land an entry-level role in the current market. Stop asking.</strong></p><p>The entire certification industry is marketing on false premises. They keep pushing the narrative that there are "millions of unfilled cyber jobs," but that's straight-up deception built on the assumption that exponential growth from my era would continue forever. 
Spoiler alert: it didn't.</p><p>I started in 2015, when companies were literally plucking students off campus in their first year of university. I signed a full-time job in second year and only finished my degree because I wanted to, not because I had to. That world doesn't exist anymore.</p><p>Back then, it was the Target, Yahoo, and Ashley Madison breaches. Later, Equifax. Companies were moving to the cloud, building massive security teams that were finally breaking away from infrastructure and IT to become their own thing. You'd have auditors discovering Windows NT systems with LM hashes that a red team could crack in minutes.</p><p>Now the industry has matured. The problems are "higher-hanging fruit" requiring more seniors: cloud security, AppSec, GRC engineering. The market has bifurcated completely: senior roles are safe with increasing demand, but entry-level is brutal.</p><p>My story is a success story from a different time. The problem is, everybody assumed this trend would continue forever. Every school in my province now has a cybersecurity degree! In 2015, there were none. Security wasn't even cool! All computer science students wanted to do was AI, video games, and mobile.</p><p>We get 500+ applicants for a single intermediate role. Do you like those odds?</p><h2 id="the-certification-bodies-didnt-keep-up">The Certification Bodies Didn't Keep Up</h2><p>Here's what happened while everyone was getting sold the "cybersecurity gold rush" story: the certification curriculum stood still.</p><p>Certifications like CEH and Security+ still focus on foundational skills (think Kali Linux, Windows environments, switches and networks). But the real world has moved on. Today, the "foundations" should be Kubernetes security, cloud IAM, supply chain risks, and AI governance <em>on top of the existing ones</em>. The bar needs to rise! </p><p>Everyone's doing the same path: YouTube/Udemy → Google Cyber → CompTIA → TryHackMe → Splunk. Then Tenable → CrowdStrike → ELK. 
It's hard to differentiate when everyone's following the same playbook, and certification bodies haven't caught up. They're still operating like it's 2015, curriculum included.</p><p>Those vocational curriculums sound like a good start, but they won't get you hired in security right off the bat. They're the bare minimum, like someone doing their own taxes thinking that qualifies them to be a corporate accountant.</p><p>You're now competing with hundreds of thousands of people who were told the exact same thing about security being the path of the future. The bottom of the market is completely saturated with identical applicants who all have Security+, Google Cybersecurity, and some general IT knowledge. Supply is crushing demand. I call this "the slush pile".</p><h2 id="what-actually-works-and-what-i-did">What Actually Works (And What I Did)</h2><p>You need to acquire the skills, meet people, get noticed. I did this too.</p><p>I landed an internship with a paper resume at a career fair simply because I was fascinated by "access management of documents" and, yes, Mr. Robot. Yes, it was an easier market. Still, my strategy was better than the "spray and pray" applications I see most of you using. I had attended a "Lunch and Learn" event on campus where a security analyst spoke about transitioning careers while raising toddlers (I had two myself at the time). I emailed him afterwards, visited the company premises weeks later, and spoke with a dozen people about the job. Four months later, when I applied for that internship, they remembered me. That's what I mean by "getting yourself out there"!</p><p>Your best bet at pivoting is becoming the job you want from within your current role. If you're currently employed, you have a much better shot at pivoting by actually speaking with people. What good is it if you just do a bunch of certs in your corner?</p><p>Make friends with the compliance team! 
Show how what you do enables compliance, how your work helps secure systems. Join the security champions program. Get noticed, and you'll get actual experts speaking with you. You'll learn much better talking with people than rummaging through textbooks.</p><p>Don't ask for mentorship. Ask for people's opinions on something you both care about. "Will you mentor me?" is like asking "Will you be my friend?" to a stranger.</p><h2 id="we-need-to-stop-the-certification-narrative">We Need to Stop the Certification Narrative</h2><p>We need to stop the "school narrative," wherein you acquire a credential and that credential "unlocks a new level" (a job). I'm seeing people plot their pathway to CISO, one certification at a time. It doesn't work like that!</p><p>Certifications teach you how to pass a multiple-choice exam. We need to stop pretending you can multiple-choice your way through companies' current problems.</p><p>Here's what works: you need to provide value for people to find you interesting. Stop thinking about HTB boxes and start thinking about inefficiencies in triaging techniques. Think about that PowerShell script you're running to get Azure events: make it run faster while avoiding API rate limits. THM is fine, but how many times do you run BloodHound in day-to-day work versus finding a better way to document false positives?</p><p>Your manager needs efficient workers more than HTB champions.</p><h2 id="the-uncomfortable-truth">The Uncomfortable Truth</h2><p>I'm letting my CEH expire this year. I'm done. After close to a decade in this field, I've learned something painful: the certification treadmill doesn't take you where you think it's going. The "certification industry" needs to stop selling fantasies to entry- and intermediate-level people.</p><p>Getting into cyber nowadays means you have to be in it because this is what you want to do and can't imagine doing anything else. 
There's been huge marketing about cyber being an easy path toward six-figure salaries, but that pitch is deceptive and objectively wrong. This isn't the convenient career path anymore.</p><p>In my city, we need construction workers, nurses, and teachers. I imagine it's the same everywhere. Those are the current easy-to-get jobs.</p><p>I'm grateful for my CEH because it got me started, but I've outgrown it, and so has the industry. The reality is harsh: no certification body is keeping up with the pace of change.</p><p>Careers are still being built. Just not the way the certification industry wants you to believe. And definitely not the way I used to promote in those YouTube videos.</p><p>Now that you know my story, what's yours going to be?</p> ]]>
                    </content:encoded>
                    <enclosure url="" length="0"
                        type="audio/mpeg" />
                    <itunes:subtitle>Entry-level cybersecurity jobs are more competitive than ever. Learn why certifications aren’t enough and what really sets you apart.</itunes:subtitle>
                    <itunes:summary>
                        <![CDATA[ <p>Let's piss off every cybersecurity influencer, bootcamp, and certification body whose marketing strategy relies on the deception that there are "millions of unfilled cyber jobs," shall we?</p><p>I believed certifications mattered. Hell, I even got short-listed for the EC-Council Canada Hall of Fame and made YouTube videos promoting this stuff. </p><figure class="kg-card kg-embed-card kg-card-hascaption"><iframe width="200" height="113" src="https://www.youtube.com/embed/vpUHUTosvLk?feature=oembed" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share" referrerpolicy="strict-origin-when-cross-origin" allowfullscreen="" title="How the C|EH Boosted My Confidence and Empowered My Career| Pierre-Paul Ferland"></iframe><figcaption><p dir="ltr"><span style="white-space: pre-wrap;">I didn't even get a CPE for that effort.</span></p></figcaption></figure><p>I got my CEH, and I'm glad I did. Building that home lab, staying up late figuring out how things broke, sharing war stories with other wannabe hackers: that part was genuinely valuable because it lit the spark.</p><p>But here's the uncomfortable truth someone needs to say, and I'd rather it come from someone who's been exactly where you are than have you discover it eight months into your rejection journey.</p><h2 id="the-marketing-bullshit">The Marketing Bullshit</h2><p><strong>No, Security+ and the Google Cybersecurity Certificate are not enough to land an entry-level role in the current market. Stop asking.</strong></p><p>The entire certification industry is marketing on false premises. They keep pushing the narrative that there are "millions of unfilled cyber jobs," but that's straight-up deception built on the assumption that exponential growth from my era would continue forever. 
Spoiler alert: it didn't.</p><p>I started in 2015, when companies were literally plucking students off campus in their first year of university. I signed a full-time job in second year and only finished my degree because I wanted to, not because I had to. That world doesn't exist anymore.</p><p>Back then, it was the Target, Yahoo, and Ashley Madison breaches. Later, Equifax. Companies were moving to the cloud, building massive security teams that were finally breaking away from infrastructure and IT to become their own thing. You'd have auditors discovering Windows NT systems with LM hashes that a red team could crack in minutes.</p><p>Now the industry has matured. The problems are "higher-hanging fruit" requiring more seniors: cloud security, AppSec, GRC engineering. The market has bifurcated completely: senior roles are safe with increasing demand, but entry-level is brutal.</p><p>My story is a success story from a different time. The problem is, everybody assumed this trend would continue forever. Every school in my province now has a cybersecurity degree! In 2015, there were none. Security wasn't even cool! All computer science students wanted to do was AI, video games, and mobile.</p><p>We get 500+ applicants for a single intermediate role. Do you like those odds?</p><h2 id="the-certification-bodies-didnt-keep-up">The Certification Bodies Didn't Keep Up</h2><p>Here's what happened while everyone was getting sold the "cybersecurity gold rush" story: the certification curriculum stood still.</p><p>Certifications like CEH and Security+ still focus on foundational skills (think Kali Linux, Windows environments, switches and networks). But the real world has moved on. Today, the "foundations" should be Kubernetes security, cloud IAM, supply chain risks, and AI governance <em>on top of the existing ones</em>. The bar needs to rise! </p><p>Everyone's doing the same path: YouTube/Udemy → Google Cyber → CompTIA → TryHackMe → Splunk. Then Tenable → CrowdStrike → ELK. 
It's hard to differentiate when everyone's following the same playbook, and certification bodies haven't caught up. They're still operating like it's 2015, curriculum included.</p><p>Those vocational curriculums sound like a good start, but they won't get you hired in security right off the bat. They're the bare minimum, like someone doing their own taxes thinking that qualifies them to be a corporate accountant.</p><p>You're now competing with hundreds of thousands of people who were told the exact same thing about security being the path of the future. The bottom of the market is completely saturated with identical applicants who all have Security+, Google Cybersecurity, and some general IT knowledge. Supply is crushing demand. I call this "the slush pile".</p><h2 id="what-actually-works-and-what-i-did">What Actually Works (And What I Did)</h2><p>You need to acquire the skills, meet people, get noticed. I did this too.</p><p>I landed an internship with a paper resume at a career fair simply because I was fascinated by "access management of documents" and, yes, Mr. Robot. Yes, it was an easier market. Still, my strategy was better than the "spray and pray" applications I see most of you using. I had attended a "Lunch and Learn" event on campus where a security analyst spoke about transitioning careers while raising toddlers (I had two myself at the time). I emailed him afterwards, visited the company premises weeks later, and spoke with a dozen people about the job. Four months later, when I applied for that internship, they remembered me. That's what I mean by "getting yourself out there"!</p><p>Your best bet at pivoting is becoming the job you want from within your current role. If you're currently employed, you have a much better shot at pivoting by actually speaking with people. What good is it if you just do a bunch of certs in your corner?</p><p>Make friends with the compliance team! 
Show how what you do enables compliance, how your work helps secure systems. Join the security champions program. Get noticed, and you'll get actual experts speaking with you. You'll learn much better talking with people than rummaging through textbooks.</p><p>Don't ask for mentorship. Ask for people's opinions on something you both care about. "Will you mentor me?" is like asking "Will you be my friend?" to a stranger.</p><h2 id="we-need-to-stop-the-certification-narrative">We Need to Stop the Certification Narrative</h2><p>We need to stop the "school narrative," wherein you acquire a credential and that credential "unlocks a new level" (a job). I'm seeing people plot their pathway to CISO, one certification at a time. It doesn't work like that!</p><p>Certifications teach you how to pass a multiple-choice exam. We need to stop pretending you can multiple-choice your way through companies' current problems.</p><p>Here's what works: you need to provide value for people to find you interesting. Stop thinking about HTB boxes and start thinking about inefficiencies in triaging techniques. Think about that PowerShell script you're running to get Azure events: make it run faster while avoiding API rate limits. THM is fine, but how many times do you run BloodHound in day-to-day work versus finding a better way to document false positives?</p><p>Your manager needs efficient workers more than HTB champions.</p><h2 id="the-uncomfortable-truth">The Uncomfortable Truth</h2><p>I'm letting my CEH expire this year. I'm done. After close to a decade in this field, I've learned something painful: the certification treadmill doesn't take you where you think it's going. The "certification industry" needs to stop selling fantasies to entry- and intermediate-level people.</p><p>Getting into cyber nowadays means you have to be in it because this is what you want to do and can't imagine doing anything else. 
There's been huge marketing about cyber being an easy path toward six-figure salaries, but that pitch is deceptive and objectively wrong. This isn't the convenient career path anymore.</p><p>In my city, we need construction workers, nurses, and teachers. I imagine it's the same everywhere. Those are the current easy-to-get jobs.</p><p>I'm grateful for my CEH because it got me started, but I've outgrown it, and so has the industry. The reality is harsh: no certification body is keeping up with the pace of change.</p><p>Careers are still being built. Just not the way the certification industry wants you to believe. And definitely not the way I used to promote in those YouTube videos.</p><p>Now that you know my story, what's yours going to be?</p> ]]>
                    </itunes:summary>
                </item>
                <item>
                    <title>It&#x27;s Cool to Hate Security Vendors</title>
                    <link>https://thisisgrc.com/its-cool-to-hate-security-vendors/</link>
                    <pubDate>Wed, 03 Sep 2025 08:08:27 -0400</pubDate>
                    <guid isPermaLink="false">6820263bc67d4a00011c255a</guid>
                    <category>
                        <![CDATA[ GRC in Practice ]]>
                    </category>
                    <description>In security, it’s trendy to dunk on vendors and romanticize building in-house. But in most enterprises, buying the boring, supported tool isn’t selling out, it&#x27;s the smart choice.</description>
                    <content:encoded>
                        <![CDATA[ <p>If you spend enough time on LinkedIn, you'll encounter these posts by security directors, VPs, or CISOs. Mocking cold calls. Taking a jab at conference booths. Gaslighting SDRs who do bad cold outreach. You know the vibe: "Stop selling me shit!"</p><p>And I get it, most business devs suck at their job. They have a 'takers' mentality: they have their battlecards and go through their motions, and you see right through them. But that's not a reason to dismiss the industry.</p><p>At first, I thought it was a form of humblebrag. Confession time: I joined LinkedIn precisely because my colleagues were bitching about having so many recruiters "annoying" them, and I thought I deserved a piece of the action. All this to say: what you see as an annoyance can be someone else's desire, so be careful about putting them down.</p><p>But this goes deeper. I think the aversion to the industry is deeply rooted in our discipline... and I think it's a problem. </p><hr><h2 id="the-hacker-myth">The Hacker Myth</h2><p>There’s a prevailing mindset in this field that "real" teams build their own tools. This mentality comes from our hacker roots. A hacker tinkers, customizes, and cleverly tweaks. For many of us, building is part of our identity. It’s what we’ve done for years. It’s what made us fall in love with the technology in the first place.</p><p>I admire that hacker mindset. The curiosity. The initiative. The sense of being able to outsmart the system. But inside an enterprise environment, that instinct often requires a shift toward what might seem like the "boring" choice: corporate bow ties and croissant conferences.</p><p><strong>Build vs Buy is one of my biggest ongoing debates with my colleagues and, I'd argue, in the GRC industry as a whole.</strong> I'm in the latter camp: let's rely on a company that lives and breathes a single problem to solve our identical problem. Developers are hugely expensive, and they hate doing maintenance. 
A McKinsey study found that developers spend <a href="https://www.mckinsey.com/industries/technology-media-and-telecommunications/our-insights/yes-you-can-measure-software-developer-productivity?ref=thisisgrc.com" rel="noreferrer">up to 50% of their time</a> on maintenance rather than innovation.</p><p>The counter-argument I keep hearing is that these security tools cannot meet our needs. I counter with: <strong>How unique are you, really? And should you be that unique?</strong></p><p>In most cases, the extra 10% of customization we’re craving comes with a hidden cost. Think time spent on maintenance, troubleshooting, documentation, and support. WHICH EVERYBODY HATES TO DO. Internal tools lack the external support you get from vendors. You lose the ability to escalate when things go wrong, and you’re left managing a tool that can easily become a liability if the original builder leaves or moves on. In fact, <a href="https://www.starmind.ai/hubfs/Assets%202022/Forrester-Opportunity-Snapshot-2022.pdf?ref=thisisgrc.com" rel="noreferrer">Forrester points out that 70%</a> of orgs struggle with knowledge loss when technical staff leave teams that maintain custom tools.</p><p>At a certain scale, building becomes a trade-off that limits our ability to focus on the real, high-value problems that only we can solve.</p><hr><h2 id="what-about-grc">What about GRC?</h2><p>Now, as a GRC professional, it’s easier for me to advocate for buying instead of building. My job is to make strategic decisions, not to write code. But that’s not just a fallback option. I believe my bias is based on the reality of operational efficiency. Buying a tool doesn’t mean you’re less technical or that you’re not capable of building. It means you’re choosing to focus your time and energy on something other than debugging software (a.k.a. not bringing security value). It’s a decision to invest in scalability and sustainability rather than novelty. 
A vendor’s tool is already tested, supported, and maintained. Let them update those hosts and containers.</p><p>The relationship with a vendor also gives you leverage. You can push them for improvements, tie renewals to their roadmap, and hold them accountable. That’s harder to do with in-house tools, where you’re locked into whatever you've built, often with no clear escalation path when things go wrong. Worse, from what I've seen, sentiment gets in the way. It kind of gets difficult to tell Larry that his software sucks if you ride the same bus every day.</p><hr><h2 id="the-cost-of-humans">The Cost of Humans</h2><p>Hiring people to build things can be a great solution too, but it takes more discipline. It’s not about pushing a few buttons or making a quick change. It’s about coaching, upskilling, and managing. You’re not just paying for a tool, you’re investing in people who will need your guidance, mentorship, and time to grow. That takes a lot of effort, especially when they are managing tools that don’t have external support.</p><p>None of this is to say that building tools is inherently bad or unnecessary. Sometimes it’s exactly the right decision. <strong>But too often, the drive to build is fueled by an identity, a culture of hackerism that places more value on being different than on being effective. </strong>The truth is, most enterprises don’t need to reinvent the wheel. </p><p>In a high-functioning enterprise, the real work often looks like buying the “boring” tool, even if it makes you feel like a corporate stooge or a sellout. But boring gets things done. And let’s not forget the CFO. Predictable OpEx beats mystery dev time. Buying lets you forecast, track ROI, and tie spend to value. 
And in security, that’s the kind of work that really matters.</p><hr><p><em>I wrote a similar article in reaction to my visit to NorthSec, a passion-driven conference:</em></p><figure class="kg-card kg-bookmark-card"><a class="kg-bookmark-container" href="https://thisisgrc.com/we-cant-build-the-cybersecurity-workforce-on-passion-alone/"><div class="kg-bookmark-content"><div class="kg-bookmark-title">We can’t build the cybersecurity workforce on passion alone</div><div class="kg-bookmark-description">Envisioning the transition of cybersecurity from a passion and skill-driven activity to a casual business profession.</div><div class="kg-bookmark-metadata"><img class="kg-bookmark-icon" src="https://storage.ghost.io/c/75/2d/752de7e0-940e-42a2-8399-ae9bb4bc1762/content/images/icon/icone.jpg" alt=""><span class="kg-bookmark-author">ppfosec</span><span class="kg-bookmark-publisher">Pierre-Paul Ferland</span></div></div><div class="kg-bookmark-thumbnail"><img src="https://storage.ghost.io/c/75/2d/752de7e0-940e-42a2-8399-ae9bb4bc1762/content/images/thumbnail/newsletter-2024-05-29.png" alt="" onerror="this.style.display = 'none'"></div></a></figure> ]]>
                    </content:encoded>
                    <enclosure url="" length="0"
                        type="audio/mpeg" />
                    <itunes:subtitle>In security, it’s trendy to dunk on vendors and romanticize building in-house. But in most enterprises, buying the boring, supported tool isn’t selling out, it&#x27;s the smart choice.</itunes:subtitle>
                    <itunes:summary>
                        <![CDATA[ <p>If you spend enough time on LinkedIn, you'll encounter these posts by security directors, VPs, or CISOs. Mocking cold calls. Taking a jab at conference booths. Gaslighting SDRs who do bad cold outreach. You know the vibe: "Stop selling me shit!"</p><p>And I get it, most business devs suck at their job. They have a 'takers' mentality: they have their battlecards and go through their motions, and you see right through them. But that's not a reason to dismiss the industry.</p><p>At first, I thought it was a form of humblebrag. Confession time: I joined LinkedIn precisely because my colleagues were bitching about having so many recruiters "annoying" them, and I thought I deserved a piece of the action. All this to say: what you see as an annoyance can be someone else's desire, so be careful about putting them down.</p><p>But this goes deeper. I think the aversion to the industry is deeply rooted in our discipline... and I think it's a problem. </p><hr><h2 id="the-hacker-myth">The Hacker Myth</h2><p>There’s a prevailing mindset in this field that "real" teams build their own tools. This mentality comes from our hacker roots. A hacker tinkers, customizes, and cleverly tweaks. For many of us, building is part of our identity. It’s what we’ve done for years. It’s what made us fall in love with the technology in the first place.</p><p>I admire that hacker mindset. The curiosity. The initiative. The sense of being able to outsmart the system. But inside an enterprise environment, that instinct often requires a shift toward what might seem like the "boring" choice: corporate bow ties and croissant conferences.</p><p><strong>Build vs Buy is one of my biggest ongoing debates with my colleagues and, I'd argue, in the GRC industry as a whole.</strong> I'm in the latter camp: let's rely on a company that lives and breathes a single problem to solve our identical problem. Developers are hugely expensive, and they hate doing maintenance. 
A McKinsey study found that developers spend <a href="https://www.mckinsey.com/industries/technology-media-and-telecommunications/our-insights/yes-you-can-measure-software-developer-productivity?ref=thisisgrc.com" rel="noreferrer">up to 50% of their time</a> on maintenance rather than innovation.</p><p>The counter-argument I keep hearing is that these security tools cannot meet our needs. I counter with: <strong>How unique are you, really? And should you be that unique?</strong></p><p>In most cases, the extra 10% of customization we’re craving comes with a hidden cost. Think time spent on maintenance, troubleshooting, documentation, and support. WHICH EVERYBODY HATES TO DO. Internal tools lack the external support you get from vendors. You lose the ability to escalate when things go wrong, and you’re left managing a tool that can easily become a liability if the original builder leaves or moves on. In fact, <a href="https://www.starmind.ai/hubfs/Assets%202022/Forrester-Opportunity-Snapshot-2022.pdf?ref=thisisgrc.com" rel="noreferrer">Forrester points out that 70%</a> of orgs struggle with knowledge loss when technical staff leave teams that maintain custom tools.</p><p>At a certain scale, building becomes a trade-off that limits our ability to focus on the real, high-value problems that only we can solve.</p><hr><h2 id="what-about-grc">What about GRC?</h2><p>Now, as a GRC professional, it’s easier for me to advocate for buying instead of building. My job is to make strategic decisions, not to write code. But that’s not just a fallback option. I believe my bias is based on the reality of operational efficiency. Buying a tool doesn’t mean you’re less technical or that you’re not capable of building. It means you’re choosing to focus your time and energy on something other than debugging software (a.k.a. not bringing security value). It’s a decision to invest in scalability and sustainability rather than novelty. 
A vendor’s tool is already tested, supported, and maintained. Let them update those hosts and containers.</p><p>The relationship with a vendor also gives you leverage. You can push them for improvements, tie renewals to their roadmap, and hold them accountable. That’s harder to do with in-house tools, where you’re locked into whatever you've built, often with no clear escalation path when things go wrong. Worse, from what I've seen, sentiment gets in the way. It kind of gets difficult to tell Larry that his software sucks if you ride the same bus every day.</p><hr><h2 id="the-cost-of-humans">The Cost of Humans</h2><p>Hiring people to build things can be a great solution too, but it takes more discipline. It’s not about pushing a few buttons or making a quick change. It’s about coaching, upskilling, and managing. You’re not just paying for a tool, you’re investing in people who will need your guidance, mentorship, and time to grow. That takes a lot of effort, especially when they are managing tools that don’t have external support.</p><p>None of this is to say that building tools is inherently bad or unnecessary. Sometimes it’s exactly the right decision. <strong>But too often, the drive to build is fueled by an identity, a culture of hackerism that places more value on being different than on being effective. </strong>The truth is, most enterprises don’t need to reinvent the wheel. </p><p>In a high-functioning enterprise, the real work often looks like buying the “boring” tool, even if it makes you feel like a corporate stooge or a sellout. But boring gets things done. And let’s not forget the CFO. Predictable OpEx beats mystery dev time. Buying lets you forecast, track ROI, and tie spend to value. 
And in security, that’s the kind of work that really matters.</p><hr><p><em>I wrote a similar article in reaction to my visit to "North Sec", a passion-driven conference:</em></p><figure class="kg-card kg-bookmark-card"><a class="kg-bookmark-container" href="https://thisisgrc.com/we-cant-build-the-cybersecurity-workforce-on-passion-alone/"><div class="kg-bookmark-content"><div class="kg-bookmark-title">We can’t build the cybersecurity workforce on passion alone</div><div class="kg-bookmark-description">Envisioning the transition of cybersecurity from a passion and skill-driven activity to a casual business profession.</div><div class="kg-bookmark-metadata"><img class="kg-bookmark-icon" src="https://storage.ghost.io/c/75/2d/752de7e0-940e-42a2-8399-ae9bb4bc1762/content/images/icon/icone.jpg" alt=""><span class="kg-bookmark-author">ppfosec</span><span class="kg-bookmark-publisher">Pierre-Paul Ferland</span></div></div><div class="kg-bookmark-thumbnail"><img src="https://storage.ghost.io/c/75/2d/752de7e0-940e-42a2-8399-ae9bb4bc1762/content/images/thumbnail/newsletter-2024-05-29.png" alt="" onerror="this.style.display = 'none'"></div></a></figure> ]]>
                    </itunes:summary>
                </item>
                <item>
                    <title>What Makes a GRC Specialist Great?</title>
                    <link>https://thisisgrc.com/what-makes-a-grc-specialist-great/</link>
                    <pubDate>Wed, 20 Aug 2025 08:08:08 -0400
                    </pubDate>
                    <guid isPermaLink="false">681ad857fe554e0001c0e4e8</guid>
                    <category>
                        <![CDATA[ Leadership ]]>
                    </category>
                    <description>Looking at the mindset that will set us up for success as GRC specialists. The best GRC pros aren’t box-checkers. They’re constant learners who earn trust by sharing knowledge and making the right calls despite ambiguity.</description>
                    <content:encoded>
<![CDATA[ <p>Don't worry. I'm not about to spend the next 1,000 words of your time telling you how great I am. I'm still figuring this out. Think of it as an ideal to reach.</p><p>What I do know, as someone with some experience, is that there’s a moment in every GRC career when you realize: this job isn’t really about controls or policies. It's about understanding the people making decisions, often without us, most of the time with good intentions, but with poor incentives. And it’s easy to feel like the work is being done without us, or worse, that it’s happening in spite of us. This is how most of the frustration in GRC builds up. Worse, you expect <em>change</em>, but the changes accelerate, just not along the parameters you need!</p><p>I had this realization pretty early in my career. I was part of a team that thought it had a duty to be the "conscience" of the company and to "mature" it. The idea was to build such compelling risk stories that some careless projects would slow down. But the reality was that no one listened. The more I pushed, the more isolated we became. Team meetings were basically just ruminations on how these guys were driving the company into a wall. </p><p>This brings me to another hot topic: <a href="https://thisisgrc.com/you-dont-go-into-cybersecurity-to-make-friends/" rel="noreferrer">security is not cool</a>. We will never push a feature that makes people's lives measurably better. We will never single-handedly enthrall an audience with our demo skills. So, imagine: a bunch of specialists worried about being right while the "others" were getting featured internally. After a wave of burnout and resignations, the lesson was clear: <strong>being right doesn’t matter if no one listens</strong>.</p><p>That's why I insist on relevance. Making security work isn't about authority (we don't have any) or following the frameworks (they're vague and full of holes). 
It's about being curious about systems, asking questions no one else thinks of (because we do have a different, adversarial perspective), and listening more than telling. There's a saying: "greatness is in the agency of others". So, what makes a GRC specialist "great"? How about ensuring others are allowed to build great things? </p><hr><h3 id="grc-is-not-a-role-it%E2%80%99s-a-practice">GRC Is Not a Role. It’s a Practice.</h3><p>What makes GRC impactful isn’t just ticking off items from a compliance checklist. GRC is a practice that requires understanding the full scope of a business. I'd argue this is the most interesting aspect of security, the one I always bring up when people ask me why I love this practice! It’s not enough to know the framework or the policy; you need to understand how things <strong>really work</strong> in the business, across all layers: products, sales, infrastructure, applications, databases, networks, processes, legal, hiring, firing... you touch everything, so you might as well be curious about everything!</p><p>I can’t tell you how many times I’ve been in a meeting where I could connect two people or two systems to help with an issue that had no relation to security. There's this inside joke in networking: "The problem is always DNS!" Funny thing: most software developers don't know that meme, so when you help them debug by spotting the DNS flaw, that's "street cred"! That cred is your capital. With that goodwill, you earn trust. By virtue of being connected to every "system" (human, software, or business; think systems thinking), you see patterns that specialists won't see. </p><p>This is how you become uniquely positioned in your practice: your constant drive to learn about everything helps people learn more themselves, making everyone richer. As my philosophy teacher said in college: "Knowledge is the only thing you can give someone without having less for yourself". 
This is why the word "practice" is deliberate: a practice is predisposed to learning, because it evolves.</p><hr><h3 id="relevance-built-slowly">Relevance, Built Slowly</h3><p>When you’re doing this work right, most of what you influence will be invisible. Back to my "you're not cool" point: you won’t be recognized for the risks that didn’t materialize, the issues you avoided, or the changes you made quietly behind the scenes. It's impossible to prove the absence of something, and we have no control group to validate what would have happened if we did nothing. Ironically, the most praise I've seen a security team get is when we finally <em>remove security measures that had terrible UX</em>. That VPN product that took forever to connect. That inefficient password rotation...</p><p>Anyway, the real impact of GRC isn’t in the loud moments. It’s in the invisible work. The subtle nudges you make to help a team shift their thinking. The quiet check-ins to make sure people are aware of your services. The decision to <em>gently</em> remind a product team that there’s a better, safer way to approach a problem without slowing them down.</p><p>We tweak permissions. We notice weird contractual clauses and get them deleted. We convince VPs to pay the <a href="https://sso.tax/?ref=thisisgrc.com" rel="noreferrer">SSO tax</a>. We see an alert in our configuration management systems and get a user removed. We screen consultants. Yes, sometimes we need to bust out the “policy says” approach. But most of the time, going in <strong>humble and curious</strong>, you can ask the right questions to get to why your perspective matters. "<em>You want to use this database? Ok, I'm wondering: how will you manage data residency?</em>" I've asked that, <strong>knowing</strong> that the database in question didn't support it. I could have said "forget it, data residency won't work". But that feels like your parent telling you what to do. It was invisible work, but that’s the job. 
And it was only possible because I’d built the trust to have those quiet, important conversations.</p><p>In GRC, trust is earned slowly, by being present and consistently showing up without needing to be recognized.</p><hr><h3 id="the-hardest-part-sitting-with-ambiguity">The Hardest Part: Sitting with Ambiguity</h3><p>The real growth in GRC comes from sitting with the ambiguity. This job is not about certainty or easy answers. There are moments when you don’t know if a decision is going to be the right one, and you have to be comfortable with that. In fact, that can become your superpower. Executives, in my experience, dislike when we throw ambiguity back at them. </p><p>We can never know when an attack will happen or on which entry point. All we do is rely on educated guesses, intuition, and hopefully some data to steer our decisions. Yet, we must never walk into a meeting with a blank slate. As GRC specialists, our value lies in how well we articulate the problems, solutions, and tradeoffs. When someone brings up a vendor, we can’t afford to give soft, evasive opinions. “<em>We’ll have to do a risk assessment</em>” isn’t enough. We need to have a take. We need to know when a vendor is a ticking time bomb and when a risk is just noise.</p><p>No, we can’t predict exactly when a third party will get breached. But we can absolutely call out when a vendor sucks at security. And we can design guardrails that limit the blast radius when things go wrong. That’s our job. That’s what earns us a seat at the table. Clear, assertive guidance in the face of limited data, without going into full bullshitter mode, is the line we walk.</p><p>If we own the ambiguity, we abstract the complexity. That’s how things actually move forward. That’s the difference between being a GRC advisor who shapes decisions, and being a “meeting facilitator” who just nods along and documents action items no one reads.</p><p>You don’t win influence with risk methodologies. 
You don’t win it with strategic alignment decks or quoting frameworks. Most of that is corporate filler. You win it by being useful, direct, and present. By understanding what’s at stake, what matters to the business, and how to move toward safer decisions without slowing people down.</p><p>That’s what I look for in great GRC work.</p> ]]>
                    </content:encoded>
                    <enclosure url="" length="0"
                        type="audio/mpeg" />
                    <itunes:subtitle>Looking at the mindset that will set us up for success as GRC specialists. The best GRC pros aren’t box-checkers. They’re constant learners who earn trust by sharing knowledge and making the right calls despite ambiguity.</itunes:subtitle>
                    <itunes:summary>
<![CDATA[ <p>Don't worry. I'm not about to spend the next 1,000 words of your time telling you how great I am. I'm still figuring this out. Think of it as an ideal to reach.</p><p>What I do know, as someone with some experience, is that there’s a moment in every GRC career when you realize: this job isn’t really about controls or policies. It's about understanding the people making decisions, often without us, most of the time with good intentions, but with poor incentives. And it’s easy to feel like the work is being done without us, or worse, that it’s happening in spite of us. This is how most of the frustration in GRC builds up. Worse, you expect <em>change</em>, but the changes accelerate, just not along the parameters you need!</p><p>I had this realization pretty early in my career. I was part of a team that thought it had a duty to be the "conscience" of the company and to "mature" it. The idea was to build such compelling risk stories that some careless projects would slow down. But the reality was that no one listened. The more I pushed, the more isolated we became. Team meetings were basically just ruminations on how these guys were driving the company into a wall. </p><p>This brings me to another hot topic: <a href="https://thisisgrc.com/you-dont-go-into-cybersecurity-to-make-friends/" rel="noreferrer">security is not cool</a>. We will never push a feature that makes people's lives measurably better. We will never single-handedly enthrall an audience with our demo skills. So, imagine: a bunch of specialists worried about being right while the "others" were getting featured internally. After a wave of burnout and resignations, the lesson was clear: <strong>being right doesn’t matter if no one listens</strong>.</p><p>That's why I insist on relevance. Making security work isn't about authority (we don't have any) or following the frameworks (they're vague and full of holes). 
It's about being curious about systems, asking questions no one else thinks of (because we do have a different, adversarial perspective), and listening more than telling. There's a saying: "greatness is in the agency of others". So, what makes a GRC specialist "great"? How about ensuring others are allowed to build great things? </p><hr><h3 id="grc-is-not-a-role-it%E2%80%99s-a-practice">GRC Is Not a Role. It’s a Practice.</h3><p>What makes GRC impactful isn’t just ticking off items from a compliance checklist. GRC is a practice that requires understanding the full scope of a business. I'd argue this is the most interesting aspect of security, the one I always bring up when people ask me why I love this practice! It’s not enough to know the framework or the policy; you need to understand how things <strong>really work</strong> in the business, across all layers: products, sales, infrastructure, applications, databases, networks, processes, legal, hiring, firing... you touch everything, so you might as well be curious about everything!</p><p>I can’t tell you how many times I’ve been in a meeting where I could connect two people or two systems to help with an issue that had no relation to security. There's this inside joke in networking: "The problem is always DNS!" Funny thing: most software developers don't know that meme, so when you help them debug by spotting the DNS flaw, that's "street cred"! That cred is your capital. With that goodwill, you earn trust. By virtue of being connected to every "system" (human, software, or business; think systems thinking), you see patterns that specialists won't see. </p><p>This is how you become uniquely positioned in your practice: your constant drive to learn about everything helps people learn more themselves, making everyone richer. As my philosophy teacher said in college: "Knowledge is the only thing you can give someone without having less for yourself". 
This is why the word "practice" is deliberate: a practice is predisposed to learning, because it evolves.</p><hr><h3 id="relevance-built-slowly">Relevance, Built Slowly</h3><p>When you’re doing this work right, most of what you influence will be invisible. Back to my "you're not cool" point: you won’t be recognized for the risks that didn’t materialize, the issues you avoided, or the changes you made quietly behind the scenes. It's impossible to prove the absence of something, and we have no control group to validate what would have happened if we did nothing. Ironically, the most praise I've seen a security team get is when we finally <em>remove security measures that had terrible UX</em>. That VPN product that took forever to connect. That inefficient password rotation...</p><p>Anyway, the real impact of GRC isn’t in the loud moments. It’s in the invisible work. The subtle nudges you make to help a team shift their thinking. The quiet check-ins to make sure people are aware of your services. The decision to <em>gently</em> remind a product team that there’s a better, safer way to approach a problem without slowing them down.</p><p>We tweak permissions. We notice weird contractual clauses and get them deleted. We convince VPs to pay the <a href="https://sso.tax/?ref=thisisgrc.com" rel="noreferrer">SSO tax</a>. We see an alert in our configuration management systems and get a user removed. We screen consultants. Yes, sometimes we need to bust out the “policy says” approach. But most of the time, going in <strong>humble and curious</strong>, you can ask the right questions to get to why your perspective matters. "<em>You want to use this database? Ok, I'm wondering: how will you manage data residency?</em>" I've asked that, <strong>knowing</strong> that the database in question didn't support it. I could have said "forget it, data residency won't work". But that feels like your parent telling you what to do. It was invisible work, but that’s the job. 
And it was only possible because I’d built the trust to have those quiet, important conversations.</p><p>In GRC, trust is earned slowly, by being present and consistently showing up without needing to be recognized.</p><hr><h3 id="the-hardest-part-sitting-with-ambiguity">The Hardest Part: Sitting with Ambiguity</h3><p>The real growth in GRC comes from sitting with the ambiguity. This job is not about certainty or easy answers. There are moments when you don’t know if a decision is going to be the right one, and you have to be comfortable with that. In fact, that can become your superpower. Executives, in my experience, dislike when we throw ambiguity back at them. </p><p>We can never know when an attack will happen or on which entry point. All we do is rely on educated guesses, intuition, and hopefully some data to steer our decisions. Yet, we must never walk into a meeting with a blank slate. As GRC specialists, our value lies in how well we articulate the problems, solutions, and tradeoffs. When someone brings up a vendor, we can’t afford to give soft, evasive opinions. “<em>We’ll have to do a risk assessment</em>” isn’t enough. We need to have a take. We need to know when a vendor is a ticking time bomb and when a risk is just noise.</p><p>No, we can’t predict exactly when a third party will get breached. But we can absolutely call out when a vendor sucks at security. And we can design guardrails that limit the blast radius when things go wrong. That’s our job. That’s what earns us a seat at the table. Clear, assertive guidance in the face of limited data, without going into full bullshitter mode, is the line we walk.</p><p>If we own the ambiguity, we abstract the complexity. That’s how things actually move forward. That’s the difference between being a GRC advisor who shapes decisions, and being a “meeting facilitator” who just nods along and documents action items no one reads.</p><p>You don’t win influence with risk methodologies. 
You don’t win it with strategic alignment decks or quoting frameworks. Most of that is corporate filler. You win it by being useful, direct, and present. By understanding what’s at stake, what matters to the business, and how to move toward safer decisions without slowing people down.</p><p>That’s what I look for in great GRC work.</p> ]]>
                    </itunes:summary>
                </item>
                <item>
                    <title>Presence Over Process</title>
                    <link>https://thisisgrc.com/presence-over-process/</link>
                    <pubDate>Wed, 06 Aug 2025 08:08:52 -0400
                    </pubDate>
                    <guid isPermaLink="false">681ad18afe554e0001c0e4ab</guid>
                    <category>
                        <![CDATA[ Leadership ]]>
                    </category>
                    <description>I believe the most valuable work a GRC specialist can do is becoming relevant. Not by pushing frameworks. Not by parroting requirements. But by helping the right people, at the right time, with the right insight so they can make better decisions faster.</description>
                    <content:encoded>
<![CDATA[ <p>I’ve spent most of my career in GRC trying to find ways to make security relevant. And somewhere along the way, I realized the difference isn’t in the frameworks. It’s not in the policies. It’s not even in how mature your risk register is.</p><p>It’s in whether people <em>want you in the room before the decision is made</em>.<br><em>That</em> has everything to do with how you show up.</p><p>For a long time, I thought influence came from structure and efficiency. If we built the right model, the right RACI, the right ticketing flow, the right automation, things would click. But structure doesn’t build trust. Presence does.</p><p>Presence is being there when the vendor is picked (not after the contract is signed).<br>Presence is being looped into the first draft of a new system’s design document.<br>Presence is saying, “Let me help you do this right,” before anyone even asks.</p><p>And I know how idealistic that sounds. GRC teams are stretched thin. We’re not always invited. We’re often chasing after projects already underway. I'm pretty sure I'm being overlooked in many projects despite everything that I do. But that’s the work. That’s the price of shedding the “no” team label and becoming the team that helps everyone move forward without stumbling.</p><hr><h2 id="frameworks-are-not-strategy">Frameworks Are Not Strategy</h2><p>I’ve written before about how we inherited too much of the checkbox mindset. We were taught that policies equal security. That evidence equals assurance. That frameworks are the work. But they’re not.</p><p>They’re sheet music.</p><p>They’re useful only if they help us understand the melody of the business. You can read sheet music, but that's pretty boring unless you can play an instrument. The real work (a.k.a. 
playing that violin) is understanding how this company actually builds, ships, sells, and survives, and helping it do that more securely.</p><p>That’s the difference between GRC as audit prep and GRC as business enablement. One is reactive. The other is embedded. I don't have a sheet music analogy to punctuate that.</p><p>When you stop leading with frameworks and start asking what people are trying to achieve, you start seeing the gaps differently. You’re not looking for non-compliance; your mind becomes solution-oriented (solution as in: software solution). This doesn’t mean you throw out compliance. It means you stop confusing it with action.</p><hr><h2 id="relevance-is-built-not-granted">Relevance Is Built, Not Granted</h2><p>I think a lot about how trust is built in hallway conversations, or in the Slack thread where you helped someone unblock a process instead of quoting policy. Or in the architecture meeting where you asked a thoughtful question and didn’t talk over anyone. That kind of credibility compounds. Quietly. Then suddenly.</p><p>And it changes everything.</p><p>You stop being the person who files JIRA tickets after the fact. You become the person who gets pulled in <em>before</em> the build. </p><p>If you’re in GRC right now and you feel invisible, I see you (get it?). Sometimes, I still do, despite all my speeches about relevant GRC.</p><p>But I’ll say this: relevance is the ability to help people find the right ideas or information at the right time for the right need. Can you provide that, or are you simply handing them your framework? </p><p>Try this: map the system out. On a whiteboard or a piece of paper. Work through the layers: the business intent (this is how we sell this thing), the use case (this is how a human will use the software and get an experience out of it), THEN the tech. That gives you a multi-dimensional mind map of all the forces at play. That's how I define relevance. 
If you see these patterns, you can act swiftly.<br></p><p>That’s how you become the advisor, not the afterthought.<br></p> ]]>
                    </content:encoded>
                    <enclosure url="" length="0"
                        type="audio/mpeg" />
                    <itunes:subtitle>I believe the most valuable work a GRC specialist can do is becoming relevant. Not by pushing frameworks. Not by parroting requirements. But by helping the right people, at the right time, with the right insight so they can make better decisions faster.</itunes:subtitle>
                    <itunes:summary>
<![CDATA[ <p>I’ve spent most of my career in GRC trying to find ways to make security relevant. And somewhere along the way, I realized the difference isn’t in the frameworks. It’s not in the policies. It’s not even in how mature your risk register is.</p><p>It’s in whether people <em>want you in the room before the decision is made</em>.<br><em>That</em> has everything to do with how you show up.</p><p>For a long time, I thought influence came from structure and efficiency. If we built the right model, the right RACI, the right ticketing flow, the right automation, things would click. But structure doesn’t build trust. Presence does.</p><p>Presence is being there when the vendor is picked (not after the contract is signed).<br>Presence is being looped into the first draft of a new system’s design document.<br>Presence is saying, “Let me help you do this right,” before anyone even asks.</p><p>And I know how idealistic that sounds. GRC teams are stretched thin. We’re not always invited. We’re often chasing after projects already underway. I'm pretty sure I'm being overlooked in many projects despite everything that I do. But that’s the work. That’s the price of shedding the “no” team label and becoming the team that helps everyone move forward without stumbling.</p><hr><h2 id="frameworks-are-not-strategy">Frameworks Are Not Strategy</h2><p>I’ve written before about how we inherited too much of the checkbox mindset. We were taught that policies equal security. That evidence equals assurance. That frameworks are the work. But they’re not.</p><p>They’re sheet music.</p><p>They’re useful only if they help us understand the melody of the business. You can read sheet music, but that's pretty boring unless you can play an instrument. The real work (a.k.a. 
playing that violin) is understanding how this company actually builds, ships, sells, and survives, and helping it do that more securely.</p><p>That’s the difference between GRC as audit prep and GRC as business enablement. One is reactive. The other is embedded. I don't have a sheet music analogy to punctuate that.</p><p>When you stop leading with frameworks and start asking what people are trying to achieve, you start seeing the gaps differently. You’re not looking for non-compliance; your mind becomes solution-oriented (solution as in: software solution). This doesn’t mean you throw out compliance. It means you stop confusing it with action.</p><hr><h2 id="relevance-is-built-not-granted">Relevance Is Built, Not Granted</h2><p>I think a lot about how trust is built in hallway conversations, or in the Slack thread where you helped someone unblock a process instead of quoting policy. Or in the architecture meeting where you asked a thoughtful question and didn’t talk over anyone. That kind of credibility compounds. Quietly. Then suddenly.</p><p>And it changes everything.</p><p>You stop being the person who files JIRA tickets after the fact. You become the person who gets pulled in <em>before</em> the build. </p><p>If you’re in GRC right now and you feel invisible, I see you (get it?). Sometimes, I still do, despite all my speeches about relevant GRC.</p><p>But I’ll say this: relevance is the ability to help people find the right ideas or information at the right time for the right need. Can you provide that, or are you simply handing them your framework? </p><p>Try this: map the system out. On a whiteboard or a piece of paper. Work through the layers: the business intent (this is how we sell this thing), the use case (this is how a human will use the software and get an experience out of it), THEN the tech. That gives you a multi-dimensional mind map of all the forces at play. That's how I define relevance. 
If you see these patterns, you can act swiftly.<br></p><p>That’s how you become the advisor, not the afterthought.<br></p> ]]>
                    </itunes:summary>
                </item>
                <item>
                    <title>The Future of GRC in the AI-Hype World</title>
                    <link>https://thisisgrc.com/the-future-of-grc-in-the-ai-hype-world/</link>
                    <pubDate>Wed, 23 Jul 2025 08:08:03 -0400
                    </pubDate>
                    <guid isPermaLink="false">681ac8ccfe554e0001c0e482</guid>
                    <category>
                        <![CDATA[ GRC in Practice ]]>
                    </category>
                    <description>How AI will change our practices, for best (mostly) and for worse (a bit). Start embracing AI to automate the bureaucratic parts of GRC to focus on our human plus value.</description>
                    <content:encoded>
<![CDATA[ <p>If you work in GRC today, you've probably noticed the tsunami of AI hype washing over our industry. Every vendor, consultant, and LinkedIn influencer seems to be positioning themselves as an "AI security expert" (myself included). Webinars about "AI risk management frameworks" are multiplying faster than privacy laws.</p><p>My hot take: we are just rebranding the same security principles with shiny new buzzwords. AI systems remain hosted on infrastructure governed by the same access management, data lifecycle, and development principles as any other application. But that doesn't mean our discipline won't change. In fact, we're about to shed most of the drudgery and finally realize GRC as an advisory function. Let's dig into what's really changing and what remains foundational as we all try to make sense of this.</p><h2 id="the-transformation-of-grc-in-the-ai-era">The Transformation of GRC in the AI Era</h2><p>Cross-mappings, gap analyses, data classification, controls maturity assessments, policy writing, audits, reviews, procedure writing, guideline drafting... all of these tasks will soon be outsourced to AI.</p><p>I've met CEOs and founders who are using AI to automate entire Third Party Risk Management processes. Think you can catch it faster than an AI when a supplier slips some shady language into their privacy policy? This is the world that's coming... and it's amazing. The days of reactive, checkbox-driven GRC are numbered. </p><p>Picture this: AI agents reading our policies, monitoring risks in real time, embedding security standards directly into coding copilots or agents. The script is flipping. Where we were once primarily information gatherers, we're now becoming curators. AI will have all the answers, but it won't be able to ask the right questions.</p><p>The routine boring stuff is GONE. No more Excel-hell cross-mappings. No more manual risk reports. No more pointless questionnaires. 
No more bridge letters nobody reads. Only the work that actually matters remains.</p><p>AI will handle the vendor due diligence grind. Good. But vendor risk isn't just about filling out a questionnaire. It's about trust. About navigating, through the whole lifecycle, the human dynamics AI doesn't understand.</p><p>Autonomous compliance sounds nice until an LLM hallucinates your security policies into oblivion. We'll need to watch the watchers. Make sure AI isn't just fast but right.</p><h2 id="the-human-element-in-an-ai-powered-grc-world">The Human Element in an AI-Powered GRC World</h2><p>AI can crunch data, but it won't change how people think. Security awareness, culture, and leadership are going to become our bread and butter.</p><p>A well-trained AI can spot anomalies. A well-trained human can make people care.</p><p>AI will still struggle with coordinating cross-functional teams with competing priorities. AI doesn't handle the messy and unpredictable well (by the way, this is why executive assistants' jobs are and will remain safe: too much chaos to deal with). Its decision-making will remain limited. AI will never replace a salesperson or a teacher, no matter how advanced it gets. Humans will always prefer human relationships; purchasing and learning are emotional decisions at their core.</p><p>The reality is that no security team can maintain all the information systems in an organization. The only solution is to ensure that the teams that operate these systems consider security. There's no AI prompt for "Should we launch this product despite the security risks?" You need judgment. Experience. Conviction.</p><p>The challenge in security has always been about balancing incentives. The carrot approach of praising good behaviors and hyping small successes often struggles to maintain engagement. The stick approach is worse. 
Try slowing down a development team's sprint to address vulnerability SLAs and see how that goes over.</p><p>What remains is the risk-based approach and a great deal of opportunism. The most effective security changes happen when you're involved at the right moment with the right stakeholders. AI might help identify these moments, but it won't replace the human judgment needed to navigate them.</p><h2 id="the-reality-check-same-game-new-tools">The Reality Check: Same Game, New Tools</h2><p>When cloud computing arrived, we genuinely had to rethink EVERYTHING. The "network perimeter" became old-fashioned. Shared responsibility models emerged. Infrastructure became code. The cloud brought containers, Kubernetes, serverless...</p><p>But AI? Right now, AI remains essentially SOFTWARE running in the cloud, processing larger datasets with non-deterministic outputs. It feels like magic, but there's nothing magical about how it runs under the hood.</p><p>Every builder is embedding models into their products. Soon ALL software will have AI components. Yet somehow we're supposed to believe this requires a completely new security discipline? What will your "AI risk register" look like when every app you use has AI components? The distinction won't matter!</p><p>We need less of:</p><ul><li>Expensive "AI security" certifications</li><li>Consultants selling fear of "novel threats"</li><li>The same security questionnaires rebranded as "AI questionnaires"</li></ul><p>And more focus on the same challenges:</p><ul><li>Access control</li><li>Data protection</li><li>Supply chain security</li><li>Third-party risk management</li></ul><p>Let's face it: "training data poisoning" may sound cool, but threat actors will still be breaching us with the same tired methods they've been using for 20 years: leaked credentials, misconfigurations, outdated vulnerable components, phishing...</p><p>While everyone rushes to become an "AI risk management expert," the fundamentals of good governance haven't changed.
Your existing frameworks likely already address most of what you need.</p><p>The question isn't whether AI will change GRC. It's how we'll lead that change. I believe the answer lies in the same human-centered approach that has always defined effective security work. AI will enable us to focus more on what truly matters: understanding context, making the judgment calls algorithms can't, and ultimately creating a security culture that's engaging.</p><p>Security and compliance will always be human problems at their core. The technology is just a tool. As we move forward, let's remember that our greatest value isn't in what we can automate, it's in what we can humanize (ok, that was cheesy).</p> ]]>
                    </content:encoded>
                    <enclosure url="" length="0"
                        type="audio/mpeg" />
                    <itunes:subtitle>How AI will change our practices, for better (mostly) and for worse (a bit). Start embracing AI to automate the bureaucratic parts of GRC and focus on our human added value.</itunes:subtitle>
                    <itunes:summary>
                        <![CDATA[ <p>If you work in GRC today, you've probably noticed the tsunami of AI hype washing over our industry. Every vendor, consultant, and LinkedIn influencer seems to be positioning themselves as an "AI security expert" (myself included). Webinars about "AI risk management frameworks" are multiplying faster than privacy laws.</p><p>My hot take: we are just rebranding the same security principles with shiny new buzzwords. AI systems remain hosted on infrastructure governed by the same access management, data lifecycle, and development principles as any other application. But that doesn't mean our discipline won't change. In fact, we're about to get rid of most of the drudgery and finally realize GRC as an advisory function. Let's dig into what's really changing and what remains foundational as we all try to make sense of it.</p><h2 id="the-transformation-of-grc-in-the-ai-era">The Transformation of GRC in the AI Era</h2><p>Cross-mappings, gap analyses, data classification, controls maturity assessments, policy writing, audits, reviews, procedure writing, guideline drafting... all of these tasks will soon be outsourced to AI.</p><p>I've met CEOs and founders who are using AI to automate entire Third-Party Risk Management processes. Think you can spot it faster than an AI when your suppliers slip shady language into their privacy policy? This is the world that's coming... and it's amazing. The days of reactive, checkbox-driven GRC are numbered.</p><p>Picture this: AI agents reading our policies, monitoring risks in real time, embedding security standards directly into coding copilots or agents. The script is flipping. Where we were once primarily information gatherers, we're now becoming curators. AI will have all the answers, but it won't be able to ask the right questions.</p><p>The routine, boring stuff is GONE. No more Excel-hell cross-mappings. No more manual risk reports. No more pointless questionnaires.
No more bridge letters nobody reads. Only the work that actually matters remains.</p><p>AI will handle the vendor due diligence grind. Good. But vendor risk isn't just about filling out a questionnaire. It's about trust. About navigating the human dynamics AI doesn't understand through the whole lifecycle.</p><p>Autonomous compliance sounds nice until an LLM hallucinates your security policies into oblivion. We'll need to watch the watchers. Make sure AI isn't just fast but right.</p><h2 id="the-human-element-in-an-ai-powered-grc-world">The Human Element in an AI-Powered GRC World</h2><p>AI can crunch data, but it won't change how people think. Security awareness, culture, and leadership are going to become our bread and butter.</p><p>A well-trained AI can spot anomalies. A well-trained human can make people care.</p><p>AI will still struggle to coordinate cross-functional teams with competing priorities. AI doesn't work well with the messy and the unpredictable (by the way, this is why executive assistants' jobs are and will remain safe: too much chaos to deal with). Its decision-making will remain limited. AI will never replace a salesperson or a teacher, no matter how advanced it gets. Humans will always prefer human relationships; purchasing and learning are emotional decisions at their core.</p><p>The reality is that no security team can maintain all the information systems in an organization. The only solution is to ensure that the teams that operate these systems consider security. There's no AI prompt for "Should we launch this product despite the security risks?" You need judgment. Experience. Conviction.</p><p>The challenge in security has always been about balancing incentives. The carrot approach of praising good behaviors and hyping small successes often struggles to maintain engagement. The stick approach is worse.
Try slowing down a development team's sprint to address vulnerability SLAs and see how that goes over.</p><p>What remains is the risk-based approach and a great deal of opportunism. The most effective security changes happen when you're involved at the right moment with the right stakeholders. AI might help identify these moments, but it won't replace the human judgment needed to navigate them.</p><h2 id="the-reality-check-same-game-new-tools">The Reality Check: Same Game, New Tools</h2><p>When cloud computing arrived, we genuinely had to rethink EVERYTHING. The "network perimeter" became old-fashioned. Shared responsibility models emerged. Infrastructure became code. The cloud brought containers, Kubernetes, serverless...</p><p>But AI? Right now, AI remains essentially SOFTWARE running in the cloud, processing larger datasets with non-deterministic outputs. It feels like magic, but there's nothing magical about how it runs under the hood.</p><p>Every builder is embedding models into their products. Soon ALL software will have AI components. Yet somehow we're supposed to believe this requires a completely new security discipline? What will your "AI risk register" look like when every app you use has AI components? The distinction won't matter!</p><p>We need less of:</p><ul><li>Expensive "AI security" certifications</li><li>Consultants selling fear of "novel threats"</li><li>The same security questionnaires rebranded as "AI questionnaires"</li></ul><p>And more focus on the same challenges:</p><ul><li>Access control</li><li>Data protection</li><li>Supply chain security</li><li>Third-party risk management</li></ul><p>Let's face it: "training data poisoning" may sound cool, but threat actors will still be breaching us with the same tired methods they've been using for 20 years: leaked credentials, misconfigurations, outdated vulnerable components, phishing...</p><p>While everyone rushes to become an "AI risk management expert," the fundamentals of good governance haven't changed.
Your existing frameworks likely already address most of what you need.</p><p>The question isn't whether AI will change GRC. It's how we'll lead that change. I believe the answer lies in the same human-centered approach that has always defined effective security work. AI will enable us to focus more on what truly matters: understanding context, making the judgment calls algorithms can't, and ultimately creating a security culture that's engaging.</p><p>Security and compliance will always be human problems at their core. The technology is just a tool. As we move forward, let's remember that our greatest value isn't in what we can automate, it's in what we can humanize (ok, that was cheesy).</p> ]]>
                    </itunes:summary>
                </item>
                <item>
                    <title>Security Questionnaires Aren&#x27;t Making Anyone Safer</title>
                    <link>https://thisisgrc.com/security-questionnaires-arent-making-anyone-safer/</link>
                    <pubDate>Wed, 09 Jul 2025 08:08:21 -0400</pubDate>
                    <guid isPermaLink="false">68181da186b8990001decfd3</guid>
                    <category>
                        <![CDATA[ GRC in Practice ]]>
                    </category>
                    <description>Security questionnaires are a soul-crushing bureaucratic charade that wastes everyone&#x27;s time, kills business momentum, and miraculously manages to make security worse in the process.</description>
                    <content:encoded>
                        <![CDATA[ <p>Let's be brutally honest about what Third-Party Risk Management currently is: a soul-crushing bureaucratic charade that wastes everyone's time, kills business momentum, and miraculously manages to make security worse in the process.</p><p>If you're a TPRM professional who spends your days pushing Excel questionnaires, chasing vendors for "proper documentation", and demanding "remediation" after filtering on the "No" answers (or perhaps you're playing chess with the vendor and some questions are phrased negatively, so "No" <em>is</em> the right answer, and you've got a macro doing the filtering!), this is your wake-up call. What you're doing isn't security. You're taking part in compliance theater.</p><p>And deep down, you already know it.</p><hr><h2 id="the-great-security-questionnaire-scam">The Great Security Questionnaire Scam</h2><p>Every day, thousands of security questionnaires fly across the internet like digital confetti. 300+ questions of mind-numbing minutiae, desperately trying to cram modern security practices into frameworks designed when client-server was cutting edge.</p><p>Here's the ugly truth about this process:</p><ul><li>Vendors constantly make liberal interpretations of the questions in order to check "YES" because they want the sale;</li><li>Customers demand perfection on controls they themselves haven't implemented;</li><li>Everyone involved knows it's theater, but nobody's brave enough to say it;</li><li>The entire process creates precisely zero actual security.</li></ul><p>Meanwhile, actual business value gets strangled while people argue about whether a managed Kubernetes service meets the definition of "proper network segmentation."</p><hr><h2 id="the-questionnaire-time-capsule-still-asking-about-server-rooms-in-2025">The Questionnaire Time Capsule: Still Asking About Server Rooms in 2025</h2><p>Your template was probably written by consultants who haven't touched actual technology in a decade.
That's why it's still asking questions like:</p><ul><li>"Do you enforce password rotation every 90 days?" while vendors are using hardware keys and biometrics with no passwords to rotate</li><li>"Describe your off-site backup process" to companies running active-active, multi-availability-zone data centers</li><li>"Detail your network segmentation" to teams running service meshes</li><li>"Outline your patch management schedule" to organizations using ephemeral infrastructure that redeploys automatically</li><li>"How do you secure remote access?" when they've implemented device certificates with continuous verification</li><li>"Describe your change approval board process" to teams shipping 1,000 microservice releases weekly with automated testing and instant rollbacks</li></ul><p>But instead of acknowledging this disconnect, we've created an entire ecosystem designed to perpetuate the lie. Vendors can't educate hundreds of customers on why these questions are irrelevant, so they just say "YES" and move on. Security teams can't admit their templates are outdated without feeling like they're compromising, so they keep demanding compliance with frameworks designed for a bygone era.</p><hr><h2 id="security-parent-syndrome">"Security Parent Syndrome"</h2><p>There are two parallel universes in B2B security. In one, providers build modern cloud infrastructure with zero-trust architectures, automated security controls, and continuous deployment. In the other, customers' security requirements still mandate physical audits of server rooms.</p><p>The fastest way to burn out a security professional?
Force them to live at the intersection of these worlds.</p><p>This "Security Parent Syndrome" manifests in requirements that would be comical if they weren't so crippling:</p><ul><li>On-site security audits for cloud providers ("Just let me schedule that physical audit with AWS real quick...")</li><li>Full background checks for every employee ("Because clearly, our video producer is the weak link in our zero-trust architecture")</li><li>Source code escrow for microservices ("Let me just package up these distributed cloud functions...")</li><li>Mandatory security training using customer slides ("Nothing says 'modern security' quite like skimming through another PowerPoint")</li><li>Bans on virtualization and open source ("Let's pretend we can do better than everyone")</li><li>Manual log review requirements ("Who needs AI-powered threat detection when you have humans manually reviewing logs?")</li></ul><p>Add to that the truly bespoke requirements that don't scale for any vendor:</p><ul><li>Individualized disaster recovery testing</li><li>Specific naming conventions for firewall rules</li><li>Approval for every deployment and patch update</li></ul><p>It's not just outdated, it's dysfunctional. And it stems from fundamentally misaligned needs:</p><ul><li>Enterprises need standardization across vendors</li><li>SaaS providers need standardization across customers</li></ul><p>Just like security professionals burn out trying to protect everything perfectly, cloud providers burn out accommodating every enterprise's unique requirements. 
And nobody wins.</p><hr><h2 id="the-broken-incentives-that-keep-this-madness-going">The Broken Incentives That Keep This Madness Going</h2><p>The incentives in TPRM are so broken they would make an economist weep:</p><ul><li>Vendors are incentivized to twist a question's meaning any way imaginable to get to a "YES": any "NO" means endless "remediation" calls, no matter the requirement</li><li>TPRM teams are incentivized to find problems to justify their existence</li><li>Business units are incentivized to hide vendor relationships until the last minute to avoid TPRM delays</li><li>Everyone is incentivized to pretend everything is "critical" because admitting otherwise means your program isn't taken seriously</li></ul><p>The result? A system where nobody can be honest. A vendor can't admit they do something differently than your template expects. A TPRM analyst can't admit some controls don't matter for certain types of vendors. A business can't admit they need this vendor regardless of security posture.</p><p>So we get what we've designed for: a broken system optimized for documentation rather than security.</p><hr><h2 id="breaking-free">Breaking Free</h2><p>Most companies get third-party risk management catastrophically wrong. Assessors send questionnaires, collect checkmarks, and turn every "no" into a finding demanding remediation. Vendors, on their side, misunderstand the whole thing by saying: "Our hosting provider is secure, we're good." The cycle repeats: pushing vendors to implement security controls that don't fit their business, forcing them to shift priorities, wasting time on what amounts to compliance theater. Instead of trying to fix every vendor weakness through endless remediation calls, look inward:</p><ul><li>If a vendor is terrible at security, don't waste months chasing them for improvements they'll never prioritize. Go to the business and tell them the truth: "This isn't a company we should build critical systems around.
Don't tie your workflows to them. Don't invest too much energy here."</li><li>When a vendor is solid, shift the conversation: "Why aren't we using them more? If they're more secure than another tool we rely on, why not consolidate? Why not replace the weaker option and reduce our overall risk?"</li><li>And when you encounter a truly top-tier vendor, pay attention. Sometimes, the best security move isn't nitpicking... it's learning from them! What are they doing that you aren't? What can you bring into your own systems?</li></ul><p>Third-party risk isn't about making every vendor perfect. It's about knowing where to push, where to contain, and where to learn. And maybe, it's about having the HUMILITY to admit we don't have all the answers, that our requirements aren't the one-size-fits-all solution to every security problem.</p><hr><h2 id="a-better-way-forward">A Better Way Forward</h2><p>Security isn't one-size-fits-all. There's no single "RIGHT" way to do security. But you already know that.</p><p>The solution to our TPRM nightmare requires acknowledging some uncomfortable truths:</p><ol><li>Modern security is about architecture, not checklists</li><li>Scale requires standardization on both sides</li><li>Perfect security is impossible, resilience isn't</li><li>Trust comes from transparency, not control</li></ol><p>To enterprises: Stop trying to force cloud-native vendors into on-premises security models. To vendors: Stop "yessing" on questionnaires just to make the sale. To TPRM professionals: Stop pretending your Excel spreadsheet constitutes actual security.</p><p>Instead:</p><ul><li>Focus on the risks that actually matter for your business relationship</li><li>Adapt your requirements to modern architectures</li><li>Embrace the reality that different vendors solve security differently</li><li>Prioritize transparency over checkbox compliance</li></ul><p>The real skill in TPRM isn't collecting YESes and closing gaps on the NOs. 
It's understanding what truly needs protection and having the HUMILITY to admit there's more than one way to get there.</p><p>TPRM should be about managing actual risk, not manufacturing paperwork. It should help businesses make informed decisions about their vendor relationships, not obstruct those relationships with security theater.</p><p>Which kind of TPRM program are you running?</p> ]]>
                    </content:encoded>
                    <enclosure url="" length="0"
                        type="audio/mpeg" />
                    <itunes:subtitle>Security questionnaires are a soul-crushing bureaucratic charade that wastes everyone&#x27;s time, kills business momentum, and miraculously manages to make security worse in the process.</itunes:subtitle>
                    <itunes:summary>
                        <![CDATA[ <p>Let's be brutally honest about what Third-Party Risk Management currently is: a soul-crushing bureaucratic charade that wastes everyone's time, kills business momentum, and miraculously manages to make security worse in the process.</p><p>If you're a TPRM professional who spends your days pushing Excel questionnaires, chasing vendors for "proper documentation", and demanding "remediation" after filtering on the "No" answers (or perhaps you're playing chess with the vendor and some questions are phrased negatively, so "No" <em>is</em> the right answer, and you've got a macro doing the filtering!), this is your wake-up call. What you're doing isn't security. You're taking part in compliance theater.</p><p>And deep down, you already know it.</p><hr><h2 id="the-great-security-questionnaire-scam">The Great Security Questionnaire Scam</h2><p>Every day, thousands of security questionnaires fly across the internet like digital confetti. 300+ questions of mind-numbing minutiae, desperately trying to cram modern security practices into frameworks designed when client-server was cutting edge.</p><p>Here's the ugly truth about this process:</p><ul><li>Vendors constantly make liberal interpretations of the questions in order to check "YES" because they want the sale;</li><li>Customers demand perfection on controls they themselves haven't implemented;</li><li>Everyone involved knows it's theater, but nobody's brave enough to say it;</li><li>The entire process creates precisely zero actual security.</li></ul><p>Meanwhile, actual business value gets strangled while people argue about whether a managed Kubernetes service meets the definition of "proper network segmentation."</p><hr><h2 id="the-questionnaire-time-capsule-still-asking-about-server-rooms-in-2025">The Questionnaire Time Capsule: Still Asking About Server Rooms in 2025</h2><p>Your template was probably written by consultants who haven't touched actual technology in a decade.
That's why it's still asking questions like:</p><ul><li>"Do you enforce password rotation every 90 days?" while vendors are using hardware keys and biometrics with no passwords to rotate</li><li>"Describe your off-site backup process" to companies running active-active, multi-availability-zone data centers</li><li>"Detail your network segmentation" to teams running service meshes</li><li>"Outline your patch management schedule" to organizations using ephemeral infrastructure that redeploys automatically</li><li>"How do you secure remote access?" when they've implemented device certificates with continuous verification</li><li>"Describe your change approval board process" to teams shipping 1,000 microservice releases weekly with automated testing and instant rollbacks</li></ul><p>But instead of acknowledging this disconnect, we've created an entire ecosystem designed to perpetuate the lie. Vendors can't educate hundreds of customers on why these questions are irrelevant, so they just say "YES" and move on. Security teams can't admit their templates are outdated without feeling like they're compromising, so they keep demanding compliance with frameworks designed for a bygone era.</p><hr><h2 id="security-parent-syndrome">"Security Parent Syndrome"</h2><p>There are two parallel universes in B2B security. In one, providers build modern cloud infrastructure with zero-trust architectures, automated security controls, and continuous deployment. In the other, customers' security requirements still mandate physical audits of server rooms.</p><p>The fastest way to burn out a security professional?
Force them to live at the intersection of these worlds.</p><p>This "Security Parent Syndrome" manifests in requirements that would be comical if they weren't so crippling:</p><ul><li>On-site security audits for cloud providers ("Just let me schedule that physical audit with AWS real quick...")</li><li>Full background checks for every employee ("Because clearly, our video producer is the weak link in our zero-trust architecture")</li><li>Source code escrow for microservices ("Let me just package up these distributed cloud functions...")</li><li>Mandatory security training using customer slides ("Nothing says 'modern security' quite like skimming through another PowerPoint")</li><li>Bans on virtualization and open source ("Let's pretend we can do better than everyone")</li><li>Manual log review requirements ("Who needs AI-powered threat detection when you have humans manually reviewing logs?")</li></ul><p>Add to that the truly bespoke requirements that don't scale for any vendor:</p><ul><li>Individualized disaster recovery testing</li><li>Specific naming conventions for firewall rules</li><li>Approval for every deployment and patch update</li></ul><p>It's not just outdated, it's dysfunctional. And it stems from fundamentally misaligned needs:</p><ul><li>Enterprises need standardization across vendors</li><li>SaaS providers need standardization across customers</li></ul><p>Just like security professionals burn out trying to protect everything perfectly, cloud providers burn out accommodating every enterprise's unique requirements. 
And nobody wins.</p><hr><h2 id="the-broken-incentives-that-keep-this-madness-going">The Broken Incentives That Keep This Madness Going</h2><p>The incentives in TPRM are so broken they would make an economist weep:</p><ul><li>Vendors are incentivized to twist a question's meaning any way imaginable to get to a "YES": any "NO" means endless "remediation" calls, no matter the requirement</li><li>TPRM teams are incentivized to find problems to justify their existence</li><li>Business units are incentivized to hide vendor relationships until the last minute to avoid TPRM delays</li><li>Everyone is incentivized to pretend everything is "critical" because admitting otherwise means your program isn't taken seriously</li></ul><p>The result? A system where nobody can be honest. A vendor can't admit they do something differently than your template expects. A TPRM analyst can't admit some controls don't matter for certain types of vendors. A business can't admit they need this vendor regardless of security posture.</p><p>So we get what we've designed for: a broken system optimized for documentation rather than security.</p><hr><h2 id="breaking-free">Breaking Free</h2><p>Most companies get third-party risk management catastrophically wrong. Assessors send questionnaires, collect checkmarks, and turn every "no" into a finding demanding remediation. Vendors, on their side, misunderstand the whole thing by saying: "Our hosting provider is secure, we're good." The cycle repeats: pushing vendors to implement security controls that don't fit their business, forcing them to shift priorities, wasting time on what amounts to compliance theater. Instead of trying to fix every vendor weakness through endless remediation calls, look inward:</p><ul><li>If a vendor is terrible at security, don't waste months chasing them for improvements they'll never prioritize. Go to the business and tell them the truth: "This isn't a company we should build critical systems around.
Don't tie your workflows to them. Don't invest too much energy here."</li><li>When a vendor is solid, shift the conversation: "Why aren't we using them more? If they're more secure than another tool we rely on, why not consolidate? Why not replace the weaker option and reduce our overall risk?"</li><li>And when you encounter a truly top-tier vendor, pay attention. Sometimes, the best security move isn't nitpicking... it's learning from them! What are they doing that you aren't? What can you bring into your own systems?</li></ul><p>Third-party risk isn't about making every vendor perfect. It's about knowing where to push, where to contain, and where to learn. And maybe, it's about having the HUMILITY to admit we don't have all the answers, that our requirements aren't the one-size-fits-all solution to every security problem.</p><hr><h2 id="a-better-way-forward">A Better Way Forward</h2><p>Security isn't one-size-fits-all. There's no single "RIGHT" way to do security. But you already know that.</p><p>The solution to our TPRM nightmare requires acknowledging some uncomfortable truths:</p><ol><li>Modern security is about architecture, not checklists</li><li>Scale requires standardization on both sides</li><li>Perfect security is impossible, resilience isn't</li><li>Trust comes from transparency, not control</li></ol><p>To enterprises: Stop trying to force cloud-native vendors into on-premises security models. To vendors: Stop "yessing" on questionnaires just to make the sale. To TPRM professionals: Stop pretending your Excel spreadsheet constitutes actual security.</p><p>Instead:</p><ul><li>Focus on the risks that actually matter for your business relationship</li><li>Adapt your requirements to modern architectures</li><li>Embrace the reality that different vendors solve security differently</li><li>Prioritize transparency over checkbox compliance</li></ul><p>The real skill in TPRM isn't collecting YESes and closing gaps on the NOs. 
It's understanding what truly needs protection and having the HUMILITY to admit there's more than one way to get there.</p><p>TPRM should be about managing actual risk, not manufacturing paperwork. It should help businesses make informed decisions about their vendor relationships, not obstruct those relationships with security theater.</p><p>Which kind of TPRM program are you running?</p> ]]>
                    </itunes:summary>
                </item>
                <item>
                    <title>Beyond Compliance</title>
                    <link>https://thisisgrc.com/beyond-compliance/</link>
                    <pubDate>Wed, 25 Jun 2025 08:08:31 -0400</pubDate>
                    <guid isPermaLink="false">681818cf86b8990001decfa4</guid>
                    <category>
                        <![CDATA[ GRC in Practice ]]>
                    </category>
                    <description>The ultimate rant against compliance theatre and checkmarkism.</description>
                    <content:encoded>
                        <![CDATA[ <p><strong>Your compliance program is about to be automated out of existence.</strong></p><p>Let that sink in for a minute.</p><p>While you're busy color-coding spreadsheets and perfecting your NIST mapping, AI is already writing better security policies than most GRC teams. It's automating, in seconds, framework mappings that took you weeks. And soon, it will be generating smarter risk assessments than your entire department.</p><p>If your GRC function is just a documentation factory or checkbox exercise, you're already obsolete. You just don't know it yet.</p><p>Think about those GRC professionals who build elaborate control frameworks and meticulous documentation while completely failing to move the security needle or gain allies. They're the first to complain that "nobody takes security seriously" while simultaneously making security the most painful, bureaucratic experience possible. They're a dying breed.</p><p>The compliance-industrial complex is crumbling, and the only GRC professionals who will survive are those who recognize that security is about people, relationships, and delivering actual value. Not processes and paperwork.</p><h2 id="the-death-of-security-theatre">The Death of Security Theatre</h2><p>Let's be brutally honest: most of what passes for security and compliance today is just theater, an elaborate performance designed to look good rather than actually protect anything.</p><p>As GRC professionals, we've become drunk on our own power. We make sure the company plays by the rules... our rules. We block projects. Stall deployments. Reject vendors. And we've convinced ourselves this is "doing security."</p><p>But here's the problem: when compliance teams go on auto-pilot, they stop enabling and start dictating. "To scale," "to standardize"...
but let's call it what it really is: "to make our lives easier and justify our existence."</p><p>HR drowns in FBI-level background checks that make hiring impossible.</p><p>The Platform team suffocates under backup policies written by people who've never restored a system.</p><p>Engineers waste sprints implementing controls that protect against threats that don't exist.</p><p>Yes, we have the power to force this. That doesn't make it right. It makes us part of the problem. The most insidious part is that it's almost impossible to prove that a control has ZERO value. There's no ROI, but we're so afraid of "degrading our overall security posture" that we keep the charade going...</p><p>When we impose rules, people comply. But they don't own the outcome. As soon as you stop enforcing the rules, they will stop following them because you've provided them only with an extrinsic reason to act.</p><p>That's the difference between compliance and commitment, between following and owning.</p><hr><h2 id="the-signs-your-grc-program-is-failing">The Signs Your GRC Program Is Failing</h2><p>Look around your organization. If you see these symptoms, your program isn't just ineffective, it's actively harmful:</p><ul><li>Committees of people who don't do the work hold all the power, dragging decisions through endless cycles</li><li>Every minor risk gets escalated to leadership where it dies a slow death in perpetual limbo</li><li>You produce beautiful documentation that no one reads, references, or uses</li><li>Engineers receive controls to implement like homework assignments with zero context</li></ul><p>This isn't security. It's bureaucracy wearing a security badge.</p><p>GRC should solve problems, not manufacture them. 
But too often, we've become so infatuated with our processes that we've forgotten their purpose.</p><hr><h2 id="escape-your-compliance-prison-before-its-too-late">Escape Your Compliance Prison Before It's Too Late</h2><p>You were hired because the company needed someone to "deal with that compliance problem." But is that really all you're capable of? Is that why you got into security?</p><p>I doubt it. And in the age of AI, being just "the compliance person" is a career death sentence.</p><p>Here are 3 radical moves to transform yourself from cost center to strategic asset:</p><h3 id="infiltrate-the-sales-teams">Infiltrate the sales teams</h3><p>Stop hiding in your compliance corner. Get aggressive about understanding your company's sales process. Learn their partnerships and B2B sales motions like your career depends on it. Because it kind of does. Build them that security FAQ they desperately need. Even better? Build a GPT to answer those questions automatically. Watch how quickly you transform from bureaucratic blocker to revenue enabler.</p><h3 id="forge-an-alliance-with-legal">Forge an alliance with legal</h3><p>Here's one of my biggest career revelations: GRC and legal speak the same language of risk and ambiguity. This is your ticket to business relevance. This partnership will transform you from "compliance person" to "strategic advisor" practically overnight.</p><h3 id="make-finance-and-procurement-your-weapon">Make finance and procurement your weapon</h3><p>In our cloud-first world, every solution is quoted by user/month, and most companies are hemorrhaging money on unused licenses. Connect those dots between your purchasing teams and the IT folks managing IAM. Find those cost savings. 
Watch how quickly your Third-Party risk program becomes everyone's priority when you're saving them millions instead of just asking for documentation.</p><hr><h2 id="your-secret-weapon-connecting-the-dots">Your Secret Weapon: Connecting the Dots</h2><p>While the rest of the organization is siloed, GRC professionals have a superpower that nobody else has: institutional knowledge that spans the entire enterprise.</p><p>We see every layer of an organization. We collaborate with everybody because security needs to be embedded everywhere. We interact with procurement to strengthen supplier relationships. We're twins with privacy. We influence HR processes. We meet with IT daily. We audit engineering systems.</p><p>If you work in B2B companies, you also touch sales and marketing. For the curious mind, this is the opportunity to absorb domains of knowledge, new approaches, and emerging projects, all while being positioned to create connections no one else can see.</p><p>This knowledge is a cheat code for organizational influence.</p><hr><h2 id="simplify-dont-complicate">Simplify, Don't Complicate</h2><p>Want to know why nobody cares about your fancy data classifications? Because they shouldn't have to.</p><p>I learned this the hard way after rolling out an intricate matrix of color-coded data classification levels and subcategories of confidential PII to our procurement team so they could do some screening for us. I thought it was brilliant.</p><p>Procurement's reaction? They looked at me like I was describing my weekend D&amp;D campaign.</p><p>Our fancy data classification matrix was nerd gibberish. We were so deep in our expert bubble that we'd forgotten the cardinal rule: our job is to make complex ideas simple, not simple ideas complex.</p><p>Real GRC work isn't about creating spreadsheets to make academics proud. It's about translating risk into language that drives action. We don't need to prove how smart we are.
We need to prove how valuable we can be.</p><p>Instead of dragging everyone through our complexity, we need to start with their reality. Our job is to abstract complexity, not create more of it.</p><hr><h2 id="frameworks-are-not-security">Frameworks Are Not Security</h2><p>Here's another inconvenient truth: memorizing every control in PCI, ISO, NIST, and SOC2 doesn't make you good at security. It makes you good at taking tests.</p><p>Think about music. You can learn to read sheet music perfectly, know every note, every chord progression. But if you never pick up an instrument, never play, never improvise, are you really a musician? Security is the same. Frameworks are the sheet music, but the real job is playing the music.</p><p>The real game in GRC is understanding the entire enterprise ecosystem: cloud, IAM, networking, DevOps, data, software architecture... and then cross-referencing these technical layers with business processes, market plans, and partnerships.</p><p>Sure, you can be "the PCI person." You can make a living off it. But if you lock in too early, you're pigeonholing yourself. You risk becoming a one-trick pony. Or worse, you get so tied to the framework that you stop thinking critically and just parrot whatever's in the document.</p><p>Frameworks are tools. They're not the job. The job is making security happen. And to do that, you need to go beyond the checklist and learn how it all fits together.</p><hr><h2 id="lead-with-solutions-not-process">Lead with Solutions, Not Process</h2><p>Want to actually fix security instead of just talking about it? Dare to walk the hard path. 
Lead with objectives, not checkboxes.</p><p>Instead of slamming down a compliance mandate, ask the real questions:</p><ul><li>What resilience scenario are we actually trying to address?</li><li>What's the real security risk we're trying to solve with this hiring practice?</li><li>How would this control actually prevent the attacks we're seeing?</li></ul><p>Then work backwards from the problem, NOT from a so-called "hard" requirement.</p><p>Here's a secret the certification bodies don't want you to know: Almost all frameworks have intentionally vague requirements because the people writing them know compliance is never one-size-fits-all. They're giving you room to adapt: use it!</p><p>GRC is at its best when it builds a culture where security is about attracting people to work together on objectives, not enforcing a fossilized list of controls dreamed up by textbook authors and long-gone consultants.</p><hr><h2 id="the-path-forward">The Path Forward</h2><p>If you want to survive the AI revolution in GRC, here's your new playbook:</p><ul><li>Only escalate risks when a fix requires multi-team resources. Sometimes all it takes is a few sprint points to make meaningful progress.</li><li>Stop forcing processes full of security jargon. Adapt to the team's existing workflows and tools instead of making everyone conform to your processes.</li><li>Treat frameworks as starting points, not gospels. Your job is to tailor them, absorb the complexity, and make them actionable. Otherwise, you're just an expensive parrot.</li><li>Eliminate approval cycles and committees that don't add value. They don't just delay decisions—they actively disempower teams and kill innovation.</li></ul><p>The compliance professionals who will thrive in the next decade aren't the ones with the most certifications or the strictest controls. 
They're the ones who build relationships, solve real problems, and connect dots that others can't even see.</p><p>The best GRC teams don't just follow documents—they lead with relevance and action.</p><p>Which one are you going to be?</p> ]]>
                    </content:encoded>
                    <enclosure url="" length="0"
                        type="audio/mpeg" />
                    <itunes:subtitle>The ultimate rant against compliance theatre and checkmarkism.</itunes:subtitle>
                    <itunes:summary>
<![CDATA[ <p><strong>Your compliance program is about to be automated out of existence.</strong></p><p>Let that sink in for a minute.</p><p>While you're busy color-coding spreadsheets and perfecting your NIST mapping, AI is already writing better security policies than most GRC teams. In seconds, it automates framework mappings that took you weeks. And soon, it will be generating smarter risk assessments than your entire department.</p><p>If your GRC function is just a documentation factory or checkbox exercise, you're already obsolete. You just don't know it yet.</p><p>Think about those GRC professionals who build elaborate control frameworks and meticulous documentation while completely failing to move the security needle or gain allies. They're the first to complain that "nobody takes security seriously" while simultaneously making security the most painful, bureaucratic experience possible. They're a dying breed.</p><p>The compliance industrial complex is crumbling, and the only GRC professionals who will survive are those who recognize that security is about people, relationships, and delivering actual value. Not processes and paperwork.</p><h2 id="the-death-of-security-theatre">The Death of Security Theatre</h2><p>Let's be brutally honest: most of what passes for security and compliance today is just theatre, an elaborate performance designed to look good rather than actually protect anything.</p><p>As GRC professionals, we've become drunk on our own power. We make sure the company plays by the rules... our rules. We block projects. Stall deployments. Reject vendors. And we've convinced ourselves this is "doing security."</p><p>But here's the problem: when compliance teams go on auto-pilot, they stop enabling and start dictating. "To scale," "to standardize"...
but let's call it what it really is: "to make our lives easier and justify our existence."</p><p>HR drowns in FBI-level background checks that make hiring impossible.</p><p>The Platform team suffocates under backup policies written by people who've never restored a system.</p><p>Engineers waste sprints implementing controls that protect against threats that don't exist.</p><p>Yes, we have the power to force this. That doesn't make it right. It makes us part of the problem. The most insidious part is that it's almost impossible to prove that a control has ZERO value. There's no ROI, but we're so afraid of "degrading our overall security posture" that we keep the charade going...</p><p>When we impose rules, people comply. But they don't own the outcome. As soon as you stop enforcing the rules, they will stop following them because you've provided them only with an extrinsic reason to act.</p><p>That's the difference between compliance and commitment, between following and owning.</p><hr><h2 id="the-signs-your-grc-program-is-failing">The Signs Your GRC Program Is Failing</h2><p>Look around your organization. If you see these symptoms, your program isn't just ineffective, it's actively harmful:</p><ul><li>Committees of people who don't do the work hold all the power, dragging decisions through endless cycles</li><li>Every minor risk gets escalated to leadership where it dies a slow death in perpetual limbo</li><li>You produce beautiful documentation that no one reads, references, or uses</li><li>Engineers receive controls to implement like homework assignments with zero context</li></ul><p>This isn't security. It's bureaucracy wearing a security badge.</p><p>GRC should solve problems, not manufacture them. 
But too often, we've become so infatuated with our processes that we've forgotten their purpose.</p><hr><h2 id="escape-your-compliance-prison-before-its-too-late">Escape Your Compliance Prison Before It's Too Late</h2><p>You were hired because the company needed someone to "deal with that compliance problem." But is that really all you're capable of? Is that why you got into security?</p><p>I doubt it. And in the age of AI, being just "the compliance person" is a career death sentence.</p><p>Here are 3 radical moves to transform yourself from cost center to strategic asset:</p><h3 id="infiltrate-the-sales-teams">Infiltrate the sales teams</h3><p>Stop hiding in your compliance corner. Get aggressive about understanding your company's sales process. Learn their partnerships and B2B sales motions like your career depends on it. Because it kind of does. Build them that security FAQ they desperately need. Even better? Build a GPT to answer those questions automatically. Watch how quickly you transform from bureaucratic blocker to revenue enabler.</p><h3 id="forge-an-alliance-with-legal">Forge an alliance with legal</h3><p>Here's one of my biggest career revelations: GRC and legal speak the same language of risk and ambiguity. This is your ticket to business relevance. This partnership will transform you from "compliance person" to "strategic advisor" practically overnight.</p><h3 id="make-finance-and-procurement-your-weapon">Make finance and procurement your weapon</h3><p>In our cloud-first world, every solution is quoted by user/month, and most companies are hemorrhaging money on unused licenses. Connect those dots between your purchasing teams and the IT folks managing IAM. Find those cost savings. 
Watch how quickly your Third-Party risk program becomes everyone's priority when you're saving them millions instead of just asking for documentation.</p><hr><h2 id="your-secret-weapon-connecting-the-dots">Your Secret Weapon: Connecting the Dots</h2><p>While the rest of the organization is siloed, GRC professionals have a superpower that nobody else has: institutional knowledge that spans the entire enterprise.</p><p>We see every layer of an organization. We collaborate with everybody because security needs to be embedded everywhere. We interact with procurement to strengthen supplier relationships. We're twins with privacy. We influence HR processes. We meet with IT daily. We audit engineering systems.</p><p>If you work in B2B companies, you also touch sales and marketing. For the curious mind, this is the opportunity to absorb domains of knowledge, new approaches, and emerging projects, all while being positioned to create connections no one else can see.</p><p>This knowledge is a cheat code for organizational influence.</p><hr><h2 id="simplify-dont-complicate">Simplify, Don't Complicate</h2><p>Want to know why nobody cares about your fancy data classifications? Because they shouldn't have to.</p><p>I learned this the hard way after rolling out an intricate matrix of color-coded data classification levels and subcategories of confidential PII to our procurement team so they could do some screening for us. I thought it was brilliant.</p><p>Procurement's reaction? They looked at me like I was describing my weekend D&amp;D campaign.</p><p>Our fancy data classification matrix was nerd gibberish. We were so deep in our expert bubble that we'd forgotten the cardinal rule: our job is to make complex ideas simple, not simple ideas complex.</p><p>Real GRC work isn't about creating spreadsheets to make academics proud. It's about translating risk into language that drives action. We don't need to prove how smart we are.
We need to prove how valuable we can be.</p><p>Instead of dragging everyone through our complexity, we need to start with their reality. Our job is to abstract complexity, not create more of it.</p><hr><h2 id="frameworks-are-not-security">Frameworks Are Not Security</h2><p>Here's another inconvenient truth: memorizing every control in PCI, ISO, NIST, and SOC2 doesn't make you good at security. It makes you good at taking tests.</p><p>Think about music. You can learn to read sheet music perfectly, know every note, every chord progression. But if you never pick up an instrument, never play, never improvise, are you really a musician? Security is the same. Frameworks are the sheet music, but the real job is playing the music.</p><p>The real game in GRC is understanding the entire enterprise ecosystem: cloud, IAM, networking, DevOps, data, software architecture... and then cross-referencing these technical layers with business processes, market plans, and partnerships.</p><p>Sure, you can be "the PCI person." You can make a living off it. But if you lock in too early, you're pigeonholing yourself. You risk becoming a one-trick pony. Or worse, you get so tied to the framework that you stop thinking critically and just parrot whatever's in the document.</p><p>Frameworks are tools. They're not the job. The job is making security happen. And to do that, you need to go beyond the checklist and learn how it all fits together.</p><hr><h2 id="lead-with-solutions-not-process">Lead with Solutions, Not Process</h2><p>Want to actually fix security instead of just talking about it? Dare to walk the hard path. 
Lead with objectives, not checkboxes.</p><p>Instead of slamming down a compliance mandate, ask the real questions:</p><ul><li>What resilience scenario are we actually trying to address?</li><li>What's the real security risk we're trying to solve with this hiring practice?</li><li>How would this control actually prevent the attacks we're seeing?</li></ul><p>Then work backwards from the problem, NOT from a so-called "hard" requirement.</p><p>Here's a secret the certification bodies don't want you to know: Almost all frameworks have intentionally vague requirements because the people writing them know compliance is never one-size-fits-all. They're giving you room to adapt: use it!</p><p>GRC is at its best when it builds a culture where security is about attracting people to work together on objectives, not enforcing a fossilized list of controls dreamed up by textbook authors and long-gone consultants.</p><hr><h2 id="the-path-forward">The Path Forward</h2><p>If you want to survive the AI revolution in GRC, here's your new playbook:</p><ul><li>Only escalate risks when a fix requires multi-team resources. Sometimes all it takes is a few sprint points to make meaningful progress.</li><li>Stop forcing processes full of security jargon. Adapt to the team's existing workflows and tools instead of making everyone conform to your processes.</li><li>Treat frameworks as starting points, not gospels. Your job is to tailor them, absorb the complexity, and make them actionable. Otherwise, you're just an expensive parrot.</li><li>Eliminate approval cycles and committees that don't add value. They don't just delay decisions—they actively disempower teams and kill innovation.</li></ul><p>The compliance professionals who will thrive in the next decade aren't the ones with the most certifications or the strictest controls. 
They're the ones who build relationships, solve real problems, and connect dots that others can't even see.</p><p>The best GRC teams don't just follow documents—they lead with relevance and action.</p><p>Which one are you going to be?</p> ]]>
                    </itunes:summary>
                </item>
                <item>
                    <title>The Communications Game</title>
                    <link>https://thisisgrc.com/the-communications-game/</link>
                    <pubDate>Wed, 11 Jun 2025 08:08:08 -0400
                    </pubDate>
                    <guid isPermaLink="false">6818045c86b8990001decdbe</guid>
                    <category>
                        <![CDATA[ Leadership ]]>
                    </category>
                    <description>Most security failures aren&#x27;t about missing &quot;controls&quot; in &quot;frameworks&quot;. They&#x27;re about missed connections. In GRC, your ability to listen, translate, and build trust matters more than any framework. Here&#x27;s what I learned the hard way.</description>
                    <content:encoded>
<![CDATA[ <p>Here's my story about how I became convinced that communications was the core skill to master when working in GRC. Now, let me preface this with a big caveat: I'm not going to pretend I'm some mastermind here. I'm still figuring this out, learning the hard way. Still, I've come a long way. Might as well share my experiences.</p><p>Most security initiatives fail because of inept communication. Sometimes it’s resistance to change that we fail to address. Other times it’s confusion because you just put a cereal box of acronyms on everybody's plates. Occasionally, it’s straight-up indifference. And each time, someone on the team says, “<em>We explained it clearly. Why don’t they get it?</em>”</p><p>But communication isn’t clarity. It’s relevance.</p><p>Whether you're talking to an engineer who wants you out of the way, or a business lead with a target to hit, they can tell if you're reciting a playbook. They can tell if you care about their work, or just want them to care about yours.</p><p>And yes, it feels like selling... because it is.</p><p>Security doesn’t work without trust. It doesn’t scale without buy-in. And it doesn’t stick if people are only complying out of fear. Our job isn’t just to enforce controls. It’s to embed security in the way the organization thinks, builds, and grows.</p><p>But communication is also relentless repetition.</p><p>I’ve sat through countless “communications plans” that boil down to a policy update and an all-hands email. That’s not strategy, that’s broadcast. Message in a bottle. The average office worker gets <a href="https://prosperitymedia.com.au/how-many-emails-are-sent-per-day-in-2025/?ref=thisisgrc.com#:~:text=Statistics%20show%20that%2040%20emails,day%20by%20the%20average%20person!" rel="noreferrer">121 emails</a> per day.
A good newsletter "open rate" is <a href="https://www.campaignmonitor.com/resources/knowledge-base/what-are-good-email-metrics/?ref=thisisgrc.com" rel="noreferrer">25%</a>. Getting people to care about security requires influence, not information. It means stepping out of our expertise and stepping into theirs. And even if you make the message as tailored and relevant as you can, expect to repeat yourself a hundred times.</p><hr><h2 id="culture-isn%E2%80%99t-built-with-slogans">Culture Isn’t Built with Slogans</h2><p>I’ve never liked the phrase “security is everyone’s responsibility.” It sounds inclusive, but it isn’t. What it actually does is spread accountability so thin that no one feels it anymore.</p><p>If you work in GRC, you know how this plays out. We tell ourselves we’re enabling a “culture of security,” but we end up recycling awareness training slides and wondering why nobody reads them.</p><p>Meanwhile, engineers roll their eyes. Salespeople avoid us. IT does their thing.</p><p>Then we get frustrated. “<em>They don’t care about security until a breach occurs!</em>”</p><p>But they do. They care enough to pay someone to think about it <em>for them</em>! That someone is you. It may not be the <em>right</em> way to think about security, but it is what it is. When management hires you, they're buying peace of mind for that "security problem". "<em>But they're just putting security in a little box and expecting us to fix everything, that makes no sense!</em>" I know, but this is what they're buying. Expecting developers or product managers to prioritize security risks over deliverables is a fantasy. It’s our job to translate risk into relevance. That means understanding how their incentives work and meeting them there.</p><hr><h2 id="when-we-pretend-we-lose">When We Pretend, We Lose</h2><p>One of the worst mistakes I ever made was showing up to a data lake project like I already had the answers.</p><p>Data lakes are messy.
They're basically the database equivalent of your old Android phone's photo archive. And in security, that makes us anxious. We want controls. Owners. Retention schedules. But walking in with a checklist and a smirk doesn’t win you credibility. Maybe you'll get a backlog ticket that never gets touched.</p><p>That project taught me something I try not to forget: Sometimes the most secure thing you can do is shut up and listen first. Today, when someone pitches a new AI-powered system or a new data-sharing tool, my instinct is still to scan for risk.</p><p>But I’ve learned to start with:</p><blockquote>“Interesting idea. Let’s see how we can make this work safely.”</blockquote><p>I mean, at heart, we must remain excited by all of technology's possibilities. If we can't get excited, the spark is gone, and this is where we become bitter and old.</p><p>I’ve seen GRC teams lose influence simply because they showed up like lecturers. Like the engineers needed a reminder of the rules. That posture doesn’t just fail internally, by the way. It poisons vendor relationships, too. Nobody wants to be told what to do by someone who doesn't understand what they do. I've suffered this countless times working for a SaaS vendor.</p><hr><h2 id="your-heatmaps-don%E2%80%99t-speak-for-themselves">Your Heatmaps Don’t Speak for Themselves</h2><p>We like to think our data will do the talking. That if we come armed with the right risk matrix or classification schemes, people will just get it.</p><p>They won’t.</p><p>Most people don’t think in frameworks. They think in outcomes. If you walk into a planning meeting talking about ISO controls and pseudonymized PII, you’ll lose them in five minutes.</p><p>What they want to know is:<em> "How much will this slow us down? And what's in it for me?" </em>Sure, there are plenty of people who want to do things right and will champion your initiatives. But assume they won't.
</p><p>If you can answer those things first, then you’ve earned the space to talk about audit logs, access tiers, or backup policies. </p><p>Don’t get me wrong. I’m not saying throw away your frameworks. But they’re your "infrastructure". Not your product.</p><hr><h2 id="if-they-fail-we-get-fired">If They Fail, We Get Fired</h2><p>If people don’t care about security, that’s not their failure, it’s ours. We can’t keep blaming the business for not listening. We need to ask how we’re speaking. Because when the breach comes, no one’s pointing fingers at the ones who missed the webinar.</p><p>They’re looking at us.</p><p>Shared responsibility doesn’t mean shared consequences. It never has. It's unfair, and stressful. <a href="https://www.kaspersky.com/about/press-releases/job-done-nearly-every-third-corporate-data-breach-gets-employees-fired?ref=thisisgrc.com" rel="noreferrer">Kaspersky</a> noted that in 31% of breaches, employees get fired; 45% of them are senior security employees.  </p><p>So if we want security to succeed, we need to stop expecting everyone else to <em>get it</em>, and start making it <em>impossible not to</em>. That means selling security, not yelling about mandatory checkboxes.</p><p>Because in the end, nobody remembers the policies you wrote. They remember whether you helped, whether you listened, and whether they could build with you in the room.</p><p>And that’s the kind of GRC that actually gets things done.</p> ]]>
                    </content:encoded>
                    <enclosure url="" length="0"
                        type="audio/mpeg" />
                    <itunes:subtitle>Most security failures aren&#x27;t about missing &quot;controls&quot; in &quot;frameworks&quot;. They&#x27;re about missed connections. In GRC, your ability to listen, translate, and build trust matters more than any framework. Here&#x27;s what I learned the hard way.</itunes:subtitle>
                    <itunes:summary>
<![CDATA[ <p>Here's my story about how I became convinced that communications was the core skill to master when working in GRC. Now, let me preface this with a big caveat: I'm not going to pretend I'm some mastermind here. I'm still figuring this out, learning the hard way. Still, I've come a long way. Might as well share my experiences.</p><p>Most security initiatives fail because of inept communication. Sometimes it’s resistance to change that we fail to address. Other times it’s confusion because you just put a cereal box of acronyms on everybody's plates. Occasionally, it’s straight-up indifference. And each time, someone on the team says, “<em>We explained it clearly. Why don’t they get it?</em>”</p><p>But communication isn’t clarity. It’s relevance.</p><p>Whether you're talking to an engineer who wants you out of the way, or a business lead with a target to hit, they can tell if you're reciting a playbook. They can tell if you care about their work, or just want them to care about yours.</p><p>And yes, it feels like selling... because it is.</p><p>Security doesn’t work without trust. It doesn’t scale without buy-in. And it doesn’t stick if people are only complying out of fear. Our job isn’t just to enforce controls. It’s to embed security in the way the organization thinks, builds, and grows.</p><p>But communication is also relentless repetition.</p><p>I’ve sat through countless “communications plans” that boil down to a policy update and an all-hands email. That’s not strategy, that’s broadcast. Message in a bottle. The average office worker gets <a href="https://prosperitymedia.com.au/how-many-emails-are-sent-per-day-in-2025/?ref=thisisgrc.com#:~:text=Statistics%20show%20that%2040%20emails,day%20by%20the%20average%20person!" rel="noreferrer">121 emails</a> per day.
A good newsletter "open rate" is <a href="https://www.campaignmonitor.com/resources/knowledge-base/what-are-good-email-metrics/?ref=thisisgrc.com" rel="noreferrer">25%</a>. Getting people to care about security requires influence, not information. It means stepping out of our expertise and stepping into theirs. And even if you make the message as tailored and relevant as you can, expect to repeat yourself a hundred times.</p><hr><h2 id="culture-isn%E2%80%99t-built-with-slogans">Culture Isn’t Built with Slogans</h2><p>I’ve never liked the phrase “security is everyone’s responsibility.” It sounds inclusive, but it isn’t. What it actually does is spread accountability so thin that no one feels it anymore.</p><p>If you work in GRC, you know how this plays out. We tell ourselves we’re enabling a “culture of security,” but we end up recycling awareness training slides and wondering why nobody reads them.</p><p>Meanwhile, engineers roll their eyes. Salespeople avoid us. IT does their thing.</p><p>Then we get frustrated. “<em>They don’t care about security until a breach occurs!</em>”</p><p>But they do. They care enough to pay someone to think about it <em>for them</em>! That someone is you. It may not be the <em>right</em> way to think about security, but it is what it is. When management hires you, they're buying peace of mind for that "security problem". "<em>But they're just putting security in a little box and expecting us to fix everything, that makes no sense!</em>" I know, but this is what they're buying. Expecting developers or product managers to prioritize security risks over deliverables is a fantasy. It’s our job to translate risk into relevance. That means understanding how their incentives work and meeting them there.</p><hr><h2 id="when-we-pretend-we-lose">When We Pretend, We Lose</h2><p>One of the worst mistakes I ever made was showing up to a data lake project like I already had the answers.</p><p>Data lakes are messy.
They're basically the database equivalent of your old Android phone's photo archive. And in security, that makes us anxious. We want controls. Owners. Retention schedules. But walking in with a checklist and a smirk doesn’t win you credibility. At best, you'll get a backlog ticket that never gets touched.</p><p>That project taught me something I try not to forget: Sometimes the most secure thing you can do is shut up and listen first. Today, when someone pitches a new AI-powered system or a new data-sharing tool, my instinct is still to scan for risk.</p><p>But I’ve learned to start with:</p><blockquote>“Interesting idea. Let’s see how we can make this work safely.”</blockquote><p>I mean, at heart, we must remain excited by all of technology's possibilities. If we can't get excited, the spark is gone, and this is where we become bitter and old.</p><p>I’ve seen GRC teams lose influence simply because they showed up like lecturers. Like the engineers needed a reminder of the rules. That posture doesn’t just fail internally, by the way. It poisons vendor relationships, too. Nobody wants to be told what to do by someone who doesn't understand what they do. I've suffered this countless times working for a SaaS vendor. </p><hr><h2 id="your-heatmaps-don%E2%80%99t-speak-for-themselves">Your Heatmaps Don’t Speak for Themselves</h2><p>We like to think our data will do the talking. That if we come armed with the right risk matrix or classification schemes, people will just get it.</p><p>They won’t.</p><p>Most people don’t think in frameworks. They think in outcomes. If you walk into a planning meeting talking about ISO controls and pseudonymized PII, you’ll lose them in five minutes.</p><p>What they want to know is:<em> "How much will this slow us down? And what's in it for me?" </em>Sure, there are plenty of people who want to do things right and will champion your initiatives. But assume they won't.
</p><p>If you can answer those things first, then you’ve earned the space to talk about audit logs, access tiers, or backup policies. </p><p>Don’t get me wrong. I’m not saying throw away your frameworks. But they’re your "infrastructure". Not your product.</p><hr><h2 id="if-they-fail-we-get-fired">If They Fail, We Get Fired</h2><p>If people don’t care about security, that’s not their failure, it’s ours. We can’t keep blaming the business for not listening. We need to ask how we’re speaking. Because when the breach comes, no one’s pointing fingers at the ones who missed the webinar.</p><p>They’re looking at us.</p><p>Shared responsibility doesn’t mean shared consequences. It never has. It's unfair, and stressful. <a href="https://www.kaspersky.com/about/press-releases/job-done-nearly-every-third-corporate-data-breach-gets-employees-fired?ref=thisisgrc.com" rel="noreferrer">Kaspersky</a> noted that in 31% of breaches, employees get fired; 45% of them are senior security employees.  </p><p>So if we want security to succeed, we need to stop expecting everyone else to <em>get it</em>, and start making it <em>impossible not to</em>. That means selling security, not yelling about mandatory checkboxes.</p><p>Because in the end, nobody remembers the policies you wrote. They remember whether you helped, whether you listened, and whether they could build with you in the room.</p><p>And that’s the kind of GRC that actually gets things done.</p> ]]>
                    </itunes:summary>
                </item>
                <item>
                    <title>We Can’t Train Everyone. That’s the Truth.</title>
                    <link>https://thisisgrc.com/we-cant-train-everyone-thats-the-truth/</link>
                    <pubDate>Wed, 28 May 2025 08:08:03 -0400
                    </pubDate>
                    <guid isPermaLink="false">6816c0dd8c3c94000105ba22</guid>
                    <category>
                        <![CDATA[ Break In GRC ]]>
                    </category>
                    <description>You were told there were millions of six-figure cybersecurity jobs waiting. The industry lied, and now you’re angry. Here’s the perspective of a hiring manager and trainer.</description>
                    <content:encoded>
                        <![CDATA[ <p>Every few weeks, I come across another frustrated post from someone trying to "break into cybersecurity". The story is familiar: they’ve done the bootcamp, passed a certification, maybe even applied to a dozen roles. They’re not getting traction. So they vent. About how unfair the system is. About how companies are greedy. About how the industry talks a big game about the talent shortage but refuses to give newcomers a chance.</p><p>I understand where the frustration comes from. But the reality is more complicated than they realize.</p><p>Because the truth is this: <strong>we can’t scale training the way they think we can</strong>. Not because we don’t want to. Not because we’re gatekeeping. But because this work is deep, complex, and requires time, trust, and repetition to learn properly. On-the-job security and GRC training isn’t something you can just do in front of an auditorium of people taking notes. No matter how motivated the person is, we can't just "toss them to the wolves" like that: they need to be relevant to our stakeholders.</p><hr><h2 id="i%E2%80%99m-a-trainer-and-i%E2%80%99m-at-capacity">I’m a Trainer. And I’m at Capacity.</h2><p>I’ve been training people in cybersecurity for years. Yet, I can count on my two hands the number of people that I took from "making coffee at Starbucks" to "autonomous security analyst". I care deeply about helping others grow into this field. I wouldn't be a part-time teacher if I didn't (most of the pay goes to taxes). And now, as a manager and coach, I have even more responsibility—not just for outcomes, but for developing others.</p><p>But here's the honest truth: <strong>I can only properly train 1 to 2 people at a time</strong>. That’s it.</p><p>Not because I lack the will. But because real coaching isn’t about quick tips or dropping a few links in a Slack channel. It’s about building judgment, not just knowledge. 
It’s about having regular feedback loops, guiding someone through complex ambiguity, and letting them fail safely so they can learn from it. You can’t rush that. And you definitely can’t scale it beyond a handful of people at once, especially when you're still delivering on business objectives.</p><p>I say this because people often assume that trainers like me are holding back. That we could be doing more. That if we really wanted to help, we’d open the floodgates.</p><p>But what they don’t see is how much time and effort goes into mentoring even one person well. I feel I’m doing my part. And I know many others who are too.</p><hr><h2 id="grc-isn%E2%80%99t-a-soft-landing-spot">GRC Isn’t a Soft Landing Spot</h2><p>One of the reasons expectations get skewed is the way GRC is presented to newcomers. It’s often sold as the “easy” on-ramp into cybersecurity: “Don’t like the command line? Not into pentesting? No problem, try GRC: no coding necessary!”</p><p>This does real damage.</p><p>Because GRC isn’t beginner work. It’s advisory. It's "business-oriented". It requires a strong grasp of technical systems and the ability to interface with legal, engineering, procurement, and executives, often in the same meeting. It’s not enough to know a framework or memorize controls. You have to interpret risk in context, communicate clearly under pressure, and manage tradeoffs that don’t have a right answer.</p><p>I see a lot of people show up with enthusiasm but little understanding of the depth involved. They believe that because they’re willing to learn, someone should be ready to train them. But GRC doesn’t work that way. It takes time. It takes repetition. It takes coaching.</p><p>And again: coaching doesn’t scale.</p><hr><h2 id="we%E2%80%99re-not-hoarding-knowledge">We’re Not Hoarding Knowledge</h2><p>I think there’s a quiet resentment building in some corners of cybersecurity toward people already in the field. As if we owe knowledge to anyone who asks for it.
As if we should just “give back” more.</p><p>But here’s the thing: <strong>you can’t transfer mastery in a few coffee chats</strong>.</p><p>You don’t learn how to think like a GRC advisor from a thread or a training video. You learn it by drawing diagrams, failing reviews, getting grilled in engineering meetings, and making judgment calls (and living with the results).</p><p>That’s why GRC isn’t a “just teach me” field. It’s a “teach me how to think” field. And that kind of mentorship is precious, slow, and intimate.</p><p>So no, we’re not hoarding knowledge to keep the supply of security specialists low in order to maintain high wages. The bar is just much, much higher than it looks from the outside.</p><div class="kg-card kg-header-card kg-v2 kg-width-regular kg-style-accent" data-background-color="accent">
            
            <div class="kg-header-card-content">
                
                <div class="kg-header-card-text kg-align-center">
                    <h2 id="the-good-news-about-cybersecurity-jobs-is-that-anyone-can-get-in-without-credentials-or-apprenticeships-the-bad-news-is-anyone-can-get-in-without-credentials-or-apprenticeships" class="kg-header-card-heading" style="color: #FFFFFF;" data-text-color="#FFFFFF"><span style="white-space: pre-wrap;">The good news about cybersecurity jobs is that anyone can get in without credentials or apprenticeships. The bad news is, anyone can get in without credentials or apprenticeships.</span></h2>
                    
                    
                </div>
            </div>
        </div><hr><h2 id="the-millions-of-open-jobs-are-a-myth">The Millions of Open Jobs Are a Myth</h2><p>Let’s address the other elephant in the room: the idea that there are “millions” of cybersecurity jobs and that all you need is a good attitude to land one.</p><p>There are jobs, yes, but the entry-level ones are few and the ones that include structured training are even rarer. Pete Strouse, a cybersecurity talent advisor, regularly shares stories on LinkedIn of communities and bootcamps 20,000 strong fighting for 500 open jobs. <strong>Here's an uncomfortable truth: are you in the top 2.5%?</strong></p><p>Most security programs are underwater, just trying to keep up with audits, assessments, vendor reviews, architecture changes, and incident response. They don’t have the time to build apprenticeships from scratch. Not because they’re evil. Because they’re barely staffed to meet current demands.</p><p>This is especially true in GRC, where training someone requires a deep understanding of how technology and business interact, and building relationships where compliance adds <em>value</em>. That kind of growth can’t be offloaded or outsourced. Picture this: if I tell a trainee to make sure a new product complies with the frameworks they've studied, how can this person do their work without just handing the piece of paper to the lead engineer to fill out for them? I've seen <em>seniors </em>do this... </p><hr><h2 id="what-i-tell-people-who-still-want-in">What I Tell People Who Still Want In</h2><p>If you’re serious about building a career in GRC, you need to understand the terrain.</p><p>Here’s what I usually tell people who come to me for advice:</p><ul><li><strong>Start with understanding modern systems</strong>: IAM, cloud, SaaS, vendor ecosystems. GRC sits on top of real tech. Get hands-on.</li><li><strong>Build mental models</strong>: Learn to diagram how systems work and where risks emerge.
If you can’t draw it, you don’t know it.</li><li><strong>Learn to translate</strong>: GRC is about turning tech talk into risk narratives and turning policies into real controls and somehow wrapping this up in dollars.</li><li><strong>Stop expecting it to be easy</strong>: It’s not. This is a tough field. And that’s okay. Hard things are worth doing.</li></ul><p>Now layer on top of that the "people skills": exceptional communication, high emotional intelligence, and strong negotiation skills. The bad news here is that these are extremely hard to train. Be honest with yourself: is that what you really want? Selling people on doing things that add no business value and that often degrade user experience? Security is not cool, remember. Your job is to be the buzzkill when HR and marketing want to bring in AI bots to make phone calls and when engineering uses Cursor with a personal license.</p><hr><h2 id="the-path-is-long-but-it%E2%80%99s-real">The Path Is Long, But It’s Real</h2><p>To the people venting online, I say this with compassion:</p><p>You’re not wrong that it’s hard to break in. You’re not wrong that the market is tough. But don’t confuse that with a moral failure on the part of the people already here.</p><p>The failure is on the media and certification bodies selling you a pipe dream: that there's a whole industry waiting for you with open arms, ready to shower you with money. </p><p>Many of us are doing our best. Building teams, coaching juniors, hiring fairly. But we also know the limits of what real development takes.</p><p>Building a career in cybersecurity isn’t about shortcuts or quick wins. The industry doesn’t owe you a job or an easy path. What it takes is deep, sustained effort, and the willingness to learn from failure.</p><p>So before blaming the system, recognize that real growth doesn’t happen overnight. It’s about developing real skills, judgment, and resilience, and those things take time.</p> ]]>
                    </content:encoded>
                    <enclosure url="" length="0"
                        type="audio/mpeg" />
                    <itunes:subtitle>You were told there were millions of six-figure cybersecurity jobs waiting. The industry lied, and now you’re angry. Here’s the perspective of a hiring manager and trainer.</itunes:subtitle>
                    <itunes:summary>
                        <![CDATA[ <p>Every few weeks, I come across another frustrated post from someone trying to "break into cybersecurity". The story is familiar: they’ve done the bootcamp, passed a certification, maybe even applied to a dozen roles. They’re not getting traction. So they vent. About how unfair the system is. About how companies are greedy. About how the industry talks a big game about the talent shortage but refuses to give newcomers a chance.</p><p>I understand where the frustration comes from. But the reality is more complicated than they realize.</p><p>Because the truth is this: <strong>we can’t scale training the way they think we can</strong>. Not because we don’t want to. Not because we’re gatekeeping. But because this work is deep, complex, and requires time, trust, and repetition to learn properly. On-the-job security and GRC training isn’t something you can just do in front of an auditorium of people taking notes. No matter how motivated the person is, we can't just "toss them to the wolves" like that: they need to be relevant to our stakeholders.</p><hr><h2 id="i%E2%80%99m-a-trainer-and-i%E2%80%99m-at-capacity">I’m a Trainer. And I’m at Capacity.</h2><p>I’ve been training people in cybersecurity for years. Yet, I can count on my two hands the number of people that I took from "making coffee at Starbucks" to "autonomous security analyst". I care deeply about helping others grow into this field. I wouldn't be a part-time teacher if I didn't (most of the pay goes to taxes). And now, as a manager and coach, I have even more responsibility—not just for outcomes, but for developing others.</p><p>But here's the honest truth: <strong>I can only properly train 1 to 2 people at a time</strong>. That’s it.</p><p>Not because I lack the will. But because real coaching isn’t about quick tips or dropping a few links in a Slack channel. It’s about building judgment, not just knowledge. 
It’s about having regular feedback loops, guiding someone through complex ambiguity, and letting them fail safely so they can learn from it. You can’t rush that. And you definitely can’t scale it beyond a handful of people at once, especially when you're still delivering on business objectives.</p><p>I say this because people often assume that trainers like me are holding back. That we could be doing more. That if we really wanted to help, we’d open the floodgates.</p><p>But what they don’t see is how much time and effort goes into mentoring even one person well. I feel I’m doing my part. And I know many others who are too.</p><hr><h2 id="grc-isn%E2%80%99t-a-soft-landing-spot">GRC Isn’t a Soft Landing Spot</h2><p>One of the reasons expectations get skewed is the way GRC is presented to newcomers. It’s often sold as the “easy” on-ramp into cybersecurity: “Don’t like the command line? Not into pentesting? No problem, try GRC: no coding necessary!”</p><p>This does real damage.</p><p>Because GRC isn’t beginner work. It’s advisory. It's "business-oriented". It requires a strong grasp of technical systems and the ability to interface with legal, engineering, procurement, and executives, often in the same meeting. It’s not enough to know a framework or memorize controls. You have to interpret risk in context, communicate clearly under pressure, and manage tradeoffs that don’t have a right answer.</p><p>I see a lot of people show up with enthusiasm but little understanding of the depth involved. They believe that because they’re willing to learn, someone should be ready to train them. But GRC doesn’t work that way. It takes time. It takes repetition. It takes coaching.</p><p>And again: coaching doesn’t scale.</p><hr><h2 id="we%E2%80%99re-not-hoarding-knowledge">We’re Not Hoarding Knowledge</h2><p>I think there’s a quiet resentment building in some corners of cybersecurity toward people already in the field. As if we owe knowledge to anyone who asks for it.
As if we should just “give back” more.</p><p>But here’s the thing: <strong>you can’t transfer mastery in a few coffee chats</strong>.</p><p>You don’t learn how to think like a GRC advisor from a thread or a training video. You learn it by drawing diagrams, failing reviews, getting grilled in engineering meetings, and making judgment calls (and living with the results).</p><p>That’s why GRC isn’t a “just teach me” field. It’s a “teach me how to think” field. And that kind of mentorship is precious, slow, and intimate.</p><p>So no, we’re not hoarding knowledge to keep the supply of security specialists low in order to maintain high wages. The bar is just much, much higher than it looks from the outside.</p><div class="kg-card kg-header-card kg-v2 kg-width-regular kg-style-accent" data-background-color="accent">
            
            <div class="kg-header-card-content">
                
                <div class="kg-header-card-text kg-align-center">
                    <h2 id="the-good-news-about-cybersecurity-jobs-is-that-anyone-can-get-in-without-credentials-or-apprenticeships-the-bad-news-is-anyone-can-get-in-without-credentials-or-apprenticeships" class="kg-header-card-heading" style="color: #FFFFFF;" data-text-color="#FFFFFF"><span style="white-space: pre-wrap;">The good news about cybersecurity jobs is that anyone can get in without credentials or apprenticeships. The bad news is, anyone can get in without credentials or apprenticeships.</span></h2>
                    
                    
                </div>
            </div>
        </div><hr><h2 id="the-millions-of-open-jobs-are-a-myth">The Millions of Open Jobs Are a Myth</h2><p>Let’s address the other elephant in the room: the idea that there are “millions” of cybersecurity jobs and that all you need is a good attitude to land one.</p><p>There are jobs, yes, but the entry-level ones are few and the ones that include structured training are even rarer. Pete Strouse, a cybersecurity talent advisor, regularly shares stories on LinkedIn of communities and bootcamps 20,000 strong fighting for 500 open jobs. <strong>Here's an uncomfortable truth: are you in the top 2.5%?</strong></p><p>Most security programs are underwater, just trying to keep up with audits, assessments, vendor reviews, architecture changes, and incident response. They don’t have the time to build apprenticeships from scratch. Not because they’re evil. Because they’re barely staffed to meet current demands.</p><p>This is especially true in GRC, where training someone requires a deep understanding of how technology and business interact, and building relationships where compliance adds <em>value</em>. That kind of growth can’t be offloaded or outsourced. Picture this: if I tell a trainee to make sure a new product complies with the frameworks they've studied, how can this person do their work without just handing the piece of paper to the lead engineer to fill out for them? I've seen <em>seniors </em>do this... </p><hr><h2 id="what-i-tell-people-who-still-want-in">What I Tell People Who Still Want In</h2><p>If you’re serious about building a career in GRC, you need to understand the terrain.</p><p>Here’s what I usually tell people who come to me for advice:</p><ul><li><strong>Start with understanding modern systems</strong>: IAM, cloud, SaaS, vendor ecosystems. GRC sits on top of real tech. Get hands-on.</li><li><strong>Build mental models</strong>: Learn to diagram how systems work and where risks emerge.
If you can’t draw it, you don’t know it.</li><li><strong>Learn to translate</strong>: GRC is about turning tech talk into risk narratives and turning policies into real controls and somehow wrapping this up in dollars.</li><li><strong>Stop expecting it to be easy</strong>: It’s not. This is a tough field. And that’s okay. Hard things are worth doing.</li></ul><p>Now layer on top of that the "people skills": exceptional communication, high emotional intelligence, and strong negotiation skills. The bad news here is that these are extremely hard to train. Be honest with yourself: is that what you really want? Selling people on doing things that add no business value and that often degrade user experience? Security is not cool, remember. Your job is to be the buzzkill when HR and marketing want to bring in AI bots to make phone calls and when engineering uses Cursor with a personal license.</p><hr><h2 id="the-path-is-long-but-it%E2%80%99s-real">The Path Is Long, But It’s Real</h2><p>To the people venting online, I say this with compassion:</p><p>You’re not wrong that it’s hard to break in. You’re not wrong that the market is tough. But don’t confuse that with a moral failure on the part of the people already here.</p><p>The failure is on the media and certification bodies selling you a pipe dream: that there's a whole industry waiting for you with open arms, ready to shower you with money. </p><p>Many of us are doing our best. Building teams, coaching juniors, hiring fairly. But we also know the limits of what real development takes.</p><p>Building a career in cybersecurity isn’t about shortcuts or quick wins. The industry doesn’t owe you a job or an easy path. What it takes is deep, sustained effort, and the willingness to learn from failure.</p><p>So before blaming the system, recognize that real growth doesn’t happen overnight. It’s about developing real skills, judgment, and resilience, and those things take time.</p> ]]>
                    </itunes:summary>
                </item>
                <item>
                    <title>GRC is Technical</title>
                    <link>https://thisisgrc.com/grc-is-technical/</link>
                    <pubDate>Wed, 14 May 2025 08:08:32 -0400
                    </pubDate>
                    <guid isPermaLink="false">6814339e642a980001f8a2e1</guid>
                    <category>
                        <![CDATA[ Break In GRC ]]>
                    </category>
                    <description></description>
                    <content:encoded>
                        <![CDATA[ <p>I’m about to say something that might ruffle some feathers. But here it is: <strong>GRC is technical </strong>and it’s high time we stopped pretending it’s not.</p><p>This may sound harsh to some. I’ve heard the whispers: “Oh, GRC is for people who aren’t technical. It’s for those who don’t want to mess with the complex stuff like coding or systems engineering.” 🙄</p><p>But here’s the reality: If that’s how you see GRC, you’re missing the point. I'd go as far as to say you’re actively hindering your own growth.</p><p>I’m not here to sugarcoat things. <strong>GRC requires depth</strong> and <strong>it requires technical fluency</strong>. Let's see how.</p><p>Yes, your relationships and human skills are fundamental. People skills are what make you successful in the long run. But the technical skills get you there faster and protect you from hitting a glass ceiling. </p><hr><h3 id="the-myth-of-%E2%80%9Cnon-technical%E2%80%9D-grc-%F0%9F%9A%AB%F0%9F%92%A1">The Myth of “Non-Technical” GRC 🚫💡</h3><p>For years, there’s been this persistent myth that <strong>GRC is a non-technical career</strong>. The idea that it’s the “easier” path, one that lets you bypass the heavy lifting required in deep technical areas like coding, system architecture, or data protection. There are way too many influencers selling their bootcamps carrying this narrative, and it has to stop.</p><p><strong>Risk management in the cloud</strong> is a perfect case in point. A GRC professional who doesn’t understand the intricacies of cloud-native services like <strong>serverless computing</strong>, <strong>IAM configurations</strong>, or <strong>shared responsibility models </strong>won’t be able to accurately assess risks, let alone mitigate them. You can't just drop a risk on an engineer's plate and ask them to do all of the heavy lifting. You must abstract the complexity and use their language and mental models.
That's what I have in mind when thinking about "people skills": being relevant to individuals. And you can't be relevant without a certain amount of knowledge yourself.</p><p>If you’re sitting in a meeting discussing risk mitigation and someone casually mentions an AWS service, do you know how to ask about the implications of <strong>IAM policy misconfigurations</strong>? Or how a specific <strong>container security vulnerability</strong> might impact your infrastructure?</p><p>You can’t just rely on frameworks. <strong>You need to understand the systems</strong>. Otherwise, you are bringing complexity, not clarity. The GRC discipline is complex; we cannot afford to just relay messages and hope people understand our language. What use are we if that's all we do?</p><hr><h3 id="frameworks-aren%E2%80%99t-enough-you-need-to-understand-the-tech-%E2%9A%A0%EF%B8%8F%F0%9F%93%9A">Frameworks Aren’t Enough: You Need to Understand the Tech ⚠️📚</h3><p>Here’s a harsh reality I want you to sit with: <strong>Frameworks are only half the story.</strong> They provide a structure, but that’s about it. If you’re just parroting control after control from ISO 27001, NIST, or SOC 2 without knowing the technical components driving those controls, you’re not doing GRC justice.</p><p>As an interviewer, I'll never care whether people can recite ISO control 8.34 or the NIST RMF steps. I'll care much more about problem-solving, making the secure thing matter over the compliant thing, and marrying the two.</p><p>For instance, consider vulnerability management in a DevOps environment. It’s one thing to say you need to track vulnerabilities. It’s another thing entirely to understand the CI/CD pipeline, how static analysis tools integrate, or the shift-left strategy where vulnerabilities are caught during development, before they ever reach production. Frameworks don’t protect against the vulnerability introduced by failing to scan container images for outdated dependencies.
Only technical understanding can.</p><hr><h3 id="you-can%E2%80%99t-protect-what-you-don%E2%80%99t-understand-%F0%9F%92%94%F0%9F%94%90">You Can’t Protect What You Don’t Understand 💔🔐</h3><p>This principle is fundamental. <strong>You can’t protect what you don’t understand.</strong></p><p>I’ll tell you from experience: <strong>this is a constant learning process</strong>. You’re always going to feel like you’re out of your depth at some point. Imagine: nobody knew about most generative AI models before ChatGPT went mainstream. MCP servers were invented weeks ago. We are building the plane as we fly it. Can you catch up fast enough?</p><p>When I first started in GRC, I thought I could rely solely on framework knowledge. But soon, I was in meetings with engineers discussing API security and zero-trust models, and I realized that if I didn’t understand the tech, I was doing everyone a disservice.</p><p>Real GRC isn’t about just following rules or ticking off boxes. It’s about <strong>understanding how risk manifests in the context of your organization’s technology stack</strong>.</p><hr><h3 id="the-parrot-vs-architect-analogy-%F0%9F%A6%9C%F0%9F%8F%97%EF%B8%8F">The Parrot vs. Architect Analogy 🦜🏗️</h3><p>I have to drive this point home because it’s crucial: <strong>If you’re just parroting the language of frameworks without understanding the underlying tech, you’re missing the point</strong>.</p><p>A parrot repeats what it hears; it doesn’t understand the <strong>why</strong>. But an architect builds with intention and expertise, making <strong>strategic decisions</strong> based on a deep understanding of the environment. Real GRC professionals are <strong>architects</strong>. They aren’t simply filling out checkboxes; they’re actively constructing, advising, and ensuring that the security controls are effective in real-world systems.</p><p>I’ve been there. I’ve walked into a room where the engineers were discussing concepts I didn’t fully understand.
Years in and I still don't fully grasp Kubernetes. (Yes, engineers often make things much more complicated than they should, but that's another rant). In the end, I chose to lean into those uncomfortable moments. I asked questions, even if I felt I looked stupid. Don't make this a calculated thing; just be curious, be interested. The rest will follow.</p><hr><h3 id="conclusion-real-grc-means-understanding-the-tech-stack-%F0%9F%91%A8%E2%80%8D%F0%9F%92%BB%F0%9F%94%8D">Conclusion: Real GRC Means Understanding the Tech Stack 👨‍💻🔍</h3><p>GRC isn’t just a role for process geeks or people who can recite standards by heart. <strong>It’s a technical discipline that requires active engagement with technology</strong>. </p><p>The good news is this: <strong>you don’t need to become a full-fledged engineer</strong>, but you do need to become fluent in the language of the systems that keep your organization running. </p><p>So, let’s stop pretending. GRC is technical, and it requires us to engage with the <strong>deep, messy, complex</strong> world of modern technology. If we want to be effective, we need to stop playing it safe and dive deep into the very systems we’re trying to secure. Only then can we truly <strong>protect what matters</strong>.</p><hr><h3 id="final-thought-%F0%9F%92%AD">Final Thought 💭</h3><p>If this resonates with you, I want you to know: <strong>you are not alone</strong>. Every GRC professional has felt that discomfort when faced with technical challenges. But <strong>embrace it</strong>. That’s where you learn, where you grow, and where you start making a true impact.</p><p>GRC is hard work. It requires vulnerability. But the depth of expertise you gain in the process will make all the difference when the stakes are high.</p><p>We’re in this together. 🌍💪</p> ]]>
                    </content:encoded>
                    <enclosure url="" length="0"
                        type="audio/mpeg" />
                    <itunes:subtitle></itunes:subtitle>
                    <itunes:summary>
                        <![CDATA[ <p>I’m about to say something that might ruffle some feathers. But here it is: <strong>GRC is technical </strong>and it’s high time we stopped pretending it’s not.</p><p>This may sound harsh to some. I’ve heard the whispers: “Oh, GRC is for people who aren’t technical. It’s for those who don’t want to mess with the complex stuff like coding or systems engineering.” 🙄</p><p>But here’s the reality: If that’s how you see GRC, you’re missing the point. I'd go as far as to say you’re actively hindering your own growth.</p><p>I’m not here to sugarcoat things. <strong>GRC requires depth</strong> and <strong>it requires technical fluency</strong>. Let's see how.</p><p>Yes, your relationships and human skills are fundamental. People skills are what make you successful in the long run. But the technical skills get you there faster and protect you from hitting a glass ceiling. </p><hr><h3 id="the-myth-of-%E2%80%9Cnon-technical%E2%80%9D-grc-%F0%9F%9A%AB%F0%9F%92%A1">The Myth of “Non-Technical” GRC 🚫💡</h3><p>For years, there’s been this persistent myth that <strong>GRC is a non-technical career</strong>. The idea that it’s the “easier” path, one that lets you bypass the heavy lifting required in deep technical areas like coding, system architecture, or data protection. There are way too many influencers selling their bootcamps carrying this narrative, and it has to stop.</p><p><strong>Risk management in the cloud</strong> is a perfect case in point. A GRC professional who doesn’t understand the intricacies of cloud-native services like <strong>serverless computing</strong>, <strong>IAM configurations</strong>, or <strong>shared responsibility models </strong>won’t be able to accurately assess risks, let alone mitigate them. You can't just drop a risk on an engineer's plate and ask them to do all of the heavy lifting. You must abstract the complexity and use their language and mental models.
That's what I have in mind when thinking about "people skills": being relevant to individuals. And you can't be relevant without a certain amount of knowledge yourself.</p><p>If you’re sitting in a meeting discussing risk mitigation and someone casually mentions an AWS service, do you know how to ask about the implications of <strong>IAM policy misconfigurations</strong>? Or how a specific <strong>container security vulnerability</strong> might impact your infrastructure?</p><p>You can’t just rely on frameworks. <strong>You need to understand the systems</strong>. Otherwise, you are bringing complexity, not clarity. The GRC discipline is complex; we cannot afford to just relay messages and hope people understand our language. What use are we if that's all we do?</p><hr><h3 id="frameworks-aren%E2%80%99t-enough-you-need-to-understand-the-tech-%E2%9A%A0%EF%B8%8F%F0%9F%93%9A">Frameworks Aren’t Enough: You Need to Understand the Tech ⚠️📚</h3><p>Here’s a harsh reality I want you to sit with: <strong>Frameworks are only half the story.</strong> They provide a structure, but that’s about it. If you’re just parroting control after control from ISO 27001, NIST, or SOC 2 without knowing the technical components driving those controls, you’re not doing GRC justice.</p><p>As an interviewer, I'll never care about whether people can recite ISO 27001 control 8.34 or the NIST RMF steps. I'll care much more about problem-solving, making the secure thing matter over the compliant thing, and marrying the two.</p><p>For instance, consider vulnerability management in a DevOps environment. It’s one thing to say you need to track vulnerabilities. It’s another thing entirely to understand the CI/CD pipeline, how static analysis tools integrate, or the shift-left strategy where vulnerabilities are caught during development, before they ever reach production. Frameworks don’t protect against the vulnerability introduced by failing to scan container images for outdated dependencies. 
Only technical understanding can.</p><hr><h3 id="you-can%E2%80%99t-protect-what-you-don%E2%80%99t-understand-%F0%9F%92%94%F0%9F%94%90">You Can’t Protect What You Don’t Understand 💔🔐</h3><p>This principle is fundamental. <strong>You can’t protect what you don’t understand.</strong></p><p>I’ll tell you from experience: <strong>this is a constant learning process</strong>. You’re always going to feel like you’re out of your depth at some point. Imagine: nobody knew about most generative AI models before ChatGPT went mainstream. MCP servers were invented weeks ago. We are building the plane as we fly it. Can you catch up fast enough?</p><p>When I first started in GRC, I thought I could rely solely on framework knowledge. But soon, I was in meetings with engineers discussing API security and zero-trust models, and I realized that if I didn’t understand the tech, I was doing everyone a disservice.</p><p>Real GRC isn’t about just following rules or ticking off boxes. It’s about <strong>understanding how risk manifests in the context of your organization’s technology stack</strong>.</p><hr><h3 id="the-parrot-vs-architect-analogy-%F0%9F%A6%9C%F0%9F%8F%97%EF%B8%8F">The Parrot vs. Architect Analogy 🦜🏗️</h3><p>I have to drive this point home because it’s crucial: <strong>If you’re just parroting the language of frameworks without understanding the underlying tech, you’re missing the point</strong>.</p><p>A parrot repeats what it hears, it doesn’t understand the <strong>why</strong>. But an architect builds with intention and expertise, making <strong>strategic decisions</strong> based on a deep understanding of the environment. Real GRC professionals are <strong>architects</strong>. They aren’t simply filling out checkboxes, they’re actively constructing, advising, and ensuring that the security controls are effective in real-world systems.</p><p>I’ve been there. I’ve walked into a room where the engineers were discussing concepts I didn’t fully understand. 
Years in and I still don't fully grasp Kubernetes. (Yes, engineers often make things much more complicated than they should, but that's another rant). In the end, I chose to lean into those uncomfortable moments. I asked questions, even when I felt I looked stupid. Don't make it a calculated move; just be curious, be interested. The rest will follow.</p><hr><h3 id="conclusion-real-grc-means-understanding-the-tech-stack-%F0%9F%91%A8%E2%80%8D%F0%9F%92%BB%F0%9F%94%8D">Conclusion: Real GRC Means Understanding the Tech Stack 👨‍💻🔍</h3><p>GRC isn’t just a role for process geeks or people who can recite standards by heart. <strong>It’s a technical discipline that requires active engagement with technology</strong>. </p><p>The good news is this: <strong>you don’t need to become a full-fledged engineer</strong>, but you do need to become fluent in the language of the systems that keep your organization running. </p><p>So, let’s stop pretending. GRC is technical, and it requires us to engage with the <strong>deep, messy, complex</strong> world of modern technology. If we want to be effective, we need to stop playing it safe and dive deep into the very systems we’re trying to secure. Only then can we truly <strong>protect what matters</strong>.</p><hr><h3 id="final-thought-%F0%9F%92%AD">Final Thought 💭</h3><p>If this resonates with you, I want you to know: <strong>you are not alone</strong>. Every GRC professional has felt that discomfort when faced with technical challenges. But <strong>embrace it</strong>. That’s where you learn, where you grow, and where you start making a true impact.</p><p>GRC is hard work. It requires vulnerability. But the depth of expertise you gain in the process will make all the difference when the stakes are high.</p><p>We’re in this together. 🌍💪</p> ]]>
                    </itunes:summary>
                </item>
                <item>
                    <title>When I Stopped Needing to Be Right</title>
                    <link>https://thisisgrc.com/when-i-stopped-needing-to-be-right/</link>
                    <pubDate>Thu, 01 May 2025 08:08:48 -0400
                    </pubDate>
                    <guid isPermaLink="false">6812db2f89e30b0001826f8d</guid>
                    <category>
                        <![CDATA[ Leadership ]]>
                    </category>
                    <description>GRC isn’t just about frameworks and risk matrices: it’s fundamentally about people. And people are anything but simple.</description>
                    <content:encoded>
                        <![CDATA[ <p>When I first got into GRC, I thought my job was to be the expert. You know, the one with the right answers. The one who had the policy ready. The one who flagged the risk before it exploded.</p><p>I equated being right with being effective. I believed that if I could clearly outline the risk, supported by the right policy, compliance would naturally follow. I thought my role was to win arguments. But the truth is, winning doesn’t build trust. And trust is where real, sustainable change is born.</p><p>This is why success in security is a question of human skills. Not the vague “soft skills” mentioned at conferences, but the deeply personal work: managing your emotions, your ego, and your expectations; negotiating effectively; and leading the way with practical solutions, not buzzwords, spreadsheets, or heat maps.</p><hr><h3 id="you-will-not-always-be-heard">You Will Not Always Be Heard</h3><p>I once believed that passion was synonymous with raising my voice, standing my ground, and making the risk <em>impossible</em> to ignore.</p><p>Here's what's <em>actually</em> impossible to ignore: speed, urgency, and hype. If you’re emotionally attached to your recommendations being executed exactly as you’ve outlined, you’re setting yourself up for burnout.</p><p>I’ve witnessed the destructive power of anger within a team. One angry GRC professional can drain a room of any willingness to collaborate. People avoid you. They stop sharing ideas early in the process. They stop reaching out for help.</p><p>A long time ago, I saw a colleague throw fits of rage over their considerations being brushed off by engineers. But here's the kicker: a few years later, all of the compliance considerations were handled. Sometimes, all you need is patience. The irony? When you let go of anger, people start listening.</p><p>You need to learn to play the long game. Sometimes that means staying silent, even when you’re right. 
Waiting for the right moment to reintroduce a risk. Returning not with “I told you so,” but with well-considered solutions.</p><p>It’s humbling, even frustrating at times. You might not look like the hero in the moment, but it’s how you build credibility that lasts.</p><p>I’ve sent snarky emails. I’ve escalated. I’ve dropped the “compliance requires this” hammer more times than I care to admit. It never worked long term.</p><p>Here’s the thing no one tells you about GRC: If your ego is wrapped up in being right, this job will break you.</p><p>We’re not here to win. We’re here to build.</p><hr><h2 id="the-true-battle-lies-within">The True Battle Lies Within</h2><p>The most difficult lesson I’ve learned in GRC is that emotional detachment isn’t a sign of coldness, it’s a necessary survival mechanism. It’s the ability to show up, day after day, even when you feel overlooked or dismissed. Not to preach or reprimand, but to genuinely <em>help</em>.</p><figure class="kg-card kg-image-card kg-card-hascaption"><img src="https://images.unsplash.com/photo-1511674968077-376b4dea87d5?crop=entropy&amp;cs=tinysrgb&amp;fit=max&amp;fm=jpg&amp;ixid=M3wxMTc3M3wwfDF8c2VhcmNofDEwfHxpbmZpbml0eXxlbnwwfHx8fDE3NDYwNjc1NDR8MA&amp;ixlib=rb-4.0.3&amp;q=80&amp;w=2000" class="kg-image" alt="time lapse photography of street during nighttime" loading="lazy" width="3992" height="2242" srcset="https://images.unsplash.com/photo-1511674968077-376b4dea87d5?crop=entropy&amp;cs=tinysrgb&amp;fit=max&amp;fm=jpg&amp;ixid=M3wxMTc3M3wwfDF8c2VhcmNofDEwfHxpbmZpbml0eXxlbnwwfHx8fDE3NDYwNjc1NDR8MA&amp;ixlib=rb-4.0.3&amp;q=80&amp;w=600 600w, https://images.unsplash.com/photo-1511674968077-376b4dea87d5?crop=entropy&amp;cs=tinysrgb&amp;fit=max&amp;fm=jpg&amp;ixid=M3wxMTc3M3wwfDF8c2VhcmNofDEwfHxpbmZpbml0eXxlbnwwfHx8fDE3NDYwNjc1NDR8MA&amp;ixlib=rb-4.0.3&amp;q=80&amp;w=1000 1000w, 
https://images.unsplash.com/photo-1511674968077-376b4dea87d5?crop=entropy&amp;cs=tinysrgb&amp;fit=max&amp;fm=jpg&amp;ixid=M3wxMTc3M3wwfDF8c2VhcmNofDEwfHxpbmZpbml0eXxlbnwwfHx8fDE3NDYwNjc1NDR8MA&amp;ixlib=rb-4.0.3&amp;q=80&amp;w=1600 1600w, https://images.unsplash.com/photo-1511674968077-376b4dea87d5?crop=entropy&amp;cs=tinysrgb&amp;fit=max&amp;fm=jpg&amp;ixid=M3wxMTc3M3wwfDF8c2VhcmNofDEwfHxpbmZpbml0eXxlbnwwfHx8fDE3NDYwNjc1NDR8MA&amp;ixlib=rb-4.0.3&amp;q=80&amp;w=2400 2400w" sizes="(min-width: 720px) 720px"><figcaption><span style="white-space: pre-wrap;">Photo by </span><a href="https://unsplash.com/@sidverma?ref=thisisgrc.com"><span style="white-space: pre-wrap;">Sid Verma</span></a><span style="white-space: pre-wrap;"> / </span><a href="https://unsplash.com/?utm_source=ghost&utm_medium=referral&utm_campaign=api-credit"><span style="white-space: pre-wrap;">Unsplash</span></a></figcaption></figure><p>In security, there’s no endpoint. No final victory. So stop treating every discussion like it’s a battle to win. Let the waves of disagreement wash over you. Be the calm in a storm of shifting priorities and competing demands.</p><hr><h3 id="relationships-outlast-processes">Relationships Outlast Processes</h3><p>Some of my most significant wins in GRC didn’t come from meticulously crafted risk registers or perfectly executed policy updates. They came from informal hallway conversations, a quick DM, or those spontaneous “Hey, security person” exchanges where trust is forged in the smallest of moments.</p><p>These moments won’t show up on your OKRs, but they are often the reason you’re invited into the room <em>before</em> a contract is signed, not after.</p><p>You don’t earn these moments by being efficient. You earn them by being human.</p><hr><h3 id="this-is-the-work">This Is the Work</h3><p>Being relevant in GRC goes far beyond frameworks or tools. It’s about emotional intelligence. It’s about knowing when to push and when to pause. 
When to explain and when to simply be present.</p><p>Security is about trust. And trust isn’t built through policies or risk matrices. It’s built through people.</p><p>If you’re in GRC and this resonates with you, know that you’re not alone. This is the true work, the invisible, human-centered part that separates checkbox fillers from trusted advisors.</p><p>Let’s work together to build security cultures that people genuinely want to be part of.</p><p>Let’s do the human work.</p><hr><p><em>This post is adapted from a collection of my most personal reflections on emotional intelligence in GRC. If this resonates with you, I encourage you to share your own journey or challenges in the comments.</em></p><p><em>Because someone in GRC needs to hear it from you.</em></p> ]]>
                    </content:encoded>
                    <enclosure url="" length="0"
                        type="audio/mpeg" />
                    <itunes:subtitle>GRC isn’t just about frameworks and risk matrices: it’s fundamentally about people. And people are anything but simple.</itunes:subtitle>
                    <itunes:summary>
                        <![CDATA[ <p>When I first got into GRC, I thought my job was to be the expert. You know, the one with the right answers. The one who had the policy ready. The one who flagged the risk before it exploded.</p><p>I equated being right with being effective. I believed that if I could clearly outline the risk, supported by the right policy, compliance would naturally follow. I thought my role was to win arguments. But the truth is, winning doesn’t build trust. And trust is where real, sustainable change is born.</p><p>This is why success in security is a question of human skills. Not the vague “soft skills” mentioned at conferences, but the deeply personal work: managing your emotions, your ego, and your expectations; negotiating effectively; and leading the way with practical solutions, not buzzwords, spreadsheets, or heat maps.</p><hr><h3 id="you-will-not-always-be-heard">You Will Not Always Be Heard</h3><p>I once believed that passion was synonymous with raising my voice, standing my ground, and making the risk <em>impossible</em> to ignore.</p><p>Here's what's <em>actually</em> impossible to ignore: speed, urgency, and hype. If you’re emotionally attached to your recommendations being executed exactly as you’ve outlined, you’re setting yourself up for burnout.</p><p>I’ve witnessed the destructive power of anger within a team. One angry GRC professional can drain a room of any willingness to collaborate. People avoid you. They stop sharing ideas early in the process. They stop reaching out for help.</p><p>A long time ago, I saw a colleague throw fits of rage over their considerations being brushed off by engineers. But here's the kicker: a few years later, all of the compliance considerations were handled. Sometimes, all you need is patience. The irony? When you let go of anger, people start listening.</p><p>You need to learn to play the long game. Sometimes that means staying silent, even when you’re right. 
Waiting for the right moment to reintroduce a risk. Returning not with “I told you so,” but with well-considered solutions.</p><p>It’s humbling, even frustrating at times. You might not look like the hero in the moment, but it’s how you build credibility that lasts.</p><p>I’ve sent snarky emails. I’ve escalated. I’ve dropped the “compliance requires this” hammer more times than I care to admit. It never worked long term.</p><p>Here’s the thing no one tells you about GRC: If your ego is wrapped up in being right, this job will break you.</p><p>We’re not here to win. We’re here to build.</p><hr><h2 id="the-true-battle-lies-within">The True Battle Lies Within</h2><p>The most difficult lesson I’ve learned in GRC is that emotional detachment isn’t a sign of coldness, it’s a necessary survival mechanism. It’s the ability to show up, day after day, even when you feel overlooked or dismissed. Not to preach or reprimand, but to genuinely <em>help</em>.</p><figure class="kg-card kg-image-card kg-card-hascaption"><img src="https://images.unsplash.com/photo-1511674968077-376b4dea87d5?crop=entropy&amp;cs=tinysrgb&amp;fit=max&amp;fm=jpg&amp;ixid=M3wxMTc3M3wwfDF8c2VhcmNofDEwfHxpbmZpbml0eXxlbnwwfHx8fDE3NDYwNjc1NDR8MA&amp;ixlib=rb-4.0.3&amp;q=80&amp;w=2000" class="kg-image" alt="time lapse photography of street during nighttime" loading="lazy" width="3992" height="2242" srcset="https://images.unsplash.com/photo-1511674968077-376b4dea87d5?crop=entropy&amp;cs=tinysrgb&amp;fit=max&amp;fm=jpg&amp;ixid=M3wxMTc3M3wwfDF8c2VhcmNofDEwfHxpbmZpbml0eXxlbnwwfHx8fDE3NDYwNjc1NDR8MA&amp;ixlib=rb-4.0.3&amp;q=80&amp;w=600 600w, https://images.unsplash.com/photo-1511674968077-376b4dea87d5?crop=entropy&amp;cs=tinysrgb&amp;fit=max&amp;fm=jpg&amp;ixid=M3wxMTc3M3wwfDF8c2VhcmNofDEwfHxpbmZpbml0eXxlbnwwfHx8fDE3NDYwNjc1NDR8MA&amp;ixlib=rb-4.0.3&amp;q=80&amp;w=1000 1000w, 
https://images.unsplash.com/photo-1511674968077-376b4dea87d5?crop=entropy&amp;cs=tinysrgb&amp;fit=max&amp;fm=jpg&amp;ixid=M3wxMTc3M3wwfDF8c2VhcmNofDEwfHxpbmZpbml0eXxlbnwwfHx8fDE3NDYwNjc1NDR8MA&amp;ixlib=rb-4.0.3&amp;q=80&amp;w=1600 1600w, https://images.unsplash.com/photo-1511674968077-376b4dea87d5?crop=entropy&amp;cs=tinysrgb&amp;fit=max&amp;fm=jpg&amp;ixid=M3wxMTc3M3wwfDF8c2VhcmNofDEwfHxpbmZpbml0eXxlbnwwfHx8fDE3NDYwNjc1NDR8MA&amp;ixlib=rb-4.0.3&amp;q=80&amp;w=2400 2400w" sizes="(min-width: 720px) 720px"><figcaption><span style="white-space: pre-wrap;">Photo by </span><a href="https://unsplash.com/@sidverma?ref=thisisgrc.com"><span style="white-space: pre-wrap;">Sid Verma</span></a><span style="white-space: pre-wrap;"> / </span><a href="https://unsplash.com/?utm_source=ghost&utm_medium=referral&utm_campaign=api-credit"><span style="white-space: pre-wrap;">Unsplash</span></a></figcaption></figure><p>In security, there’s no endpoint. No final victory. So stop treating every discussion like it’s a battle to win. Let the waves of disagreement wash over you. Be the calm in a storm of shifting priorities and competing demands.</p><hr><h3 id="relationships-outlast-processes">Relationships Outlast Processes</h3><p>Some of my most significant wins in GRC didn’t come from meticulously crafted risk registers or perfectly executed policy updates. They came from informal hallway conversations, a quick DM, or those spontaneous “Hey, security person” exchanges where trust is forged in the smallest of moments.</p><p>These moments won’t show up on your OKRs, but they are often the reason you’re invited into the room <em>before</em> a contract is signed, not after.</p><p>You don’t earn these moments by being efficient. You earn them by being human.</p><hr><h3 id="this-is-the-work">This Is the Work</h3><p>Being relevant in GRC goes far beyond frameworks or tools. It’s about emotional intelligence. It’s about knowing when to push and when to pause. 
When to explain and when to simply be present.</p><p>Security is about trust. And trust isn’t built through policies or risk matrices. It’s built through people.</p><p>If you’re in GRC and this resonates with you, know that you’re not alone. This is the true work, the invisible, human-centered part that separates checkbox fillers from trusted advisors.</p><p>Let’s work together to build security cultures that people genuinely want to be part of.</p><p>Let’s do the human work.</p><hr><p><em>This post is adapted from a collection of my most personal reflections on emotional intelligence in GRC. If this resonates with you, I encourage you to share your own journey or challenges in the comments.</em></p><p><em>Because someone in GRC needs to hear it from you.</em></p> ]]>
                    </itunes:summary>
                </item>
                <item>
                    <title>We can&#x27;t build the cybersecurity workforce on passion alone</title>
                    <link>https://thisisgrc.com/we-cant-build-the-cybersecurity-workforce-on-passion-alone/</link>
                    <pubDate>Wed, 10 Jul 2024 08:08:36 -0400
                    </pubDate>
                    <guid isPermaLink="false">66528d8fb9889f0001dabdd8</guid>
                    <category>
                        <![CDATA[ Break In GRC ]]>
                    </category>
                    <description>Envisioning the transition of cybersecurity from a passion and skill-driven activity to a casual business profession.</description>
                    <content:encoded>
                        <![CDATA[ <p>I recently attended NSec, a major offensive security event in Montreal. I enjoyed myself. The event had free beer, a good DJ, and a bunch of security nerds willing to share their latest tinkering feats. This is hacking at its core: folks who love to figure out how stuff works and use it in unintended ways. It's the type of conference that ignites passion. It's also bleeding money.</p><p>It was my first time attending, and I went with a networking mindset. There were about 10 booths, many of which were government or government-backed startups. In other words: institutions that are not profit-motivated. None of the usual security vendors attended. The message struck me: there's no business in passion.</p><p>I can't think of a better illustration of how I've grown to see the cybersecurity profession: <strong>passion is fun, but it's not the solution to protect our infrastructures from devious actors at scale</strong>.</p><p>Longtime readers know my story: I got into security thanks to the Mr. Robot TV show: hacking for revolution at its finest! Nowadays, I write compliance reports and risk assessments for executives. In the future, I see this transition as the norm.</p><p><strong>This is where I get provocative: I believe the future of security lies more in an accountant paradigm than in hacking. Slow and boring.</strong></p><hr><h2 id="pentest-is-overrated">Pentest is overrated</h2><p>A pentest job posting will garner more than double the applications of an incident responder role. And half of the candidates for the incident response role want to pivot into pentesting. </p><p>My TikTok videos about tractor hacking and deepfake sextortion were amongst my biggest hits. Hacking is cool! <strong>These feats rely on rare skill sets akin to magic for most people. Over the years, it's permeated pop culture and grown its own aesthetic. 
</strong>Hoodies, anyone?</p><p>However, hacking suffers from a major flaw: <strong>it finds issues way too late in a product's lifecycle</strong>. </p><p> To me, <strong>pentesting should be a branch of compliance</strong>. </p><p>Let me explain: software building should follow guidelines, which create auditable artifacts. These artifacts then get correlated. Compliance systems trigger alerts when guidelines are overstepped, leading to scrutiny. Pentests can become the ultimate weapon to catch application builders who trick compliance checks. </p><p>All companies, even small ones, must master their balance sheets and pay taxes. Accounting is a necessary hurdle. Wise accountants become business advisors. No decision gets taken without making sure that rules are followed. Pop culture also features these tasks as boring white-collar jobs. </p><p>I understand how accounting being tied to money gives it more gravitas than cybersecurity will ever have. However, when I'm looking at how accounting firms are gobbling up cybersecurity firms, I'm rooting for them.</p><p>Cybersecurity should not rely solely on the skills of a happy few. It should base itself on rules and, yes, <strong>red tape</strong>. I said it.</p><hr><h2 id="what-does-boring-security-mean-in-practice">What does boring security mean in practice?</h2><p>The cybersecurity of the future will look more like underwriting and accounting. Cyber insurance systems will connect to companies' infrastructures to assess risks against actuarial tables, built from the automated compliance data collection I've described above. </p><p>The systems will evaluate a given product's risks and, based on a company's tolerance level, decide on whether additional scrutiny or resources must be allocated.</p><p>I also envision a security rating output, not dissimilar to the <a href="https://en.wikipedia.org/wiki/Common_Criteria?ref=thisisgrc.com" rel="noreferrer">Common Criteria</a>, but dynamic. 
</p><p>Humans will determine business outcomes, report findings, and fine-tune the systems.</p><hr><h2 id="how-will-we-hook-people-to-cybersecurity-then">How will we hook people to cybersecurity, then?</h2><p>Keeping up with the accounting analogy, I believe young people should seek this area because it offers good, steady office jobs. It's awfully boring, but it works. </p><p>Computer scientists and software engineers would remain crucial to build the artifacts-gathering systems, but all analysis could be carried out by people who are not experts in the underlying computing systems. </p><p>Perhaps this is it: we wouldn't <em>need to hook them</em> because the entry does not require extreme skills.</p><p>Most security specialists, <em>myself included</em>, spend their weekends and evenings thinking about security matters. This is such a staple of the job that I warn my students that this discipline will trample them if they expect a simple 9-to-5 where they can think about anything else the rest of the time. That doesn't scale. We need to lower the barrier. </p><hr><h2 id="what-about-advances-in-technology">What about advances in technology?</h2><p>The rapidly evolving threats and technologies do weaken my comparison with accounting. Yes, tax codes change annually. But rarely do they force paradigm shifts similar to mobile devices, cloud computing, and large language models. </p><p> Advocating for a more standardized, rules-driven security means the rules must adapt. The aggregated compliance systems, the automated dynamic security rating, and the adaptive actuarial tables I'm dreaming of cannot be built out of thin air. Today, it's impossible to quantify the risks of LLM incidents. Social engineering, as a branch of cyber fraud, does not rely on flawed systems: it's a human problem. Maybe it shouldn't be cybersecurity's worry anymore?</p><p>Technology moves fast, and cyber threat actors will always adapt more quickly than complex enterprise systems. 
</p><p>Does this mean this vision of a boring security will collapse? Do such limitations make sacrificing passion for white-collarism worthwhile? Time will tell. </p><p>What do you think? Tell us in the comments, or reply to this email to debate with me! </p> ]]>
                    </content:encoded>
                    <enclosure url="" length="0"
                        type="audio/mpeg" />
                    <itunes:subtitle>Envisioning the transition of cybersecurity from a passion and skill-driven activity to a casual business profession.</itunes:subtitle>
                    <itunes:summary>
                        <![CDATA[ <p>I recently attended NSec, a major offensive security event in Montreal. I enjoyed myself. The event had free beer, a good DJ, and a bunch of security nerds willing to share their latest tinkering feats. This is hacking at its core: folks who love to figure out how stuff works and use it in unintended ways. It's the type of conference that ignites passion. It's also bleeding money.</p><p>It was my first time attending, and I went with a networking mindset. There were about 10 booths, many of which were government or government-backed startups. In other words: institutions that are not profit-motivated. None of the usual security vendors attended. The message struck me: there's no business in passion.</p><p>I can't think of a better illustration of how I've grown to see the cybersecurity profession: <strong>passion is fun, but it's not the solution to protect our infrastructures from devious actors at scale</strong>.</p><p>Longtime readers know my story: I got into security thanks to the Mr. Robot TV show: hacking for revolution at its finest! Nowadays, I write compliance reports and risk assessments for executives. In the future, I see this transition as the norm.</p><p><strong>This is where I get provocative: I believe the future of security lies more in an accountant paradigm than in hacking. Slow and boring.</strong></p><hr><h2 id="pentest-is-overrated">Pentest is overrated</h2><p>A pentest job posting will garner more than double the applications of an incident responder role. And half of the candidates for the incident response role want to pivot into pentesting. </p><p>My TikTok videos about tractor hacking and deepfake sextortion were amongst my biggest hits. Hacking is cool! <strong>These feats rely on rare skill sets akin to magic for most people. Over the years, it's permeated pop culture and grown its own aesthetic. 
</strong>Hoodies, anyone?</p><p>However, hacking suffers from a major flaw: <strong>it finds issues way too late in a product's lifecycle</strong>. </p><p> To me, <strong>pentesting should be a branch of compliance</strong>. </p><p>Let me explain: software building should follow guidelines, which create auditable artifacts. These artifacts then get correlated. Compliance systems trigger alerts when guidelines are overstepped, leading to scrutiny. Pentests can become the ultimate weapon to catch application builders who trick compliance checks. </p><p>All companies, even small ones, must master their balance sheets and pay taxes. Accounting is a necessary hurdle. Wise accountants become business advisors. No decision gets taken without making sure that rules are followed. Pop culture also features these tasks as boring white-collar jobs. </p><p>I understand how accounting being tied to money gives it more gravitas than cybersecurity will ever have. However, when I'm looking at how accounting firms are gobbling up cybersecurity firms, I'm rooting for them.</p><p>Cybersecurity should not rely solely on the skills of a happy few. It should base itself on rules and, yes, <strong>red tape</strong>. I said it.</p><hr><h2 id="what-does-boring-security-mean-in-practice">What does boring security mean in practice?</h2><p>The cybersecurity of the future will look more like underwriting and accounting. Cyber insurance systems will connect to companies' infrastructures to assess risks against actuarial tables, built from the automated compliance data collection I've described above. </p><p>The systems will evaluate a given product's risks and, based on a company's tolerance level, decide on whether additional scrutiny or resources must be allocated.</p><p>I also envision a security rating output, not dissimilar to the <a href="https://en.wikipedia.org/wiki/Common_Criteria?ref=thisisgrc.com" rel="noreferrer">Common Criteria</a>, but dynamic. 
</p><p>Humans will determine business outcomes, report findings, and fine-tune the systems.</p><hr><h2 id="how-will-we-hook-people-to-cybersecurity-then">How will we hook people to cybersecurity, then?</h2><p>Keeping up with the accounting analogy, I believe young people should seek this area because it offers good, steady office jobs. It's awfully boring, but it works. </p><p>Computer scientists and software engineers would remain crucial to build the artifacts-gathering systems, but all analysis could be carried out by people who are not experts in the underlying computing systems. </p><p>Perhaps this is it: we wouldn't <em>need to hook them</em> because the entry does not require extreme skills.</p><p>Most security specialists, <em>myself included</em>, spend their weekends and evenings thinking about security matters. This is such a staple of the job that I warn my students that this discipline will trample them if they expect a simple 9-to-5 where they can think about anything else the rest of the time. That doesn't scale. We need to lower the barrier. </p><hr><h2 id="what-about-advances-in-technology">What about advances in technology?</h2><p>The rapidly evolving threats and technologies do weaken my comparison with accounting. Yes, tax codes change annually. But rarely do they force paradigm shifts similar to mobile devices, cloud computing, and large language models. </p><p> Advocating for a more standardized, rules-driven security means the rules must adapt. The aggregated compliance systems, the automated dynamic security rating, and the adaptive actuarial tables I'm dreaming of cannot be built out of thin air. Today, it's impossible to quantify the risks of LLM incidents. Social engineering, as a branch of cyber fraud, does not rely on flawed systems: it's a human problem. Maybe it shouldn't be cybersecurity's worry anymore?</p><p>Technology moves fast, and cyber threat actors will always adapt more quickly than complex enterprise systems. 
</p><p>Does this mean this vision of boring security will collapse? Do such limitations make sacrificing passion for white-collarism worthwhile? Time will tell. </p><p>What do you think? Tell us in the comments, or reply to this email to debate with me! </p> ]]>
                    </itunes:summary>
                </item>
                <item>
                    <title>Security Needs Data: Insights From the Data Breach Investigation Report</title>
                    <link>https://thisisgrc.com/security-needs-data-insights-from-the-data-breach-investigation-report/</link>
                    <pubDate>Wed, 22 May 2024 08:08:12 -0400
                    </pubDate>
                    <guid isPermaLink="false">664aafebc05c610001c9de5e</guid>
                    <category>
                        <![CDATA[ GRC in Practice ]]>
                    </category>
                    <description>Reviewing the Data Breach Investigation Report for a source of credible data about the real cyber threats we must worry about.</description>
                    <content:encoded>
<![CDATA[ <p>In security, numbers can deceive. If somebody tells you a given risk has an "82% chance of materializing", run. Nobody possesses such reliable data. Numbers give pseudo-experts a varnish of credibility. Anybody can fudge them.</p><p><a href="https://www.verizon.com/business/resources/T5b7/reports/2024-dbir-data-breach-investigations-report.pdf?ref=thisisgrc.com" rel="noreferrer">Verizon's Data Breach Investigations Report (DBIR)</a> is one of the few reliable public data sources about the current state of security breaches and their associated costs. I expect cyber insurers to build detailed actuarial tables in the long term. Still, that enterprise may prove impossible: cyber threats are much more unpredictable than natural disasters, theft, or vandalism. </p><p>We must cling to reliable sources such as the DBIR as an intellectual self-defence mechanism. Vendors and influencers need your fear to sell their wares. Just yesterday, a security awareness training vendor alarmed me on LinkedIn with "<em>90%+ of breaches being due to humans</em>", without providing a source...</p><p>I read the 2024 DBIR, and here are some reliable conclusions.</p><hr><h2 id="vulnerabilities-are-back-in-style">Vulnerabilities Are Back In Style</h2><p>I made the hack of MOVEit, the "secure" file transfer solution, ppfosec's <a href="https://thisisgrc.com/top-security-stories-of-2023/" rel="noreferrer">top story of 2023</a>. The DBIR's data backs it up. Criminals could break into MOVEit's software with a simple code injection, which made their lives easy. As a result, software vulnerabilities became a popular entry point, as much as stolen credentials. 
</p><p>Two conclusions emerge:</p><ul><li>Criminals will always take the path of least effort;</li><li>You can't tunnel-vision on authentication alone.</li></ul><p>Imagine if, next year, a piece of malware could execute from PDFs without macros, or JavaScript malware could break out of Chrome, or a WhatsApp WAV attachment could read an iPhone's file system. Malware would become the #1 trend! </p><p>As much as I hate to write this: <strong>securing your systems based on headlines is not a stupid strategy</strong>. Yes, its reactive nature puts you behind, and I'd much rather build a comprehensive strategy around known standard practices. But still, I can't help but feel a quick fix for a small or medium enterprise is to follow the advice of reputable news sources!</p><p>Speaking of trends...</p><hr><h2 id="generative-ai-is-nowhere-to-be-seen">Generative AI is Nowhere to be Seen</h2><p>In March 2023, I wrote <a href="https://thisisgrc.com/how-criminals-will-use-generative-ai-to-scam-us/" rel="noreferrer">How criminals will use generative AI to scam us</a>. Looking back, I'm proud of this article because I still see publications, podcasts, journalists, and vendors trotting out the same ideas. I was ahead of the curve, yay! Or was I?</p><p>In the DBIR, the only mention of generative AI is ChatGPT Teams accounts being sold on the black market. None of the predicted scenarios, such as AI-powered password guessing, ultra-realistic deepfake fraud, and well-written phishing emails, materialized.</p><p>Generative AI use in cyber criminality may be a cool academic discussion, but for now, it's speculative. In fact, given what we see in the data, I will be using this question as a "B.S. detector"... </p><p>Where the methods <em>did</em> change is...</p><hr><h2 id="criminals-are-seeking-new-revenue-streams">Criminals Are Seeking New Revenue Streams</h2><p>It might not appear so, but we are slowly winning against traditional ransomware. 
Based on FBI data, only 4% of ransomware victims paid the ransom (down from 7%), for a median loss of $46,000 (up from $26,000). Roughly speaking: the number of victims halved, and the criminals compensated by doubling the price. This reminds me of cable TV in the 2010s: twice the ads, twice the price, half the content. </p><p>The bad news is that hackers are finding less technical ways to extract revenue: fraud and extortion. Business email compromise (the biggest issue in the 2023 report) is still trending at around 25% of breaches. Extortion is now part of 10% of breaches, up from 0% in 2023. </p><p>What is the difference between ransomware and "pure extortion"? Criminals will still breach your systems, lock you out, and demand a ransom. But they will also threaten to expose you to your regulators, customers, investors, and the media. </p><p><a href="https://www.bleepingcomputer.com/news/security/clop-gang-to-earn-over-75-million-from-moveit-extortion-attacks/?ref=thisisgrc.com" rel="noreferrer">Initial analysis</a> suggests extortion will have a short shelf life. Companies have become educated about security breaches, and they'd rather face the music than pay the ransoms. </p><p>In short: we are getting better. And it's not the end...</p><hr><h2 id="authorities-initiatives-are-working">Authorities' Initiatives Are Working</h2><p>The DBIR feeds off FBI datasets. It also draws on collaboration with the Cybersecurity and Infrastructure Security Agency (CISA). CISA's catalog of "known exploited vulnerabilities" (KEV) is built off a nationwide set of honeypots that can alert organizations to urgent threats within days, sometimes hours, of exploitation. The FBI took down the Qakbot botnet and the LockBit and Hive ransomware groups. I may be "drinking the Kool-Aid", but I'm forced to conclude that US institutions are delivering on both awareness and enforcement, with tangible benefits.</p><p>It's not sexy to say we're winning, for obvious reasons. 
Security is infinite, and the minute we lower our attention, everything needs to be redone. We can't afford to rest on our laurels. Imagine if the trends I've observed this year reverse in 2025. My optimistic take would look foolish. And that's the beauty of the doomsayers: if we call them out on their exaggerations, they can always say "better safe than sorry; hope for the best, prepare for the worst!" So I dare to believe our efforts are making a positive impact globally. </p><p>Do you share my optimism? Let us know in the comments!</p><hr><div class="kg-card kg-callout-card kg-callout-card-purple"><div class="kg-callout-emoji">💡</div><div class="kg-callout-text">If you want to read ppfosec's breakdown of the 2023 DBIR, click below.</div></div><figure class="kg-card kg-bookmark-card"><a class="kg-bookmark-container" href="https://thisisgrc.com/know-what-youre-up-against-insights-from-the-2023-data-breach-investigations-report/"><div class="kg-bookmark-content"><div class="kg-bookmark-title">Know What You’re Up Against: Insights from the 2023 Data Breach Investigations Report</div><div class="kg-bookmark-description">Verizon’s Data Breach Investigation Report (DBIR) is a must-read resource to gain insight into current cybersecurity breaches. The 2023 version is out, and I read the whole 90 pages of it so you don’t have to!</div><div class="kg-bookmark-metadata"><img class="kg-bookmark-icon" src="https://storage.ghost.io/c/75/2d/752de7e0-940e-42a2-8399-ae9bb4bc1762/content/images/size/w256h256/format/jpeg/2022/06/icone.jpg" alt=""><span class="kg-bookmark-author">ppfosec</span><span class="kg-bookmark-publisher">Pierre-Paul Ferland</span></div></div><div class="kg-bookmark-thumbnail"><img src="https://storage.ghost.io/c/75/2d/752de7e0-940e-42a2-8399-ae9bb4bc1762/content/images/size/w1200/2023/06/newsletter-2023-06-14.png" alt="" onerror="this.style.display = 'none'"></div></a></figure> ]]>
                    </content:encoded>
                    <enclosure url="" length="0"
                        type="audio/mpeg" />
                    <itunes:subtitle>Reviewing the Data Breach Investigation Report for a source of credible data about the real cyber threats we must worry about.</itunes:subtitle>
                    <itunes:summary>
<![CDATA[ <p>In security, numbers can deceive. If somebody tells you a given risk has an "82% chance of materializing", run. Nobody possesses such reliable data. Numbers give pseudo-experts a varnish of credibility. Anybody can fudge them.</p><p><a href="https://www.verizon.com/business/resources/T5b7/reports/2024-dbir-data-breach-investigations-report.pdf?ref=thisisgrc.com" rel="noreferrer">Verizon's Data Breach Investigations Report (DBIR)</a> is one of the few reliable public data sources about the current state of security breaches and their associated costs. I expect cyber insurers to build detailed actuarial tables in the long term. Still, that enterprise may prove impossible: cyber threats are much more unpredictable than natural disasters, theft, or vandalism. </p><p>We must cling to reliable sources such as the DBIR as an intellectual self-defence mechanism. Vendors and influencers need your fear to sell their wares. Just yesterday, a security awareness training vendor alarmed me on LinkedIn with "<em>90%+ of breaches being due to humans</em>", without providing a source...</p><p>I read the 2024 DBIR, and here are some reliable conclusions.</p><hr><h2 id="vulnerabilities-are-back-in-style">Vulnerabilities Are Back In Style</h2><p>I made the hack of MOVEit, the "secure" file transfer solution, ppfosec's <a href="https://thisisgrc.com/top-security-stories-of-2023/" rel="noreferrer">top story of 2023</a>. The DBIR's data backs it up. Criminals could break into MOVEit's software with a simple code injection, which made their lives easy. As a result, software vulnerabilities became a popular entry point, as much as stolen credentials. 
</p><p>Two conclusions emerge:</p><ul><li>Criminals will always take the path of least effort;</li><li>You can't tunnel-vision on authentication alone.</li></ul><p>Imagine if, next year, a piece of malware could execute from PDFs without macros, or JavaScript malware could break out of Chrome, or a WhatsApp WAV attachment could read an iPhone's file system. Malware would become the #1 trend! </p><p>As much as I hate to write this: <strong>securing your systems based on headlines is not a stupid strategy</strong>. Yes, its reactive nature puts you behind, and I'd much rather build a comprehensive strategy around known standard practices. But still, I can't help but feel a quick fix for a small or medium enterprise is to follow the advice of reputable news sources!</p><p>Speaking of trends...</p><hr><h2 id="generative-ai-is-nowhere-to-be-seen">Generative AI is Nowhere to be Seen</h2><p>In March 2023, I wrote <a href="https://thisisgrc.com/how-criminals-will-use-generative-ai-to-scam-us/" rel="noreferrer">How criminals will use generative AI to scam us</a>. Looking back, I'm proud of this article because I still see publications, podcasts, journalists, and vendors trotting out the same ideas. I was ahead of the curve, yay! Or was I?</p><p>In the DBIR, the only mention of generative AI is ChatGPT Teams accounts being sold on the black market. None of the predicted scenarios, such as AI-powered password guessing, ultra-realistic deepfake fraud, and well-written phishing emails, materialized.</p><p>Generative AI use in cyber criminality may be a cool academic discussion, but for now, it's speculative. In fact, given what we see in the data, I will be using this question as a "B.S. detector"... </p><p>Where the methods <em>did</em> change is...</p><hr><h2 id="criminals-are-seeking-new-revenue-streams">Criminals Are Seeking New Revenue Streams</h2><p>It might not appear so, but we are slowly winning against traditional ransomware. 
Based on FBI data, only 4% of ransomware victims paid the ransom (down from 7%), for a median loss of $46,000 (up from $26,000). Roughly speaking: the number of victims halved, and the criminals compensated by doubling the price. This reminds me of cable TV in the 2010s: twice the ads, twice the price, half the content. </p><p>The bad news is that hackers are finding less technical ways to extract revenue: fraud and extortion. Business email compromise (the biggest issue in the 2023 report) is still trending at around 25% of breaches. Extortion is now part of 10% of breaches, up from 0% in 2023. </p><p>What is the difference between ransomware and "pure extortion"? Criminals will still breach your systems, lock you out, and demand a ransom. But they will also threaten to expose you to your regulators, customers, investors, and the media. </p><p><a href="https://www.bleepingcomputer.com/news/security/clop-gang-to-earn-over-75-million-from-moveit-extortion-attacks/?ref=thisisgrc.com" rel="noreferrer">Initial analysis</a> suggests extortion will have a short shelf life. Companies have become educated about security breaches, and they'd rather face the music than pay the ransoms. </p><p>In short: we are getting better. And it's not the end...</p><hr><h2 id="authorities-initiatives-are-working">Authorities' Initiatives Are Working</h2><p>The DBIR feeds off FBI datasets. It also draws on collaboration with the Cybersecurity and Infrastructure Security Agency (CISA). CISA's catalog of "known exploited vulnerabilities" (KEV) is built off a nationwide set of honeypots that can alert organizations to urgent threats within days, sometimes hours, of exploitation. The FBI took down the Qakbot botnet and the LockBit and Hive ransomware groups. I may be "drinking the Kool-Aid", but I'm forced to conclude that US institutions are delivering on both awareness and enforcement, with tangible benefits.</p><p>It's not sexy to say we're winning, for obvious reasons. 
Security is infinite, and the minute we lower our attention, everything needs to be redone. We can't afford to rest on our laurels. Imagine if the trends I've observed this year reverse in 2025. My optimistic take would look foolish. And that's the beauty of the doomsayers: if we call them out on their exaggerations, they can always say "better safe than sorry; hope for the best, prepare for the worst!" So I dare to believe our efforts are making a positive impact globally. </p><p>Do you share my optimism? Let us know in the comments!</p><hr><div class="kg-card kg-callout-card kg-callout-card-purple"><div class="kg-callout-emoji">💡</div><div class="kg-callout-text">If you want to read ppfosec's breakdown of the 2023 DBIR, click below.</div></div><figure class="kg-card kg-bookmark-card"><a class="kg-bookmark-container" href="https://thisisgrc.com/know-what-youre-up-against-insights-from-the-2023-data-breach-investigations-report/"><div class="kg-bookmark-content"><div class="kg-bookmark-title">Know What You’re Up Against: Insights from the 2023 Data Breach Investigations Report</div><div class="kg-bookmark-description">Verizon’s Data Breach Investigation Report (DBIR) is a must-read resource to gain insight into current cybersecurity breaches. The 2023 version is out, and I read the whole 90 pages of it so you don’t have to!</div><div class="kg-bookmark-metadata"><img class="kg-bookmark-icon" src="https://storage.ghost.io/c/75/2d/752de7e0-940e-42a2-8399-ae9bb4bc1762/content/images/size/w256h256/format/jpeg/2022/06/icone.jpg" alt=""><span class="kg-bookmark-author">ppfosec</span><span class="kg-bookmark-publisher">Pierre-Paul Ferland</span></div></div><div class="kg-bookmark-thumbnail"><img src="https://storage.ghost.io/c/75/2d/752de7e0-940e-42a2-8399-ae9bb4bc1762/content/images/size/w1200/2023/06/newsletter-2023-06-14.png" alt="" onerror="this.style.display = 'none'"></div></a></figure> ]]>
                    </itunes:summary>
                </item>
                <item>
                    <title>Why Training Every Employee is Security Theater (And What Actually Works)</title>
                    <link>https://thisisgrc.com/how-could-senior-management-training-revolutionize/</link>
                    <pubDate>Wed, 01 May 2024 08:08:54 -0400
                    </pubDate>
                    <guid isPermaLink="false">662da81c443d4c00013f04b5</guid>
                    <category>
                        <![CDATA[ Leadership ]]>
                    </category>
                    <description>Building an efficient information security management system is not just about policies, data, and metrics. We must influence leadership to build secure organizations. The secret ingredient? Security people&#x27;s innate sense of community. </description>
                    <content:encoded>
<![CDATA[ <p>You know that person everyone goes to with their tech problems? "My laptop's acting weird!" "Is this email sketchy?" "Help, I can't access the shared drive!"</p><p>I love these interactions. As a GRC leader, I'm constantly hunting for these natural security champions in our organization. But here's what drives me absolutely nuts: the executives who shrug off security concerns like they're suggestions for the office Christmas party theme.</p><p>If you're reading this, I bet you've felt that same frustration.</p><p><strong>There's a security consulting firm in my city with the slogan "Changing the world of security, one user at a time." It's inspiring. It's also complete nonsense.</strong></p><p>For years, I've been obsessed with getting individuals to adopt secure behaviors. I've tried everything: security-by-default configurations, behavioral nudges, gamification, slick video content, community building... you name it. Notice what's missing from that list? Traditional security awareness training.</p><p>Why? Because "we need more security training" is usually what people say when they have nothing useful to contribute. It usually materializes in those mandatory corporate videos we all pretend to watch while answering emails. It's compliance theater at its finest. </p><p>But recently, I stumbled across an approach that completely flipped my thinking: What if, instead of trying to educate thousands of employees, we laser-focused on transforming the handful of executives who actually run the show?</p><hr><h2 id="the-leadership-leverage-point">The Leadership Leverage Point</h2><p>Here's the uncomfortable truth about organizational change: metrics don't matter if nobody uses them.</p><p>I spent the last year building beautiful risk dashboards, compliance tracking systems, and predictive analytics tools. The data was clean, the insights were actionable, and the initial results looked fantastic. 
Then leadership changed, and the new executives defaulted to what they knew. All that progress? Gone.</p><p>I recently listened to an episode of the <em>Freakonomics</em> podcast that tells the story of how the University of Chicago tried to fix policing by improving data and processes across thousands of departments. It failed because they couldn't scale leadership understanding. When trained managers left, their replacements had no clue how to use the new systems.</p><p>Sound familiar?</p><p>The breakthrough came when organizations shifted focus: instead of training 400,000 officers to follow procedures, they invested in training 4,000 leaders to build better departments.</p><hr><h2 id="security-leadership-isnt-what-you-think">Security Leadership Isn't What You Think</h2><p>The most successful security transformations I've seen don't happen because executives memorize the NIST framework or learn to spot phishing emails. They happen when leaders genuinely understand how security connects to business outcomes.</p><p>The magic happens in three areas:</p><p><strong>Strategic Integration</strong>: Executives who see security as a competitive advantage, not a cost center. They naturally weave risk considerations into hiring, project planning, and vendor decisions.</p><p><strong>Cultural Psychology</strong>: Leaders who understand that lasting behavior change comes from motivation, not mandates. They know how to make security feel like a shared mission rather than imposed restrictions.</p><p><strong>Peer Networks</strong>: When security becomes part of executive identity, it spreads horizontally through leadership ranks faster than any training program ever could.</p><hr><h2 id="the-multiplier-effect-is-real">The Multiplier Effect is Real</h2><p>When a VP genuinely cares about security, their entire organization feels it. 
Not because they send stern emails about password policies, but because security considerations naturally flow into every decision they influence.</p><p>Their hiring managers start asking about security experience. Their project managers build risk assessment into planning cycles. Their teams actually engage with security requirements instead of treating them as roadblocks.</p><p>Security stops being "the team that says no" and becomes part of how business gets done.</p><hr><h2 id="heres-my-challenge-to-you">Here's My Challenge to You</h2><p>We have limited time and energy. The question isn't whether we can educate every employee—it's whether we're focusing our efforts where they'll create the biggest impact.</p><p>I'm convinced that converting a dozen senior leaders into security advocates beats running awareness campaigns for a thousand employees. The intimacy of executive education allows for deeper conversations, more sophisticated thinking, and sustainable behavioral change.</p><p>Plus, let's be honest: most awareness training is forgettable corporate content. Executive sessions can be engaging, strategic discussions that leaders actually want to attend.</p><p>What's your experience? Have you successfully converted senior executives into genuine security advocates? Or are you still stuck in the "awareness training for everyone" mindset?</p><p>The future of organizational security might depend on our ability to think smaller and aim higher.</p> ]]>
                    </content:encoded>
                    <enclosure url="" length="0"
                        type="audio/mpeg" />
                    <itunes:subtitle>Building an efficient information security management system is not just about policies, data, and metrics. We must influence leadership to build secure organizations. The secret ingredient? Security people&#x27;s innate sense of community. </itunes:subtitle>
                    <itunes:summary>
<![CDATA[ <p>You know that person everyone goes to with their tech problems? "My laptop's acting weird!" "Is this email sketchy?" "Help, I can't access the shared drive!"</p><p>I love these interactions. As a GRC leader, I'm constantly hunting for these natural security champions in our organization. But here's what drives me absolutely nuts: the executives who shrug off security concerns like they're suggestions for the office Christmas party theme.</p><p>If you're reading this, I bet you've felt that same frustration.</p><p><strong>There's a security consulting firm in my city with the slogan "Changing the world of security, one user at a time." It's inspiring. It's also complete nonsense.</strong></p><p>For years, I've been obsessed with getting individuals to adopt secure behaviors. I've tried everything: security-by-default configurations, behavioral nudges, gamification, slick video content, community building... you name it. Notice what's missing from that list? Traditional security awareness training.</p><p>Why? Because "we need more security training" is usually what people say when they have nothing useful to contribute. It usually materializes in those mandatory corporate videos we all pretend to watch while answering emails. It's compliance theater at its finest. </p><p>But recently, I stumbled across an approach that completely flipped my thinking: What if, instead of trying to educate thousands of employees, we laser-focused on transforming the handful of executives who actually run the show?</p><hr><h2 id="the-leadership-leverage-point">The Leadership Leverage Point</h2><p>Here's the uncomfortable truth about organizational change: metrics don't matter if nobody uses them.</p><p>I spent the last year building beautiful risk dashboards, compliance tracking systems, and predictive analytics tools. The data was clean, the insights were actionable, and the initial results looked fantastic. 
Then leadership changed, and the new executives defaulted to what they knew. All that progress? Gone.</p><p>I recently listened to an episode of the <em>Freakonomics</em> podcast that tells the story of how the University of Chicago tried to fix policing by improving data and processes across thousands of departments. It failed because they couldn't scale leadership understanding. When trained managers left, their replacements had no clue how to use the new systems.</p><p>Sound familiar?</p><p>The breakthrough came when organizations shifted focus: instead of training 400,000 officers to follow procedures, they invested in training 4,000 leaders to build better departments.</p><hr><h2 id="security-leadership-isnt-what-you-think">Security Leadership Isn't What You Think</h2><p>The most successful security transformations I've seen don't happen because executives memorize the NIST framework or learn to spot phishing emails. They happen when leaders genuinely understand how security connects to business outcomes.</p><p>The magic happens in three areas:</p><p><strong>Strategic Integration</strong>: Executives who see security as a competitive advantage, not a cost center. They naturally weave risk considerations into hiring, project planning, and vendor decisions.</p><p><strong>Cultural Psychology</strong>: Leaders who understand that lasting behavior change comes from motivation, not mandates. They know how to make security feel like a shared mission rather than imposed restrictions.</p><p><strong>Peer Networks</strong>: When security becomes part of executive identity, it spreads horizontally through leadership ranks faster than any training program ever could.</p><hr><h2 id="the-multiplier-effect-is-real">The Multiplier Effect is Real</h2><p>When a VP genuinely cares about security, their entire organization feels it. 
Not because they send stern emails about password policies, but because security considerations naturally flow into every decision they influence.</p><p>Their hiring managers start asking about security experience. Their project managers build risk assessment into planning cycles. Their teams actually engage with security requirements instead of treating them as roadblocks.</p><p>Security stops being "the team that says no" and becomes part of how business gets done.</p><hr><h2 id="heres-my-challenge-to-you">Here's My Challenge to You</h2><p>We have limited time and energy. The question isn't whether we can educate every employee—it's whether we're focusing our efforts where they'll create the biggest impact.</p><p>I'm convinced that converting a dozen senior leaders into security advocates beats running awareness campaigns for a thousand employees. The intimacy of executive education allows for deeper conversations, more sophisticated thinking, and sustainable behavioral change.</p><p>Plus, let's be honest: most awareness training is forgettable corporate content. Executive sessions can be engaging, strategic discussions that leaders actually want to attend.</p><p>What's your experience? Have you successfully converted senior executives into genuine security advocates? Or are you still stuck in the "awareness training for everyone" mindset?</p><p>The future of organizational security might depend on our ability to think smaller and aim higher.</p> ]]>
                    </itunes:summary>
                </item>
                <item>
                    <title>How to Balance Security and Privacy?</title>
                    <link>https://thisisgrc.com/how-to-balance-security-and-privacy/</link>
                    <pubDate>Wed, 24 Apr 2024 08:08:12 -0400
                    </pubDate>
                    <guid isPermaLink="false">66272287bc8f8700010d6f49</guid>
                    <category>
                        <![CDATA[ GRC in Practice ]]>
                    </category>
                    <description>Considering the balance between security and privacy, my experience as a security specialist has taught me that privacy is now integral to building effective security tools</description>
                    <content:encoded>
<![CDATA[ <p>Can your employer's IT department view your internet browsing? Can they spy on you with their mobile device management software? I get this question all the time. The answer depends on the software, but I often say, "<em>Assume they can</em>". </p><p><strong>Balancing security and privacy is tricky.</strong> I work daily with the privacy compliance team, to the point that we call each other cousins. So how could we be at odds? To detect malicious activity, we need <em>full visibility</em> into network activity. On the other hand, privacy is about giving people the right to consent to the purposes for which companies may track them online. Whose interest takes priority?</p><p>The frustrating answer is "it depends". <strong>Depending on the problem at hand, I have adopted contradictory stances</strong>. Ambiguity is terrible for efficient communication but remains necessary.</p><p>Let me share some anecdotes where I encountered this tension. Hopefully, you'll be better prepared if you run into them in your work.</p><hr><h2 id="privacy-is-not-about-hiding">Privacy is not about "hiding"</h2><p>One of the smartest security analysts I've met was a privacy freak. Before they were cool, he used a Librem privacy phone and privacy gadgets such as Tails and Telegram.</p><p>He wasn't involved in hacktivism or citizen research. So why go to these lengths? To an extent, it was a game for him: how to defeat all these trackers?</p><p>On the other side of the spectrum, you have people who shrug when they learn about Meta's extensive data collection and misuse: "<em>I got nothing to hide</em>". I bet you've heard that one.</p><p>I believe both extremes look at the problem in a "binary" fashion. </p><p>I see privacy as a choice rather than a "right to hide". Individuals must be informed of tracking, and they must consent. That's not an entitlement to anonymity, nor an open bar for the trackers. 
I'll explore two cases to show you what I mean.</p><h3 id="mobile-device-management">Mobile device management</h3><p>I am still debating the merits of MDM software capabilities. As a security specialist, I need to enforce configurations such as password protection, block malicious apps and files from executing, and perform a remote wipe if the device is stolen. This is "Mobile Security 101".</p><p>On the flip side, users are entitled to a degree of personal use of their machines. I would be deeply uncomfortable if a manager asked security for MDM logs to justify a firing. I'm not even delving into the legality of these questions!</p><p><strong>Can MDM protection be achieved without unintended consequences? This is where data about actual attacks can ultimately inform the decision</strong>: </p><ul><li>are cyberattackers delivering mobile-based malware or social engineering?</li><li>could the MDM stop them?</li><li>what does the organization stand to lose if nothing is done?</li></ul><p>In the end, I have yet to fully flesh out a cost-benefit assessment to make up my mind about the current MDM solutions. Despite MDM being a fundamental defensive measure, the privacy tradeoffs forced me to question the expected value I wanted to derive from it.</p><h3 id="audit-logs">Audit logs</h3><p>As a security specialist, I need systems to generate logs of events that allow forensic investigators to reconstruct the events behind a cyberattack. Their conclusions must hold up in court and respect the chain of custody.</p><p>As far as I'm concerned, <strong>cybercriminals do not have the right to be forgotten</strong>.</p><p>Nevertheless, it can appear silly to collect everything about users' activity on your public website for security purposes while your engineering team is building privacy-enhancing technologies to avoid IP address collection in website analytics! 
As an engineer pointed out: "<em>How long until the ads team figures out they've got all the data unmasked in the security logs?</em>"</p><p>My conclusion remains to keep the breadth of the audit logs and compensate for privacy invasiveness with access controls, data lifecycle and monitoring. But I found the exercise of wondering about logs fascinating: <strong>as a security individual, I took for granted that audit logs were necessary</strong>. I had never wondered about them being used for other purposes!</p><hr><h2 id="privacy-is-a-supplemental-cost-of-security-measures">Privacy is a supplemental cost of security measures</h2><p>When teaching, I must deliver content about the ethics of cybersecurity. The crux of my interventions revolves around the idea that security analysts gain privileged access to sensitive information.</p><p>What I learned from being interested in privacy is that this "sensitive" information is not merely generated by "business applications" (customer records, trade secrets intellectual property, etc.) but that <em>our security tooling </em>could be <em>misused for privacy invasiveness.</em></p><p>Another takeaway is the necessity to inform users. Sure, one cannot consent to being part of an audit log, this would defeat the purpose. However, users can know which measures we are implementing to guarantee minimum access. </p><p>Perhaps I could measure the rise of overall security transparency by the number of times people ask me worryingly if their employer is reading their SMS.</p><p>What do you think about the tradeoffs between security tooling and privacy? Tell us in the comments!</p> ]]>
                    </content:encoded>
                    <enclosure url="" length="0"
                        type="audio/mpeg" />
                    <itunes:subtitle>Balancing security and privacy is tricky: my experience as a security specialist has taught me that privacy is integral to building effective security tooling</itunes:subtitle>
                    <itunes:summary>
                        <![CDATA[ <p>Can your employer's IT department view your internet browsing? Can they spy on you with their mobile device management software? I get asked this all the time. The answer depends on the software, but I often say, "<em>Assume they can</em>".</p><p><strong>Balancing security and privacy is tricky.</strong> I work daily with the privacy compliance team, to the point we call each other cousins. So how do we end up at odds? To detect malicious activity, we need <em>full visibility</em> into network activity. On the other hand, privacy is about giving people the right to consent to the purposes for which companies track them online. Whose interest takes priority?</p><p>The frustrating answer is "it depends". <strong>Depending on the problem at hand, I have adopted contradictory stances</strong>. Ambiguity is terrible for efficient communication but remains necessary.</p><p>Let me share some anecdotes where I encountered this tension. Hopefully, you'll be better prepared if you run into it in your own work.</p><hr><h2 id="privacy-is-not-about-hiding">Privacy is not about "hiding"</h2><p>One of the smartest security analysts I've met was a privacy freak. Before they were cool, he used a Librem privacy phone and privacy tools such as Tails and Telegram.</p><p>He wasn't involved in hacktivism or citizen research. So why go to these lengths? To an extent, it was a game for him: could he defeat all these trackers?</p><p>At the other end of the spectrum, you have people who shrug when they learn about Meta's extensive data collection and misuse: "<em>I got nothing to hide</em>". I bet you've heard that one.</p><p>I believe both extremes look at the problem in a "binary" fashion.</p><p>I see privacy as a choice rather than a "right to hide". Individuals must be informed of tracking and they must consent. That's not an entitlement to anonymity, nor an open bar for the trackers. I'll explore two cases to show you what I mean.</p><h3 id="mobile-device-management">Mobile device management</h3><p>I am still debating the merits of MDM software capabilities. As a security specialist, I need to enforce configurations such as password protection, block certain malicious apps and files from executing, and perform a remote wipe if the device is stolen. This is "Mobile Security 101".</p><p>On the flip side, users are entitled to a degree of personal use of their machines. I would be deeply uncomfortable if a manager asked security for MDM logs to justify a firing. And I'm not even delving into the legality of such requests!</p><p><strong>Can MDM protection be achieved without unintended consequences? This is where data about actual attacks can ultimately inform the decision</strong>:</p><ul><li>are cyberattackers delivering mobile-based malware or social engineering?</li><li>could the MDM stop them?</li><li>what does the organization stand to lose if nothing is done?</li></ul><p>In the end, I have yet to fully flesh out a cost-benefit assessment to make up my mind about current MDM solutions. Even though MDM is a fundamental defensive measure, the privacy tradeoffs forced me to question the value I expected to derive from it.</p><h3 id="audit-logs">Audit logs</h3><p>As a security specialist, I need systems to generate event logs that allow forensic investigators to reconstruct the events behind a cyberattack. Their conclusions must hold up in court and follow the chain of custody.</p><p>As far as I'm concerned, <strong>cybercriminals do not have the right to be forgotten</strong>.</p><p>Nevertheless, it can seem silly to collect everything about users' activity on your public website for security purposes while your engineering team is building privacy-enhancing technologies to avoid IP address collection in website analytics! As an engineer pointed out: "<em>How long until the ads team figures out they've got all the data unmasked in the security logs?</em>"</p><p>My conclusion remains to keep the breadth of the audit logs and compensate for their privacy invasiveness with access controls, data lifecycle management, and monitoring. But I found the exercise of questioning the logs fascinating: <strong>as a security practitioner, I took for granted that audit logs were necessary</strong>. I had never considered that they could be used for other purposes!</p><hr><h2 id="privacy-is-a-supplemental-cost-of-security-measures">Privacy is a supplemental cost of security measures</h2><p>When teaching, I must deliver content about the ethics of cybersecurity. The crux of my lectures revolves around the idea that security analysts gain privileged access to sensitive information.</p><p>What I learned from my interest in privacy is that this "sensitive" information is not merely generated by "business applications" (customer records, trade secrets, intellectual property, etc.) but that <em>our security tooling</em> could be <em>misused for privacy invasion</em>.</p><p>Another takeaway is the necessity of informing users. Sure, one cannot consent to being part of an audit log; that would defeat the purpose. However, users can know which measures we implement to guarantee minimal access.</p><p>Perhaps I could measure the rise of overall security transparency by the number of times people worriedly ask me whether their employer is reading their text messages.</p><p>What do you think about the tradeoffs between security tooling and privacy? Tell us in the comments!</p> ]]>
                    </itunes:summary>
                </item>
    </channel>
</rss>