Episodes

  • Episode Summary

    Are you curious about the ever-changing landscape of data security? In this episode, we are joined by Danny Allan, the newly appointed Chief Technology Officer at Snyk, to delve into the evolving landscape of data security. In our conversation, we discussed his professional background and how he went from hacking security systems at university to becoming a security expert at Snyk. Hear about his experience in dynamic application security testing and the challenges and opportunities of working for large companies. We unpack how controlling human actions can reduce security vulnerabilities, the nuances of running cloud-hosted services, and how the techniques used for static application security testing have changed. Danny explains the importance of considering security aspects during the early stages of software development and how governance has integrated into data security measures. Gain valuable insights into the ever-changing landscape of data security, AI’s potential role in revolutionizing security practices, and much more.

    Show Notes

    In this episode, Guy Podjarny is joined by Danny Allan, the new CTO at Snyk. Danny shares his fascinating career journey that has taken him in and out of the application security space over the past 20+ years.

    They discuss how application security practices like static analysis (SAST) and dynamic scanning (DAST) have evolved, with SAST becoming much faster and easier to integrate earlier in the development cycle. Danny reflects on what has changed and what has surprisingly stayed the same since his earlier days in AppSec.

    The conversation digs into the intersections between application security, data security, cloud security, and how these domains are becoming more interconnected as the same teams take on responsibilities across these areas. Danny draws insights from his recent experience at Veeam, highlighting how practices like data immutability and multi-person authorization grew in importance to combat ransomware threats.

    Looking ahead, Danny and Guy explore the potential impact of AI/ML on application security. From automating threat modeling to personalizing vulnerability findings based on developer interests to generating rules and fixes, Danny sees AI unlocking many opportunities to transform AppSec practices.

    Overall, this episode provides a unique perspective spanning Danny's 20+ year career in security. His experiences illustrate the evolution of AppSec tooling and processes, the blurring of domains like app/data/cloud security, and how AI could radically reshape the future of application security.

    Links

    VMwareVeeamSnyk - The Developer Security Company

    Follow Us

    Our WebsiteOur LinkedIn
  • Missing episodes?

    Click here to refresh the feed.

  • Episode Summary

    Explore the role of consolidated platforms in software development with our guest, John Delmare, Global Application and Cloud Security Lead of Accenture. This episode dives into the growing complexity in the developer space and how these platforms streamline processes and foster collaboration among distributed teams. We discuss balancing application and cloud security, the financial and time-saving benefits of integrated platforms, and the role of best-of-breed technology in an evolving tooling landscape. Tune in for a preview of future secure development practices and practical advice on navigating this dynamic space.

    Show Notes

    In this engaging episode of The Secure Developer, host Simon Maple chats with John Delmare, Managing Director of Accenture and Global Application and Cloud Security Lead, about the movement towards platform consolidation in the field of DevSecOps.

    They dive into an in-depth exploration of the potential advantages and barriers that emerge from the reduction of tool sprawl. Using his extensive experience and insights, Delmare sheds light on how this development can enhance efficiency for developers and, at the same time, benefit companies by making processes more streamlined, cost-efficient, and effective.

    Not losing sight of the role of best-of-breed tools, the conversation takes a turn into how such tools fare in the current scenario, whether they still hold relevance, or if the consolidation trend is set to overshadow them. More intriguingly, Delmare and Maple delve into the potential implications of emerging technologies like General Artificial Intelligence (GenAI) on the strategies for security tooling.

    Further enriching the conversation, they emphasize the critical need for a common ground between security and development teams. Platform consolidation comes into play here by offering shared data views and aligning the teams towards unified goals, making the perfect case for seamless DevSecOps practices.

    This episode is packed with insights that would cater to developers, security professionals, and decision-makers in the IT industry, offering them a clearer view of the current trends and allowing them to make strategically sound decisions. Tune in to be part of this insightful conversation.

    Links

    AccentureSnyk - The Developer Security Company

    Follow Us

    Our WebsiteOur LinkedIn
  • Episode Summary

    In this episode of The Secure Developer, Guy Podjarny and guest Sean Catlett discuss the shift from traditional to engineering-first security practices. They delve into the importance of empathy and understanding business operations for enforcing better security. Catlett emphasizes utilizing AI for generic tasks to focus on crafting customized security strategies.

    Show Notes

    In this episode of The Secure Developer, host Guy Podjarny chats with experienced CISO Sean Catlett about transforming traditional security cultures into a more modern, engineering-first approach. Together, they delve into the intricacies of this paradigm shift and the resulting impact on organizational dynamics and leadership perspectives.

    Starting with exploring how an empathetic understanding of a business's operational model can significantly strengthen security paradigms, the discussion progresses toward the importance of creating specialized security protocols per unique business needs. They stress that using AI and other technologies for generic tasks can free up teams to concentrate on building tailored security solutions, thereby amplifying their efficiency and impact on the company's growth.

    In the latter part of the show, Catlett and Podjarny investigate AI's prospective role within modern security teams and lay out some potential challenges. Recognizing the rapid evolutionary pace of such technologies, they believe keeping up with AI advancements is crucial for capitalizing on its benefits and pre-empting potential pain points.

    AI-curious listeners will find this episode brimming with valuable insights as Catlett and Podjarny demystify the complexities and highlight the opportunities of the current security landscape. Tune in to learn, grow, and transform your security strategy.

    Links

    SlackFedRAMPGitHub CopilotChatGPTSnyk - The Developer Security Company

    Follow Us

    Our WebsiteOur LinkedIn
  • Episode Summary

    In this special episode, our guest host, Liran Tal, interviews Snyk's Staff Security Researcher, Rory McNamara, about newly discovered high-impact container breakout vulnerabilities. Liran and Rory go deep into the vulnerabilities and cover everything you need to know, how the vulnerabilities were discovered, and much more.

    Show Notes

    In this informative episode of The Secure Developer, guest host Liran Tal chats with Snyk security researcher Rory McNamara about his ground-breaking discoveries related to Docker vulnerabilities. McNamara's diligent investigations have spotlighted significant container breakout weaknesses, prompting a deep-dive exploration of the complexities of Docker’s security scene.

    Refreshingly candid about the intricacies involved in tracking down these vulnerabilities, McNamara shares the detective-like processes he uses to trace the connections between key components and functionalities. As they discuss the eye-opening potential for exploitation, Rory highlights how using strace helped him decode the problematic underbelly of Docker.

    Listening to this episode opens up a world of understanding about software supply chain security and the wider implications of these emerging vulnerabilities. Ideal for both security leaders wanting to stay on the cutting edge and developers interested in the nitty-gritty, this conversation not only reveals the problems but also offers solutions. McNamara drives home the importance of timely updates, adopting the principle of least privilege, and layering security measures for optimal protection. This is a must-listen for anyone wanting to deepen their understanding of today's vital security challenges.

    Links

    Leaky Vessels Blog PostDockerKubernetesOWASP Top 10FirecrackerSnyk - The Developer Security Company

    Follow Us

    Our WebsiteOur LinkedIn
  • Episode Summary

    Laura Bell Main, CEO at SafeStack, discusses the two-fold implications of AI for threat modeling in DevSecOps. She highlights challenges in integrating AI systems, the importance of data verifiability, and the potential efficiencies AI tools can introduce. With guidance, she suggests it's possible to manage the complexities and ensure the responsible utilization of AI.

    Show Notes

    In this intriguing episode of The Secure Developer, listen in as Laura Bell Main, CEO at SafeStack, dives into the intricate world of AI and its bearing on threat modeling. Laura provides a comprehensive glimpse into the dynamic landscape of application security, addressing its complexities and the pivotal role of artificial intelligence.

    Laura elucidates how AI has the potential to analyze vulnerabilities, identify risks, and make repetitive tasks efficient. As she delves deeper, she explores how AI can facilitate processes and significantly enhance security measures within the DevSecOps pipeline. She also highlights a crucial aspect - AI is not just an enabler but should be seen as a partner in achieving your security objectives.

    However, integrating AI into existing systems is not without its hurdles. Laura illustrates the complexities of utilizing third-party AI models, the vital importance of data verifiability, and the possible pitfalls of over-reliance on an LLM.

    As the conversation advances, Laura provides insightful advice to tackle these challenges head-on. She underscores the importance of due diligence, the effective management of AI integration, and the necessity of checks and balances. With proactive measures and responsible use, she affirms that AI has the potential to transform threat modeling.

    Don't miss this episode as Laura provides a thoughtful overview of the intersection of AI and threat modeling, offering important insights for anyone navigating the evolving landscape of DevSecOps. Whether you're a developer, a security enthusiast, or a tech leader, this episode is packed with valuable takeaways.

    Links

    Agile Application SecuritySecurity for EveryoneMicrosoft STRIDEOWASP Top 10 for Large Language Model ApplicationsSnyk - The Developer Security Company

    Follow Us

    Our WebsiteOur LinkedIn
  • In this engaging episode, hosts Simon Maple and Guy Podjarny delve into the transformative role of AI in software development and its implications for security practices. The discussion starts with a retrospective look at 2023, highlighting key trends and developments in the tech world. In particular, they discuss how generative AI is reshaping the landscape, altering the traditional roles of developers and necessitating a shift in security paradigms.

    Simon and Guy explore AI-generated code challenges and opportunities, emphasizing the need for innovative security strategies to keep pace with this rapidly evolving technology. They dissect the various aspects of AI in development, from data security concerns to integrating AI tools in software creation. The conversation is rich with insights on how companies adapt to these changes, with real-world examples illustrating the growing reliance on AI in the tech industry.

    This episode is a must-listen for anyone interested in the future of software development and security. Simon and Guy's expertise provides listeners with a comprehensive understanding of AI's current development state and offers predictions on how these trends will continue to shape the industry in 2024. Their analysis highlights the technical aspects and delves into the broader implications for developers and security professionals navigating this new AI-driven era.

    Links

    Github CopilotNIST (National Institute of Standards and Technology)CNCF (Cloud Native Computing Foundation)Backstage by SpotifyRoadie (Backstage Hosting)Snyk AI FixingSnyk - The Developer Security Company

    Follow Us

    Our WebsiteOur LinkedIn
  • Episode Summary

    Guy explores AI security challenges with Salesforce's VP of Security, Henrik Smith. They discuss the fine line between authentic and manipulated AI content, stressing the need for strong operational processes and collaborative, proactive security measures to safeguard data and support secure innovation.

    Show Notes

    In this episode, host Guy Podjarny sits down with Henrik Smith, VP of Security at Salesforce, to delve into the intricacies of AI and its impact on security. As the lines between real and artificially generated data become increasingly blurred, they explore the current trends shaping the AI landscape, particularly in voice impersonation and automated decision-making.

    During the conversation, Smith articulates the pitfalls organizations face as AI grows easier to access and misuse, potentially bypassing security checks in the rush to leverage new capabilities. He urges listeners to consider the importance of established processes and the responsible use of AI, especially regarding sensitive data and upholding data governance policies.

    The episode also dives into security as a facilitator rather than an inhibitor within the development process. Smith shares his experiences and strategies for fostering cross-departmental collaboration at Salesforce, underscoring the value of shifting left and fixing issues at their source. He highlights how security can and should act as an enabling service within organizations, striving to resolve systemic risks and promoting a culture of secure innovation.

    Whether an experienced security professional or a tech enthusiast intrigued by AI, this episode promises to offer valuable insights into managing AI's security challenges and harnessing its potential responsibly.

    Links

    Snyk's 2023 AI-Generated Code Security ReportSalesforceSnyk - The Developer Security Company

    Follow Us

    Our WebsiteOur LinkedIn
  • Episode Summary

    In this episode of The Secure Developer, our co-hosts Simon Maple and Guy Podjarny discuss the rise of AI in code generation. Drawing from Snyk's 2023 AI Code Security Report, they examine developers' concerns about security and the importance of auditing and automated controls for AI-generated code.

    Show Notes

    In this compelling episode of The Secure Developer, hosts Simon Maple and Guy Podjarny delve into the fascinating and fast-paced world of artificial intelligence (AI) in code generation. Drawing insights from Snyk's 2023 AI Code Security Report, the hosts discuss the exponential rise in the adoption of AI code generation tools and the impact this has on the software development landscape.

    Simon and Guy reveal alarming statistics showing that most developers believe AI-generated code is inherently more secure than human-written code, but they also express deep-seated concerns about security and data privacy. This dichotomy sets the stage for a stimulating discussion about the potential risks and rewards of integrating AI within the coding process.

    A significant point of discussion revolves around the need for more stringent auditing for AI-generated code and much tighter automated security controls. The hosts echo the industry’s growing sentiment about the importance of verification and quality assurance, regardless of the perceived assurance of AI security.

    This episode challenges conventional thinking and provides critical insights into software development's rapidly evolving AI realm. It's an insightful listen for anyone interested in understanding the interplay of AI code generation, developer behaviors, and security landscapes.

    Links

    Snyk's 2023 AI-Generated Code Security ReportGitHub CopilotSnyk - The Developer Security Company

    Follow Us

    Our WebsiteOur LinkedIn
  • Episode Summary

    In this episode, Tomasz Tunguz of Theory Ventures discusses the intersection of AI, technology, and security. We explore how AI is revolutionizing software development, data management challenges, and security's vital role in this dynamic landscape.

    Show Notes

    In this episode of The Secure Developer, Guy Podjarny engages in a deep and insightful conversation with Tomasz Tunguz, founding partner of Theory Ventures. They delve into the fascinating world of AI security and its burgeoning impact on the software development landscape. Tomasz brings a unique investor's lens to the discussion, shedding light on how early-stage software companies are leveraging AI to revolutionize market strategies.

    The conversation navigates through the complexities of AI in the realm of security. Tomasz highlights key trends such as data loss prevention, categorization of AI-related companies, and the significant security challenges in this dynamic space. The episode also touches on the critical role of data governance and compliance in the age of AI, exploring how these elements are becoming increasingly intertwined with security concerns.

    A significant part of the discussion is dedicated to the future of AI-powered software development. Guy and Tomasz ponder the evolution of coding, predicting a shift towards higher levels of abstraction and the potential challenges this may pose for security. They speculate on the profound changes AI could bring, transforming how software is developed and the implications for developers and security professionals.

    This episode provides a comprehensive look into the intersection of AI, technology, and security. It's a must-listen for anyone interested in understanding AI's current and future landscape in the tech world, especially from a security standpoint. The insights and predictions offered by Tomasz Tunguz make it an engaging and informative session, perfect for professionals and enthusiasts alike who are keen to stay ahead.

    Links

    Theory VenturesOpenAIGitHubAmazon Web Services (AWS)Google CloudMicrosoft AzureMonte CarloGableSnyk - The Developer Security Company

    Follow Us

    Our WebsiteOur LinkedIn
  • Episode Summary

    In this episode, Dr. Christina Liaghati discusses incorporating diverse perspectives, early security measures, and continuous risk evaluations in AI system development. She underscores the importance of collaboration and shares resources to help tackle AI-related risks.

    Show Notes

    In this enlightening episode of The Secure Developer, Dr. Christina Liaghati of MITRE offers valuable insights on the necessity of integrating security considerations right from the design phase in AI system development. She underscores the fact that cybersecurity issues can’t be fixed solely at the end of the development process; rather, understanding and mitigating vulnerabilities require continual iterative discovery and investigation throughout the system's lifecycle.

    Dr. Liaghati emphasizes the need for incorporating diverse perspectives into the process, specifically highlighting the value of expertise from fields like psychology and human-centered design to grasp the socio-technical issues associated with AI use fully. She sounds a cautionary note about the inherent risks when AI is applied in critical sectors like healthcare and transportation, which calls for thorough discussions about these deployments.

    Additionally, she introduces listeners to MITRE's ATLAS project, a community-focused initiative that seeks to holistically address the challenges posed by AI, drawing lessons from past experiences in cybersecurity. She points out the ATLAS project as a resource for learning about adversarial machine learning, particularly useful for those coming from a traditional cybersecurity environment or the traditional AI side.

    Importantly, she talks about the potential of AI technology as a tool to improve day-to-day activities, exemplified by email management. These discussions underscore the importance of knowledgeable and informed debates about integrating AI into various aspects of our society and industries. The episode serves as a useful guide for anyone venturing into the world of AI security, offering a balanced perspective on the potential challenges and opportunities involved.

    Links

    MITRE ATLAS ProjectArsenal CALDERA Plugin for Adversary EmulationIBM's Adversarial Robustness Toolbox (ART)Microsoft's Counterfit ToolMIT AI 101 Course (free)Women in CyberSecurity (WiCyS)MITRE's Twitter AccountMITRE's LinkedIn PageSnyk - The Developer Security Company

    Follow Us

    Our WebsiteOur LinkedIn
  • This week, we're rewinding to play one of our favorite episodes from the archive! We'll be back with a brand-new episode in two weeks!

    Today’s guest is someone we have wanted to have on the show for a long time, and we are so happy to finally welcome him. Dev Akhawe is the Head of Security at Figma, the first state-of-the-art interface design tool that runs entirely in your browser. Before that, Dev worked at Dropbox, as Director of Security Engineering, leading application security, infrastructure security, and abuse prevention for the Dropbox products. He also holds a Ph.D. in Computer Science from UC Berkeley, where his thesis focused on web application security. In this episode, Dev pulls back the curtain and gives us a look at what security at Figma looks like. The relatively small organization has a culture where the security team earns their trust and works openly. This has resulted in far greater cohesion between the security team and developers. We also hear about Dev’s time at Dropbox, and how working on an application with many products exposed him to the gamut of security issues that companies can face. Along with this, we discuss some of the positive changes in how startups are thinking about security, the value of exposing people to different parts of an organization, the place of security champions, and having a curious mindset as a security professional. Dev's approach to security is empathetic, collaborative, and solution-driven, and if you would like to hear more, be sure to tune in today!

  • As AI adoption continues to grow, it's important that effective risk management strategies and industry security standards evolve along with it. To discuss this, we are joined by Royal Hansen, the VP of Engineering for Privacy, Safety, and Security at Google, where he drives the overall information security strategy for the company’s technical infrastructure (and keeps billions of people safe online).

    Royal cut his teeth as a software developer for Sapient before building a cyber-security practice in the financial services industry at @stake, American Express, Goldman Sachs, and Morgan Stanley. In this episode, he explains why adhering to a bold and responsible framework is critical as AI capabilities are integrated into products worldwide and provides an overview of Google’s Secure AI Framework (SAIF), designed to help mitigate risks specific to AI systems. Royal unpacks each of the six core elements of SAIF, emphasizes the importance of collaboration, shares how he uses AI in his personal life, and much more.

    Today’s conversation outlines a practical approach to addressing top-of-mind AI security concerns for consumers and security and risk professionals alike, so be sure to tune in!

  • Security is changing quickly in the fast-paced world of AI. During this episode, we explore AI safety and security with the help of David Haber, who co-founded Lakera.ai. David is also the creator of Gandalf, an AI tool that makes Large Language Models (LLMs) accessible to everyone. Join us as we dive into the world of prompt injections, AI behavior, and its corresponding risks and vulnerabilities. We discuss questions about data poisoning and protections and explore David’s motivation to create Gandalf and how he has used it to gain vital insights into the complex topic of LLM security. This episode also includes a foray into the two approaches to informing an LLM about sensitive data and the pros and cons of each. Lastly, David emphasises the importance of considering what is known about each model on a case-by-case basis and using that as a starting point. Tune in to hear all this and more about AI safety, security, and play from a veritable expert in the field, David Haber!

  • On episode 126 of The Secure Developer we had a fascinating conversation with Guy Rosen, who is the current CISO at Meta. In our chat, we are able to mine Guy's vast experience, expertise, and perspective on what being CISO at a huge tech company in today's climate requires, focusing on how security and integrity concerns come together and play out. In his role at Meta, Guy oversees both of these areas, and listeners will get to hear how he distinguishes the two worlds, and also where they overlap and intersect. We spend some time talking about human and technological resources for these fields, how Guy thinks about skills and hiring, and of course the impact of AI on the field right now. We also hear from our guest about issues such as privacy, account takeover, and the complexity of the policies that govern online abuse. So join us to catch it all in this great conversation!

  • Artificial Intelligence is innovating at a faster than ever before. Could there be a better response than fear? Sam Curry is the VP and Chief Information Security Officer at Zscaler, and he joins us to share his perspective on what AI means for cyber security. Tune in to hear how AI is advancing cybersecurity and the potential threats it poses to data and metadata protection.

    Sam delves into the nature of fearmongering and a more appropriate response to technological development before revealing the process behind AI integration at Zscaler, why many companies are opting to build internal AI systems, and the three buckets of AI in the security world. Sam shares his opinion on eliminating the offensive use of AI, touches on how AI uses mechanical twerks to get around security checks, and discusses the preparation of InfoSec cycles. After we explore the possibility of deception in a DevOps context, Sam reveals his concerns for the malicious use of AI and stresses the importance of advancing in alignment with technological progress. Tune in to hear all this and much more!

  • At the rate at which AI is infiltrating operations around the globe, AI regulation and security is becoming an increasingly pressing topic. As external regulations are put in place, it’s important to ensure that your internal compliance measures are up to scratch and your systems are safe. Joining us today to discuss the security of ML systems and AI applications is Ian Swanson, the Co-Founder and CEO of Protect AI. In this episode, Ian breaks down the five pillars of ML SecOps: supply chain vulnerabilities, model provenance, GRC (governance, risk, and compliance), trusted AI, and adversarial machine learning. We learn the key differences between software development and machine learning development lifecycles, and thus the difference between DevSecOps and ML SecOps. Ian identifies the risks and threats posed to different AI classifications and explains how to level up your GRC practice and why it’s essential to do so! Given the unnatural rate of adoption of AI and the dynamic nature of machine learning, ML SecOps is essential, particularly with the new regulations and third-party auditing that is predicted to grow as an industry. Tune in as we investigate all things ML SecOps and protecting your AI!

  • In this episode of The Secure Developer, we delve into the subject of supply chain security across various ecosystems and languages, guided by industry experts Liran Tal and Roy Ram from Snyk. Liran is the Director of Developer Advocacy at Snyk and has a background working particularly in Node.js and JavaScript. Roy is a Senior Product Manager serving as part of the product team for Snyk Code, and has a background in cybersecurity and a solid understanding of C++. With a 20-year background in Java, host Simon Maple moderates the conversation. We discuss the challenges and differences between ecosystems, such as the use of third-party libraries and issues with typosquatting and malicious packages. We also talk about the volume of dependencies that each of our ecosystems pull in, whether you should stay on the latest version or pin to a version, and the importance of software bill of materials (SBOMs). For valuable advice on securing your supply chain in different languages and ecosystems, tune in today!

  • No one wants to fall prey to a security breach, but in the event that it does occur, it’s important to have systems in place to manage it. In episode 132 of The Secure Developer, we are joined by the CTO of CircleCI, Rob Zuber to discuss the security incident CircleCI announced on January 4th. Rob shares insight into what CircleCI does, how the incident affected customers, and how they communicated it to the public. We find out how the industry responded and adapted to the incident, as well as how it was dealt with internally at CircleCI. Rob opens up about what he learned in the process and shares advice for others facing a security breach. Tune in to find out how best to prevent and manage a security incident, should this happen to you.

  • In episode 131 of The Secure Developer, you’ll hear from former TikTok CISO Roland Cloutier about the realities of securing user-generated content at scale and his belief that we need to take a strictly data-centric approach rather than a humanistic one to solve many of these privacy-related issues. Tuning in, you’ll gain some insight into what it takes to oversee a social media company's cybersecurity, data protection, and crisis management, and find out why Roland believes that an innate understanding of company culture is key to building a large and fast-growing security team in an increasingly virtual world. We also touch on some of the challenges of user identity management, the need for user-driven authentication methods, increased state-level security regulations in the data space, and more, so don’t miss today’s fascinating conversation with cyber security expert and industry veteran, Roland Cloutier!